1. The following data are for a series of external standards of Cd2+ buffered to a pH of 4.6. [Cd2+] (nM) 15.4 30.4 44.9 59.0 72.7 86.0 \(S_{spike}\) (nA) 4.8 11.4 18.2 26.6 32.3 37.7 (a) Use a linear regression analysis to determine the equation for the calibration curve and report confidence intervals for the slope and the y-intercept. (b) Construct a plot of the residuals and comment on their significance. At a pH of 3.7 the following data were recorded for the same set of external standards. [Cd2+] (nM) 15.4 30.4 44.9 59.0 72.7 86.0 \(S_{spike}\) (nA) 15.0 42.7 58.5 77.0 101 118 (c) How much more or less sensitive is this method at the lower pH? (d) A single sample is buffered to a pH of 3.7 and analyzed for cadmium, yielding a signal of 66.3 nA. Report the concentration of Cd2+ in the sample and its 95% confidence interval. The data in this problem are from Wojciechowski, M.; Balcerzak, J. Anal. Chim. Acta 1991, 249, 433–445.

2. Consider the following three data sets, each of which gives values of y for the same values of x. x y1 y2 y3 10.00 8.04 9.14 7.46 8.00 6.95 8.14 6.77 13.00 7.58 8.74 12.74 9.00 8.81 8.77 7.11 11.00 8.33 9.26 7.81 14.00 9.96 8.10 8.84 6.00 7.24 6.13 6.08 4.00 4.26 3.10 5.39 12.00 10.84 9.13 8.15 7.00 4.82 7.26 6.42 5.00 5.68 4.74 5.73 (a) An unweighted linear regression analysis for the three data sets gives nearly identical results. To three significant figures, each data set has a slope of 0.500 and a y-intercept of 3.00. The standard deviations in the slope and the y-intercept are 0.118 and 1.125 for each data set. All three standard deviations about the regression are 1.24. Based on these results for a linear regression analysis, comment on the similarity of the data sets. (b) Complete a linear regression analysis for each data set and verify that the results from part (a) are correct. Construct a residual plot for each data set. Do these plots change your conclusion from part (a)? Explain. (c) Plot each data set along with the regression line and comment on your results. (d) Data set 3 appears to contain an outlier. Remove the apparent outlier and reanalyze the data using a linear regression. Comment on your result. (e) Briefly comment on the importance of visually examining your data. These three data sets are taken from Anscombe, F. J. “Graphs in Statistical Analysis,” Amer. Statis. 1973, 27, 17-21.

3. Franke and co-workers evaluated a standard additions method for a voltammetric determination of Tl. A summary of their results is tabulated in the following table. ppm Tl added Instrument Response (μA) 0.000 2.53 2.50 2.70 2.63 2.70 2.80 2.52 0.387 8.42 7.96 8.54 8.18 7.70 8.34 7.98 1.851 29.65 28.70 29.05 28.30 29.20 29.95 28.95 5.734 84.8 85.6 86.0 85.2 84.2 86.4 87.8 Use a weighted linear regression to determine the standardization relationship for this data. The data in this problem are from Franke, J. P.; de Zeeuw, R. A.; Hakkert, R. Anal. Chem. 1978, 50, 1374–1380.
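For problems such as these, the regression calculations can be set up directly in R. The sketch below shows one possible way to organize parts (a) and (b) of the first problem using the base `lm()`, `confint()`, and `residuals()` functions; it only sets up the calculation and leaves the analysis and interpretation to you.

```conc   = c(15.4, 30.4, 44.9, 59.0, 72.7, 86.0)   # [Cd2+] in nM
signal = c(4.8, 11.4, 18.2, 26.6, 32.3, 37.7)    # signal in nA

fit = lm(signal ~ conc)       # unweighted least-squares calibration line
summary(fit)                  # slope, intercept, and their standard errors
confint(fit, level = 0.95)    # 95% confidence intervals for both coefficients

plot(conc, residuals(fit), xlab = "[Cd2+] (nM)", ylab = "residual")   # part (b)
abline(h = 0, lty = 2)```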
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.06%3A_Exercises.txt
In the presence of H2O2 and H2SO4, a solution of vanadium forms a reddish brown color that is believed to be a compound with the general formula (VO)2(SO4)3. The intensity of the solution’s color depends on the concentration of vanadium, which means we can use its absorbance at a wavelength of 450 nm to develop a quantitative method for vanadium. The intensity of the solution’s color also depends on the amounts of H2O2 and H2SO4 that we add to the sample—in particular, a large excess of H2O2 decreases the solution’s absorbance as it changes from a reddish brown color to a yellowish color [Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 1978, p. 752.]. Developing a standard method for vanadium based on this reaction requires that we optimize the amount of H2O2 and H2SO4 added if we want to maximize the absorbance at 450 nm. Using the terminology of statisticians, we call the solution’s absorbance the system’s response. Hydrogen peroxide and sulfuric acid are factors whose concentrations, or factor levels, determine the system’s response. To optimize the method we need to find the best combination of factor levels. Usually we seek a maximum response, as is the case for the quantitative analysis of vanadium as (VO)2(SO4)3. In other situations, such as minimizing an analysis’s percent error, we seek a minimum response. How we design experiments to optimize the response is the subject of this chapter.

09: Gathering Data

One of the most effective ways to think about an optimization is to visualize how a system’s response changes when we increase or decrease the levels of one or more of its factors. We call a plot of the system’s response as a function of the factor levels a response surface. The simplest response surface has one factor and is drawn in two dimensions by placing the responses on the y-axis and the factor’s levels on the x-axis. The calibration curve in Figure $1$ is an example of a one-factor response surface. We also can define the response surface mathematically. The response surface in Figure $1$, for example, is $A = 0.008 + 0.0896C_A \nonumber$ where A is the absorbance and $C_A$ is the analyte’s concentration in ppm. For a two-factor system, such as the quantitative analysis for vanadium described earlier, the response surface is a flat or curved plane in three dimensions. As shown in Figure $2$, we place the response on the z-axis and the factor levels on the x-axis and the y-axis. Figure $\PageIndex{2a}$ shows a pseudo-three-dimensional wireframe plot for a system that obeys the equation $R = 3.0 - 0.30A + 0.020AB \nonumber$ where R is the response, and A and B are the factors. We also can represent a two-factor response surface using the two-dimensional level plot in Figure $\PageIndex{2b}$, which uses a color gradient to show the response on a two-dimensional grid, or using the two-dimensional contour plot in Figure $\PageIndex{2c}$, which uses contour lines to display the response surface. The response surfaces in Figure $2$ cover a limited range of factor levels (0 ≤ A ≤ 10, 0 ≤ B ≤ 10), but we can extend each to more positive or to more negative values because there are no constraints on the factors. Most response surfaces of interest to an analytical chemist have natural constraints imposed by the factors, or have practical limits set by the analyst. The response surface in Figure $1$, for example, has a natural constraint on its factor because the analyte’s concentration cannot be less than zero; that is, $C_A \ge 0$.
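If we know the equation for a response surface, we can reproduce plots like those in Figure $2$ directly in R. The sketch below evaluates the two-factor equation above over the range 0–10 for both factors and draws a simple contour plot using base R; Chapter 9.6 shows how to build similar plots with the plot3D package.

```# Sketch: evaluate R = 3.0 - 0.30A + 0.020AB on a grid of factor levels and
# draw a base-R contour plot of the resulting response surface.
A = seq(0, 10, by = 0.25)
B = seq(0, 10, by = 0.25)
R = outer(A, B, function(a, b) 3.0 - 0.30 * a + 0.020 * a * b)

contour(A, B, R, xlab = "factor A", ylab = "factor B")
# persp(A, B, R, theta = 25, phi = 15) draws a wireframe view of the same surface```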
If we have an equation for the response surface, then it is relatively easy to find the optimum response. Unfortunately, we rarely know any useful details about the response surface. Instead, we must determine the response surface’s shape and locate its optimum response by running appropriate experiments. The focus of this chapter is on useful experimental methods for characterizing a response surface. These experimental methods are divided into two broad categories: searching methods, in which an algorithm guides a systematic search for the optimum response, and modeling methods, in which we use a theoretical model or an empirical model of the response surface to predict the optimum response.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.01%3A_Response_Surfaces.txt
Figure \(1\) shows a portion of the South Dakota Badlands, a barren landscape that includes many narrow ridges formed through erosion. Suppose you wish to climb to the highest point on this ridge. Because the shortest path to the summit is not obvious, you might adopt the following simple rule: look around you and take one step in the direction that has the greatest change in elevation, and then repeat until no further step is possible. The route you follow is the result of a systematic search that uses a searching algorithm. Of course there are as many possible routes as there are starting points, three examples of which are shown in Figure \(1\). Note that some routes do not reach the highest point—what we call the global optimum. Instead, many routes end at a local optimum from which further movement is impossible. We can use a systematic searching algorithm to locate the optimum response. We begin by selecting an initial set of factor levels and measure the response. Next, we apply the rules of our searching algorithm to determine a new set of factor levels and measure its response, continuing this process until we reach an optimum response. Before we consider two common searching algorithms, let’s consider how we evaluate a searching algorithm.

Effectiveness and Efficiency

A searching algorithm is characterized by its effectiveness and its efficiency. To be effective, a searching algorithm must find the response surface’s global optimum, or at least reach a point near the global optimum. A searching algorithm may fail to find the global optimum for several reasons, including a poorly designed algorithm, uncertainty in measuring the response, and the presence of local optima. Let’s consider each of these potential problems. A poorly designed algorithm may prematurely end the search before it reaches the response surface’s global optimum. As shown in Figure \(2\), when you climb a ridge that slopes up to the northeast, an algorithm is likely to fail if it limits your steps only to the north, south, east, or west. An algorithm that cannot respond to a change in the direction of steepest ascent is not an effective algorithm. All measurements contain uncertainty, or noise, that affects our ability to characterize the underlying signal. When the noise is greater than the local change in the signal, then a searching algorithm is likely to end before it reaches the global optimum. Figure \(3\), which provides a different view of Figure \(1\), shows us that the relatively flat terrain leading up to the ridge is heavily weathered and very uneven. Because the variation in local height (the noise) exceeds the slope (the signal), our searching algorithm ends the first time we step up onto a less weathered local surface that is higher than the immediately surrounding surfaces. Finally, a response surface may contain several local optima, only one of which is the global optimum. If we begin the search near a local optimum, our searching algorithm may never reach the global optimum. The ridge in Figure \(1\), for example, has many peaks. Only those searches that begin at the far right will reach the highest point on the ridge. Ideally, a searching algorithm should reach the global optimum regardless of where it starts. A searching algorithm always reaches an optimum. Our problem, of course, is that we do not know if it is the global optimum.
One method for evaluating a searching algorithm’s effectiveness is to use several sets of initial factor levels, find the optimum response for each, and compare the results. If we arrive at or near the same optimum response after starting from very different locations on the response surface, then we are more confident that it is the global optimum. Efficiency is a searching algorithm’s second desirable characteristic. An efficient algorithm moves from the initial set of factor levels to the optimum response in as few steps as possible. In seeking the highest point on the ridge in Figure \(3\), we can increase the rate at which we approach the optimum by taking larger steps. If the step size is too large, however, the difference between the experimental optimum and the true optimum may be unacceptably large. One solution is to adjust the step size during the search, using larger steps at the beginning and smaller steps as we approach the global optimum.
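To make the idea of a searching algorithm concrete, the sketch below implements the hill-climbing rule described above in R: from the current position it tries a single step to the north, south, east, or west and keeps the step that most improves the response, stopping when no step helps. The response surface here is an assumption, a simple quadratic with its maximum at (3, 7), chosen only so the search has a well-defined optimum to find; as noted above, a compass-step rule like this one can fail on a surface with a diagonal ridge.

```# Illustrative "step uphill" search. The response surface is a made-up
# quadratic with its optimum at a = 3, b = 7; it is not one of the chapter's systems.
f = function(a, b) 10 - (a - 3)^2 - (b - 7)^2

a = 0; b = 0; step = 1
repeat {
  moves = rbind(c(a + step, b), c(a - step, b),   # candidate steps: E, W
                c(a, b + step), c(a, b - step))   # candidate steps: N, S
  R_new = f(moves[, 1], moves[, 2])
  if (max(R_new) <= f(a, b)) break                # no step improves the response
  best = which.max(R_new)
  a = moves[best, 1]
  b = moves[best, 2]
}
c(a = a, b = b, response = f(a, b))               # ends at the optimum (3, 7)```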
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.02%3A_Searching_Algorithms.txt
A simple algorithm for optimizing the quantitative method for vanadium described earlier is to select initial concentrations for H2O2 and H2SO4 and measure the absorbance. Next, we optimize one reagent by increasing or decreasing its concentration—holding constant the second reagent’s concentration—until the absorbance decreases. We then vary the concentration of the second reagent—maintaining the first reagent’s optimum concentration—until we no longer see an increase in the absorbance. We can stop this process, which we call a one-factor-at-a-time optimization, after one cycle or repeat the steps until the absorbance reaches a maximum value or it exceeds an acceptable threshold value. A one-factor-at-a-time optimization is consistent with a notion that to determine the influence of one factor we must hold constant all other factors. This is an effective, although not necessarily an efficient experimental design when the factors are independent [see Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986]. Two factors are independent when a change in the level of one factor does not influence the effect of a change in the other factor’s level. Table $1$ provides an example of two independent factors.

Table $1$. Example of Two Independent Factors factor A factor B response $A_1$ $B_1$ 40 $A_2$ $B_1$ 80 $A_1$ $B_2$ 60 $A_2$ $B_2$ 100

If we hold factor B at level $B_1$, changing factor A from level $A_1$ to level $A_2$ increases the response from 40 to 80, or a change in response, $\Delta R$, of $\Delta R = 80 - 40 = 40 \nonumber$ If we hold factor B at level $B_2$, we find that we have the same change in response when the level of factor A changes from $A_1$ to $A_2$. $\Delta R = 100 - 60 = 40 \nonumber$ We can see this independence visually if we plot the response as a function of factor A’s level, as shown in Figure $1$. The parallel lines show that the level of factor B does not influence factor A’s effect on the response. Mathematically, two factors are independent if they do not appear in the same term in the equation that describes the response surface. Figure $2$, for example, shows the resulting pseudo-three-dimensional surface and a contour map for the equation $R = 2.0 + 0.12 A + 0.48 B - 0.03A^2 - 0.03 B^2 \nonumber$ which describes a response surface with independent factors because no term in the equation includes both factor A and factor B. The easiest way to follow the progress of a searching algorithm is to map its path on a contour plot of the response surface. Positions on the response surface are identified as (a, b) where a and b are the levels for factor A and for factor B. The contour plot in Figure $\PageIndex{2b}$, for example, shows four one-factor-at-a-time optimizations of the response surface in Figure $\PageIndex{2a}$. The effectiveness and efficiency of this algorithm when optimizing independent factors is clear—each trial reaches the optimum response at (2, 8) in a single cycle. Unfortunately, factors often are not independent. Consider, for example, the data in Table $2$

Table $2$. Example of Two Dependent Factors factor A factor B response $A_1$ $B_1$ 20 $A_2$ $B_1$ 80 $A_1$ $B_2$ 60 $A_2$ $B_2$ 80

where a change in the level of factor B from level $B_1$ to level $B_2$ has a significant effect on the response when factor A is at level $A_1$ $\Delta R = 60 - 20 = 40 \nonumber$ but no effect when factor A is at level $A_2$. $\Delta R = 80 - 80 = 0 \nonumber$ Figure $3$ shows this dependent relationship between the two factors.
Factors that are dependent are said to interact and the equation for the response surface includes an interaction term that contains both factor A and factor B. The final term in this equation $R = 5.5 + 1.5 A + 0.6 B - 0.15 A^2 - 0.0245 B^2 - 0.0857 AB \nonumber$ for example, accounts for the interaction between factor A and factor B. Figure $4$ shows the resulting pseudo-three-dimensional surface and a contour map for the response surface defined by this equation. The progress of a one-factor-at-a-time optimization for this response surface is shown in Figure $\PageIndex{4b}$. Although the optimization for dependent factors is effective, it is less efficient than that for independent factors. In this case it takes four cycles to reach the optimum response of (3, 7) if we begin at (0, 0).
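We can mimic this one-factor-at-a-time search in R. The sketch below alternately optimizes factor A with factor B held constant and then factor B with factor A held constant, using R's `optimize()` function to find each one-dimensional maximum over the range 0–10. Because `optimize()` finds each maximum exactly rather than stepping in fixed increments, the path differs in detail from the one mapped in Figure $\PageIndex{4b}$, but it shows the same gradual approach to the optimum at (3, 7).

```# Sketch: a one-factor-at-a-time optimization of the dependent-factor surface
# R = 5.5 + 1.5A + 0.6B - 0.15A^2 - 0.0245B^2 - 0.0857AB, starting at (0, 0).
f = function(a, b) 5.5 + 1.5*a + 0.6*b - 0.15*a^2 - 0.0245*b^2 - 0.0857*a*b

a = 0; b = 0
for (cycle in 1:4) {
  a = optimize(function(x) f(x, b), interval = c(0, 10), maximum = TRUE)$maximum
  b = optimize(function(x) f(a, x), interval = c(0, 10), maximum = TRUE)$maximum
  cat(sprintf("cycle %d: a = %.2f, b = %.2f, R = %.2f\n", cycle, a, b, f(a, b)))
}```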
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.03%3A_One-Factor-at-a-Time_Optimizations.txt
One strategy for improving the efficiency of a searching algorithm is to change more than one factor at a time. A convenient way to accomplish this when there are two factors is to begin with three sets of initial factor levels arranged as the vertices of a triangle (Figure $1$). After measuring the response for each set of factor levels, we identify the combination that gives the worst response and replace it with a new set of factor levels using a set of rules. This process continues until we reach the global optimum or until no further optimization is possible. The set of factor levels is called a simplex. In general, for k factors a simplex is a geometric figure with $k + 1$ vertices [see Spendley, W.; Hext, G. R.; Himsworth, F. R. Technometrics 1962, 4, 441–461, and Deming, S. N.; Parker, L. R. CRC Crit. Rev. Anal. Chem. 1978 7(3), 187–202]. Thus, for two factors the simplex is a triangle. For three factors the simplex is a tetrahedron. To place the initial two-factor simplex on the response surface, we choose a starting point (a, b) for the first vertex and place the remaining two vertices at (a + sa, b) and (a + 0.5sa, b + 0.87sb) where sa and sb are step sizes for factor A and for factor B [see, for example, Long, D. E. Anal. Chim. Acta 1969, 46, 193–206]. The following set of rules moves the simplex across the response surface in search of the optimum response:

Rule 1. Rank the vertices from best (vb) to worst (vw).

Rule 2. Reject the worst vertex (vw) and replace it with a new vertex (vn) by reflecting the worst vertex through the midpoint of the remaining vertices. The new vertex’s factor levels are twice the average factor levels for the retained vertices minus the factor levels for the worst vertex. For a two-factor optimization, the equations are shown here where vs is the third vertex. $a_{v_n} = 2 \left( \frac {a_{v_b} + a_{v_s}} {2} \right) - a_{v_w} \nonumber$ $b_{v_n} = 2 \left( \frac {b_{v_b} + b_{v_s}} {2} \right) - b_{v_w} \nonumber$

Rule 3. If the new vertex has the worst response, then return to the previous vertex and reject the vertex with the second-worst response, vs, calculating the new vertex’s factor levels using rule 2. This rule ensures that the simplex does not return to the previous simplex.

Rule 4. Boundary conditions are a useful way to limit the range of possible factor levels. For example, it may be necessary to limit a factor’s concentration for solubility reasons, or to limit the temperature because a reagent is thermally unstable. If the new vertex exceeds a boundary condition, then assign it the worst response and follow rule 3.

Because the size of the simplex remains constant during the search, this algorithm is called a fixed-sized simplex optimization. The following example illustrates the application of these rules.

Example $1$

Find the optimum for the response surface described by the equation $R = 5.5 + 1.5 A + 0.6 B - 0.15 A^2 - 0.0245 B^2 - 0.0857 AB \nonumber$ using the fixed-sized simplex searching algorithm. Use (0, 0) for the initial factor levels and set each factor’s step size to 1.00.
Solution

Letting a = 0, b = 0, sa = 1.00, and sb = 1.00 gives the vertices for the initial simplex as $\text{vertex 1:} (a, b) = (0, 0) \nonumber$ $\text{vertex 2:} (a + s_a, b) = (1.00, 0) \nonumber$ $\text{vertex 3:} (a + 0.5s_a, b + 0.87s_b) = (0.50, 0.87) \nonumber$ The responses for the three vertices are shown in the following table

vertex a b response $v_1$ 0 0 5.50 $v_2$ 1.00 0 6.85 $v_3$ 0.50 0.87 6.68

with $v_1$ giving the worst response and $v_2$ the best response. Following Rule 2, we reject $v_1$ and replace it with a new vertex; thus $a_{v_4} = 2 \left( \frac {1.00 + 0.50} {2} \right) - 0 = 1.50 \nonumber$ $b_{v_4} = 2 \left( \frac {0 + 0.87} {2} \right) - 0 = 0.87 \nonumber$ The following table gives the vertices of the second simplex.

vertex a b response $v_2$ 1.00 0 6.85 $v_3$ 0.50 0.87 6.68 $v_4$ 1.50 0.87 7.80

with $v_3$ giving the worst response and $v_4$ the best response. Following Rule 2, we reject $v_3$ and replace it with a new vertex; thus $a_{v_5} = 2 \left( \frac {1.00 + 1.50} {2} \right) - 0.50 = 2.00 \nonumber$ $b_{v_5} = 2 \left( \frac {0 + 0.87} {2} \right) - 0.87 = 0 \nonumber$ The following table gives the vertices of the third simplex.

vertex a b response $v_2$ 1.00 0 6.85 $v_4$ 1.50 0.87 7.80 $v_5$ 2.00 0 7.90

The calculation of the remaining vertices is left as an exercise. Figure $2$ shows the progress of the complete optimization. After 29 steps the simplex begins to repeat itself, circling around the optimum response of (3, 7). The size of the initial simplex ultimately limits the effectiveness and the efficiency of a fixed-size simplex searching algorithm. We can increase its efficiency by allowing the size of the simplex to expand or to contract in response to the rate at which we approach the optimum. For example, if we find that a new vertex is better than any of the vertices in the preceding simplex, then we expand the simplex further in this direction on the assumption that we are moving directly toward the optimum. Other conditions might cause us to contract the simplex—to make it smaller—to encourage the optimization to move in a different direction. We call this a variable-sized simplex optimization. Consult this chapter’s additional resources for further details of the variable-sized simplex optimization.
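The bookkeeping for a fixed-sized simplex is easy to automate. The sketch below applies rules 1–3 to the response surface in Example $1$; rule 4's boundary conditions are omitted because this particular search never leaves the region of interest. After enough reflections the simplex ends up circling the optimum near (3, 7), just as Figure $2$ shows.

```# Sketch: a fixed-sized simplex search of the surface in Example 1.
f = function(v) 5.5 + 1.5*v[1] + 0.6*v[2] -
                0.15*v[1]^2 - 0.0245*v[2]^2 - 0.0857*v[1]*v[2]

reflect = function(simplex, reject) {           # rule 2: reflect one vertex
  new_v = 2 * colMeans(simplex[-reject, , drop = FALSE]) - simplex[reject, ]
  simplex[reject, ] = new_v
  simplex
}

simplex = rbind(c(0, 0), c(1.00, 0), c(0.50, 0.87))   # initial vertices
for (i in 1:40) {
  R = apply(simplex, 1, f)
  worst = which.min(R)                          # rule 1: rank the vertices
  trial = reflect(simplex, worst)
  if (f(trial[worst, ]) < min(R[-worst])) {     # rule 3: new vertex is the worst,
    trial = reflect(simplex, order(R)[2])       #   so reject the second-worst instead
  }
  simplex = trial
}
round(simplex, 2)                               # vertices circling near (3, 7)```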
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.04%3A_Simplex_Optimization.txt
A response surface is described mathematically by an equation that relates the response to its factors. If we measure the response for several combinations of factor levels, then we can use a regression analysis to build a model of the response surface. There are two broad categories of models that we can use for a regression analysis: theoretical models and empirical models.

Theoretical Models of the Response Surface

A theoretical model is derived from the known chemical and physical relationships between the response and its factors. In spectrophotometry, for example, Beer’s law is a theoretical model that relates an analyte’s absorbance, A, to its concentration, $C_A$ $A = \epsilon b C_A \nonumber$ where $\epsilon$ is the molar absorptivity and b is the pathlength of the electromagnetic radiation passing through the sample. A Beer’s law calibration curve, therefore, is a theoretical model of a response surface. In Chapter 8 we learned how to use linear regression to build a mathematical model based on a theoretical relationship.

Empirical Models of the Response Surface

In many cases the underlying theoretical relationship between the response and its factors is unknown. We still can develop a model of the response surface if we make some reasonable assumptions about the underlying relationship between the factors and the response. For example, if we believe that the factors A and B are independent and that each has only a first-order effect on the response, then the following equation is a suitable model. $R = \beta_0 + \beta_a A + \beta_b B \nonumber$ where R is the response, A and B are the factor levels, and $\beta_0$, $\beta_a$, and $\beta_b$ are adjustable parameters whose values are determined by a linear regression analysis. Other examples of equations include those for dependent factors $R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber$ and those with higher-order terms. $R = \beta_0 + \beta_a A + \beta_b B + \beta_{aa} A^2 + \beta_{bb} B^2 \nonumber$ Each of these equations provides an empirical model of the response surface because it has no rigorous basis in a theoretical understanding of the relationship between the response and its factors. Although an empirical model may provide an excellent description of the response surface over a limited range of factor levels, it has no basis in theory and we cannot reliably extend it to unexplored parts of the response surface.

Factorial Designs

To build an empirical model we measure the response for at least two levels for each factor. For convenience we label these levels as high, $H_f$, and low, $L_f$, where f is the factor; thus $H_A$ is the high level for factor A and $L_B$ is the low level for factor B. If our empirical model contains more than one factor, then each factor’s high level is paired with both the high level and the low level for all other factors. In the same way, the low level for each factor is paired with the high level and the low level for all other factors. As shown in Figure $1$, this requires $2^k$ experiments where k is the number of factors. This experimental design is known as a $2^k$ factorial design.

Note

Another system of notation is to use a plus sign (+) to indicate a factor’s high level and a minus sign (–) to indicate its low level.

Determining the Empirical Model

A $2^2$ factorial design requires four experiments and allows for an empirical model with four variables.
With four experiments, we can use a $2^2$ factorial design to create an empirical model that includes four variables: an intercept, first-order effects in A and B, and an interaction term between A and B $R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber$ The following example walks us through the calculations needed to find this model.

Example $1$

Suppose we wish to optimize the yield of a synthesis and we expect that the amount of catalyst (factor A with units of mM) and the temperature (factor B with units of °C) are likely important factors. The response, $R$, is the reaction's yield in mg. We run four experiments and obtain the following responses:

run A B R 1 15 20 145 2 25 20 158 3 15 30 135 4 25 30 150

Determine an equation for a response surface that provides a suitable model for predicting the effect of the catalyst and temperature on the reaction's yield.

Solution

Examining the data we see from runs 1 & 2 and from runs 3 & 4 that increasing factor A while holding factor B constant results in an increase in the response; thus, we expect that higher concentrations of the catalyst have a favorable effect on the reaction's yield. We also see from runs 1 & 3 and from runs 2 & 4 that increasing factor B while holding factor A constant results in a decrease in the response; thus, we expect that an increase in temperature has an unfavorable effect on the reaction's yield. Finally, we also see from runs 1 & 2 and from runs 3 & 4 that $\Delta R$ is more positive when factor B is at its higher level; thus, we expect that there is a positive interaction between factors A and B. With four experiments, we are limited to a model that considers an intercept, first-order effects in A and B, and an interaction term between A and B $R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber$ We can work out values for this model's coefficients by solving the following set of simultaneous equations: $\beta_0 + 15 \beta_a + 20 \beta_b + (15)(20) \beta_{ab} = \beta_0 + 15 \beta_a + 20 \beta_b + 300 \beta_{ab} = 145 \nonumber$ $\beta_0 + 25 \beta_a + 20 \beta_b + (25)(20) \beta_{ab} = \beta_0 + 25 \beta_a + 20 \beta_b + 500 \beta_{ab} = 158 \nonumber$ $\beta_0 + 15 \beta_a + 30 \beta_b + (15)(30) \beta_{ab} = \beta_0 + 15 \beta_a + 30 \beta_b + 450 \beta_{ab} = 135 \nonumber$ $\beta_0 + 25 \beta_a + 30 \beta_b + (25)(30) \beta_{ab} = \beta_0 + 25 \beta_a + 30 \beta_b + 750 \beta_{ab} = 150 \nonumber$ To solve this set of equations, we subtract the first equation from the second equation and subtract the third equation from the fourth equation, leaving us with the following two equations $10 \beta_a + 200 \beta_{ab} = 13 \nonumber$ $10 \beta_a + 300 \beta_{ab} = 15 \nonumber$ Next, subtracting the first of these equations from the second gives $100 \beta_{ab} = 2 \nonumber$ or $\beta_{ab} = 0.02$. Substituting back gives $10 \beta_{a} + 200 \times 0.02 = 13 \nonumber$ or $\beta_a = 0.9$. Subtracting the equation for the first experiment from the equation for the third experiment gives $10 \beta_b + 150 \beta_{ab} = -10 \nonumber$ Substituting in 0.02 for $\beta_{ab}$ and solving gives $\beta_b = -1.3$. Finally, substituting in our values for $\beta_a$, $\beta_b$, and $\beta_{ab}$ into any of the first four equations gives $\beta_0 = 151.5$. Our final model is $R = 151.5 + 0.9 A - 1.3 B + 0.02 AB \nonumber$
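Rather than solving the four simultaneous equations by hand, we can let R do the linear algebra. In the sketch below each row of the matrix X holds the values 1, A, B, and AB for one run, so `solve(X, R)` returns the same coefficients found above.

```# Sketch: solving Example 1's four simultaneous equations with solve().
A = c(15, 25, 15, 25)
B = c(20, 20, 30, 30)
R = c(145, 158, 135, 150)

X = cbind(1, A, B, A * B)   # columns: intercept, A, B, and the interaction AB
beta = solve(X, R)
beta                         # 151.5, 0.9, -1.3, 0.02```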
When we consider how to interpret our empirical equation for the response surface, we need to consider several important limitations:

1. The intercept in our model represents a condition far removed from our experiments: in this case, the intercept gives the reaction's yield in the absence of catalyst and at a temperature of 0°C, neither of which may be a useful condition. In general, it is never a good idea to extrapolate a model far beyond the conditions used to define the model.

2. The sign for a factor's first-order effects may be misleading if there is a significant interaction between it and other factors. Although our model shows that factor B (the temperature) has a negative first-order effect, the positive interaction between the two factors means there are conditions where an increase in B will increase the reaction's yield.

3. It is difficult to judge the relative importance of two or more factors by examining their coefficients if their scales are not the same. This could present a problem, for example, if we reported the amount of catalyst (factor A) using molar concentrations as these values would be three orders of magnitude smaller than the reported temperatures.

4. When the number of variables is the same as the number of experiments, as is the case here, then there are no degrees of freedom and we have no simple way to test the model's suitability.

Determining the Empirical Model Using Coded Factor Levels

We can address two of the limitations described above by using coded factor levels in which we assign $+1$ for a high level and $-1$ for a low level. Defining the upper limit and the lower limit of the factors as $+1$ and $-1$ does two things for us: it places the intercept at the center of our experiments, which avoids the concern of extrapolating our model; and it places all factors on a common scale, which makes it easier to compare the relative effects of the factors. Coding also makes it easier to determine the empirical model's equation when we complete calculations by hand.

Example $2$

To explore the effect of temperature on a reaction, we assign 30oC to a coded factor level of $-1$ and assign a coded level $+1$ to a temperature of 50oC. What temperature corresponds to a coded level of $-0.5$ and what is the coded level for a temperature of 60oC?

Solution

The difference between $-1$ and $+1$ is 2, and the difference between 30oC and 50oC is 20oC; thus, each unit in coded form is equivalent to 10oC in uncoded form. With this information, it is easy to create a simple scale between the coded and the uncoded values, as shown in Figure $2$. A temperature of 35oC corresponds to a coded level of $-0.5$ and a coded level of $+2$ corresponds to a temperature of 60oC.

As we see in the following example, coded factor levels simplify the calculations for an empirical model.

Example $3$

Rework Example $1$ using coded factor levels.

Solution

The table below shows the original factor levels (A and B), their corresponding coded factor levels (A* and B*) and A*B*, which is the empirical model's interaction term.

run A B A* B* A*B* R 1 15 20 $-1$ $-1$ $+1$ 145 2 25 20 $+1$ $-1$ $-1$ 158 3 15 30 $-1$ $+1$ $-1$ 135 4 25 30 $+1$ $+1$ $+1$ 150

The empirical equation has four unknowns—the four beta terms—and Table $1$ describes the four experiments. We have just enough information to calculate values for $\beta_0$, $\beta_a$, $\beta_b$, and $\beta_{ab}$. When working with the coded factor levels, the values of these parameters are easy to calculate using the following equations, where n is the number of runs.
$\beta_{0} \approx b_{0}=\frac{1}{n} \sum_{i=1}^{n} R_{i} \nonumber$ $\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber$ $\beta_{b} \approx b_{b}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} R_{i} \nonumber$ $\beta_{ab} \approx b_{ab}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} R_{i} \nonumber$ Solving for the estimated parameters using the data in Table $1$ $b_{0}=\frac{145 + 158 + 135 + 150}{4} = 147 \nonumber$ $b_{a}=\frac{-145 + 158 - 135 + 150}{4} = 7 \nonumber$ $b_{b}=\frac{-145 - 158 + 135 + 150}{4} = -4.5 \nonumber$ $b_{ab}=\frac{145 - 158 - 135 + 150}{4} = 0.5 \nonumber$ leaves us with the coded empirical model for the response surface. $R = 147 + 7 A^* - 4.5 B^* + 0.5 A^* B^* \nonumber$

Note

Do you see why the equations for calculating $b_0$, $b_a$, $b_b$, and $b_{ab}$ work? Take the equation for $b_a$ as an example $\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber$ where $b_{a}=\frac{-145 + 158 - 135 + 150}{4} = 7 \nonumber$ The first and the third terms in this equation give the response when $A^*$ is at its low level, and the second and fourth terms in this equation give the response when $A^*$ is at its high level. In the two terms where $A^*$ is at its low level, $B^*$ is at both its low level (first term) and its high level (third term), and in the two terms where $A^*$ is at its high level, $B^*$ is at both its low level (second term) and its high level (fourth term). As a result, the contribution of $B^*$ is removed from the calculation. The same holds true for the effect of $A^* B^*$, although this is left for you to confirm.

We can transform the coded model into a non-coded model by noting that $A = 20 + 5A^*$ and that $B = 25 + 5B^*$, solving for $A^*$ and $B^*$, to obtain $A^* = 0.2 A - 4$ and $B^* = 0.2 B - 5$, and substituting into the coded model and simplifying. $R = 147 + 7 (0.2A - 4) - 4.5 (0.2B - 5) + 0.5(0.2A - 4)(0.2B - 5) \nonumber$ $R = 147 + 1.4A - 28 - 0.9B + 22.5 + 0.02AB - 0.5A - 0.4B + 10 \nonumber$ $R = 151.5 + 0.9A - 1.3B + 0.02AB \nonumber$ Note that this is the same equation that we derived in Example $1$ using uncoded values for the factors. Although we can convert this coded model into its uncoded form, there is no need to do so. If we want to know the response for a new set of factor levels, we just convert them into coded form and calculate the response. For example, if A is 23 and B is 22, then $A^* = 0.2 \times 23 - 4 = 0.6$ and $B^* = 0.2 \times 22 - 5 = -0.6$ and $R = 147 + 7 \times 0.6 - 4.5 \times (-0.6) + 0.5 \times 0.6 \times (-0.6) = 153.72 \approx 154 \text{ mg} \nonumber$
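Because the coded factor levels are just vectors of $+1$ and $-1$ values, these calculations reduce to a few lines of R. The following sketch reproduces Example $3$'s coefficients using the averaging formulas above.

```# Sketch: the coded-level calculations from Example 3 as vector arithmetic.
A_star = c(-1, 1, -1, 1)
B_star = c(-1, -1, 1, 1)
R      = c(145, 158, 135, 150)

b0  = mean(R)                    # 147
ba  = mean(A_star * R)           # 7
bb  = mean(B_star * R)           # -4.5
bab = mean(A_star * B_star * R)  # 0.5
c(b0, ba, bb, bab)```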
We can extend this approach to any number of factors. For a system with three factors—A, B, and C—we can use a $2^3$ factorial design to determine the parameters in the following empirical model $R = \beta_0 + \beta_a A + \beta_b B + \beta_c C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{abc} ABC \nonumber$ where A, B, and C are the factor levels. The terms $\beta_0$, $\beta_a$, $\beta_b$, $\beta_c$, $\beta_{ab}$, $\beta_{ac}$, $\beta_{bc}$, and $\beta_{abc}$ are estimated using the following eight equations. $\beta_{0} \approx b_{0}=\frac{1}{n} \sum_{i=1}^{n} R_{i} \nonumber$ $\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber$ $\beta_{b} \approx b_{b}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} R_{i} \nonumber$ $\beta_{ab} \approx b_{ab}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} R_{i} \nonumber$ $\beta_{c} \approx b_{c}=\frac{1}{n} \sum_{i=1}^{n} C^*_{i} R_{i} \nonumber$ $\beta_{ac} \approx b_{ac}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} C^*_{i} R_{i} \nonumber$ $\beta_{bc} \approx b_{bc}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} C^*_{i} R_{i} \nonumber$ $\beta_{abc} \approx b_{abc}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} C^*_{i} R_{i} \nonumber$

Example $4$

The following table lists the uncoded factor levels, the coded factor levels, and the responses for a $2^3$ factorial design.

run A B C A* B* C* A*B* A*C* B*C* A*B*C* R 1 15 30 45 $+1$ $+1$ $+1$ $+1$ $+1$ $+1$ $+1$ 137.25 2 15 30 15 $+1$ $+1$ $-1$ $+1$ $-1$ $-1$ $-1$ 54.75 3 15 10 45 $+1$ $-1$ $+1$ $-1$ $+1$ $-1$ $-1$ 73.75 4 15 10 15 $+1$ $-1$ $-1$ $-1$ $-1$ $+1$ $+1$ 30.25 5 5 30 45 $-1$ $+1$ $+1$ $-1$ $-1$ $+1$ $-1$ 61.75 6 5 30 15 $-1$ $+1$ $-1$ $-1$ $+1$ $-1$ $+1$ 30.25 7 5 10 45 $-1$ $-1$ $+1$ $+1$ $-1$ $-1$ $+1$ 41.25 8 5 10 15 $-1$ $-1$ $-1$ $+1$ $+1$ $+1$ $-1$ 18.75

Determine the coded empirical model for the response surface based on the following equation. $R = \beta_0 + \beta_a A + \beta_b B + \beta_c C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{abc} ABC \nonumber$ What is the expected response when A is 10, B is 15, and C is 50?

Solution

The equation for the empirical model has eight unknowns—the eight beta terms—and the table above describes eight experiments. We have just enough information to calculate values for $\beta_0$, $\beta_a$, $\beta_b$, $\beta_c$, $\beta_{ab}$, $\beta_{ac}$, $\beta_{bc}$, and $\beta_{abc}$; these values are $b_{0}=\frac{1}{8} \times(137.25+54.75+73.75+30.25+61.75+30.25+41.25+18.75 )=56.0 \nonumber$ $b_{a}=\frac{1}{8} \times(137.25+54.75+73.75+30.25-61.75-30.25-41.25-18.75 )=18.0 \nonumber$ $b_{b}=\frac{1}{8} \times(137.25+54.75-73.75-30.25+61.75+30.25-41.25-18.75 )=15.0 \nonumber$ $b_{c}=\frac{1}{8} \times(137.25-54.75+73.75-30.25+61.75-30.25+41.25-18.75 )=22.5 \nonumber$ $b_{ab}=\frac{1}{8} \times(137.25+54.75-73.75-30.25-61.75-30.25+41.25+18.75 )=7.0 \nonumber$ $b_{ac}=\frac{1}{8} \times(137.25-54.75+73.75-30.25-61.75+30.25-41.25+18.75 )=9.0 \nonumber$ $b_{bc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25+61.75-30.25-41.25+18.75 )=6.0 \nonumber$ $b_{abc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25-61.75+30.25+41.25-18.75 )=3.75 \nonumber$ The coded empirical model, therefore, is $R = 56.0 + 18.0 A^* + 15.0 B^* + 22.5 C^* + 7.0 A^* B^* + 9.0 A^* C^* + 6.0 B^* C^* + 3.75 A^* B^* C^* \nonumber$ To find the response when A is 10, B is 15, and C is 50, we first convert these values into their coded form. Figure $3$ helps us make the appropriate conversions; thus, A* is 0, B* is $-0.5$, and C* is $+1.33$. Substituting back into the empirical model gives a response of $R = 56.0 + 18.0 (0) + 15.0 (-0.5) + 22.5 (+1.33) + 7.0 (0) (-0.5) + 9.0 (0) (+1.33) + 6.0 (-0.5) (+1.33) + 3.75 (0) (-0.5) (+1.33) = 74.435 \approx 74.4 \nonumber$

Evaluating an Empirical Model

A $2^k$ factorial design can model only a factor’s first-order effect, including first-order interactions, on the response. A $2^2$ factorial design, for example, includes each factor’s first-order effect ($\beta_a$ and $\beta_b$) and a first-order interaction between the factors ($\beta_{ab}$).
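For a design of this size it is often easier to let `lm()` estimate the coded model. Because eight runs determine eight parameters exactly, the regression coefficients below reproduce the hand calculations in Example $4$; Chapter 9.6 applies the same approach to a real data set.

```# Sketch: Example 4's coded empirical model fit with lm().
A = c(1, 1, 1, 1, -1, -1, -1, -1)
B = c(1, 1, -1, -1, 1, 1, -1, -1)
C = c(1, -1, 1, -1, 1, -1, 1, -1)
R = c(137.25, 54.75, 73.75, 30.25, 61.75, 30.25, 41.25, 18.75)

ex4_fit = lm(R ~ A * B * C)   # all main effects and interaction terms
coef(ex4_fit)                  # 56.0, 18.0, 15.0, 22.5, 7.0, 9.0, 6.0, 3.75```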
A $2^k$ factorial design cannot model higher-order effects because there is insufficient information. Here is a simple example that illustrates the problem. Suppose we need to model a system in which the response is a function of a single factor, A. Figure $\PageIndex{4a}$ shows the result of an experiment using a $2^1$ factorial design. The only empirical model we can fit to the data is a straight line. $R = \beta_0 + \beta_a A \nonumber$ If the actual response is a curve instead of a straight-line, then the empirical model is in error. To see evidence of curvature we must measure the response for at least three levels for each factor. We can fit the $3^1$ factorial design in Figure $\PageIndex{4b}$ to an empirical model that includes second-order factor effects. $R = \beta_0 + \beta_a A + \beta_{aa} A^2 \nonumber$ In general, an n-level factorial design can model single-factor and interaction terms up to the (n – 1)th order. We can judge the effectiveness of a first-order empirical model by measuring the response at the center of the factorial design. If there are no higher-order effects, then the average response of the trials in a $2^k$ factorial design should equal the measured response at the center of the factorial design. To account for the influence of random errors we make several determinations of the response at the center of the factorial design and establish a suitable confidence interval. If the difference between the two responses is significant, then a first-order empirical model probably is inappropriate.

Note

One of the advantages of working with a coded empirical model is that b0 is the average response of the $2^k$ trials in a $2^k$ factorial design.

Example $5$

One method for the quantitative analysis of vanadium is to acidify the solution by adding H2SO4 and oxidizing the vanadium with H2O2 to form a red-brown soluble compound with the general formula (VO)2(SO4)3. Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the solution’s absorbance, reporting the following results for a $2^2$ factorial design [Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 62, 560–563].

H2SO4 H2O2 absorbance $+1$ $+1$ 0.330 $+1$ $-1$ 0.359 $-1$ $+1$ 0.293 $-1$ $-1$ 0.420

Four replicate measurements at the center of the factorial design give absorbances of 0.334, 0.336, 0.346, and 0.323. Determine if a first-order empirical model is appropriate for this system. Use a 90% confidence interval when accounting for the effect of random error.

Solution

We begin by determining the confidence interval for the response at the center of the factorial design. The mean response is 0.335 with a standard deviation of 0.0094, which gives a 90% confidence interval of $\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=0.335 \pm \frac{(2.35)(0.0094)}{\sqrt{4}}=0.335 \pm 0.011 \nonumber$ The average response, $\overline{R}$, from the factorial design is $\overline{R}=\frac{0.330+0.359+0.293+0.420}{4}=0.350 \nonumber$ Because $\overline{R}$ exceeds the confidence interval’s upper limit of 0.346, we can reasonably assume that a $2^2$ factorial design and a first-order empirical model are inappropriate for this system at the 90% confidence level.

Central Composite Designs

One limitation to a $3^k$ factorial design, which would allow us to use an empirical model with second-order effects, is the number of trials we need to run. As shown in Figure $5$, a $3^2$ factorial design requires 9 trials. This number increases to 27 for three factors and to 81 for four factors.
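The arithmetic in Example $5$ is easy to check in R; the sketch below uses `qt()` to supply the two-tailed t value for a 90% confidence interval with three degrees of freedom.

```# Sketch: the center-point check from Example 5.
center    = c(0.334, 0.336, 0.346, 0.323)   # replicates at the design's center
factorial = c(0.330, 0.359, 0.293, 0.420)   # responses from the 2^2 design

ci = mean(center) + c(-1, 1) * qt(0.95, df = 3) * sd(center) / sqrt(length(center))
ci                # approximately 0.324 to 0.346
mean(factorial)   # 0.350, which falls outside the confidence interval```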
A more efficient experimental design for a system that contains more than two factors is a central composite design, two examples of which are shown in Figure $6$. The central composite design consists of a $2^k$ factorial design, which provides data to estimate each factor’s first-order effect and interactions between the factors, and a star design that has $2k + 1$ points, which provides data to estimate second-order effects. Although a central composite design for two factors requires the same number of trials, nine, as a $3^2$ factorial design, it requires only 15 trials when using three factors and only 25 trials when using four factors. See this chapter’s additional resources for details about the central composite designs.
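The sketch below assembles the coded design points for a two-factor central composite design (a $2^2$ factorial, four axial points, and a center point), using the axial distance of ±1.414 that also appears in the design of Chapter 9.6.

```# Sketch: coded factor levels for a two-factor central composite design.
factorial_pts = expand.grid(A = c(-1, 1), B = c(-1, 1))       # 2^2 factorial
axial_pts = data.frame(A = c(-1.414, 1.414, 0, 0),
                       B = c(0, 0, -1.414, 1.414))            # axial (star) points
center_pt = data.frame(A = 0, B = 0)

ccd = rbind(factorial_pts, axial_pts, center_pt)
ccd          # nine trials, the same number as a 3^2 factorial design```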
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.05%3A_Mathematical_Models_of_Response_Surfaces.txt
The calculations for determining an empirical model of a response surface using a \(2^k\) factorial design, as outlined in Section 9.5, are relatively easy to complete for a small number of factors and for experimental designs without replication where the number of experiments is equal to the number of parameters in the model. If we wish to work with more factors, if we wish to explore other experimental designs, and if we wish to build replication into the experimental design so that we can better evaluate our empirical model, then we need to do so by building a regression model, as we did earlier in Chapter 8.

Creating Empirical Models Using R

To illustrate how we can use R to create an empirical model, let's use data from an experiment exploring how to optimize a Grignard reaction leading to the synthesis of 1-benzylcyclopentan-1-ol [Bouzidi, N.; Gozzi, C. J. Chem. Educ. 2008, 85, 1544–1547]. In this study, students begin by studying the effect of six possible factors on the reaction's yield: the volume of diethyl ether used to prepare a solution of benzyl chloride, \(x_1\), the time over which benzyl chloride is added to the reaction mixture, \(x_2\), the stirring time used to prepare the benzyl magnesium chloride, \(x_3\), the relative excess of benzyl chloride to cyclopentanone, \(x_4\), the relative excess of magnesium turnings to benzyl chloride, \(x_5\), and the reaction time, \(x_6\). With six factors to consider, a full \(2^k\) factorial design requires 64 experiments, which is labor intensive. Instead, the students begin with a screening study that uses eight experiments to model only the first-order effects of the six factors, as outlined in the following two tables.

Table \(1\): Factor Levels for Screening Study factor low level high level \(x_1\): volume of diethyl ether in mL 18 50 \(x_2\): addition time in min 60 90 \(x_3\): stirring time in min 20 40 \(x_4\): relative excess of benzyl chloride as % 20 30 \(x_5\): relative excess of magnesium as % 12.5 25 \(x_6\): reaction time in min 30 60

Table \(2\): Experimental Design Showing Coded Factor Levels and Responses run \(x_1\) \(x_2\) \(x_3\) \(x_4\) \(x_5\) \(x_6\) percent yield 1 \(+1\) \(+1\) \(+1\) \(-1\) \(+1\) \(-1\) 72 2 \(-1\) \(+1\) \(+1\) \(+1\) \(-1\) \(+1\) 33 3 \(-1\) \(-1\) \(+1\) \(+1\) \(+1\) \(-1\) 29 4 \(+1\) \(-1\) \(-1\) \(+1\) \(+1\) \(+1\) 74 5 \(-1\) \(+1\) \(-1\) \(-1\) \(+1\) \(+1\) 31 6 \(+1\) \(+1\) \(+1\) \(-1\) \(-1\) \(+1\) 52 7 \(+1\) \(-1\) \(-1\) \(+1\) \(-1\) \(-1\) 47 8 \(-1\) \(-1\) \(-1\) \(-1\) \(-1\) \(-1\) 27

To carry out the calculations in R we first create vectors for the coded factor levels and the responses.

```x1 = c(1,-1,-1,1,-1,1,1,-1)
x2 = c(1,1,-1,-1,1,-1,1,-1)
x3 = c(1,1,1,-1,-1,1,-1,-1)
x4 = c(-1,1,1,1,-1,-1,1,-1)
x5 = c(1,-1,1,1,1,-1,-1,-1)
x6 = c(-1,1,-1,1,1,1,-1,-1)
yield = c(72,33,29,74,31,52,47,27)```

Next, we use the `lm()` function to build a linear regression model that includes just the first-order effects of the factors (see Chapter 8.5 to review the syntax for this function), and the `summary()` function to review the resulting model.
```screening = lm(yield ~ x1 + x2 + x3 + x4 + x5 + x6) summary(screening)``` `Call: ` `lm(formula = yield ~ x1 + x2 + x3 + x4 + x5 + x6) ` `Residuals:` ` 1 2 3 4 5 6 7 8 ` ` 5.875 5.875 -5.875 5.875 -5.875 -5.875 -5.875 5.875 ` `Coefficients:` ` Estimate Std.Error t value Pr(>|t|)` ` (Intercept) 45.625 5.875 7.766 0.0815 .` ` x1 15.625 5.875 2.660 0.2290` ` x2 0.125 5.875 0.021 0.9865` ` x3 0.875 5.875 0.149 0.9059` ` x4 0.125 5.875 0.021 0.9865` ` x5 5.875 5.875 1.000 0.5000` ` x6 1.875 5.875 0.319 0.8033` ` ---` `Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ` `Residual standard error: 16.62 on 1 degrees of freedom ` `Multiple R-squared: 0.8913, Adjusted R-squared: 0.239 ` `F-statistic: 1.366 on 6 and 1 DF, p-value: 0.5749` Because we have one more experiment than there are variables in our empirical model, the summary provides some information on the significance of the model's parameters; however, with just one degree of freedom this information is not really reliable. In addition to the intercept, the three factors with the largest coefficients are the volume of diethyl ether, \(x_1\), the relative excess of magnesium, \(x_5\), and the reaction time, \(x_6\). Having identified three factors for further investigation, the students use a 23 factorial design to explore interactions between these three factors using the experimental design in the following table (see Table \(1\) for the actual factor levels. Table \(3\): Coded Factor Levels and Response for a \(2^3\) Factorial Design run \(x_1\) \(x_5\) \(x_6\) percent yield 1 \(-1\) \(-1\) \(-1\) 28.5 2 \(+1\) \(-1\) \(-1\) 55.5 3 \(-1\) \(+1\) \(-1\) 38 4 \(+1\) \(+1\) \(-1\) 68 5 \(-1\) \(-1\) \(+1\) 49 6 \(+1\) \(-1\) \(+1\) 66 7 \(-1\) \(+1\) \(+1\) 31.5 8 \(+1\) \(+1\) \(+1\) 72 As before, we create vectors for our factors and the response and then use the `lm()` and the `summary()` functions to complete and evaluate the resulting empirical model. ```x1 = c(-1,1,-1,1,-1,1,-1,1) x5 = c(-1,-1,1,1,-1,-1,1,1) x6 = c(-1,-1,-1,-1,1,1,1,1) yield = c(28.5,55.5,38,68,49,66,31.5,72) fact23 = lm(yield ~ x1 * x5 * x6) summary(fact23)``` `Call: ` `lm(formula = yield ~ x1 * x5 * x6) ` `Residuals: ` `ALL 8 residuals are 0: no residual degrees of freedom!` `Coefficients:` ` Estimate Std. Error t value Pr(>|t|)` `(Intercept) 51.0625 NA NA NA ` `x1 14.3125 NA NA NA ` `x5 1.3125 NA NA NA ` `x6 3.5625 NA NA NA ` `x1:x5 3.3125 NA NA NA ` `x1:x6 0.0625 NA NA NA ` `x5:x6 -4.1875 NA NA NA ` `x1:x5:x6 2.5625 NA NA NA ` `Residual standard error: NaN on 0 degrees of freedom ` `Multiple R-squared: 1, Adjusted R-squared: NaN ` `F-statistic: NaN on 7 and 0 DF, p-value: NA` With eight experiments and eight variables in the empirical model, we do not have any ability to evaluate the model statistically. Of the three first-order effects, we see that the volume of diethyl ether, \(x_1\), and reaction time, \(x_6\), are more important than the relative excess of magnesium, \(x_5\). We also see that the interaction between \(x_1\) and \(x_5\) is positive (high values for both favor an increased yield) and that the interaction between \(x_5\) and \(x_6\) is negative (yields improve when one factor is high and the other is low). Finally, the students use a central composite model—which allows for adding second-order effects and curvature in the response surface—to study the effect of the volume of diethyl ether, \(x_1\), and reaction time, \(x_6\), on the percent yield. 
The relative excess of magnesium, \(x_5\), was set at its high level for this study because this provides for greater percent yields (compare the results for runs 4 and 6 to the results for runs 3 and 5 in Table \(3\)). The following table provides the experimental design.

Table \(4\): Coded Factor Levels and Responses for a Central Composite Experimental Design run \(x_1\) \(x_6\) percent yield 1 \(-1\) \(-1\) 39 2 \(+1\) \(-1\) 66.5 3 \(-1\) \(+1\) 22 4 \(+1\) \(+1\) 72.5 5 \(-1.414\) 0 10.5 6 \(+1.414\) 0 72.5 7 0 \(-1.414\) 38 8 0 \(+1.414\) 70 9 0 0 59 10 0 0 57 11 0 0 54.5 12 0 0 63

As before, we create vectors for our factors and the response, and then use the `lm()` and the `summary()` functions to complete and evaluate the resulting empirical model.

```x1 = c(-1,1,-1,1,-1.414,1.414,0,0,0,0,0,0)
x6 = c(-1,-1,1,1,0,0,-1.414,1.414,0,0,0,0)
yield = c(39,66.5,22,72.5,10.5,72.5,38,70,59,57,54.5,63)
centcomp = lm(yield ~ x1 * x6 + I(x1^2) + I(x6^2))
summary(centcomp)```

`Call: ` `lm(formula = yield ~ x1 * x6 + I(x1^2) + I(x6^2)) ` `Residuals:` ` Min 1Q Median 3Q Max ` `-11.0724 -4.0794 -0.3938 5.2056 9.3695 ` `Coefficients:` ` Estimate Std. Error t value Pr(>|t|)` `(Intercept) 58.375 4.360 13.389 1.07e-05 *** ` `x1 20.712 3.083 6.718 0.000529 *** ` `x6 4.282 3.083 1.389 0.214267 ` `I(x1^2) -7.876 3.447 -2.285 0.062398 . ` `I(x6^2) -1.625 3.447 -0.471 0.654130 ` `x1:x6 5.750 4.360 1.319 0.235317 ` `--- ` `Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ` `Residual standard error: 8.72 on 6 degrees of freedom ` `Multiple R-squared: 0.9, Adjusted R-squared: 0.8167 ` `F-statistic: 10.8 on 5 and 6 DF, p-value: 0.005835`

With 12 experiments and just six variables, our model has sufficient degrees of freedom to suggest that it provides a reasonable picture of how the reaction time and the volume of diethyl ether affect the reaction's yield even if the residual errors in the responses range from a minimum of -11.07 to a maximum of +9.37. The middle 50% of residual errors range between -4.1 and +5.2 with a median residual error of -0.4. We can compare the actual experimental yields to the yields predicted by the model by combining them into a data frame.

```centcomp_results = data.frame(yield, centcomp$fitted.values, yield - centcomp$fitted.values)
colnames(centcomp_results) = c("expt yield", "pred yield", "residual error")
centcomp_results```

` expt yield pred yield residual error ` `1 39.0 29.63046 9.3695385 ` `2 66.5 59.55372 6.9462836 ` `3 22.0 26.69375 -4.6937546 ` `4 72.5 79.61701 -7.1170095 ` `5 10.5 13.34036 -2.8403635 ` `6 72.5 71.91285 0.5871540 ` `7 38.0 49.07236 -11.0723566 ` `8 70.0 61.18085 8.8191471 ` `9 59.0 58.37466 0.6253402 ` `10 57.0 58.37466 -1.3746598 ` `11 54.5 58.37466 -3.8746598 ` `12 63.0 58.37466 4.6253402`

Using R to Visualize the Response Surface

The `plot3D` package provides several functions that we can use to visualize a response surface defined by two factors. Here we consider three functions, one for drawing a two-dimensional contour plot of the response surface, one for drawing a three-dimensional surface plot of the response, and one for plotting a three-dimensional scatter plot of the responses. To begin, we use the `library()` function to make the package available to us (note: you may need to first install the `plot3D` package; see Chapter 1 for details on how to do this).
`library(plot3D)`

Let's begin by creating a two-dimensional contour plot of our response surface that places the volume of diethyl ether, \(x_1\), on the x-axis and the reaction time, \(x_6\), on the y-axis, and using calculated responses from the model to draw the contour lines. First, we create vectors with values for the x-axis and the y-axis

```x1_axis = seq(-1.5, 1.5, 0.1)
x6_axis = seq(-1.5, 1.5, 0.1)```

Next, we create a function that uses our empirical model to calculate the response for every combination of `x1_axis` and `x6_axis`

`response = function(x,y){coef(centcomp)[1] + coef(centcomp)[2]*x + coef(centcomp)[3]*y + coef(centcomp)[4]*x^2 + coef(centcomp)[5]*y^2 + coef(centcomp)[6]*x*y}`

where `coef(centcomp)[i]` is used to extract the ith coefficient from our empirical model. Now we use R's `outer()` function to calculate the response for every combination of the variables `x1_axis` and `x6_axis`

`z_axis = outer(X = x1_axis, Y = x6_axis, response)`

Finally, we use the `contour2D()` function to create the contour plot in Figure \(1\).

`contour2D(x = x1_axis, y = x6_axis, z = z_axis, xlab = "x1: volume", ylab = "x6: time", clab = "yield")`

Next, let's create a three-dimensional surface plot of our response surface that places the volume of diethyl ether, \(x_1\), on the x-axis, the reaction time, \(x_6\), on the y-axis, and the calculated responses from the model on the z-axis. For this, we use the `persp3D()` function

`persp3D(x = x1_axis, y = x6_axis, z = z_axis, ticktype = "detailed", phi = 15, theta = 25, xlab = "x1: volume", ylab = "x6: time", zlab = "yield", clab = "yield", contour = TRUE, cex.axis = 0.75, cex.lab = 0.75)`

where `phi` and `theta` adjust the angle at which we view the response surface—you will have to play with these values to create a plot that is pleasing to look at—and `ticktype` controls how much information is displayed on the axes. The `cex.axis` and `cex.lab` commands adjust the size of the text displayed on the axes, and `contour = TRUE` places a contour plot on the figure's bottom side. Figure \(2\) shows the result.

Finally, let's use the `type = "h"` option to overlay a scatterplot of the data used to build the empirical model on top of the three-dimensional surface plot.

`scatter3D(x = x1, y = x6, z = yield, add = TRUE, type = "h", pch = 19, col = "black", lwd = 2, colkey = FALSE)`

Figure \(3\) shows the result using the data from Table \(4\). Although the general shape of the response surface is consistent with the underlying data, there is sufficient experimental uncertainty in the results of the four replicate experiments used to create this empirical model, as shown by the standard deviation for runs 9—12, to explain why some of the predicted yields have large errors.

`sd(yield[9:12]) ` `[1] 3.591077`
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.06%3AUsing_R_to_Model_a_Response_Surface.txt
1. For each of the following equations determine the optimum response using a one-factor-at-a-time searching algorithm. Begin the search at (0,0) by first changing factor A, using a step-size of 1 for both factors. The boundary conditions for each response surface are 0 ≤ A ≤ 10 and 0 ≤ B ≤ 10. Continue the search through as many cycles as necessary until you find the optimum response. Compare your optimum response for each equation to the true optimum. Note: These equations are from Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987, and pseudo-three dimensional plots of the response surfaces can be found in their Figures 11.4, 11.5 and 11.14. (a) $R = 1.68 + 0.24A + 0.56B - 0.04A^2 - 0.04B^2$ $\mu_{opt} = (3,7)$ (b) $R = 4.0 - 0.4A + 0.08AB$ $\mu_{opt} = (10,10)$ (c) $R = 3.264 + 1.537A + 0.5664B - 0.1505A^2 - 0.02734B^2 - 0.05785AB$ $\mu_{opt} = (3.91,6.22)$

2. Use a fixed-sized simplex searching algorithm to find the optimum response for the equation in Problem 1c. For the first simplex, set one vertex at (0,0) with step sizes of one. Compare your optimum response to the true optimum.

3. A $2^k$ factorial design was used to determine the equation for the response surface in Problem 1b. The uncoded levels, coded levels, and the responses are shown in the following table. Determine the uncoded equation for the response surface. A B A* B* response 8 8 +1 +1 5.92 8 2 +1 –1 2.08 2 8 –1 +1 4.48 2 2 –1 –1 3.52

4. Koscielniak and Parczewski investigated the influence of Al on the determination of Ca by atomic absorption spectrophotometry using the $2^k$ factorial design shown in the following table [data from Koscielniak, P.; Parczewski, A. Anal. Chim. Acta 1983, 153, 111–119]. [Ca2+] (ppm) [Al3+] (ppm) Ca* Al* response 10 160 +1 +1 54.92 10 0 +1 –1 98.44 4 16 –1 +1 19.18 4 0 –1 –1 38.52 (a) Determine the uncoded equation for the response surface. (b) If you wish to analyze a sample that is 6.0 ppm Ca2+, what is the maximum concentration of Al3+ that can be present if the error in the response must be less than 5.0%?

5. Strange [Strange, R. S. J. Chem. Educ. 1990, 67, 113–115] studied a chemical reaction using the following $2^3$ factorial design. factor high (+1) level low (–1) level X: temperature 140oC 120oC Y: catalyst type B type A Z: [reactant] 0.50 M 0.25 M run X* Y* Z* % yield 1 –1 –1 –1 28 2 +1 –1 –1 17 3 –1 +1 –1 41 4 +1 +1 –1 34 5 –1 –1 +1 56 6 +1 –1 +1 51 7 –1 +1 +1 42 8 +1 +1 +1 36 (a) Determine the coded equation for this data. (b) If $\beta$ terms of less than $\pm 1$ are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation. (c) Explain why the coded equation for this data can not be transformed into an uncoded form. (d) Which is the better catalyst, A or B? (e) What is the yield if the temperature is set to 125oC, the concentration of the reactant is 0.45 M, and we use the appropriate catalyst?

6. Pharmaceutical tablets coated with lactose often develop a brown discoloration. The primary factors that affect the discoloration are temperature, relative humidity, and the presence of a base acting as a catalyst. The following data have been reported for a $2^3$ factorial design [Armstrong, N. A.; James, K. C. Pharmaceutical Experimental Design and Interpretation, Taylor and Francis: London, 1996 as cited in Gonzalez, A. G. Anal. Chim. Acta 1998, 360, 227–241].
factor high (+1) level low (–1) level
X: benzocaine present absent
Y: temperature 40°C 25°C
Z: relative humidity 75% 50%

run X* Y* Z* color (arb. unit)
1 –1 –1 –1 1.55
2 +1 –1 –1 5.40
3 –1 +1 –1 3.50
4 +1 +1 –1 6.75
5 –1 –1 +1 2.45
6 +1 –1 +1 3.60
7 –1 +1 +1 3.05
8 +1 +1 +1 7.10

(a) Determine the coded equation for this data.

(b) If $\beta$ terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation.

7. The following data for a $2^3$ factorial design were collected during a study of the effect of temperature, pressure, and residence time on the % yield of a reaction [Akhnazarova, S.; Kafarov, V. Experimental Optimization in Chemistry and Chemical Engineering, MIR Publishers: Moscow, 1982 as cited in Gonzalez, A. G. Anal. Chim. Acta 1998, 360, 227–241].

factor high (+1) level low (–1) level
X: temperature 200°C 100°C
Y: pressure 0.6 MPa 0.2 MPa
Z: residence time 20 min 10 min

run X* Y* Z* % yield
1 –1 –1 –1 2
2 +1 –1 –1 6
3 –1 +1 –1 4
4 +1 +1 –1 8
5 –1 –1 +1 10
6 +1 –1 +1 18
7 –1 +1 +1 8
8 +1 +1 +1 12

(a) Determine the coded equation for this data.

(b) If $\beta$ terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation.

(c) Three runs at the center of the factorial design—a temperature of 150°C, a pressure of 0.4 MPa, and a residence time of 15 min—give percent yields of 8%, 9%, and 8.8%. Determine if a first-order empirical model is appropriate for this system at $\alpha = 0.05$.

8. Duarte and colleagues used a factorial design to optimize a flow-injection analysis method for determining penicillin [Duarte, M. M. M. B.; de O. Netro, G.; Kubota, L. T.; Filho, J. L. L.; Pimentel, M. F.; Lima, F.; Lins, V. Anal. Chim. Acta 1997, 350, 353–357]. Three factors were studied: reactor length, carrier flow rate, and sample volume, with the high and low values summarized in the following table.

factor high (+1) level low (–1) level
X: reactor length 1.3 cm 2.0 cm
Y: carrier flow rate 1.6 mL/min 2.2 mL/min
Z: sample volume 100 μL 150 μL

The authors determined the optimum response using two criteria: the greatest sensitivity, as determined by the change in potential for the potentiometric detector, and the largest sampling rate. The following table summarizes their optimization results.

run X* Y* Z* $\Delta E$ (mV) sample/h
1 –1 –1 –1 37.45 21.5
2 +1 –1 –1 31.70 26.0
3 –1 +1 –1 32.10 30.0
4 +1 +1 –1 27.30 33.0
5 –1 –1 +1 39.85 21.0
6 +1 –1 +1 32.85 19.5
7 –1 +1 +1 35.00 30.0
8 +1 +1 +1 32.15 34.0

(a) Determine the coded equation for the response surface where $\Delta E$ is the response.

(b) Determine the coded equation for the response surface where sample/h is the response.

(c) Based on the coded equations in (a) and in (b), do conditions that favor sensitivity also improve the sampling rate?

(d) What conditions would you choose if your goal is to optimize both sensitivity and sampling rate?

9. Here is a challenge! McMinn, Eatherton, and Hill investigated the effect of five factors for optimizing an H2-atmosphere flame ionization detector using a $2^5$ factorial design [McMinn, D. G.; Eatherton, R. L.; Hill, H. H. Anal. Chem. 1984, 56, 1293–1298].
The factors and their levels were

factor high (+1) level low (–1) level
A: H2 flow rate 1460 mL/min 1382 mL/min
B: SiH4 20.0 ppm 12.2 ppm
C: O2 + N2 flow rate 255 mL/min 210 mL/min
D: O2/N2 ratio 1.36 1.19
E: electrode height 75 (arb. unit) 55 (arb. unit)

The coded (“+” = +1, “–” = –1) factor levels and responses, R, for the 32 experiments are shown in the following table.

run A* B* C* D* E* R run A* B* C* D* E* R
1 0.36 17 + 0.39
2 + 0.51 18 + + 0.45
3 + 0.15 19 + + 0.32
4 + + 0.39 20 + + + 0.25
5 + 0.79 21 + + 0.18
6 + + 0.83 22 + + + 0.29
7 + + 0.74 23 + + + 0.07
8 + + + 0.69 24 + + + + 0.19
9 + 0.60 25 + + 0.53
10 + + 0.82 26 + + + 0.60
11 + + 0.42 27 + + + 0.36
12 + + + 0.59 28 + + + + 0.43
13 + + 0.96 29 + + + 0.23
14 + + + 0.87 30 + + + + 0.51
15 + + + 0.76 31 + + + + 0.13
16 + + + + 0.74 32 + + + + + 0.43

(a) Determine the coded equation for this response surface, ignoring $\beta$ terms less than $\pm 0.03$.

(b) A simplex optimization of this system finds optimal values for the factors of A = 2278 mL/min, B = 9.90 ppm, C = 260.6 mL/min, and D = 1.71. The value of E was maintained at its high level. Are these values consistent with your analysis of the factorial design?

10. A good empirical model provides an accurate picture of the response surface over the range of factor levels within the experimental design. The same model, however, may yield an inaccurate prediction for the response at other factor levels. For this reason, an empirical model is tested before it is extrapolated to conditions other than those used in determining the model. For example, Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the absorbance of solutions of vanadium using the following central composite design [data from Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 62, 560–563].

run drops of 1% H2SO4 drops of 20% H2O2
1 15 22
2 10 20
3 20 20
4 8 15
5 15 15
6 15 15
7 15 15
8 15 15
9 22 15
10 10 10
11 20 10
12 15 8

The reaction of H2SO4 and H2O2 generates a red-brown solution whose absorbance is measured at a wavelength of 450 nm. A regression analysis on their data yields the following uncoded equation for the response (absorbance $\times$ 1000).

$R = 835.90 - 36.82X_1 - 21.34 X_2 + 0.52 X_1^2 + 0.15 X_2^2 + 0.98 X_1 X_2 \nonumber$

where X1 is the drops of H2O2, and X2 is the drops of H2SO4. Calculate the predicted absorbances for 10 drops of H2O2 and 0 drops of H2SO4, 0 drops of H2O2 and 10 drops of H2SO4, and for 0 drops of each reagent. Are these results reasonable? Explain. What does your answer tell you about this empirical model?
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.07%3AExercises.txt
When we try to calibrate an analytical method (Chapter 8) or to optimize an analytical system (Chapter 9), our ability to do so successfully is limited by the uncertainty, or noise, in our measurements and by background signals that interfere with our ability to measure the signal of interest to us. In this chapter we will consider ways to clean up our data by decreasing the contribution of noise to our measurements and by correcting for the presence of background signals. 10: Cleaning Up Data When we make a measurement it is the sum of two parts, a determinate, or fixed, contribution that arises from the analyte and an indeterminate, or random, contribution that arises from uncertainty in the measurement process. We call the first of these the signal and we call the latter the noise. There are two broad categories of noise: that associated with obtaining samples and that associated with making measurements. Our interest here is in the latter. What is Noise? Noise is a random event characterized by a mean and standard deviation. There are many types of noise, but we will limit ourselves for now to noise that is stationary, in that its mean and its standard deviation are independent of time, and that is homoscedastic, in that its mean and its variance (and standard deviation) are independent of the signal's magnitude. Figure $\PageIndex{1a}$ shows an example of a noisy signal that meets these criteria. The x-axis here is shown as time—perhaps a chromatogram—but other units, such as wavelength or potential, are possible. Figure $\PageIndex{1b}$ shows the underlying noise and Figure $\PageIndex{1c}$ shows the underlying signal. Note that the noise in Figure $\PageIndex{1b}$ appears consistent in its central tendency (mean) and its spread (variance) along the x-axis and is independent of the signal's strength. How Do We Characterize the Signal and the Noise? Although we characterize noise by its mean and its standard deviation, the most important benchmark is the signal-to-noise ratio, $S/N$, which we define as $S/N = \frac{S_\text{analyte}}{s_\text{noise}} \nonumber$ where $S_\text{analyte}$ is the signal's value at a particular location on the x-axis and $s_\text{noise}$ is the standard deviation of the noise using a signal-free portion of the data. As general rules-of-thumb, we can measure the signal with some confidence when $S/N \ge 3$ and we can detect the signal with some confidence when $3 \ge S/N \ge 2$. For the data in Figure $1$, and using the information in the figure caption, the signal-to-noise ratios are, from left-to-right, 10, 6, and 3. Note To measure the signal with confidence implies we can use the signal's value in a calculation, such as constructing a calibration curve. To detect the signal with confidence means we are certain that a signal is present (and that an analyte responsible for the signal is present) even if we cannot measure the signal with sufficient confidence to allow for a meaningful calculation. How Can We Improve the $S/N$ Ratio? There are two broad approaches that we can use to improve the signal-to-noise ratio: hardware and software. Hardware approaches are built into the instrument and include decisions on how the instrument is set-up for making measurements (for example, the choice of a scan rate or a slit width), and how the signal is processed by the instrument (for example, using electronic filters); such solutions are not of interest to us here in a textbook with a focus on chemometrics. 
Software solutions are computational approaches in which we manipulate the data either while we are collecting it or after data acquisition is complete.
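Before turning to those software tools, a minimal sketch may help make the definition of $S/N$ concrete. The data below are simulated and purely hypothetical—a peak of height 10 sitting on noise with a standard deviation of 1—so only the calculation itself matters here.

```
# hypothetical, simulated example of estimating a signal-to-noise ratio
x = 1:200
signal = 10 * exp(-(x - 150)^2/50) + rnorm(200, mean = 0, sd = 1)
S_analyte = max(signal)     # signal at the peak maximum
s_noise = sd(signal[1:100]) # noise estimated from a signal-free region
S_analyte/s_noise           # approximately 10 for this simulation
```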
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.1%3A_Signals_and_Noise.txt
In this section we will consider three common computational tools for improving the signal-to-noise ratio: signal averaging, digital smoothing, and Fourier filtering. Signal Averaging The most important difference between the signal and the noise is that a signal is determinate (fixed in value) and the noise is indeterminate (random in value). If we measure a pure signal several times, we expect its value to be the same each time; thus, if we add together n scans, we expect that the net signal, $S_n$, is defined as $S_n = n S \nonumber$ where $S$ is the signal for a single scan. Because noise is random, its value varies from one run to the next, sometimes with a value that is larger and sometimes with a value that is smaller, and sometimes with a value that is positive and sometimes with a value that is negative. On average, the standard deviation of the noise increases as we make more scans, but it does so at a slower rate than for the signal $s_n = \sqrt{n} s \nonumber$ where $s$ is the standard deviation for a single scan and $s_n$ is the standard deviation after n scans. Combining these two equations shows us that the signal-to-noise ratio, $S/N$, after n scans increases as $(S/N)_n = \frac{S_n}{s_n} = \frac{nS}{\sqrt{n}s} = \sqrt{n}(S/N)_{n = 1} \nonumber$ where $(S/N)_{n = 1}$ is the signal-to-noise ratio for the initial scan. Thus, when $n = 4$ the signal-to-noise ratio improves by a factor of 2, and when $n = 16$ the signal-to-noise ratio increases by a factor of 4. Figure $1$ shows the improvement in the signal-to-noise ratio for 1, 2, 4, and 8 scans. Signal averaging works well when the time it takes to collect a single scan is short and when the analyte's signal is stable with respect to time both because the sample is stable and the instrument is stable; when this is not the case, then we risk a time-dependent change in $S_\text{analyte}$ and/or $s_\text{noise}$. Because the equation for $(S/N)_n$ is proportional to $\sqrt{n}$, the relative improvement in the signal-to-noise ratio decreases as $n$ increases; for example, 16 scans gives a $4 \times$ improvement in the signal-to-noise ratio, but it takes an additional 48 scans (for a total of 64 scans) to achieve an $8 \times$ improvement in the signal-to-noise ratio. Digital Smoothing Filters One characteristic of noise is that its magnitude fluctuates rapidly in contrast to the underlying signal. We see this, for example, in Figure $1$ where the underlying signal either remains constant or steadily increases or decreases while the noise fluctuates chaotically. Digital smoothing filters take advantage of this by using a mathematical function to average the data for a small range of consecutive data points, replacing the range's middle value with the average signal over that range. Moving Average Filters For a moving average filter, we replace each point by the average signal for that point and an equal number of points on either side; thus, a moving average filter has a width, $w$, of 3, 5, 7, ... points. For example, suppose the first five points in a sequence are 0.80 0.30 0.80 0.20 1.00 then a three-point moving average ($w = 3)$ returns values of NA 0.63 0.43 0.67 NA where, for example, 0.63 is the average of 0.80, 0.30, and 0.80. Note that we lose $(w - 1)/2 = (3 - 1)/2 = 1$ points at each end of the data set because we do not have a sufficient number of data points to complete a calculation for the first and the last point. 
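As a quick check of this worked example, R's built-in `filter()` function—which is discussed more fully in the later section on using R to clean up data—reproduces these values:

```
# verify the three-point moving average for the five example points
pts = c(0.80, 0.30, 0.80, 0.20, 1.00)
filter(pts, rep(1/3, 3))
# the returned values are NA, 0.633, 0.433, 0.667, and NA, matching the example
```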
Figure $2$ shows the improvement in the $S/N$ ratio when using moving average filters with widths of 5, 9, and 13. One limitation to a moving average filter is that it distorts the original data by removing points from both ends, although this is not a serious concern if the points in question are just noise. Of greater concern is the distortion in a signal's height if we use a range that is too wide; for example, Figure $3$ shows how a 23-point moving average filter (shown in blue) applied to the noisy signal in the upper left quadrant of Figure $2$ reduces the height of the original signal (shown in black). Because the filter's width—shown by the red bar—is similar to the peak's width, as the filter passes through the peak it systematically reduces the signal by averaging together values that are mostly smaller than the maximum signal. Savitzky-Golay Filters A moving average filter weights all points equally; that is, points near the edges of the filter contribute to the average at a level equal to that of points near the filter's center. A Savitzky-Golay filter uses a polynomial model that weights each point differently, placing more weight on points near the center of the filter and less weight on points at the edge of the filter. The specific weights depend on the size of the window and the polynomial model; for example, a five-point filter using a second-order polynomial has weights of $-3/35 \quad \quad 12/35 \quad \quad 17/35 \quad \quad 12/35 \quad \quad -3/35 \nonumber$ For example, suppose the first five points in a sequence are 0.80 0.30 0.80 0.20 1.00 then this Savitzky-Golay filter returns values of NA NA 0.41 NA NA where, for example, the value for the middle point is $0.80 \times \frac{-3}{35} + 0.30 \times \frac{12}{35} + 0.80 \times \frac{17}{35} + 0.20 \times \frac{12}{35} + 1.00 \times \frac{-3}{35} = 0.406 \approx 0.41 \nonumber$ Note that we lose $(w - 1)/2 = (5 - 1)/2 = 2$ points at each end of the data set, where w is the filter's width, because we do not have a sufficient number of data points to complete the calculations. For other Savitzky-Golay smoothing filters, see Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627-1639. Figure $4$ shows the improvement in the $S/N$ ratio when using Savitzky-Golay filters using a second-order polynomial with 5, 9, and 13 points. Because a Savitzky-Golay filter weights points differently than does a moving average smoothing filter, a Savitzky-Golay filter introduces less distortion to the signal, as we see in the following figure. Fourier Filtering This approach to improving the signal-to-noise ratio takes advantage of a mathematical technique called a Fourier transform (FT). The basis of a Fourier transform is that we can express a signal in two separate domains. In the first domain the signal is characterized by one or more peaks, each defined by its position, its width, and its area; this is called the frequency domain. In the second domain, which is called the time domain, the signal consists of a set of oscillations, each defined by its frequency, its amplitude, and its decay rate. The Fourier transform—and the inverse Fourier transform—allow us to move between these two domains. Note The mathematical details behind the Fourier transform are beyond the level of this textbook; for a more in-depth treatment, consult this chapter's resources. Figure $\PageIndex{6a}$ shows a single peak in the frequency domain and Figure $\PageIndex{6b}$ shows its equivalent time domain signal. 
There are correlations between the two domains:

• the further a peak in the frequency domain is from the origin, the greater its corresponding oscillation frequency in the time domain
• the broader a peak's width in the frequency domain, the faster its decay rate in the time domain
• the greater the area under a peak in the frequency domain, the higher its initial intensity in the time domain

We can use a Fourier transform to improve the signal-to-noise ratio because the signal is a single broad peak and the noise appears as a multitude of very narrow peaks. As noted above, a broad peak in the frequency domain has a fast decaying signal in the time domain, which means that while the beginning of the time domain signal includes contributions from the signal and the noise, the latter part of the time domain signal includes contributions from noise only. The figure below shows how we can take advantage of this to reduce the noise and improve the signal-to-noise ratio for the noisy signal in Figure $\PageIndex{7a}$, which has 256 points along the x-axis and has a signal-to-noise ratio of 5.1. First, we use the Fourier transform to convert the signal from its original domain into the new domain, the first 128 points of which are shown in Figure $\PageIndex{7b}$ (note: the first half of the data contains the same information as the second half of the data, so we only need to look at the first half of the data). The points at the beginning are dominated by the signal, which is why there is a systematic decrease in the intensity of the oscillations; the remaining points are dominated by noise, which is why the variation in intensity is random. To filter out the noise we retain the first 24 points as they are and set the intensities of the remaining points to zero (the choice of how many points to retain may require some adjustment). As shown in Figure $\PageIndex{7c}$, we repeat this for the remaining 128 points, retaining the last 24 points as they are. Finally, we use an inverse Fourier transform to return to our original domain, with the result in Figure $\PageIndex{7d}$, with the signal-to-noise ratio improving from 5.1 for the original noisy signal to 11.2 for the filtered signal.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.2%3A_Signal_Averaging.txt
Another form of noise is a systematic background signal on which the analytical signal of interest is overlaid. For example, the following figure shows a Gaussian signal with a maximum value of 50 centered at $x = 125$ superimposed on an exponential background. The dotted line is the Gaussian signal, which has a maximum value of 50 at $x = 125$, and the solid line is the signal as measured, which has a maximum value of 57 at $x = 125$. If the background signal is consistent across all samples, then we can analyze the data without first removing its contribution. For example, the following figure shows a set of calibration standards and their resulting calibration curve, for which the y-intercept of 7 gives the offset introduced by the background. But background signals often are not consistent across samples, particularly when the source of the background is a property of the samples we collect (natural water samples, for example, may have variations in color due to differences in the concentration of dissolved organic matter) or a property of the instrument we are using (such as a variation in source intensity over time). When this is the case, our data may look more like what we see in the following figure, which leads to a calibration curve with a greater uncertainty. Because the background changes gradually with the values for x while the analyte's signal changes quickly, we can use a derivative to distinguish between the two. One approach is to use a Savitzky-Golay derivative filter, applied in the same way as the smoothing filters described in the last section. For example, applying a 7-point first-derivative Savitzky-Golay filter with weights of $-3/28 \quad \quad -2/28 \quad \quad -1/28 \quad \quad 0/28 \quad \quad 1/28 \quad \quad 2/28 \quad \quad 3/28 \nonumber$ to the data in Figure $3$ gives the results shown below. The calibration signal in this case is the difference between the maximum signal and the minimum signal, which are shown by the dotted red lines in the top part of the figure. The fit of the calibration curve to the data and the calibration curve's y-intercept of zero show that we have successfully compensated for the background signals. For other Savitzky-Golay derivative filters, including second-derivative filters, see Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627-1639. 10.4: Using R to Clean Up Data R has two useful functions, `filter()` and `fft()`, that we can use to smooth or filter noise and to remove background signals. To explore their use, let's first create two sets of data that we can use as examples: a noisy signal and a pure signal superimposed on an exponential background. To create the noisy signal, we first create a vector of 256 values that defines the x-axis; although we will not specify a unit here, these could be times or frequencies. Next we use R's `dnorm()` function to generate a pure Gaussian signal with a mean of 125 and a standard deviation of 10, and R's `rnorm()` function to generate 256 points of random noise with a mean of zero and a standard deviation of 10. Finally, we add the pure signal and the noise to arrive at our noisy signal and then plot the noisy signal and overlay the pure signal. 
```
x = seq(1,256,1)
gaus_signal = 1250 * dnorm(x, mean = 125, sd = 10)
noise = rnorm(256, mean = 0, sd = 10)
noisy_signal = gaus_signal + noise
plot(x = x, y = noisy_signal, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)
```

To estimate the signal-to-noise ratio, we use the maximum of the pure signal and the standard deviation of the noisy signal as determined using 100 points divided evenly between the two ends.

```
s_to_n = max(gaus_signal)/sd(noisy_signal[c(1:50,201:250)])
s_to_n
```

`[1] 5.14663`

To create a signal superimposed on an exponential background, we use R's `exp()` function to generate 256 points for the background's signal, add that to our pure Gaussian signal, and plot the result.

```
exp_bkgd = 30*exp(-0.01 * x)
plot(x,exp_bkgd,type = "l")
signal_bkgd = gaus_signal + exp_bkgd
plot(x = x, y = signal_bkgd, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal", ylim = c(0,60))
lines(x = x, y = gaus_signal, lwd = 2, lty = 2)
```

Using R's `filter()` Function to Smooth Noise and Remove Background Signals

R's `filter()` function takes the general form

`filter(x, filter)`

where `x` is the object being filtered and `filter` is an object that contains the filter's coefficients. To create a seven-point moving average filter, we use the `rep()` function to create a vector that has seven identical values, each equal to 1/7.

`mov_avg_7 = rep(1/7, 7)`

Applying this filter to our noisy signal returns the following result

```
noisy_signal_movavg = filter(noisy_signal, mov_avg_7)
plot(x = x, y = noisy_signal_movavg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)
```

with the signal-to-noise ratio improved to

```
s_to_n_movavg = max(gaus_signal)/sd(noisy_signal_movavg[c(1:50,200:250)], na.rm = TRUE)
s_to_n_movavg
```

`[1] 11.29943`

Note that we must add `na.rm = TRUE` to the `sd()` function because applying a seven-point moving average filter replaces the first three and the last three points with values of `NA` which we must tell the `sd()` function to ignore.

To create a seven-point Savitzky-Golay smoothing filter, we create a vector to store the coefficients, obtaining the values from the original paper (Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627-1639) and then apply it to our noisy signal, obtaining the results below.

```
sg_smooth_7 = c(-2,3,6,7,6,3,-2)/21
noisy_signal_sg = filter(noisy_signal, sg_smooth_7)
plot(x = x, y = noisy_signal_sg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)
s_to_n_sg = max(gaus_signal)/sd(noisy_signal_sg[c(1:50,200:250)], na.rm = TRUE)
s_to_n_sg
```

`[1] 7.177931`

To remove a background from a signal, we use the same approach, substituting a first-derivative (or higher order) Savitzky-Golay filter.

```
sg_fd_7 = c(22, -67, -58, 0, 58, 67, -22)/252
signal_bkgd_sg = filter(signal_bkgd, sg_fd_7)
plot(x = x, y = signal_bkgd_sg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
```

Using R's `fft()` Function for Fourier Filtering

To complete a Fourier transform in R we use the `fft()` function, which takes the form

`fft(z, inverse = FALSE)`

where `z` is the object that contains the values to which we wish to apply the Fourier transform and where setting `inverse = TRUE` allows for an inverse Fourier transform. Before we apply Fourier filtering to our noisy signal, let's first apply the `fft()` function to a vector that contains the integers 1 through 8. 
First we create a vector to hold our values and then apply the `fft()` function to the vector, obtaining the following results

```
test_vector = seq(1, 8, 1)
test_vector_ft = fft(test_vector)
test_vector_ft
```

`[1] 36+0.000000i -4+9.656854i -4+4.000000i -4+1.656854i -4+0.000000i -4-1.656854i `

`[7] -4-4.000000i -4-9.656854i`

Each of the eight results is a complex number with a real and an imaginary component. Note that the real component of the first value is 36, which is the sum of the elements in our test vector. Note, also, the symmetry in the remaining values where the second and eighth values, the third and seventh values, and the fourth and sixth values are identical except for a change in sign for the imaginary component. Taking the inverse Fourier transform returns the original eight values (note that the imaginary terms are now zero), but each is eight times larger in value than in our original vector.

```
test_vector_ifft = fft(test_vector_ft, inverse = TRUE)
test_vector_ifft
```

`[1] 8+0i 16-0i 24+0i 32+0i 40+0i 48+0i 56-0i 64+0i`

To compensate for this, we divide by the length of our vector

```
test_vector_ifft = fft(test_vector_ft, inverse = TRUE)/length(test_vector)
test_vector_ifft
```

`[1] 1+0i 2-0i 3+0i 4+0i 5+0i 6+0i 7-0i 8+0i`

which returns our original vector.

With this background in place, let's use R to complete a Fourier filtering of our noisy signal. First, we complete the Fourier transform of the noisy signal and examine the values for the real component, using R's `Re()` function to extract them. Because of the symmetry noted above, we need only look at the first half of the real components (x = 1 to x = 128).

```
noisy_signal_ft = fft(noisy_signal)
plot(x = x[1:128], y = Re(noisy_signal_ft)[1:128], type = "l", col = "blue", xlab = "", ylab = "intensity", lwd = 2)
```

Next, we look for where the signal's magnitude has decayed to what appears to be random noise and set these values to zero. In this example, we retain the first 24 points (and the last 24 points; remember the symmetry noted above) and set both the real and the imaginary components to 0 + 0i.

```
noisy_signal_ft[25:232] = 0 + 0i
plot(x = x, y = Re(noisy_signal_ft), type = "l", col = "blue", xlab = "", ylab = "intensity", lwd = 2)
```

Finally, we take the inverse Fourier transform and display the resulting filtered signal and report the signal-to-noise ratio.

```
noisy_signal_ifft = fft(noisy_signal_ft, inverse = TRUE)/length(noisy_signal_ft)
plot(x = x, y = Re(noisy_signal_ifft), type = "l", col = "blue", xlab = "", ylab = "intensity", ylim = c(-20,60), lwd = 3)
lines(x = x,y = gaus_signal,lwd =2, col = "black")
```

```
s_to_n = 50/sd(Re(noisy_signal_ifft)[c(1:50,200:250)], na.rm = TRUE)
s_to_n
```

`[1] 9.695329`

10.5: Exercises

1. The goal when smoothing data is to improve the signal-to-noise ratio without distorting the underlying signal. The data in the file problem10_1.csv consists of four columns of data: the vector x, which contains 200 values for plotting on the x-axis; the vector y, which contains 200 values for a step-function that satisfies the following criteria $y = 0 \text{ for } x \le 75 \text{ and for } x \ge 126 \nonumber$ $y = 1 \text{ for } 75 < x < 126 \nonumber$ the vector n, which contains 200 values drawn from a random normal distribution with a mean of 0 and standard deviation of 0.1, and the vector s, which is the sum of y and n. In essence, y is the pure signal, n is the noise, and s is a noisy signal. 
Using this data, complete the following tasks: (a) Determine the mean signal, the standard deviation of the noise, and the signal-to-noise ratio for the noisy signal using just the data in the object s. (b) Explore the effect of applying to the noisy signal one pass each of moving average filters of widths 5, 7, 9, 11, 13, 15, and 17. For each moving average filter, determine the mean signal, the standard deviation of the noise, and the signal-to-noise ratio. Organize these measurements using a table and comment on your results. Prepare a single plot that displays the original noisy signal and the smoothed signals using widths of 5, 9, 13, and 17, offsetting each so that all five signals are displayed. Comment on your results. (c) Repeat the calculations in (b) using Savitzky-Golay quadratic/cubic smoothing filters of widths 5, 7, 9, 11, 13, 15, and 17; see the original paper for each filter's coefficients. (d) Considering your results for (b) and for (c), what filter and what width provide the greatest improvement in the signal-to-noise ratio with the least distortion of the original signal's step-function? Be sure to justify your choice. 2. The file problem10_2.csv consists of two columns, each with 1024 points: x is an index for the x-axis and y is noisy data with a hint of a signal. Show that there is a signal in this file by using any one moving average or Savitzky-Golay smoothing filter of your choice and using a Fourier filter. Present your results in a single figure that shows the original signal, the signal after smoothing, and the signal after Fourier filtering. Comment on your results. 3. The file problem10_3.csv consists of six columns: x is an index for the x-axis and y1, y2, y3, y4, and y5 are signals superimposed on a variable background. Use a Savitzky-Golay nine-point cubic second-derivative filter to remove the background from the data and then build a calibration model using these results, and report the calibration equation and a plot of the calibration curve. See the original paper for the filter's coefficients.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.3%3A_Background_Removal.txt
One of the more intriguing aspects of chemometrics is the ability to discover and extract information from a large data set that appears, at first glance, to lack any defined order. And yet, it is likely that there are determinate factors that explain the data. Consider a data set that consists of the daily concentration of NOX—the combined amounts of NO2 and of NO in the air expressed as µg/m3—in samples of urban air. Although a plot of the concentration of NOX as a function of time likely appears noisy, we can easily identify variables that might affect the daily measurements:

• temperature: we need more energy on colder days, which increases the use of fuels that generate NOX emissions
• day of the week: perhaps more traffic on work days than on weekends
• atmospheric conditions: strong winds may disperse NOX emissions and stagnation may concentrate NOX emissions
• location of air samplers: samplers at busy intersections may give different results from samplers located in city parks

The chemometric methods introduced in this chapter—cluster analysis, principal component analysis, and multivariate linear regression—provide ways to probe the underlying factors that provide structure to our data.

11: Finding Structure in Data

The signals we measure include contributions from determinate and indeterminate sources, with the determinate components resulting from the analytes in our sample and with the indeterminate sources resulting from noise. When we describe our data as having structure, or that we are looking for structure in our data, our interest is in the determinate contributions to the signal. Consider, for example, the data in the following figure, which shows the visible spectra for 24 samples at 635 wavelengths. Each curve in this figure, such as the one shown in red, is one of the 24 samples that make up this data set and shows the extent to which each of the 635 discrete wavelengths of light is absorbed by that sample: this is the determinate contribution to the data. Looking closely at the spectrum shown in red, we see small variations in the absorbance superimposed on the determinate signal: this is the indeterminate contribution to the data. Although the 24 spectra in Figure \(1\) may create a sense of disorder when first examined, there is a clear underlying structure to the data. For example, there are four apparent peaks centered at wavelengths around 400 nm, 500 nm, 580 nm, and 800 nm. Each of the individual spectra includes one or more of these peaks. Further, at a wavelength of 800 nm, we see that some samples show no absorbance, and presumably lack whatever analyte is responsible for this peak; other samples, however, clearly include contributions from this analyte. This is what we mean by finding structure in data. In this chapter we explore three tools for finding structure in data—cluster analysis, principal component analysis, and multivariate linear regression—that allow us to make sense of that structure.

11.02: Cluster Analysis

In the previous section we examined the spectra of 24 samples at 635 wavelengths, displaying the data by plotting the absorbance as a function of wavelength. Another way to examine the data is to plot the absorbance of each sample at one wavelength against the absorbance of the same sample at a second wavelength, as we see in the following figure using wavelengths of 403.3 nm and 508.7 nm. Note that this plot suggests an underlying structure to our data as the 24 points occupy a triangular-shaped space
defined by the samples identified as 1, 2, and 3. We can extend this analysis to three wavelengths, as we see in the following figure, and to as many as all 635 wavelengths (of course we cannot examine a plot of this as it exists in 635-dimensional space!). In both Figure \(1\) and Figure \(2\) (and the higher dimensional plots that we cannot display), some samples are closer to each other in space than are other points. For example, in Figure \(1\), samples 7 and 20 are closer to each other than any other pair of samples; samples 2 and 3, however, are further from each other than any other pair of samples. How Does a Cluster Analysis Work? A cluster analysis is a way to examine our data in terms of the similarity of the samples to each other. Figure \(3\) outlines the steps using a small set of six points defined by two variables, a and b. Panel (a) shows the six data points. The two points closest in distance are 3 and 4, which make the first cluster and which we replace with the red point midway between them, as seen in panel (b). The next two points closest in distance are 2 and 6, which make the second cluster and which we replace with the red point between them, as seen in panel (c). Continuing in this way yields the results in panel (d) where the third cluster brings together points 2, 3, 4, and 6, the fourth cluster brings together points 1, 2, 3, 4, and 6, and the final cluster brings together all six points. To visualize the clusters, in terms of the identity of the points in the clusters, the order in which the clusters form, and the relative similarity or difference between points and clusters, we display the information in Figure \(\PageIndex{3d}\) as the dendrogram shown in Figure \(4\), which shows, for example, that the clusters of points 3 and 4, and of 2 and 6 are more similar to each other than they are to point 1 and to point 5. The vertical scale, which is identified as Height, provides a measure of the distance of the individual points or clusters of points from each other. How Do We Interpret the Results of a Cluster Analysis? A cluster analysis of the 24 samples from Figure 11.1.1 is shown in Figure \(5\) using 16 equally spaced wavelengths. There is much we can learn from this diagram about the structure of these samples, which we can divide into three distinct clusters of samples, as shown by the boxes. The samples within each cluster are more similar to each other than they are to samples in other clusters. One possible explanation for this structure is that the 24 samples are composed of three analytes, where, for each cluster, one of the analytes is present at a higher concentration than the other two analytes.
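The steps in Figure \(3\) and Figure \(4\) are straightforward to reproduce in R. The following is a minimal sketch only—the coordinates for the six points are hypothetical values chosen to mimic the general arrangement in Figure \(3\), not the values behind the actual figure—but it shows how the dist() and hclust() functions, which are covered in more detail later in this chapter, generate a dendrogram.

```
# six hypothetical points (a, b); the values below are for illustration only
demo_points = data.frame(a = c(1.5, 4.0, 5.0, 5.3, 9.5, 4.4),
                         b = c(6.5, 4.0, 5.0, 5.3, 1.0, 4.4))
demo_dist = dist(demo_points, method = "euclidean") # pairwise distances
demo_clust = hclust(demo_dist, method = "ward.D")   # build the clusters
plot(demo_clust, hang = -1)                         # draw the dendrogram
# for these values, points 3 and 4, and points 2 and 6, are the closest
# pairs and therefore form the first two clusters
```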
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.01%3A_What_Do_We_Mean_By_Structure_and_Order.txt
The figure below—which is similar in structure to Figure 11.2.2 but with more samples—shows the absorbance values for 80 samples at wavelengths of 400.3 nm, 508.7 nm, and 801.8 nm. Although the axes define the space in which the points appear, the individual points themselves are, with a few exceptions, not aligned with the axes. The cloud of 80 points has a global mean position within this space and a global variance around the global mean (see Chapter 7.3 where we used these terms in the context of an analysis of variance). Suppose we leave the points in space as they are and rotate the three axes. We might rotate the three axes until one passes through the cloud in a way that maximizes the variation of the data along that axis, which means this new axis accounts for the greatest contribution to the global variance. Having aligned this primary axis with the data, we then hold it in place and rotate the remaining two axes around the primary axis until one of them passes through the cloud in a way that maximizes the data's remaining variance along that axis; this becomes the secondary axis. Finally, the third, or tertiary axis, is left, which explains whatever variance remains. In essence, this is what comprises a principal component analysis (PCA). How Does a Principal Component Analysis Work? One of the challenges with understanding how PCA works is that we cannot visualize our data in more than three dimensions. The data in Figure $1$, for example, consists of spectra for 24 samples recorded at 635 wavelengths. To visualize all of this data requires that we plot it along 635 axes in 635-dimensional space! Let's consider a much simpler system that consists of 21 samples for each of which we measure just two properties that we will call the first variable and the second variable. Figure $2$ shows our data, which we can express as a matrix with 21 rows, one for each of the 21 samples, and 2 columns, one for each of the two variables. $[D]_{21 \times 2} \nonumber$ Next, we complete a linear regression analysis on the data and add the regression line to the plot; we call this the first principal component. Projecting our data (the blue points) onto the regression line (the red points) gives the location of each point on the first principal component's axis; these values are called the scores, $S$. The cosines of the angles between the first principal component's axis and the original axes are called the loadings, $L$. We can express the relationship between the data, the scores, and the loadings using matrix notation. Note that from the dimensions of the matrices for $D$, $S$, and $L$, each of the 21 samples has a score and each of the two variables has a loading. $[D]_{21 \times 2} = [S]_{21 \times 1} \times [L]_{1 \times 2} \nonumber$ Next, we draw a line perpendicular to the first principal component axis, which becomes the second (and last) principal component axis, project the original data onto this axis (points in green) and record the scores and loadings for the second principal component. $[D]_{21 \times 2} = [S]_{21 \times 2} \times [L]_{2 \times 2} \nonumber$ Note In matrix multiplication the number of columns in the first matrix must equal the number of rows in the second matrix. The result of matrix multiplication is a new matrix that has a number of rows equal to that of the first matrix and that has a number of columns equal to that of the second matrix; thus multiplying together a matrix that is $5 \times 4$ with one that is $4 \times 8$ gives a matrix that is $5 \times 8$. 
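A minimal sketch of these two equations in R may help here. The data below are simulated—any 21 samples with two correlated variables will do—and the only point is to show that the scores and the loadings returned by R's `prcomp()` function reproduce the centered data when multiplied together.

```
# simulate 21 samples with two correlated variables (hypothetical values)
set.seed(1)
first_variable = runif(21, min = 0, max = 10)
second_variable = 0.5 * first_variable + rnorm(21, mean = 0, sd = 0.5)
D = cbind(first_variable, second_variable)          # the 21 x 2 data matrix
pca = prcomp(D, center = TRUE, scale. = FALSE)
S = pca$x                                           # scores: 21 x 2
L = pca$rotation                                    # note: the text's [L] corresponds to t(pca$rotation)
D_centered = scale(D, center = TRUE, scale = FALSE) # the centered data
max(abs(S %*% t(L) - D_centered))                   # effectively zero
```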
If we were working with 21 samples and 10 variables, then we would do this:

1. plot the data for the 21 samples in 10-dimensional space where each variable is an axis
2. find the first principal component's axis and make note of the scores and loadings
3. project the data points for the 21 samples onto the 9-dimensional surface that is perpendicular to the first principal component's axis
4. find the second principal component's axis and make note of the scores and loadings
5. project the data points for the 21 samples onto the 8-dimensional surface that is perpendicular to the second (and the first) principal component's axis
6. repeat until all 10 principal components are identified and all scores and loadings reported

How Do We Interpret the Results of a Principal Component Analysis?

The results of a principal component analysis are given by the scores and the loadings. Let's return to the data from Figure $1$, but to make things more manageable, we will work with just 24 of the 80 samples and expand the number of wavelengths from three to 16 (a number that is still a small subset of the 635 wavelengths available to us). The figure below shows the full spectra for these 24 samples and the specific wavelengths we will use as dotted lines; thus, our data is a matrix with 24 rows and 16 columns, $[D]_{24 \times 16}$. A principal component analysis of this data will yield 16 principal component axes. Each principal component accounts for a portion of the data's overall variance and each successive principal component accounts for a smaller proportion of the overall variance than did the preceding principal component. Those principal components that account for insignificant proportions of the overall variance presumably represent noise in the data; the remaining principal components presumably are determinate and sufficient to explain the data. The following table provides a summary of the proportion of the overall variance explained by each of the 16 principal components.

Table $1$: The Proportion of Overall Variance Explained by the Principal Components for the Data in Figure $6$.

principal component standard deviation proportion of variance cumulative proportion
PC1 3.3134 0.6862 0.6862
PC2 2.1901 0.2998 0.9859
PC3 0.42561 0.01132 0.99725
PC4 0.17585 0.00193 0.99919
PC5 0.09384 0.00055 0.99974
PC6 0.04607 0.00013 0.99987
PC7 0.04026 0.00010 0.99997
PC8 0.01253 0.00001 0.99998
PC9 0.01049 0.00001 0.99999
PC10 0.009211 0.000010 0.999990
PC11 0.007084 0.000000 1.000000
PC12 0.004478 0.000000 1.000000
PC13 0.00416 0.000000 1.000000
PC14 0.003039 0.000000 1.000000
PC15 0.002377 0.000000 1.000000
PC16 0.001504 0.000000 1.000000

The first principal component accounts for 68.62% of the overall variance and the second principal component accounts for 29.98% of the overall variance. Collectively, these two principal components account for 98.59% of the overall variance; adding a third component accounts for more than 99% of the overall variance. Clearly we need to consider at least two components (maybe three) to explain the data in Figure $1$. The remaining 14 (or 13) principal components simply account for noise in the original data. This leaves us with the following equation relating the original data to the scores and loadings

$[D]_{24 \times 16} = [S]_{24 \times n} \times [L]_{n \times 16} \nonumber$

where $n$ is the number of components needed to explain the data, in this case two or three. 
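The proportions in Table $1$ follow directly from the standard deviations in the table. If the data were centered and scaled before the analysis—as is done in the later section on using R for PCA—then the total variance equals the number of variables, here 16 wavelengths, and each proportion is simply the square of a principal component's standard deviation divided by 16. A quick check using the tabulated values:

```
# reproduce the first few proportions in Table 1 from the standard deviations
pc_sd = c(3.3134, 2.1901, 0.42561, 0.17585) # standard deviations for PC1-PC4
pc_sd^2/16                                  # approximately 0.6862, 0.2998, 0.0113, 0.0019
cumsum(pc_sd^2/16)                          # cumulative proportions: 0.6862, 0.9859, ...
```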
To examine the principal components more closely, we plot the scores for PC1 against the scores for PC2 to give the scores plot seen below, which shows the scores occupying a triangular-shaped space. Because our data are visible spectra, it is useful to compare the equation $[D]_{24 \times 16} = [S]_{24 \times n} \times [L]_{n \times 16} \nonumber$ to Beer's Law, which in matrix form is $[A]_{24 \times 16} = [C]_{24 \times n} \times [\epsilon b]_{n \times 16} \nonumber$ where $[A]$ gives the absorbance values for the 24 samples at 16 wavelengths, $[C]$ gives the concentrations of the two or three components that make up the samples, and $[\epsilon b]$ gives the products of the molar absorptivity and the pathlength for each of the two or three components at each of the 16 wavelengths. Comparing these two equations suggests that the scores are related to the concentrations of the $n$ components and that the loadings are related to the molar absorptivities of the $n$ components. Furthermore, we can explain the pattern of the scores in Figure $7$ if each of the 24 samples consists of 1–3 analytes with the three vertices being samples that contain a single component each, the samples falling more or less on a line between two vertices being binary mixtures of the three analytes, and the remaining points being ternary mixtures of the three analytes. Note If there are three components in our 24 samples, why are two components sufficient to account for almost 99% of the overall variance? Suppose we prepared each sample by using a volumetric digital pipet to combine together aliquots drawn from solutions of the pure components, diluting each to a fixed volume in a 10.00 mL volumetric flask. For example, to make a ternary mixture we might pipet in 5.00 mL of component one and 4.00 mL of component two. If we are diluting to a final volume of 10 mL, then the volume of the third component must be less than 1.00 mL to allow for diluting to the mark. Because the volume of the third component is limited by the volumes of the first two components, two components are sufficient to explain most of the data. The loadings, as noted above, are related to the molar absorptivities of our sample's components, providing information on the wavelengths of visible light that are most strongly absorbed by each sample. We can overlay a plot of the loadings on our scores plot (this is called a biplot), as shown here. Each arrow is identified with one of our 16 wavelengths and points toward the combination of PC1 and PC2 with which it is most strongly associated. For example, although difficult to read here, all wavelengths from 672.7 nm to 868.7 nm (see the caption for Figure $6$ for a complete list of wavelengths) are strongly associated with the analyte that makes up the single component sample identified by the number one, and the wavelengths of 380.5 nm, 414.9 nm, 583.2 nm, and 613.3 nm are strongly associated with the analyte that makes up the single component sample identified by the number two. If we have some knowledge about the possible source of the analytes, then we may be able to match the experimental loadings to the analytes. The samples in Figure $1$ were made using solutions of several first row transition metal ions. Figure $10$ shows the visible spectra for four such metal ions. 
Comparing these spectra with the loadings in Figure $9$ shows that Cu2+ absorbs at those wavelengths most associated with sample 1, that Cr3+ absorbs at those wavelengths most associated with sample 2, and that Co2+ absorbs at wavelengths most associated with sample 3; the last of the metal ions, Ni2+, is not present in the samples.
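One way to draw a biplot like Figure $9$ is with R's `biplot()` function applied to the output of `prcomp()`, which is introduced later in this chapter; the minimal call below assumes a prcomp object named `pca_results`, as created in that later section.

```
# assumes pca_results is a prcomp object, as created later in this chapter
biplot(pca_results, cex = c(0.8, 0.8))
```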
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.03%3A_Principal_Component_Analysis.txt
In Chapter 11.2 we used a cluster analysis of the spectra for 24 samples measured at 16 wavelengths to show that we could divide the samples into three distinct groups, speculating that the samples contained three analytes and that in each group one of the analytes was present at a concentration greater than that of the other two analytes. In Chapter 11.3 we used a principal component analysis of the same set of samples to suggest that the three analytes are Cu2+, Cr3+, and Co2+. In this section we will use a multivariate linear regression analysis to determine the concentration of these analytes in each of the 24 samples. How Does a Calibration Using Multivariate Regression Work? In a simple linear regression analysis, as outlined in Chapter 8, we model the relationship between a single dependent variable, y, and a single independent variable, x, using the equation $y = \beta_0 + \beta_1 x \nonumber$ where y is a vector of measured responses for the dependent variable, where x is a vector of values for the independent variable, where $\beta_0$ is the expected y-intercept, and where $\beta_1$ is the expected slope. For example, to complete a Beer's law calibration curve for a single analyte, where A is the absorbance and C is the analyte's concentration $A = \epsilon b C \nonumber$ we prepare a set of n standard solutions, each with a known concentration of the analyte and measure the absorbance for each of the standard solutions at a single wavelength. A linear regression analysis returns values for $\epsilon b$, allowing us to determine the concentration of analyte in a sample by measuring its absorbance. See Chapter 8 for a review of how to complete a linear regression analysis using R. In a multivariate linear regression we have j dependent variables, Y, and k independent variables, X, and we measure the dependent variables for each of the n sets of values for the independent variables; we can represent this using matrix notation as $[ Y ]_{n \times j} = [X]_{n \times k} \times [\beta_1]_{k \times j} \nonumber$ In this case, to complete a Beer's law calibration curve we prepare a set of n standard solutions, each of which contains known concentrations of the k analytes, and measure the absorbance of each standard at each of the j wavelengths $[ A ]_{n \times j} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber$ where [A] is a matrix of absorbance values, [C] is a matrix of concentrations, and [$\epsilon b$] is a matrix of $\epsilon b$ values for each analyte at each wavelength. 
Because matrix algebra does not allow for division, we solve for [$\epsilon b$] by first pre-multiplying both sides of the equation by the transpose of the matrix of concentrations $[C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber$ and then pre-multiplying both sides of the equation by $\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1}$ to give $\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = \left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber$ Multiplying $\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1}$ by $\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)$ is equivalent to multiplying a value by its inverse, which gives the identity matrix (the matrix equivalent of 1); thus, we have $\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = [\epsilon b]_{k \times j} \nonumber$ With the $\epsilon b$ matrix in hand, we can determine the concentration of the analytes in a set of samples using the same general approach, as shown here $[ A ]_{n \times j} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber$ $[ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \nonumber$ $[ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} \nonumber$ $[ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} = [C]_{n \times k} \nonumber$ Note Completing these calculations by hand is a chore; see Chapter 11.7 to see how you can complete a multivariate linear regression using R. How Do We Evaluate the Results of a Calibration Using a Multivariate Linear Regression? One way to evaluate the results of a calibration based on a multivariate linear regression is to examine each analyte's $\epsilon b$ values from the calibration and compare them to the spectra of the individual analytes; the shape of the two plots should be similar. Another way to evaluate a calibration based on a multivariate regression is to use it to analyze a set of samples with known concentrations of the analytes.
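Chapter 11.7 works through this calibration using the actual spectra; as a small, self-contained sketch of the matrix algebra itself, the following R code uses made-up matrices—two analytes, four wavelengths, and six standards, with all of the numbers hypothetical—to show how `t()`, `%*%`, and `solve()` implement the equations above.

```
# hypothetical calibration: 6 standards (n), 2 analytes (k), 4 wavelengths (j)
C_std = matrix(c(0.10, 0.20, 0.30, 0.10, 0.30, 0.20,
                 0.30, 0.10, 0.20, 0.20, 0.10, 0.30), ncol = 2) # n x k
eb_true = matrix(c(10, 6, 3, 1,
                    2, 5, 9, 12), nrow = 2, byrow = TRUE)       # k x j
A_std = C_std %*% eb_true                                       # n x j (Beer's law)
# solve for the eb matrix: (C^T C)^-1 C^T A
eb_calc = solve(t(C_std) %*% C_std) %*% t(C_std) %*% A_std
# predict the concentrations in a new sample: A eb^T (eb eb^T)^-1
A_sample = matrix(c(0.15, 0.25), nrow = 1) %*% eb_true          # simulated sample
C_sample = A_sample %*% t(eb_calc) %*% solve(eb_calc %*% t(eb_calc))
C_sample                                                        # returns 0.15 and 0.25
```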
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.04%3A_Multivariate_Regression.txt
To illustrate how we can use R to complete a cluster analysis: use this link and save the file allSpec.csv to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples:

• a sample id (in the form custd_1 for a single standard of Cu2+ or nicu_mix1 for a mixture of Ni2+ and Cu2+)
• a list of the analytes in the sample (in the form cuco for a sample that contains Cu2+ and Co2+)
• the number of analytes in the sample (a number from 1 to 4 and labeled as dimensions)
• the molar concentration of Cu2+ in the sample
• the molar concentration of Co2+ in the sample
• the molar concentration of Cr3+ in the sample
• the molar concentration of Ni2+ in the sample

The remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm. First, we need to read the data into R, which we do using the read.csv() function

spec_data <- read.csv("allSpec.csv", check.names = FALSE)

where the option check.names = FALSE overrides the function's default to not allow a column's name to begin with a number. Next, we will create a subset of this large data set to work with

wavelength_ids = seq(8, 642, 40)

sample_ids = c(1, 6, 11, 21:25, 38:53)

cluster_data = spec_data[sample_ids, wavelength_ids ]

where wavelength_ids is a vector that identifies the 16 equally spaced wavelengths, sample_ids is a vector that identifies the 24 samples that contain one or more of the cations Cu2+, Co2+, and Cr3+, and cluster_data is a data frame that contains the absorbance values for these 24 samples at these 16 wavelengths. Before we can complete the cluster analysis, we first must calculate the distances between our 24 samples, using, for each sample, its absorbance values at the 16 wavelengths (a total of $24 \times 16 = 384$ individual data points). To do this, we use the dist() function, which takes the general form

dist(object, method)

where object is a data frame or matrix with our data. There are a number of options for method, but we will use the default, which is euclidean.

cluster_dist = dist(cluster_data, method = "euclidean")

cluster_dist

1 6 11 21 22 23 24 25
6 1.53328104
11 1.73128979 0.96493008
21 1.48359716 0.24997370 0.77766228
22 1.49208058 0.32863786 0.68852029 0.09664215
23 1.49457333 0.42903074 0.57495499 0.21089686 0.11755129
24 1.51211374 0.52218072 0.47457024 0.31016429 0.21830998 0.10205547
25 1.55862311 0.61154277 0.39798649 0.39406580 0.30194838 0.19121251 0.09771283
38 1.17069314 0.38098750 0.96982420 0.34254297 0.38830178 0.45418483 0.53114050 0.61729900

Only a small portion of the values in cluster_dist are shown here; each entry shows the distance between two of the 24 samples. With distances calculated, we can use R's hclust() function to complete the cluster analysis. The general form of the function is

hclust(object, method)

where object is the output created using dist() that contains the distances between points. There are a number of options for method—here we use the ward.D method—saving the output to the object cluster_results so that we have access to the results.

cluster_results = hclust(cluster_dist, method = "ward.D")

To view the cluster diagram, we pass the object cluster_results to the plot() function where hang = -1 extends each vertical line to a height of zero. By default, the labels at the bottom of the dendrogram are the sample ids; cex adjusts the size of these labels. 
plot(cluster_results, hang = -1, cex = 0.75)

With a few lines of code we can add useful details to our plot. Here, for example, we determine the fraction of the stock Cu2+ solution in each sample and use these values as labels, and divide the 24 samples into three large clusters using the rect.hclust() function, where k is the number of clusters to highlight and which indicates which of these clusters to display using a rectangular box.

cluster_copper = spec_data$concCu/spec_data$concCu[1]
plot(cluster_results, hang = -1, labels = cluster_copper[sample_ids], main = "Copper", xlab = "fraction of stock in sample", sub = "", cex = 0.75)
rect.hclust(cluster_results, k = 3, which = c(1,2,3), border = "blue")

The following code shows how we can use the same data set of 24 samples and 16 wavelengths to complete a cluster diagram for the wavelengths. The use of the t() function within the dist() function takes the transpose of our data so that the rows are the 16 wavelengths and the columns are the 24 samples. We do this because the dist() function calculates distances using the rows.

wavelength_dist = dist(t(cluster_data))
wavelength_clust = hclust(wavelength_dist, method = "ward.D")
plot(wavelength_clust, hang = -1, main = "wavelengths strongly associated with copper")
rect.hclust(wavelength_clust, k = 2, which = 2, border = "blue")

The resulting dendrogram highlights the cluster of wavelengths most strongly associated with the absorption by Cu2+.
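The rect.hclust() function only draws boxes on the dendrogram; if we also want the cluster membership of each sample or each wavelength as a vector, base R's cutree() function returns it directly. The lines below are a minimal sketch, not part of the original analysis, that assume the objects cluster_results, wavelength_clust, and cluster_data created above are still in the workspace.

# assign each of the 24 samples to one of three clusters
sample_groups = cutree(cluster_results, k = 3)
table(sample_groups)

# assign each of the 16 wavelengths to one of two clusters and list the
# wavelengths in each group; one group corresponds to the cluster boxed above
wavelength_groups = cutree(wavelength_clust, k = 2)
split(colnames(cluster_data), wavelength_groups)

Note that cutree() numbers its clusters by the order in which the observations appear, so its group numbers need not match the left-to-right order used by the which argument of rect.hclust().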
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.05%3A_Using_R_for_a_Cluster_Analysis.txt
To illustrate how we can use R to complete a principal component analysis: use this link and save the file `allSpec.csv` to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples:

• a sample id (in the form custd_1 for a single standard of Cu2+ or nicu_mix1 for a mixture of Ni2+ and Cu2+)
• a list of the analytes in the sample (in the form cuco for a sample that contains Cu2+ and Co2+)
• the number of analytes in the sample (a number from 1 to 4 and labeled as dimensions)
• the molar concentration of Cu2+ in the sample
• the molar concentration of Co2+ in the sample
• the molar concentration of Cr3+ in the sample
• the molar concentration of Ni2+ in the sample

The remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm.

First, we need to read the data into R, which we do using the `read.csv()` function

`spec_data <- read.csv("allSpec.csv", check.names = FALSE)`

where the option `check.names = FALSE` overrides the function's default to not allow a column's name to begin with a number. Next, we will create a subset of this large data set to work with

```wavelength_ids = seq(8, 642, 40)
sample_ids = c(1, 6, 11, 21:25, 38:53)
pca_data = spec_data[sample_ids, wavelength_ids ]```

where `wavelength_ids` is a vector that identifies the 16 equally spaced wavelengths, `sample_ids` is a vector that identifies the 24 samples that contain one or more of the cations Cu2+, Co2+, and Cr3+, and `pca_data` is a data frame that contains the absorbance values for these 24 samples at these 16 wavelengths.

To complete the principal component analysis we will use R's `prcomp()` function, which takes the general form

`prcomp(object, center, scale)`

where `object` is a data frame or matrix that contains our data, and `center` and `scale` are logical values that indicate if we should first center and scale the data before we complete the analysis. When we center and scale our data, each variable (in this case, the absorbance at each wavelength) is adjusted so that its mean is zero and its variance is one. This has the effect of placing all variables on a common scale, which ensures that any difference in the relative magnitude of the variables does not affect the principal component analysis.

`pca_results = prcomp(pca_data, center = TRUE, scale = TRUE)`

The `prcomp()` function returns a variety of information that we can use to examine the results, including the standard deviation for each principal component, `sdev`, a matrix with the loadings, `rotation`, a matrix with the scores, `x`, and the values used to `center` and `scale` the original data. The `summary()` function, for example, returns the standard deviation for each principal component, the proportion of the overall variance explained by each principal component, and the cumulative proportion of variance explained by the principal components.
`summary(pca_results)`

`Importance of components:`
`PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 PC9`
`Standard deviation 3.3134 2.1901 0.42561 0.17585 0.09384 0.04607 0.04026 0.01253 0.01049`
`Proportion of Variance 0.6862 0.2998 0.01132 0.00193 0.00055 0.00013 0.00010 0.00001 0.00001`
`Cumulative Proportion 0.6862 0.9859 0.99725 0.99919 0.99974 0.99987 0.99997 0.99998 0.99999`
`PC10 PC11 PC12 PC13 PC14 PC15 PC16`
`Standard deviation 0.009211 0.007084 0.004478 0.00416 0.003039 0.002377 0.001504`
`Proportion of Variance 0.000010 0.000000 0.000000 0.00000 0.000000 0.000000 0.000000`
`Cumulative Proportion 0.999990 1.000000 1.000000 1.00000 1.000000 1.000000 1.000000`

We can also examine each principal component's variance (the square of its standard deviation) in the form of a bar plot by passing the results of the principal component analysis to the `plot()` function.

`plot(pca_results)`

As noted above, the 24 samples include one, two, or three of the cations Cu2+, Co2+, and Cr3+, which is consistent with our results if individual solutions are made by combining together aliquots of stock solutions of Cu2+, Co2+, and Cr3+ and diluting to a common volume. In this case, the volume of stock solution for one cation places limits on the volumes of the other cations, such that a three-component mixture essentially has two independent variables.

To examine the scores for the principal component analysis, we pass the scores to the `plot()` function, here using `pch = 19` to display them as filled points.

`plot(pca_results$x, pch = 19)`

By default, the `plot()` function displays the values for the first two principal components, with the first (PC1) placed on the x-axis and the second (PC2) placed on the y-axis. If we wish to examine other principal components, then we must specify them when calling the `plot()` function; the following command, for example, uses the scores for the second and the third principal components.

`plot(x = pca_results$x[,2], y = pca_results$x[,3], pch = 19, xlab = "PC2", ylab = "PC3")`

If we wish to display the first three principal components using the same plot, then we can use the `scatter3D()` function from the `plot3D` package, which takes the general form

```library(plot3D)
scatter3D(x = pca_results$x[,1], y = pca_results$x[,2], z = pca_results$x[,3], pch = 19, type = "h", theta = 25, phi = 20, ticktype = "detailed", colvar = NULL)```

where we use the `library()` function to load the package into our R session (note: this assumes you have installed the `plot3D` package). The option `type = "h"` drops a line from each point down to the plane defined by PC1 and PC2, which helps us orient the points in space. By default, the plot uses color to show each point's value of the third principal component (displayed on the z-axis); here we set `colvar = NULL` to display all points using the same color.

Although the plots are not shown here, we can use the same commands, replacing `x` with `rotation`, to display the loadings.

`plot(pca_results$rotation, pch = 19)`
`plot(x = pca_results$rotation[,2], y = pca_results$rotation[,3], pch = 19, xlab = "PC2", ylab = "PC3")`
`scatter3D(x = pca_results$rotation[,1], y = pca_results$rotation[,2], z = pca_results$rotation[,3], pch = 19, type = "h", theta = 25, phi = 20, ticktype = "detailed", colvar = NULL)`

Another way to view the results of a principal component analysis is to display the scores and the loadings on the same plot, which we can do using the `biplot()` function.
`biplot(pca_results, cex = c(2, 0.6), xlabs = rep("•", 24))`

where the option `xlabs = rep("•", 24)` overrides the function's default to display the scores as numbers, replacing them with dots, and `cex = c(2, 0.6)` is used to increase the size of the dots and decrease the size of the labels for the loadings. In this biplot, the scores are displayed as dots and the loadings are displayed as arrows that begin at the origin and point toward the individual loadings, which are indicated by the wavelengths associated with the loadings. For this set of data, scores and loadings that are co-located with each other represent samples and wavelengths that are strongly correlated with each other. For example, the sample whose score is in the upper right corner is strongly associated with absorbance of light with wavelengths of 613.3 nm, 583.2 nm, 380.5 nm, and 414.9 nm.

Finally, we can use color to highlight features from our data set. For example, the following lines of code create a scores plot that uses a color palette to indicate the relative concentration of Cu2+ in each sample.

```cu_palette = colorRampPalette(c("white", "blue"))
cu_color = cu_palette(50)[as.numeric(cut(spec_data$concCu[sample_ids], breaks = 50))]```

The `colorRampPalette()` function takes a vector of colors—in this case white and blue—and returns a function that we can use to create a palette of colors that runs from pure white to pure blue. We then use this function to create 50 shades of white and blue

`cu_palette(50)`
`[1] "#FFFFFF" "#F9F9FF" "#F4F4FF" "#EFEFFF" "#EAEAFF" "#E4E4FF" "#DFDFFF" "#DADAFF" `
`[9] "#D5D5FF" "#D0D0FF" "#CACAFF" "#C5C5FF" "#C0C0FF" "#BBBBFF" "#B6B6FF" "#B0B0FF" `
`[17] "#ABABFF" "#A6A6FF" "#A1A1FF" "#9C9CFF" "#9696FF" "#9191FF" "#8C8CFF" "#8787FF" `
`[25] "#8282FF" "#7C7CFF" "#7777FF" "#7272FF" "#6D6DFF" "#6868FF" "#6262FF" "#5D5DFF" `
`[33] "#5858FF" "#5353FF" "#4E4EFF" "#4848FF" "#4343FF" "#3E3EFF" "#3939FF" "#3434FF" `
`[41] "#2E2EFF" "#2929FF" "#2424FF" "#1F1FFF" "#1A1AFF" "#1414FF" "#0F0FFF" "#0A0AFF" `
`[49] "#0505FF" "#0000FF"`

where #FFFFFF is the hexadecimal code for pure white and #0000FF is the hexadecimal code for pure blue. The latter part of this line of code

`cu_color = cu_palette(50)[as.numeric(cut(spec_data$concCu[sample_ids], breaks = 50))]`

retrieves the concentrations of copper in each of our 24 samples and assigns a hexadecimal code for a shade of blue that indicates the relative concentration of copper in the sample. Here we see that the first sample has a hexadecimal code of #0000FF for pure blue, which means this sample has the largest concentration of copper, and samples 2–8 have hexadecimal codes of #FFFFFF for pure white, which means these samples do not contain any copper.

`cu_color`
`[1] "#0000FF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" `
`[9] "#D0D0FF" "#B6B6FF" "#9C9CFF" "#8282FF" "#6868FF" "#D0D0FF" "#B6B6FF" "#9C9CFF" `
`[17] "#8282FF" "#6868FF" "#EAEAFF" "#EAEAFF" "#B6B6FF" "#B6B6FF" "#8282FF" "#8282FF"`

Finally, we create the scores plot, using `pch = 21` for an open circle whose background color we designate using `bg = cu_color` and where we use `cex = 2` to increase the size of the points.

`plot(pca_results$x, pch = 21, bg = cu_color, cex = 2)`
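The quantities reported by `summary()` earlier in this section can be reproduced directly from the components that `prcomp()` returns. The few lines below are a minimal sketch, not part of the original analysis, that assume `pca_data` and `pca_results` are still in the workspace.

```pca_var = pca_results$sdev^2
round(pca_var/sum(pca_var), digits = 5)          # proportion of variance for each PC
round(cumsum(pca_var)/sum(pca_var), digits = 5)  # cumulative proportion of variance

# the scores are the centered and scaled data projected onto the loadings;
# this comparison should return TRUE (to within numerical precision)
all.equal(unname(scale(as.matrix(pca_data)) %*% pca_results$rotation), unname(pca_results$x))```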
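We can use the same palette approach to color the scores by any of the other concentration columns. The sketch below is an addition rather than part of the original example; it assumes, as described in the list of columns at the top of this section, that column 5 of `spec_data` holds the molar concentration of Co2+.

```co_palette = colorRampPalette(c("white", "red"))
co_color = co_palette(50)[as.numeric(cut(spec_data[sample_ids, 5], breaks = 50))]
plot(pca_results$x, pch = 21, bg = co_color, cex = 2)```

Points shaded the deepest red then correspond to the samples with the highest Co2+ concentrations.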
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.06%3A_Using_R_for_a_Principal_Component_Analysis.txt
To illustrate how we can use R to complete a multivariate linear regression, use this link and save the file allSpec.csv to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples: • a sample id (in the form custd_1 for a single standard of Cu2+ or nicu_mix1 for a mixture of Ni2+ and Cu2+) • a list of the analytes in the sample (in the form cuco for a sample that contains Cu2+ and Co2+) • the number of analytes in the sample (a number from 1 to 4 and labeled as dimensions) • the molar concentration of Cu2+ in the sample • the molar concentration of Co2+ in the sample • the molar concentration of Cr3+ in the sample • the molar concentration of Ni2+ in the sample The remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm. We will use a subset of this data that is identical to that used to illustrate a cluster analysis and a principal component analysis. First, we need to read the data into R, which we do using the read.csv() function spec_data <- read.csv("allSpec.csv", check.names = FALSE) where the option check.names = FALSE overrides the function's default to not allow a column's name to begin with a number. Next, we will create objects to hold the concentrations and absorbances for standard solutions of Cu2+, Cr3+, and Co2+, which are the three analytes wavelength_ids = seq(8, 642, 40) abs_stds = spec_data[1:15, wavelength_ids] conc_stds = data.frame(spec_data[1:15, 4], spec_data[1:15, 5], spec_data[1:15, 6]) abs_samples = spec_data[c(1, 6, 11, 21:25, 38:53), wavelength_ids] where wavelength_ids is a vector that identifies the 16 equally spaced wavelengths, abs_stds is a data frame that gives the absorbance values for 15 standard solutions of the three analytes Cu2+, Cr3+, and Co2+ at the 16 wavelengths, conc_stds is a data frame that contains the concentrations of the three analytes in the 15 standard solutions, and abs_samples is a data frame that contains the absorbances of the 24 sample at the 16 wavelengths. This is the same data used to illustrate cluster analysis and principal component analysis. To solve for the $\epsilon b$ matrix we will write and source the following function that takes two objects—a data frame of absorbance values and a data frame of concentrations—and returns a matrix of $\epsilon b$ values. findeb = function(abs, conc){ abs.m = as.matrix(abs) conc.m = as.matrix(conc) ct = t(conc.m) ctc = ct %*% conc.m invctc = solve(ctc) eb = invctc %*% ct %*% abs.m output = eb invisible(output) } Passing abs_stds and conc_stds to the function eb_pred = findeb(abs_stds, conc_stds) returns the predicted values for $\epsilon b$ that make up our calibration. As we see below, a plot of the $\epsilon b$ values for Cu2+ has the same shape as a plot of the absorbance values for one of the Cu2+ standards. 
wavelengths = as.numeric(colnames(spec_data[8:642]))
old.par = par(mfrow = c(2,1))
plot(x = wavelengths[wavelength_ids], y = eb_pred[1,], type = "b", xlab = "wavelength (nm)", ylab = "eb", lwd = 2, col = "blue")
plot(x = wavelengths, y = spec_data[1,8:642], type = "l", xlab = "wavelength (nm)", ylab = "absorbance", lwd = 2, col = "blue")
par(old.par)

Having completed the calibration, we can determine the concentrations of the three analytes in the 24 samples using the following function, which takes as inputs the data frame of absorbance values and the $\epsilon b$ matrix returned by the function findeb

findconc = function(abs, eb){
abs.m = as.matrix(abs)
eb.m = as.matrix(eb)
ebt = t(eb.m)
ebebt = eb %*% ebt
invebebt = solve(ebebt)
pred_conc = round(abs.m %*% ebt %*% invebebt, digits = 5)
output = pred_conc
invisible(output)
}

pred_conc = findconc(abs_samples, eb_pred)

To determine the error in the predicted concentrations, we first extract the actual concentrations from the original data set as a data frame, adjusting the column names for clarity.

real_conc = data.frame(spec_data[c(1, 6, 11, 21:25, 38:53), 4], spec_data[c(1, 6, 11, 21:25, 38:53), 5], spec_data[c(1, 6, 11, 21:25, 38:53), 6])
colnames(real_conc) = c("copper", "cobalt", "chromium")

and determine the difference between the actual concentrations and the predicted concentrations

conc_error = real_conc - pred_conc

Finally, we can report the mean error, the standard deviation, and the 95% confidence interval for each analyte.

means = apply(conc_error, 2, mean)
round(means, digits = 6)
copper cobalt chromium
-0.000280 -0.000153 -0.000210

sds = apply(conc_error, 2, sd)
round(sds, digits = 6)
copper cobalt chromium
0.001037 0.000811 0.000688

conf.it = abs(qt(0.05/2, 20)) * sds
round(conf.it, digits = 6)
copper cobalt chromium
0.002163 0.001693 0.001434

Compared to the ranges of concentrations for the three analytes in the 24 samples

range(real_conc$copper)
[1] 0.00 0.05
range(real_conc$cobalt)
[1] 0.0 0.1
range(real_conc$chromium)
[1] 0.0000 0.0375

the mean errors and confidence intervals are sufficiently small that we have confidence in the results.
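Because the findeb() function described above solves an ordinary least-squares problem, we can check its output against R's built-in lm() function, which fits the same model when we regress the matrix of absorbance values on the matrix of concentrations without an intercept. This is only a verification sketch, not part of the original analysis, and it assumes the objects abs_stds, conc_stds, and eb_pred created above are available.

eb_lm = coef(lm(as.matrix(abs_stds) ~ 0 + as.matrix(conc_stds)))
# eb_lm is a 3 x 16 matrix of coefficients, one row per analyte and one column per
# wavelength; it should agree with eb_pred to within numerical precision
all.equal(unname(eb_lm), unname(eb_pred))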
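Another quick way to judge the quality of the calibration, again a sketch added here rather than part of the original analysis, is to plot the predicted concentrations against the actual concentrations for each analyte; points that fall on the line with a slope of 1 and an intercept of 0 indicate good agreement.

old.par = par(mfrow = c(1, 3))
for (i in 1:3) {
  plot(x = real_conc[ , i], y = pred_conc[ , i], pch = 19, col = "blue", xlab = "actual concentration", ylab = "predicted concentration", main = colnames(real_conc)[i])
  abline(a = 0, b = 1, lty = 2)   # reference line for perfect agreement
}
par(old.par)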
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.07%3A_Using_R_For_A_Multivariate_Regression.txt
The file rare_earths.csv contains data for the 17 rare earth elements, which consists of the lanthanides (La $\rightarrow$ Lu) plus Sc and Y. The data is from Horovitz, O.; Sârbu, C. "Characterization and Classification of Lanthanides by Multivariate-Analysis Methods," J. Chem. Educ. 2005, 82, 473-483. Each row in the file contains data for one element; the columns in the file provide values for the following 16 properties: • mass: atomic mass (g/mol) • density: ($\text{g/cm}^3$) • radius: atomic radius (pm) • en: electronegativity (Pauling scale) • ionenergy_1: first ionization energy (kJ/mol) • ionenergy_2: second ionization energy (kJ/mol) • ionenergy_3: third ionization energy (kJ/mol) • mp: melting point (K) • bp: boiling point (K) • h_fusion: enthalpy of fusion (kJ/mol) • h_atom: enthalpy of atomization (kJ/mol) • entropy: absolute entropy (J/mol•K) • sp_heat: specific heat (J/g•K) • resist: electrical resistivity ($\mu \Omega \text{ cm}$) • head_cond: heat conductivity ($\text{W } \text{cm }^{-1} \text{K}^{-1}$) • gibbs: Gibbs free energy of formation (kJ/mol) Two variables included in the original paper—the enthalpy of vaporization and the surface tension at the melting point—are omitted from this data set as they include missing values. Problems 1-3 draw upon the data in this file. 1. Perform a cluster analysis for the 17 elements in the file rare_earths.csv and comment on the results paying particular attention to the positions of Sc and Y, and the 15 lanthanides. You may wish to compare your results with those reported in the paper cited above. 2. Perform a cluster analysis for the 16 properties in the file rare_earths.csv and comment on the results. You may wish to compare your results with those reported in the paper cited above. 3. Complete a principal component analysis for the 17 elements in the file rare_earths.csv. Create two-dimensional scores plots that compare PC1 to PC2, PC1 to PC3, and PC2 to PC3, and a three-dimensional scores plot for the first three principal components. Comment on your results paying particular attention to the positions of Sc and Y, and the 15 lanthanides. You may wish to compare your results to those from Exercise 11.1 and the results reported in the paper cited above. Create two-dimensional loadings plots that compare PC1 to PC2, PC1 to PC3, and PC2 to PC3, and a three-dimensional loadings plot for the first three principal components. Comment on your results. You may wish to compare your results to those from Exercise 11.2 and the results reported in the paper cited above. 4. The files mvr_abs and mvr_conc contain absorbance values for 10 samples that contain one or more the analytes Co2+, Cu2+, and Ni2+ at five wavelengths, and the mM concentrations of the same analytes in the 10 samples. The data are from Dado, G.; Rosenthal, J. "Simultaneous Determination of Cobalt, Copper, and Nickel by Multivariate Linear Regression," J. Chem. Educ. 1990, 67, 797-800. Use the first seven samples as calibration standards and use a multivariate linear regression to determine the concentrations of the analytes in the last three samples. You may wish to compare your results with those reported in the paper cited above.
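As a possible starting point for Problem 1, offered only as a sketch because the exact layout of rare_earths.csv (for example, whether it includes a column of element names) is not described here, we can read the file, keep the numeric columns, and scale them before clustering; scaling matters because the 16 properties are reported in very different units.

rare_earths = read.csv("rare_earths.csv")
str(rare_earths)   # check which columns hold the element labels and the 16 properties

# scale the numeric properties so that no single property dominates the distances
re_scaled = scale(rare_earths[sapply(rare_earths, is.numeric)])
re_dist = dist(re_scaled)
re_clust = hclust(re_dist, method = "ward.D")
plot(re_clust, hang = -1, cex = 0.75)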
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.08%3A_Exercises.txt
Table $1$, at the bottom of this appendix, gives the proportion, P, of the area under a normal distribution curve that lies to the right of a deviation, z $z = \frac {X -\mu} {\sigma} \nonumber$ where X is the value for which the deviation is defined, $\mu$ is the distribution’s mean value and $\sigma$ is the distribution’s standard deviation. For example, the proportion of the area under a normal distribution to the right of a deviation of 0.04 is 0.4840 (see entry in red in the table), or 48.40% of the total area (see the area shaded blue in Figure $1$). The proportion of the area to the left of the deviation is 1 – P. For a deviation of 0.04, this is 1 – 0.4840, or 51.60%. Figure $1$. Normal distribution curve showing the area under a curve greater than a deviation of +0.04 (blue) and with a deviation less than –0.04 (green). When the deviation is negative—that is, when X is smaller than $\mu$—the value of z is negative. In this case, the values in the table give the area to the left of z. For example, if z is –0.04, then 48.40% of the area lies to the left of the deviation (see area shaded green in Figure $1$. To use the single-sided normal distribution table, sketch the normal distribution curve for your problem and shade the area that corresponds to your answer (for example, see Figure $2$, which is for Example 4.4.2). This divides the normal distribution curve into three regions: the area that corresponds to our answer (shown in blue), the area to the right of this, and the area to the left of this. Calculate the values of z for the limits of the area that corresponds to your answer. Use the table to find the areas to the right and to the left of these deviations. Subtract these values from 100% and, voilà, you have your answer. Table $1$: Values for a Single-Sided Normal Distribution z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.0 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641 0.1 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4365 0.4325 0.4286 0.4247 0.2 0.4207 0.4168 0.4129 0.4090 0.4502 0.4013 0.3974 0.3396 0.3897 0.3859 0.3 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483 0.4 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121 0.5 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776 0.6 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451 0.7 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148 0.8 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867 0.9 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611 1.0 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379 1.1 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170 1.2 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985 1.3 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823 1.4 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681 1.5 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559 1.6 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455 1.7 0.0466 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367 1.8 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294 1.9 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233 2.0 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183 2.1 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143 2.2 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 
0.0119 0.0116 0.0113 0.0110 2.3 0.0107 0.0104 0.0102 0.00964 0.00914 0.00866 2.4 0.00820 0.00776 0.00734 0.00695 0.00657 2.5 0.00621 0.00587 0.00554 0.00523 0.00494 2.6 0.00466 0.00440 0.00415 0.00391 0.00368 2.7 0.00347 0.00326 0.00307 0.00289 0.00272 2.8 0.00256 0.00240 0.00226 0.00212 0.00199 2.9 0.00187 0.00175 0.00164 0.00154 0.00144 3.0 0.00135 3.1 0.000968 3.2 0.000687 3.3 0.000483 3.4 0.000337 3.5 0.000233 3.6 0.000159 3.7 0.000108 3.8 0.0000723 3.9 0.0000481 4.0 0.0000317 12.02: Critical Values for t-Test Assuming we have calculated texp, there are two approaches to interpreting a t-test. In the first approach we choose a value of $\alpha$ for rejecting the null hypothesis and read the value of $t(\alpha,\nu)$ from the table below. If $t_\text{exp} > t(\alpha,\nu)$, we reject the null hypothesis and accept the alternative hypothesis. In the second approach, we find the row in the table below that corresponds to the available degrees of freedom and move across the row to find (or estimate) the a that corresponds to $t_\text{exp} = t(\alpha,\nu)$; this establishes largest value of $\alpha$ for which we can retain the null hypothesis. Finding, for example, that $\alpha$ is 0.10 means that we retain the null hypothesis at the 90% confidence level, but reject it at the 89% confidence level. The examples in this textbook use the first approach. Table $1$: Critical Values of t for the t-Test Values of t for… …a confidence interval of: 90% 95% 98% 99% …an $\alpha$ value of: 0.10 0.05 0.02 0.01 Degrees of Freedom 1 6.314 12.706 31.821 63.657 2 2.920 4.303 6.965 9.925 3 2.353 3.182 4.541 5.841 4 2.132 2.776 3.747 4.604 5 2.015 2.571 3.365 4.032 6 1.943 2.447 3.143 3.707 7 1.895 2.365 2.998 3.499 8 1.860 2.306 2.896 3.255 9 1.833 2.262 2.821 3.250 10 1.812 2.228 2.764 3.169 12 1.782 2.179 2.681 3.055 14 1.761 2.145 2.624 2.977 16 1.746 2.120 2.583 2.921 18 1.734 2.101 2.552 2.878 20 1.725 2.086 2.528 2.845 30 1.697 2.042 2.457 2.750 50 1.676 2.009 2.311 2.678 $\infty$ 1.645 1.960 2.326 2.576 The values in this table are for a two-tailed t-test. For a one-tailed test, divide the $\alpha$ values by 2. For example, the last column has an $\alpha$ value of 0.005 and a confidence interval of 99.5% when conducting a one-tailed t-test.
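Although this appendix presents the critical values as printed tables, R can generate the same numbers directly, which is convenient when a value falls between tabulated entries. The lines below are a short sketch using base R's pnorm() and qt() functions.

# area under the normal distribution curve to the right of z = 0.04;
# compare with the value of 0.4840 highlighted in the table above
pnorm(0.04, lower.tail = FALSE)

# two-tailed critical value of t for alpha = 0.05 and 10 degrees of freedom;
# compare with 2.228 in the table above
qt(0.05/2, df = 10, lower.tail = FALSE)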
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.01%3A_Single-Sided_Normal_Distribution.txt
The following tables provide values for $F(0.05, \nu_\text{num}, \nu_\text{denom})$ for one-tailed and for two-tailed F-tests. To use these tables, we first decide whether the situation calls for a one-tailed or a two-tailed analysis and calculate Fexp $F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber$ where $S_A^2$ is greater than $s_B^2$. Next, we compare Fexp to $F(0.05, \nu_\text{num}, \nu_\text{denom})$ and reject the null hypothesis if $F_\text{exp} > F(0.05, \nu_\text{num}, \nu_\text{denom})$. You may replace s with $\sigma$ if you know the population’s standard deviation. Table $1$: Critical Values of F for a One-Tailed F-Test $\frac {\nu_\text{num}\ce{->} }{\nu_{denom} \ce{ v }}$ 1 2 3 4 5 6 7 8 9 10 15 20 $\infty$ 1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 245.9 248.0 254.3 2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.43 19.45 19.50 3 10.13 9.552 9.277 9.117 9.013 8.941 8.887 8.845 8.812 8.786 8.703 8.660 8.526 4 7.709 6.994 6.591 6.388 6.256 6.163 6.094 6.041 5.999 5.964 5.858 5.803 5.628 5 6.608 5.786 5.409 5.192 5.050 4.950 4.876 4.818 4.722 4.753 4.619 4.558 4.365 6 5.987 5.143 4.757 4.534 4.387 4.284 4.207 4.147 4.099 4.060 3.938 3.874 3.669 7 5.591 4.737 4.347 4.120 3.972 3.866 3.787 3.726 3.677 3.637 3.511 3.445 3.230 8 5.318 4.459 4.066 3.838 3.687 3.581 3.500 3.438 3.388 3.347 3.218 3.150 2.928 9 5.117 4.256 3.863 3.633 3.482 3.374 3.293 3.230 3.179 3.137 3.006 2.936 2.707 10 4.965 4.103 3.708 3.478 3.326 3.217 3.135 3.072 3.020 2.978 2.845 2.774 2.538 11 4.844 3.982 3.587 3.257 3.204 3.095 3.012 2.948 2.896 2.854 2.719 2.646 2.404 12 4.747 3.885 3.490 3.259 3.106 2.996 2.913 2.849 2.796 2.753 2.617 2.544 2.296 13 4.667 3.806 3.411 3.179 3.025 2.915 2.832 2.767 2.714 2.671 2.533 2.459 2.206 14 4.600 3.739 3.344 3.112 2.958 2.848 2.764 2.699 2.646 2.602 2.463 2.388 2.131 15 4.534 3.682 3.287 3.056 2.901 2.790 2.707 2.641 2.588 2.544 2.403 2.328 2.066 16 4.494 3.634 3.239 3.007 2.852 2.741 2.657 2.591 2.538 2.494 2.352 2.276 2.010 17 4.451 3.592 3.197 2.965 2.810 2.699 2.614 2.548 2.494 2.450 2.308 2.230 1.960 18 4.414 3.555 3.160 2.928 2.773 2.661 2.577 2.510 2.456 2.412 2.269 2.191 1.917 19 4.381 3.552 3.127 2.895 2.740 2.628 2.544 2.477 2.423 2.378 2.234 2.155 1.878 20 4,351 3.493 3.098 2.866 2.711 2.599 2.514 2.447 2.393 2.348 2.203 2.124 1.843 $\infty$ 3.842 2.996 2.605 2.372 2.214 2.099 2.010 1.938 1.880 1.831 1.666 1.570 1.000 Table $2$: Critical Values of F for a Two-Tailed F-Test $\frac {\nu_\text{num}\ce{->} }{\nu_{denom} \ce{ v }}$ 1 2 3 4 5 6 7 8 9 10 15 20 $\infty$ 1 647.8 799.5 864.2 899.6 921.8 937.1 948.2 956.7 963.3 968.6 984.9 993.1 1018 2 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.43 39.45 39.50 3 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.25 14.17 13.90 4 12.22 10.65 9.979 9.605 9.364 9.197 9.074 8.980 8.905 8.444 8.657 8.560 8.257 5 10.01 8.434 7.764 7.388 7.146 6.978 6.853 6.757 6.681 6.619 6.428 6.329 6.015 6 8.813 7.260 6.599 6.227 5.988 5.820 5.695 5.600 5.523 5.461 5.269 5.168 4.894 7 8.073 6.542 5.890 5.523 5.285 5.119 4.995 4.899 4.823 4.761 4.568 4.467 4.142 8 7.571 6.059 5.416 5.053 4.817 4.652 4.529 4.433 4.357 4.259 4.101 3.999 3.670 9 7.209 5.715 5.078 4.718 4.484 4.320 4.197 4.102 4.026 3.964 3.769 3.667 3.333 10 6.937 5.456 4.826 4.468 4.236 4.072 3.950 3.855 3.779 3.717 3.522 3.419 3.080 11 6.724 5.256 4.630 4.275 4.044 3.881 3.759 3.644 3.588 3.526 3.330 3.226 2.883 12 6.544 5.096 4.474 4.121 3.891 3.728 3.607 3.512 3.436 3.374 3.177 3.073 2.725 13 6.414 4.965 
4.347 3.996 3.767 3.604 3.483 3.388 3.312 3.250 3.053 2.948 2.596 14 6.298 4.857 4.242 3.892 3.663 3.501 3.380 3.285 3.209 3.147 2.949 2.844 2.487 15 6.200 4.765 4.153 3.804 3.576 3.415 3.293 3.199 3.123 3.060 2.862 2.756 2.395 16 6.115 4.687 4.077 3.729 3.502 3.341 3.219 3.125 3.049 2.986 2.788 2.681 2.316 17 6.042 4.619 4.011 3.665 3.438 3.277 3.156 3.061 2.985 2.922 2.723 2.616 2.247 18 5.978 4.560 3.954 3.608 3.382 3.221 3.100 3.005 2.929 2.866 2.667 2.559 2.187 19 5.922 4.508 3.903 3.559 3.333 3.172 3.051 2.956 2.880 2.817 2.617 2.509 2.133 20 5.871 4.461 3.859 3.515 3.289 3.128 3.007 2.913 2.837 2.774 2.573 2.464 2.085 $\infty$ 5.024 3.689 3.116 2.786 2.567 2.408 2.288 2.192 2.114 2.048 1.833 1.708 1.000
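The critical values in the two tables above also are available directly from base R's qf() function; the following sketch, for example, reproduces the entries for 5 numerator and 10 denominator degrees of freedom.

# one-tailed critical value for alpha = 0.05; compare with 3.326 in Table 1
qf(0.05, df1 = 5, df2 = 10, lower.tail = FALSE)

# two-tailed critical value (alpha/2 = 0.025 in each tail); compare with 4.236 in Table 2
qf(0.05/2, df1 = 5, df2 = 10, lower.tail = FALSE)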
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.03%3A_Critical_Values_for_F-Test.txt
The following table provides critical values for $Q(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and $n$ is the number of samples in the data set. There are several versions of Dixon’s Q-Test, each of which calculates a value for Qij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for Q here are for a single outlier, Q10, where $Q_\text{exp} = Q_{10} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber$ The suspected outlier is rejected if Qexp is greater than $Q(\alpha, n)$. For additional information consult Rorabacher, D. B. “Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon’s ‘Q’ Parameter and Related Subrange Ratios at the 95% confidence Level,” Anal. Chem. 1991, 63, 139–146. Table $1$: Critical Values for Dixon's Q-Test $\frac {\alpha \ce{->}} {n \ce{ v }}$ 0.1 0.05 0.04 0.02 0.01 3 0.941 0.970 0.976 0.988 0.994 4 0.765 0.829 0.846 0.889 0.926 5 0.642 0.710 0.729 0.780 0.821 6 0.560 0.625 0.644 0.698 0.740 7 0.507 0.568 0.586 0.637 0.680 8 0.468 0.526 0.543 0.590 0.634 9 0.437 0.493 0.510 0.555 0.598 10 0.412 0.466 0.483 0.527 0.568 12.05: Critical Values for Grubb's Test The following table provides critical values for $G(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set. There are several versions of Grubb’s Test, each of which calculates a value for Gij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for G given here are for a single outlier, G10, where $G_\text{exp} = G_{10} = \frac {|X_{out} - \overline{X}|} {s} \nonumber$ The suspected outlier is rejected if Gexp is greater than $G(\alpha, n)$. Table $1$: Critical Values for the Grubb's Test $\frac {\alpha \ce{->}} {n \ce{ v }}$ 0.05 0.01 3 1.155 1.155 4 1.481 1.496 5 1.715 1.764 6 1.887 1.973 7 2.202 2.139 8 2.126 2.274 9 2.215 2.387 10 2.290 2.482 11 2.355 2.564 12 2.412 2.636 13 2.462 2.699 14 2.507 2.755 15 2.549 2.755 12.06 Critical Values for the Wilcoxson Signed Rank Test The following table provides critical values at $\alpha = 0.05$ for the Wilcoxson signed rank test where n is the number of samples in the data set. An entry of NA means the test cannot be applied. The null hypothesis of no difference between the samples can be rejected when the test statistic is less than or equal to the critical values for the number of samples. Table $1$: Critical Values for Wilcoxson Signed Rank Test with $\alpha = 0.05$ n one-tailed test two-tailed test 5 0 NA 6 2 0 7 3 2 8 5 3 9 8 5 10 10 8 11 13 10 12 17 13 13 21 17 14 25 21 15 30 25 16 35 30 17 41 35 18 47 40 19 53 46 20 60 52 12.07: Critical Values for the Wilcoxson Ranked Sum Test The following table provides critical values at $\alpha = 0.05$ for the Wilcoxson ranked sum test where $n_1$ and $n_2$ are the number of samples in the two sets of data where $n_1 \le n_2$. An entry of NA means the test cannot be applied. The null hypothesis of no difference between the samples can be rejected when the test statistic is less than or equal to the critical values for the number of samples. 
$1$: Critical Values for Wilcoxson Ranked Sum Test with $\alpha = 0.05$ $n_1$ $n_2$ one-tailed test two-tailed test 3 3 0 NA 3 4 0 NA 3 5 1 0 3 6 2 1 4 4 1 0 4 5 2 1 4 6 3 2 4 7 4 3 5 5 4 2 5 6 5 3 5 7 6 5 5 8 8 6 6 6 7 5 6 7 8 6 6 8 10 8 6 9 12 10 7 7 11 8 7 8 13 10 7 9 15 12 7 10 17 14
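To show how these critical value tables are used in practice, here is a brief sketch (the seven replicate results are made up for illustration only) that evaluates a suspected outlier with both Dixon's Q-test and Grubb's test; for the two Wilcoxon tests, R's built-in wilcox.test() function carries out the signed rank and rank sum calculations directly.

x = sort(c(7.31, 7.45, 7.48, 7.50, 7.53, 7.56, 7.58))   # hypothetical replicates; 7.31 is the suspect value

# Dixon's Q-test for the smallest value
q_exp = abs(x[1] - x[2]) / (max(x) - min(x))
q_exp   # approximately 0.52, which is less than Q(0.05, 7) = 0.568, so we retain the value

# Grubb's test for the same value
g_exp = abs(x[1] - mean(x)) / sd(x)
g_exp   # compare with G(0.05, 7) from the table above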
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.04%3A_Critical_Values_for_Dixon%27s_Q-Test.txt
A collection of resources on chemometrics and R. 13: Resources Books The following small collection of books provide a broad introduction to chemometric methods of analysis. The text by Miller and Miller is a good entry-level textbook suitable for the undergraduate curriculum. The text by Massart, et. al. is a particularly comprehensive resource. • Anderson, R. L. Practical Statistics for Analytical Chemists, Van Nostrand Reinhold: New York; 1987. • Beebe, K. R.; Pell, R. J.; Seasholtz, M. B. Chemometrics: A Practical Guide, Wiley, 1998. • Brereton, Richard G. Data Driven Extraction for Science, 2nd Edition, Wiley, 2018. • Graham, R. C. Data Analysis for the Chemical Sciences, VCH Publishers: New York; 1993. • Larose, D. T.; Larose, C. D. Discovering Knowledge in Data: An Introduction to Data Mining, Wiley, 2014. • Mark, H.; Workman, J. Statistics in Spectroscopy, Academic Press: Boston; 1991. • Massart, D. L.; Vandeginste, B. G. M.; Lewi, P. J.; Smeyers-Verbeke, J. Handbook of Chemometrics and Qualimetrics: Part A and Part B, Elsevier, 1997. • Miller, J. N.; Miller, J. C. Statistics and Chemometrics for Analytical Chemistry, 7th Edition, Pearson, 2018. • Schutt, R.; O'Neil, C. Doing Data Science: Straight Talk From the Frontline, O'Reilly, 2014. • Sharaf, M. H.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York; 1986. Although not resources on chemometrics, the following books provide a broad introduction to the statistical methods that underlie chemometrics. • Boslaugh, S. Statistics in a Nutshell: A Desktop Quick Reference, O'Reilly, 2013. • Larose, D. T.; Larose, C. D. Discovering Knowledge in Data: An Introduction to Data Mining, Wiley, 2014. • Schutt, R.; O'Neil, C. Doing Data Science: Straight Talk From the Frontline, O'Reilly, 2014. • van Belle, G. Statistical Rules of Thumb, Wiley, 2008. The following books provide more specialized coverage of topics relevant to chemometrics. • Mason, R. L.; Gunst, R. F.; Hess, J. L. Statistical Design and Analysis of Experiments; Wiley: New York, 1989. • Myers, R. H.; Montgomery, D. C. Response Surface Methodology, Wiley, 2002. The following books provide guidance on the visualization of data, both in figures and in tables. • Bertin, J. Semiology of Graphics, esri press, 1983. • Few, S. Now You See It, Analytics Press, 2009. • Few, S. Show Me the Numbers, Analytics Press, 2012. • Few, S. Information Dashboard Design, Analytics Press, 2013. • Robins, N. B. Creating More Effective Graphs, Charthouse, 2013. • Tufte, E. R. Envisioning Information, Graphics Press, 1990. • Tufte, E. R. Visual Explanations Graphics Press, 1997. • Tufte, E. R. The Visual Display of Quantitative Information, Graphics Press, 2001. • Tufte, E. R. Beautiful Evidence, Graphics Press, 2006. The following textbook provides a broad introduction to analytical chemistry, including sections on chemometric topics. • Harvey, D. T. Analytical Chemistry 2.1 (available here and here). Articles The following paper provides a general theory of types of measurements. • Stevens, S. S. "On the Theory of Scales of Measurements," Science, 1946, 103, 677-680. The detection of outliers, particularly when working with a small number of samples, is discussed in the following papers. • Analytical Methods Committee “Robust Statistics—How Not To Reject Outliers Part 1. Basic Concepts,” Analyst 1989, 114, 1693–1697. • Analytical Methods Committee “Robust Statistics—How Not to Reject Outliers Part 2. Inter-laboratory Trials,” Analyst 1989, 114, 1699–1702. 
• Analytical Methods Committee “Rogues and Suspects: How to Tackle Outliers,” AMCTB 39, 2009. • Analytical Methods Committee “Robust statistics: a method of coping with outliers,” AMCTB 6, 2001. • Analytical Methods Committee “Using the Grubbs and Cochran tests to identify outliers,” Anal. Methods, 2015, 7, 7948–7950. • Efstathiou, C. “Stochastic Calculation of Critical Q-Test Values for the Detection of Outliers in Measurements,” J. Chem. Educ. 1992, 69, 773–736. • Efstathiou, C. “Estimation of type 1 error probability from experimental Dixon’s Q parameter on testing for outliers within small data sets,” Talanta 2006, 69, 1068–1071. • Kelly, P. C. “Outlier Detection in Collaborative Studies,” Anal. Chem. 1990, 73, 58–64. • Mitschele, J. “Small Sample Statistics,” J. Chem. Educ. 1991, 68, 470–473. The following papers provide additional information on error and uncertainty. • Analytical Methods Committee “Optimizing your uncertainty—a case study,” AMCTB 32, 2008. • Analytical Methods Committee “Dark Uncertainty,” AMCTB 53, 2012. • Analytical Methods Committee “What causes most errors in chemical analysis?” AMCTB 56, 2013. • Andraos, J. “On the Propagation of Statistical Errors for a Function of Several Variables,” J. Chem. Educ. 1996, 73, 150–154. • Donato, H.; Metz, C. “A Direct Method for the Propagation of Error Using a Personal Computer Spreadsheet Program,” J. Chem. Educ. 1988, 65, 867–868. • Gordon, R.; Pickering, M.; Bisson, D. “Uncertainty Analysis by the ‘Worst Case’ Method,” J. Chem. Educ. 1984, 61, 780–781. • Guare, C. J. “Error, Precision and Uncertainty,” J. Chem. Educ. 1991, 68, 649–652. • Guedens, W. J.; Yperman, J.; Mullens, J.; Van Poucke, L. C.; Pauwels, E. J. “Statistical Analysis of Errors: A Practical Approach for an Undergraduate Chemistry Lab Part 1. The Concept,” J. Chem. Educ. 1993, 70, 776–779 • Guedens, W. J.; Yperman, J.; Mullens, J.; Van Poucke, L. C.; Pauwels, E. J. “Statistical Analysis of Errors: A Practical Approach for an Undergraduate Chemistry Lab Part 2. Some Worked Examples,” J. Chem. Educ. 1993, 70, 838–841. • Heydorn, K. “Detecting Errors in Micro and Trace Analysis by Using Statistics,” Anal. Chim. Acta 1993, 283, 494–499. • Hund, E.; Massart, D. L.; Smeyers-Verbeke, J. “Operational definitions of uncertainty,” Trends Anal. Chem. 2001, 20, 394–406. • Kragten, J. “Calculating Standard Deviations and Confidence Intervals with a Universally Applicable Spreadsheet Technique,” Analyst 1994, 119, 2161–2165. • Taylor, B. N.; Kuyatt, C. E. “Guidelines for Evaluating and Expressing the Uncertainty of NIST Mea- surement Results,” NIST Technical Note 1297, 1994. • Van Bramer, S. E. “A Brief Introduction to the Gaussian Distribution, Sample Statistics, and the Student’s t Statistic,” J. Chem. Educ. 2007, 84, 1231. • Yates, P. C. “A Simple Method for Illustrating Uncertainty Analysis,” J. Chem. Educ. 2001, 78, 770–771. The following articles provide thoughts on the limitations of statistical analysis based on significance testing. • Analytical Methods Committee “Significance, importance, and power,” AMCTB 38, 2009. • Analytical Methods Committee “An introduction to non-parametric statistics,” AMCTB 57, 2013. • Berger, J. O.; Berry, D. A. “Statistical Analysis and the Illusion of Objectivity,” Am. Sci. 1988, 76, 159–165. • Kryzwinski, M. “Importance of being uncertain,” Nat. Methods 2013, 10, 809–810. • Kryzwinski, M. “Significance, P values, and t-tests,” Nat. Methods 2013, 10, 1041–1042. • Kryzwinski, M. “Power and sample size,” Nat. 
Methods 2013, 10, 1139–1140. • Leek, J. T.; Peng, R. D. “What is the question?,” Science 2015, 347, 1314–1315. The following papers provide insight into organizing data in spreadsheets and visualizing data. • Analytical Methods Committee “Representing data distributions with kernel density estimates,” AMC Technical Brief, March 2006. • Broman, K. W.; Woo, K. H. "Data Organiztion in Spreadsheets," The American Statistician, 2018, 72, 2-10. • Frigge, M.; Hoaglin, D. C.; Iglewicz, B. “Some Implementations of the Boxplot,” The American Statistician 1989, 43, 50–54. • Midway, S. R. "Principles of Effective Data Visualizations," PATTER, 2020, 1(9). • Schwabish, J. A. "Ten Guidelines for Better Tables," J. Benefit Cost Anal. 2020, 11, 151-178. 13.2: R Resources Books The following books, which I have found useful, either provide a broad introduction to the R programming language, or a more targeted coverage of a particular application. The texts published by O'Reilly have on-line versions made available for free; there entries here provide links to the on-line versions. • Chambers, J. M. Software for Data Analysis: Programming with R, Springer: New York, 2008. • Chang, W. R Graphics Cookbook, O'Reilly, 2013. • Gardner, M. Beginning R: The Statistical Programming Language, Wiley, 2012. • Gillespie, C.; Lovelace, R. Efficient R Programming, O'Reilly, 2020. • Grolemund, G. Hands-On Programming with R, O'Reilly, 2014. • Horton, N. J.; Kleinman, K. Using R and RStudio for Data Management, Statistical Analysis, and Graphics, 2nd Edition, CRC Press, 2015. • James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning with Applications in R, Springer, 2013. • Lander, J. P. R for Everyone: Advanced Analytics and Graphics, Addison Wesley, 2014. • Kabacoff, Robert I. R in Action: Data Analysis and Graphics with R, Manning, 2011. • Maindonald, J.; Braun, J. Data Analysis and Graphics Using R, Cambridge University Press: Cambridge, UK, 2003. • Matloff, N. The Art of R Programming, No Starch Press, 2011. • Sarkar, D. Lattice: Multivariate Data Visualization With R, Springer: New York, 2008. • Vaughn, S. Scientific Inference, Cambridge, 2013. • Wickham, H. ggplot2, Springer, 2009. • Wickham, H.; Grolemund, G. R for Data Science, O'Reilly, 2017. Articles • Doi, J.; Potter, G.; Wong, J. "Web Application Teaching Tools for Statistics Using R and Shiny", Technology Innovations in Statistics Education, 2016, 9.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/13%3A_Resources/13.1%3A_Chemometric_Resources.txt
Why water boils at 100ºC and methane at -161ºC; why blood is red and grass is green; why diamond is hard and wax is soft; why graphite writes on paper and silk is strong; why glaciers flow and iron gets hard when you hammer it; how muscles contract; how sunlight makes plants grow and how living organisms have been able to evolve into ever more complex forms…? The answers to all these problems have come from structural analysis. Max Perutz, July 1996 (Churchill College, Cambridge) With the words pronounced by the Nobel laureate Max Perutz we open these pages (*), a continuing work in progress, intended to guide the interested reader into the fascinating world of Crystallography, which forms part of the scientific knowledge developed by many scientists over many years. This allows us to explain what crystals are, what molecules, hormones, nucleic acids, enzymes, and proteins are, along with their properties and how can we understand their function in a chemical reaction, in a test tube, or inside a living being. The discovery of X-rays in the late 19th century completely transformed the old field of Crystallography, which previously studied the morphology of minerals. The interaction of X-rays with crystals, discovered in the early 20th century, showed us that X-rays are electromagnetic waves with a wavelength of about 10-10 meters and that the internal structure of crystals was regular, arranged in three-dimensional networks, with separations of that order. Since then, Crystallography has become a basic discipline of many branches of Science and particularly of Physics, Chemistry of condensed matter, Biology and Biomedicine. Structural knowledge obtained by Crystallography allows us to produce materials with predesigned properties, from catalyst for a chemical reaction of industrial interest, up to toothpaste, vitro ceramic plates, extremely hard materials for surgery use, or certain aircraft components, just to give some examples of small, or medium sized atomic or molecular materials. Moreover, as biomolecules are the machines of life, like mechanical machines with moving parts, they modify their structure in the course of performing their respective tasks. It would also be extremely illuminating to follow these modifications and see the motion of the moving parts in a movie. To make a film of a moving object, it is necessary to take many snapshots. Faster movement requires a shorter exposure time and a greater number of snapshots to avoid blurring the pictures. This is where the ultrashort duration of the FEL (free electron laser) pulses will ensure sharp, non-blurred pictures of very fast processes (European XFEL or CXFEL). We may suggest you to start getting an overview about Crystallography, or looking at some interesting video clips collected by the International Union of Crystallography. Some of them can directly be reached through the following links: In any case, we suggest you to get a previous overview about the meaning of Crystallography, and if you maintain your interest go deeper into the remaining pages that are shown in the menu on the left (if you don't see the left menu, click here). Enjoy it! (*) We endeavor to assemble these pages and offer them to the interested reader, but obviously we are not immune to errors, inconsistencies or omissions. We are very grateful to several readers who have helped us to correct some previously undetected small errors or that have improved the wording of certain parts of the text. 
For anything that needs further attention, please, let us know through Martín Martínez Ripoll. These pages were announced by the International Union of Crystallography (IUCr), have been selected as one of the educational web sites and resources of interest to learn crystallography, offered as such in the commemorative web for the International Year of Crystallography, and suggested as the educational website in the brochure prepared by UNESCO for the crystal growing competition for Associated Schools (even in subsequent calls of this competition. The Cambridge Crystallographic Data Centre also offers this website through its Database of Educational Crystallographic Online Resources (DECOR). Martín Martínez Ripoll (1946- ) and Félix Hernández Cano (1941-2005+) were coauthors of a first version of these pages in the early 1990's. Later, in 2002 they produced a PowerPoint presentation dedicated to draw students' attention to the enigmatic beauty of the crystallographic world... This file, called XTAL RUNNER (totally virus free, although in Spanish) can be obtained through this link. If you understand Spanish we also offer you the possibility of reading a short general article of these authors published in 2003, entitled Cristalografía: Transgrediendo los Límites. Today we ask ourselves, where are those glory days gone? Some relevant hints: • This website is designed by combining three visible areas in the same window: a header and a menu on the left (which always remain visible), and a central area with the information obtained. It is what we can call full-screen mode, suitable for desktop computers and tablets. • The full-screen mode may not be suitable for mobile, in which case we suggest using the central-screen mode, that is, the mode using the central area only, and starting the session with the Table of Contents. • In both modes, a small square logo appears in the upper right corner of the central screen that links to the Table of Contents. • All the links that appear in the menu on the left (in full screen mode), or in the Table of Contents (in any of the modes) lead to internal pages that are always displayed on the same window. The remaining links, which necessarily refer to external pages, will always be displayed in a "new window". Going back to a previous page can be achieved using the "Back " link in the top left of the header (full screen mode), or through the browser's own strategy. • From time to time we incorporate some novelties or small corrections into these pages, so occasionally we recommended to reload these pages in your browser, or to clear the navigator caché. This will avoid to see the previously existing pages stored on your computer's caché. • Some companies offer documents directly extracted from these web pages and you have to pay for them. Please, do not participate in this fraud! All the material here presented is freely available to you, although for your personal use only, as it is shown below under the copyright condition.
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.0%3A_Introduction.txt
In the context of this chapter, you will also be invited to visit these sections... We all have heard about natural minerals and crystals. We find them daily without entering a museum. A rock and a mountain are made up of minerals, as crystalline as a lump of sugar, a bit of porcelain or a gold ring. However, only occasionally is the size of a crystal large enough to draw our attention, as is the case of these beautiful mineral examples of: Diamond (pure carbon) - Quartz (silicon dioxide) - Scapolite (aluminum silicate) - Pyrite (iron sulfide) Several of these images are property of Amethyst Galleries, Inc. Other excellent images of minerals can be found through this link. Although you can continue reading these pages without any special difficulty, probably you would like to know some aspects about the historical development of our understanding of the crystals. For these readers we offer some further notes that can be found through this link. The ancient Greeks identified quartz with the word crystal (κρύσταλλος, crustallos, or phonetically kroos'-tal-los = cold + drop), ie, very cold icicles of extraordinary hardness. But the formation of crystals is not a unique property of minerals; they are also found (but not necessarily in a natural manner) in the so-called organic compounds, and even in nucleic acids, in proteins and in viruses... A crystal is a material whose constituents, such as atoms, molecules or ions, are arranged in a highly ordered microscopic structure. These constituents are held together by interatomic forces (chemical bonds) such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. The crystalline state of matter is the state with the highest order, ie, with very high internal correlations and at the greatest distance range. This is reflected in their properties: anisotropic and discontinuous. Crystals usually appear as unadulterated, homogenous and with well-defined geometric shapes (habits) when they are well-formed. However, as we say in Spanish, "the habit does not make the monk" (clothes do not make the man) and their external morphology is not sufficient to evaluate the crystallinity of a material. The movie below shows the process of crystal growth of lysozyme (a very stable enzyme) from an aqueous medium. The duration of the real process, that takes a few seconds on your screen, corresponds approximately to 30 minutes. The original movie was found on an old website offered by George M. Sheldrik. The figure on the left shows a representation of the faces of a given crystal. If your browser allows the Java Runtime, clicking on the image will open a new window and you will be able to turn this object. If you do not have this application, you can still observe the model rotation in continuous mode from this link. Other Java pop-ups of faces and forms (habits) for ideal crystals can be obtained through this link. So, we ask ourselves, what is unique about crystals which distinguishes them from other types of materials? The so-called microscopic crystal structure is characterized by groups of ions, atoms or molecules arranged in terms of some periodic repetition model, and this concept (periodicity) is easy to understand if we look at the drawings in an carpet, in a mosaic, or a military parade... Repeated motifs in a carpet Repeated motifs in a mosaic Repeated motifs in a military parade If we look carefully at these drawings, we will discover that there is always a fraction of them that is repeated. 
In crystals, the atoms, ions or molecules are packed in such a way that they give rise to "motifs" (a given set or unit) that are repeated every 5 Angstrom, up to the hundreds of Angstrom (1 Angstrom = 10-8 cm), and this repetition, in three dimensions, is known as the crystal lattice. The motif or unit that is repeated, by orderly shifts in three dimensions, produces the network (the whole crystal) and we call it the elementary cell or unit cell. The content of the unit being repeated (atoms, molecules, ions) can also be drawn as a point (the reticular point) that represents every constituent of the motif. For example, each soldier in the figure above could be a reticular point. But there are occasions where the repetition is broken, or it is not exact, and this feature is precisely what distinguishes a crystal from glass, or in general, from materials called amorphous (disordered or poorly ordered)... Planar atomic model of an ordered material (crystal) Planar atomic model of glass (an amorphous material) However, matter is not entirely ordered or disordered (crystalline or non-crystalline) and so we can find a continuous degradation of the order (crystallinity degree) in materials, which goes from the perfectly ordered (crystalline) to the completely disordered (amorphous). This gradual loss of order which is present in materials is equivalent to what we see in the small details of the following photograph of gymnastic training, which is somewhat ordered, but there are some people wearing pants, other wearing skirts, some in different positions or slightly out of line... In the crystal structure (ordered) of inorganic materials, the repetitive units (or motifs) are atoms or ions, which are linked together in such a way that we normally do not distinguish isolated units and hence their stability and hardness (ionic crystals, mainly)... Crystal structure of an inorganic material: α-quartz Where we clearly distinguish isolated units is in the case of the so-called organic materials, where the concept of the isolated entity (molecule) appears. Molecules are made up of atoms linked together. However, the links between the molecules within the crystal are very weak (molecular crystals). Thus, they are generally softer and more unstable materials than the inorganic ones. Crystal structure of an organic material: Cinnamamide Protein crystals also contain molecular units (molecules), as in the organic materials, but much larger. The type of forces that bind these molecules are also similar, but their packing in the crystals leaves many holes that are filled with water molecules (not necessarily ordered) and hence their extreme instability... Crystal structure of a protein: AtHal3. The molecular packing produces very large holes The different packing modes in crystals lead to the so-called polymorphic phases (allotropic phases of the elements) which confer different properties to these crystals (to these materials). For example, we all know the different appearances and properties of the chemical element carbon, which is present in Nature in two different crystalline forms, diamond and graphite: Left: Diamond (pure carbon) Right: Graphite (pure carbon) Graphite is black, soft and an excellent lubricant, suggesting that its atoms must be distributed (packed) in such a way as to explain these properties. However, diamonds are transparent and very hard, so that we can expect their atoms very firmly linked. Indeed, their sub-microscopic structures (at atomic level) show us their differences ... 
Left: Diamond, with a very compact structure Right: Graphite, showing its layered crystal structure

In the diamond structure, each carbon atom is linked to four other ones in the form of a very compact three-dimensional network (covalent crystals), hence its extreme hardness and its property as an electric insulator. However, in the graphite structure, the carbon atoms are arranged in parallel layers that are much more separated from each other than the atoms within a single layer. Due to these weak links between the atomic layers of graphite, the layers can slide without much effort, and hence graphite's suitability as a lubricant, its use in pencils and as an electrical conductor. And speaking about conductors... The metal atoms in the metallic crystals are structured in such a way that some delocalized electrons give cohesion to the crystals and are responsible for their electrical properties.

Before ending this chapter let us introduce a few words about the so-called quasicrystals... A quasicrystal is an "ordered" structure, but not perfectly periodic as the crystals are. The repeating patterns (sets of atoms, etc.) of the quasicrystalline materials can fill all available space continuously, but they do not display an exact repetition by translation. And, as far as symmetry is concerned, while crystals (according to the laws of classical crystallography) can display axes of rotation of order 2, 3, 4 and 6 only, quasicrystals show other rotational symmetry axes, as for example of order 10. In this website we will not pay attention to the case of quasicrystals. Therefore, if you are interested in them, please go to this link, where Steffen Weber, in a relatively simple way, describes these types of materials from the theoretical point of view, and where some additional sources of information can also be found. Advanced readers should also consult the site offered by Paul J. Steinhardt at Princeton University. The Nobel Prize in Chemistry 2011 was awarded to Daniel Shechtman for the discovery of quasicrystals, reported in 1984.

There are obviously many questions that the reader will ask, having come this far, and one of the most obvious ones is: how do we know the structure of crystals? This question, and others, will be answered in the following chapters and therefore we encourage you to consult them...
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.01%3A_New_Page.txt
An unexpected result! Discovery of X-rays in 1895. (Illustration by Alejandro Martínez de Andrés, CSIC 2014)

By the end of the 19th century, in 1895, Wilhelm Conrad Röntgen (1845-1923), a German scientist from the University of Würzburg, discovered a form of radiation (of unknown nature at that time, and hence the name X-rays) which had the property of penetrating opaque bodies. In the first paragraph of his communication sent to the Society of Physics and Medicine of Würzburg (1895) he reports the discovery as follows: After producing an electrical discharge with a Ruhmkorff's coil through a Hittorf's vacuum tube, or a sufficiently evacuated Lenard, Crookes or similar apparatus, covered with a fairly tight-fitting jacket made of thin, black paperboard, one sees that a cardboard sheet coated with a layer of barium platinocyanide, located in the vicinity of the apparatus, lights up brightly in the completely darkened room, regardless of whether the coated side is pointing towards the tube or not. This fluorescence occurs up to 2 meters away from the apparatus. One can easily be convinced that the cause of the fluorescence proceeds from the discharge apparatus and not from any other part of the line.

To learn about some aspects of the discovery, as well as about personal aspects of Röntgen, see also the chapter dedicated to some biographical outlines. But if you can read Spanish, there is an extensive chapter dedicated to both the historical details around Röntgen and his discovery.

• Left: Wilhelm Conrad Röntgen (1845-1923), around 1895, with an X-ray photograph of his wife's hand showing her wedding ring. For his discovery Röntgen won the Nobel Prize in Physics in 1901.
• Right: Typical hospital radiology equipment

X-rays are invisible to our eyes but they can produce visible images if we use photographic plates or special detectors...

Left: Radiographic image of a hand Right: Radiographic image of a monkey
Left: Radiographic image of a well-done weld Right: Poorly-done weld (black line)
A painting and its X-ray photograph showing two superimposed paintings on the same canvas (Charles II of Spain, by Carreño de Miranda, Museo del Prado, Madrid)

We all know several applications of X-rays in the medical field: angiography (the study of blood vessels) or the so-called CT scans, but the use of X-rays has also been extended to detect failures in metals or for the analysis of paintings. Many years passed from the discovery of X-rays in 1895 until that finding produced a revolution in the fields of Physics, Chemistry and Biology. The potential applications in these areas came in 1912, indirectly, from the hand of Max von Laue (1879-1960), professor at the Universities of Munich, Zurich, Frankfurt, Würzburg and finally Berlin. Paul Peter Ewald (1888-1985) got his friend, Max von Laue, interested in his own experiments on the interference between radiations with large wavelengths (practically visible light) on a "crystalline" model based on resonators (note that at that time the question of wave-particle duality was also under discussion). The idea then came to Laue that the much shorter electromagnetic rays, which X-rays were supposed to be, would cause some kind of diffraction or interference phenomena in a medium, and that a crystal could provide this medium.
Max von Laue demonstrated the nature of this new radiation by putting crystals of copper sulfate, and of the mineral zinc blende, in front of an X-ray source, obtaining confirmation of his hypothesis and demonstrating both the undulatory (wave) nature of this radiation and the periodic nature of crystals. For these findings he received the Nobel Prize in Physics in 1914. However, those who really benefited from the discovery of the Germans were the British Braggs (father and son), William H. Bragg (1862-1942) and William L. Bragg (1890-1971), who together in 1915 received the Nobel Prize in Physics for demonstrating the usefulness of the phenomenon discovered by von Laue for obtaining the internal structure of crystals - but all this will be the subject of later chapters. This chapter will deal exclusively with the nature and production of X-rays...

X-rays are electromagnetic radiations, of the same nature as visible light, ultraviolet or infrared radiations, and the only thing that distinguishes them from other electromagnetic radiations is their wavelength, which is about 10⁻¹⁰ m (equivalent to the unit of length known as one Angstrom).

Graphic representation of an electromagnetic wave, showing its associated electric (E) and magnetic (H) fields, moving forwards at the speed of light.
The continuous spectrum of visible light (wavelength decreases from red to violet)

Excellent information on the electromagnetic spectrum can be found in some pages offered by NASA. The reader can also learn about X-rays and their applications in Medical Radiography and in the pages of The X-Ray Century.

ν(Hz) · λ(m) = 3 × 10⁸ m·Hz
E(J) = h(J/Hz) · ν(Hz) = k(J/K per molecule) · T(K)
h = 6.6 × 10⁻³⁴ J/Hz; k = 1.4 × 10⁻²³ J/K per molecule; 1 eV = 1.6 × 10⁻¹⁹ J
Figure taken from the Berkeley Lab

The most interesting X-rays for Crystallography are those having a wavelength close to 1 Angstrom (the hard X-rays in the diagram above), which is a distance very close to the interatomic distances occurring in molecules and crystals. This type of X-rays has a frequency of approximately 3 million THz (terahertz) and an energy of 12.4 keV (kilo-electron-volts), which in turn corresponds to a temperature of about 144 million degrees Celsius. These wavelengths are produced in Crystallography laboratories and in large synchrotrons such as the ESRF, ALBA, Diamond, DESY, ...

X-ray generator in a Crystallography laboratory. The goniometric and detection systems are shown behind the X-ray tube.
Aerial photograph of the synchrotron at the ESRF in Grenoble (France). Note its circular geometry

The equipment used in crystallographic laboratories to produce X-rays is relatively simple. They have a high voltage generator (50,000 volts) that brings high voltage to the so-called X-ray tube, where the radiation is actually produced. You could also take a look at the web page "The Cathode Ray Tube site".

Early X-ray tube (image taken from The Cathode Ray Tube site)
Conventional X-ray tubes used for crystallographic studies during the 20th century
Static sketch and animation of the X-ray production on a conventional X-ray tube

Those 50 kV are supplied as a potential difference (high voltage) between an incandescent filament (through which a low voltage electrical current of intensity i passes: around 5 A at 12 V) and a pure metal anode (usually copper or molybdenum). This produces an electrical current (of free electrons) between them of about 30 mA.
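To make these numbers easy to reproduce, here is a minimal Python sketch that applies the relations listed above (ν·λ = c, E = h·ν, E ≈ k·T) to a wavelength of 1 Angstrom. The rounded constants are the ones quoted above, and the script is only meant as an illustration of the conversions.

```python
# Relations quoted above: nu * lambda = c,  E = h * nu,  E ~ k * T
# Rounded constants, as in the text (SI units).
c = 3.0e8        # speed of light, m/s
h = 6.6e-34      # Planck constant, J/Hz
k = 1.4e-23      # Boltzmann constant, J/K
eV = 1.6e-19     # one electron-volt, in joules

wavelength = 1.0e-10          # 1 Angstrom, in meters
nu = c / wavelength           # frequency, Hz
E = h * nu                    # photon energy, J
T = E / k                     # equivalent temperature, K

print(f"frequency   : {nu / 1e12:.2e} THz")      # about 3 million THz
print(f"energy      : {E / eV / 1000:.1f} keV")  # about 12.4 keV
print(f"temperature : {T:.2e} K")                # about 1.4e8 K
```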
From the incandescent filament (negatively charged) the free electrons jump to the anode (positively charged) causing (in the pure metal) a reorganization in its electronic energy levels. This is a process that generates a lot of heat, so that X-ray tubes must be very well chilled. An alternative to conventional X-ray tubes are the rotating anode generators, in which the anode in the form of a cylinder is maintained in a continuous rotation, so that the incidence of electrons is distributed over its cylindrical surface and thus a higher power can be obtained. Left: Rotating anode generator Right: Rotating anode of polished copper (images taken from Bruker-AXS) The so-called "characteristic X-rays" are produced according to the following scheme: a) Energy state of electrons in an atom of the anode that is going to be reached by an electron from the filament. b) Energy state of the same electrons after impact with the electron from the filament. The incident electron bounces and ejects an electron from the anode, producing the corresponding hole. c) An electron of a higher energy level falls and occupies the hole. This energy jump, perfectly defined, generates the so-called characteristic X-rays of the anodic material. Left: In an X-ray tube the electrons emitted from the cathode are accelerated towards the metal target anode by an accelerating voltage of typically 50 kV. The high energy electrons interact with the atoms in the metal target. Sometimes the electron comes very close to a nucleus in the target and is deviated by the electromagnetic interaction. In this process, which is called bremsstrahlung (braking radiation), the electron loses much energy and a photon (X-ray) is emitted. The energy of the emitted photon can take any value up to a maximum corresponding to the energy of the incident electron. Right: The high energy electron can also cause an electron close to the nucleus in a metal atom to be displaced. This vacancy is filled by an electron further out from the nucleus. The well defined difference in binding energy, characteristic of the material, is emitted as a monoenergetic photon. When detected this X-ray photon gives rise to a characteristic X-ray line in the energy spectrum. Animations taken from Nobelprize.org. Apart from the developments made on the new synchrotron sources, there still exist several attempts to optimize efficiency and power of the "in-house" X-ray sources, as the ones based on the microfocus technology, that is, high brightness sources that additionally use very stable optics mounted to the tube housing, or those based on the use of a liquid metal as anode... Left: New microfocus X-ray tube. Image taken from Incoatec Right: New development for an of X-ray source based on liquid metal anodes. Taken from Excillum. There is an animation showing this technology The energetic restoration of the excited anodic electron is carried out with an X-ray emission with a frequency that corresponds exactly to the specific energy gap (quantum) that the electron needs to return to its initial state. These X-rays therefore show a specific wavelength and are known as characteristic wavelengths of the anode. The most important characteristic wavelengths in X-ray Crystallography are the so-called K-alpha lines (), produced by the electrons falling to the innermost layer of the atom (higher binding energy). 
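As a small numerical illustration of the two emission mechanisms just described, the sketch below converts a characteristic emission energy into its wavelength (λ in Angstrom ≈ 12398 / E in eV) and computes the shortest bremsstrahlung wavelength allowed by a given tube voltage, since the emitted photon energy cannot exceed the energy of the incident electron. The Cu and Mo K-alpha energies used in the calls are approximate literature values, quoted here only to exercise the conversion.

```python
# lambda [Angstrom] = h*c / E  ~  12398 / E[eV]
def energy_to_wavelength(E_eV):
    """X-ray wavelength (Angstrom) for a photon energy given in eV."""
    return 12398.4 / E_eV

def bremsstrahlung_cutoff(kilovolts):
    """Shortest wavelength (Angstrom) of the continuous spectrum for a tube
    operated at the given voltage: the photon energy cannot exceed e*V."""
    return 12398.4 / (kilovolts * 1000.0)

print(energy_to_wavelength(8048))    # ~1.54 Angstrom (approx. Cu K-alpha)
print(energy_to_wavelength(17479))   # ~0.71 Angstrom (approx. Mo K-alpha)
print(bremsstrahlung_cutoff(50))     # ~0.25 Angstrom for a 50 kV tube
```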
However, in addition to these specific wavelengths, a continuous range of wavelengths, very close to each other, is also produced, known as the continuous radiation, which is due to the braking of the incident electrons when they hit the metal target.

Distribution of X-ray wavelengths produced in a conventional X-ray tube where the anode material is copper (Cu), molybdenum (Mo), chromium (Cr) or tungsten (W). Over the so-called continuous spectrum, the characteristic K-alpha (Kα) and K-beta (Kβ) lines are shown. The starting point of the continuous spectrum appears at a wavelength which is approximately 12.4 / V (in Angstrom), where V represents the voltage (in kV) applied between anode and filament. For a given voltage between the anode and filament, only the characteristic wavelengths of molybdenum are obtained (figure on the left).

In synchrotrons, the generation of X-rays is quite different. A synchrotron facility contains a large ring (on the order of kilometers), where electrons move at a very high speed in straight channels that occasionally break to match the curvature of the ring. These electrons are made to change direction to go from one channel to another using magnetic fields of high energy. It is at this moment, when the electrons change their direction, that they emit a very high energy radiation known as synchrotron radiation. This radiation is composed of a continuum of wavelengths ranging from microwaves to the so-called hard X-rays. The appearance of synchrotrons is very similar to that shown in the following schemes:

A synchrotron scheme. The linear accelerator (Linac) and the circular accelerator (Booster) are seen in the center, surrounded by the outer storage ring. The emitted X-rays are directed to the beamlines.
Left: General sketch of a synchrotron. The central circle is where the charged particles are accelerated (linac & booster). The outer circle is the storage ring, formed by straight segments, at the end of which the experimental stations are installed. Right: Outline of the junction of two segments of the storage ring of a synchrotron. X-rays appear due to the change of direction of the charged particles.

The interested reader can access a demonstration on the operation of a synchrotron ring through this link, or see the same animation in a larger size through this other link.

Outline of the point between two straight segments in the storage ring of a synchrotron. Image taken from the ESRF
Details of how X-rays are produced in a synchrotron in the curvature of the electrons' trajectory inside the storage ring. Image taken from the ESRF

The X-rays obtained in the synchrotrons have two clear advantages for crystallography: 1. the wavelengths can be tuned at will, and 2. their brilliance is at least 10²¹ times higher than that obtained with a conventional X-ray tube (see the image below). The brilliance of X-ray sources: conventional X-ray tubes, synchrotrons and the future XFEL. Image taken from the ESRF. The following image shows an outline of an experimental station of a synchrotron: a) the optics hutch, where X-rays are filtered and focused using curved mirrors and monochromators; b) the experimental hutch, where the goniometer, sample and detector are located and where the diffraction experiment is done; and c) the control cabin, where the experiment is monitored and, if required, also evaluated.
Outline of an experimental station in a synchrotron Lightsources.org contains news and science highlights from each light source facility, as well as photos and videos, education and outreach resources, a calendar of conferences and events, and information on funding opportunities. The radiation used for crystallography is usually monochromatic (or nearly monochromatic), that is, a radiation with exclusively (or almost exclusively) a single wavelength. In order to achieve this, the so-called monochromators are used, which consist of a system of crystals that, based on Bragg's Law (which will be presented in another chapter), are able to "filter" (through the interaction between the crystals and the X-rays) the polychromatic radiation, allowing only one wavelength (color), as shown below. Outline of a monochromator. Polychromatic radiation (white) coming from the left (below) is "reflected" , in accordance with Bragg's Law, (to be seen in subsequent chapter), in different orientations of the crystal to produce ("to filter") a monochromatic radiation that is reflected again ("filtered") in the secondary crystal. For the moment it is enough that the reader is aware that this law will allow us to understand how the crystals "reflect" the X-rays, behaving as special mirrors . Image taken from the ESRF. X-rays interact with the electrons of matter... A monochromatic beam (ie with a single wavelength) suffers an exceptional attenuation, proportional to the thickness being crossed. This attenuation may arise from several factors: a) the body heats up, b) a fluorescent radiation, with different wavelength, is produced & accompanied by photoelectrons, both being characteristic of the material (this leads to the photo-electron spectroscopies, Auger and PES); and c) scattered X-rays with the same wavelength (coherent and Bragg) or with slightly higher wavelengths (Compton), together with the scattered electrons. Of all these effects, the most important one is fluorescence, where the absorption increases by increasing incident wavelength. However, this behavior has discontinuities (anomalous dispersion) for those energies that correspond to electronic transitions between different energy levels of the material (this leads to the EXAFS spectroscopy). Spectrum emitted by a metallic anode showing its characteristic wavelengths (continuous line). In the same figure, but referred to a vertical axis of absorbance (not drawn) the increasing and discontinuous variation of the absorption (dashed line) of a given material is also shown. This gives an idea of the use of this property as a filter to obtain monochromatic radiation, at least separating the double Kα1 - Kα2 from the rest of the spectrum. This approach, using concrete materials with specific absorption capacities, was used in Crystallography laboratories until the early 1970's to obtain monochromatic radiation. Special mention deserves the recent discovery introduced in the field of femtosecond X-ray protein nanocrystallography. Using this technique (XFEL: X-ray Free Electron Laser), based on the use of X-rays obtained from a free electron laser, "snapshots" of X-ray diffraction can be obtained in the femtoseconds scale. 
It has been proposed that femtosecond X-ray pulses can be used to outrun even the fastest damage processes: the single pulses are so brief that they terminate before radiation damage has time to manifest itself in the sample. This would imply a giant step towards removing virtually all the difficulties of the crystallization process, especially for proteins (see these articles: Nature (2011) 470, 73-77, Nature (2013) and Nature (2014)). In this sense, it is also worth quoting the article published in Radiation Physics and Chemistry (2004) 71, 905-916, which already warned of the future importance of the free-electron laser for structural biology.

The European XFEL generates ultrashort X-ray flashes, 27,000 times per second and with a brilliance that is a billion times higher than that of the best conventional X-ray radiation sources. Thanks to its outstanding characteristics, which are unique worldwide, the facility opens up completely new research opportunities for scientists and industrial users. It could be interesting to look at the video offered on the web site of the international consortium, or directly through this link. Regarding the use of these powerful X-ray sources for determining the structure of biological macromolecules, interested readers should consider the very promising results published in Nature (2016) 530, 202-206. This study provides the opportunity to use not only the information contained in the diffraction spots generated by crystals, but also the very weak intensity distribution found around and between the diffraction spots, the so-called continuous diffraction. With X-rays from free-electron lasers, crystallographic applications are extended to nanocrystals and even to single non-crystalline biological objects, and movies of biomolecules in action can be produced.

To generate the X-ray flashes, bunches of electrons are first accelerated to high energies and then directed through special arrangements of magnets (undulators). In the process, the particles emit radiation that is increasingly amplified until an extremely short and intense X-ray flash is finally created. Recently, a modification that replaces the so-called material undulators (magnets) with a new optical device, also based on laser technology, has dramatically reduced the size of the XFEL by about 10,000 times and the size of the accelerator by 100 times, leading to an incredible reduction in size and price of the so-called CXFEL (compact X-ray free-electron laser).

In any case, X-rays, like any light, "illuminate" and "let us see", but in a different manner than the one we are used to with our eyes. We encourage you to go forward, to understand how X-rays allow us "to see" inside crystals, that is, to "see" the atoms and the molecules.
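Closing the chapter, and anticipating Bragg's Law (λ = 2·d·sin θ, to be presented in a later chapter) that was mentioned earlier in connection with monochromators, the following sketch estimates the angle at which a monochromator crystal must be set to select a given wavelength. The interplanar spacing used for the Si(111) planes (about 3.14 Angstrom) is an approximate value, quoted only for illustration.

```python
import math

def bragg_angle(wavelength, d_spacing, order=1):
    """Angle theta (degrees) satisfying Bragg's Law: n*lambda = 2*d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength / (2.0 * d_spacing)))

# Selecting lambda = 1.54 Angstrom with the (111) planes of a silicon crystal
# (d ~ 3.14 Angstrom, an approximate value used only as an example):
print(f"{bragg_angle(1.54, 3.14):.1f} degrees")   # ~14 degrees
```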
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.02%3A_New_Page.txt
In the context of this chapter, you will also be invited to visit these sections... About symmetry in general Often we don't realize it, but we continuously live with symmetry... Symmetry is the consistency, the repetition of something in space and/or in time, as is shown in the examples below: a wall drawing, the petals of flowers, the two sides of a butterfly, the succession of night and day, a piece of music, etc. Symmetry by repetition of patterns in a wall drawing or in flowers. The wall drawing shows repetition by translation. The flowers show repetitions by rotations. The flower on the left shows repetition around an axis of rotation of order 8 (8 identical petals around the rotation axis). The flower in the middle shows an axis of rotation of order 5 (two different families of petals that are distributed around the rotation axis). In addition, each petal in both flowers shows a plane of symmetry which divides it into two identical parts (approximately), the same as it occurs with the butterfly shown on the right. If the reader is surprised by the fact that we say that the two parts separated by a plane of symmetry (mirror) are only "approximately" identical, is because they are really not identical; they cannot be superimposed, but this is an issue that will be explained in another section. Symmetry by repeating events: Day - Night - Day Symmetry in music. A fragment from "Six unisono melodies" by Bartók. (The diagram at the bottom represents the symmetrization of the one shown above) The word "Symmetry," carefully written with somewhat distorted letters, shows a two-fold axis (a rotation of 180 degrees) perpendicular to the screen. The following sentence also serves to illustrate the concept of symmetry: A MAN, A PLAN, A CANAL: PANAMA where, if we forget the commas and the colon, it becomes: AMANAPLANACANALPANAMA which can be read from right to left with exactly the same meaning as above. It is a case similar to the "palindromic" numbers (232 or 679976). There are many links in which the reader can find information on the concept of symmetry and we have selected some of them: symmetry and shape of space, some others in the context of crystallographic concepts, some with decorative patterns, or in the context of minerals. There is even an international society for the study of symmetry. The essential knowledge on crystal morphology, symmetry elements and their combination to generate repetitive objects in space, were well established between the 17th and 19th Centuries, as stated elsewhere in these pages... Specifically, in finite objects, there are a number of operations (elements of symmetry) describing repetitions. In the wall-drawing (shown above) we find translational operations (the motif is repeated by translation). The repetition of the petals in the flowers show us rotational operations (the motif is repeated by rotation) around a symmetry axis (or rotation axis). And, although not exactly, the symmetry shown in the phrase or in the music fragment (shown above), lead us to consider other symmetry operations known as symmetry planes (reflection planes, or mirror planes); the same operation that occurs when you look into a mirror. Similarly, for example, if we look at the relationship between the three-dimensional objects in some of the pictures shown below, we will discover a new element of symmetry called center of symmetry (or inversion center), which is an imaginary point between objects (or inside the object) as shown in some drawings below. 
Generally speaking, and taking into account that pure translational operations are not strictly considered as symmetry operations, we can say that finite objects can contain themselves, or may be repeated (excluding translation) by the following symmetry elements: • The identity operation is the simplest symmetry element of all -- it does nothing! But it is important because all objects at the very least have the identity element, and there are many objects that have no other symmetry elements. • The reflection is the symmetry operation that occurs when we put an object in front of a mirror. The image is found perpendicular to the reflection plane and equidistant from that plane, on the opposite side of the plane. The resulting object can be distinguishable or indistinguishable from the original, normally distinguishable, as they cannot be superimposed. If the resulting object is indistinguishable from the original, is because the reflection plane is passing through the object. • The inversion operation occurs through a single point called the inversion center. Each part of the object is moved along a straight line through the inversion center to a point at an equal distance from the inversion center. The resulting object can be distinguishable or indistinguishable from the original, normally distinguishable, as they cannot be superimposed. If the resulting object is indistinguishable from the original, is because the inversion center is inside the object. • The rotation operations (both proper and improper) occur with respect to a line called rotation axis. a) A proper rotation is performed by rotating the object 360°/n, where n is the order of the axis. The resulting rotated object is always indistinguishable from the original. b) An improper rotation is performed by rotating the object 360°/n followed by a reflection through a plane perpendicular to the rotation axis. The resulting object can be distinguishable or indistinguishable from the original, normally distinguishable, as they cannot be superimposed. If the resulting object is indistinguishable from the original, is because the improper rotation axis is passing through the object. In addition to the name of the symmetry elements, we use graphical and numerical symbols to represent them. For example, a rotation axis of order 2 (a binary axis) is represented by the number 2, and a reflection plane is represented by the letter m. Left: Polyhedron showing a two-fold rotation axis (2) passing through the centers of the top and bottom edges Right: Polyhedron showing a reflection plane (m) that relates (as a mirror does) the top to the bottom Hands and molecular models related by a twofold axis (2) perpendicular to the drawing plane Hands and molecular models related through a mirror plane (m) perpendicular to the drawing plane Hands (left and right) related through a center of symmetry Two objects related by a center of symmetry and a polyhedron showing a center of symmetry in its center The association of elements of rotation with centers or planes of symmetry generates new elements of symmetry called improper rotations. Left: A four-fold improper axis implies 90º rotations followed by reflection through a mirror plane perpendicular to the axis. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell) Right: Axis of improper rotation, shown vertically, in a crystal of urea. The meaning of numerical triplets shown will be discussed in another chapter. 
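These operations can also be written as small matrices acting on the coordinates of a point, which is how crystallographic software usually handles them. The sketch below is only an illustration, with the rotation axis and the mirror chosen arbitrarily along, or perpendicular to, the z direction.

```python
import numpy as np

# Point-symmetry operations written as 3x3 matrices acting on Cartesian
# coordinates (a minimal sketch; the axis and plane orientations are arbitrary).
identity  = np.eye(3)
twofold_z = np.diag([-1.0, -1.0, 1.0])   # 180 deg rotation about z
mirror_z  = np.diag([ 1.0,  1.0, -1.0])  # reflection through the xy plane
inversion = -np.eye(3)                   # inversion through the origin

# A 4-fold improper rotation about z: 90 deg rotation followed by mirror_z
rot90_z = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
improper_4 = mirror_z @ rot90_z

point = np.array([0.3, 0.5, 0.7])
for name, op in [("2 (about z)", twofold_z), ("m (perp. to z)", mirror_z),
                 ("inversion", inversion), ("-4 (about z)", improper_4)]:
    print(f"{name:15s} {op @ point}")
```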
Combining the rotation axes and the mirror planes with the characteristic translations of the crystals (which are shown below), new symmetry elements appear, with some "sliding" components: screw axes (or helicoidal axes) and glide planes. Twofold screw axis. A screw axis consists of a rotation followed by a translation Glide plane. A glide plane consists of a reflection followed by a translation Twofold screw axis applied to a left hand. The hand rotates 180º and moves a half of the lattice translation in the direction of the screw axis, and so on. Note that the hand always remains as a left hand. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell) Glide plane.applied to a left hand. The left hand reflects on the plane, generating a right hand that moves a half of the lattice translation in the direction of the glide operation. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell) The symmetry elements of types center or mirror plane relate objects in a peculiar way; the same way that our two hands are related one to the other: they are not superimposable. Objects which in themselves do not contain any of these symmetry elements (center or plane) are called chiral and their repetition through these elements (center or plane) produce objects that are called enantiomers with respect to the original ones. The mirror image of one of our hands is the enantiomer of the one we put in front of the mirror. Regarding the chirality of the crystals and of their building units (molecular or not), advanced readers should also consult the article by Howard D. Flack to be found through this link. The mirror image of either of our hands is the enantiomer of the other hand. They are objects not superimposable and as they do not contain (in themselves) symmetry centers or symmetry planes, are called chiral objects. Chiral molecules have different properties than their enantiomers and so it is important that we are able to differentiate them. The correct determination of the absolute configuration or absolute structure of a molecule (differentiation between enantiomers) can be done in a secure manner through X-ray diffraction only, but this will be explained in another chapter Thus, any finite object (such as a quartz crystal, a chair or a flower) shows that certain parts of it are repeated by symmetry operations that go through a point of the object. This set of symmetry operations is known as a symmetry point group. The advanced reader has also the opportunity to visit the nice work on point group symmetry elements offered through these links: A good general web site about symmetry in crystallography is offered by the Department of Chemistry and Biochemistry of the Oklahoma University. Additionally, the reader can download (totally virus free!!!) and run on his own computer this Java application that, as an introduction to the symmetry of the polyhedra, that was developed by Gervais Chapuis and Nicolas Schöni (École Polytechnique Fédérale de Lausanne, Switzerland). Symmetry in crystals In crystals, the symmetry axes (rotation axes) can only be two-fold (2), three-fold (3), four-fold (4) or six-fold (6), depending on the number of times (order of rotation) that a motif can be repeated by a rotation operation, being transformed into a new state indistinguishable from its starting state. Thus, a rotation axis of order 3 (3-fold) produces 3 repetitions (copies) of the motif, one every 120 degrees (= 360/3) of rotation. 
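Returning to the screw axes and glide planes introduced at the beginning of this section: because they carry a translational part, they are conveniently written as a (matrix, shift) pair acting on fractional coordinates. The sketch below uses a two-fold screw axis along c and a c-glide perpendicular to b, both placed at the origin; the choice of orientation and the starting coordinate are arbitrary.

```python
import numpy as np

# Symmetry operations with a translational part, written as (matrix, shift)
# pairs acting on fractional coordinates (a sketch with conventional choices:
# two-fold screw along c, c-glide perpendicular to b, both through the origin).

def apply(op, xyz):
    mat, shift = op
    return (mat @ xyz + shift) % 1.0   # wrap the result back into the unit cell

screw_21_c = (np.diag([-1.0, -1.0, 1.0]), np.array([0.0, 0.0, 0.5]))
glide_c    = (np.diag([ 1.0, -1.0, 1.0]), np.array([0.0, 0.0, 0.5]))

x = np.array([0.10, 0.20, 0.30])
print("2_1 screw:", apply(screw_21_c, x))   # -> [0.90 0.80 0.80]
print("c glide  :", apply(glide_c, x))      # -> [0.10 0.80 0.80]
```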
If the reader wonders why only symmetry axes of order 2, 3, 4 and 6 can occur in crystals, and not 5-, 7-fold, etc., we recommend the explanations given in another section. Improper rotations (rotations followed by reflection through a plane perpendicular to the rotational axis) are designated by the order of rotation, with a bar above that number. The screw axes (or helicoidal axes, ie, symmetry axes involving rotation followed by a translation along the axis) are represented by the order of rotation, with an added subindex that quantifies the translation along the axis. Thus, a screw axis of type 62 means that in each of the six rotations an associated translation occurs of 2/6 of the axis of the elementary cell in that direction. The mirror planes are represented by the letter m. The glide planes (mirror planes involving reflexion and a translation parallel to the plane) are represented by the letters a, b, c, n or d, depending if the translation associated with the reflection is parallel to the reticular translations (a, b, c), parallel to the diagonal of a reticular plane (n), or parallel to a diagonal of the unit cell (d). The letters and numbers that are used to represent the symmetry elements also have an equivalence with some graphic symbols. But in order to keep talking about symmetry in crystals, it is necessary to introduce and remember the fundamental aspect that defines crystals, which is the periodic repetition by translation of motifs (atoms, molecules or ions). This repetition, which is illustrated in two dimensions with gray circles in the figure below, is derived from the mathematical concept of lattice that we will see more properly in another chapter. In a periodic and repetitive set of motifs (gray circles in the two-dimensional figure above) one can find infinite basic units (unit cells) vastly different in appearance and specification, the repetition of which generates the same mathematical lattice. Note that all represented unit cells delimited by black lines contain in total a single circle inside them, since each vertex contains a certain fraction of a circle inside the cell. These are called primitive cells. However, the cell delimited by red lines contains a total of two gray circles inside (one corresponding to the vertices and a complete one in the center). This type of unit cell is generically called non-primitive. Periodic repetition, which is a characteristic of the internal structure of crystals, is represented by a set of translations in the three directions of space, so that crystals can be seen as the stacking of the same block in three dimensions. Each block, of a certain shape and size (but all of them being identical), is called a unit cell or elementary cell. Its size is determined by the length of its three edges (a, b, c) and the angles between them (alpha, beta, gamma: α, β, γ). Stacking of unit cells forming an octahedral crystal and parameters which characterize the shape and size of an elementary cell (or unit cell) As mentioned above, all symmetry elements passing through a point of a finite object, define the total symmetry of the object, which is known as the point group symmetry of the object. Obiously, the symmetry elements that imply any lattice translations (glide planes and screw axes), are not point group operations. There are many symmetry point groups, but in crystals they must be consistent with the crystalline periodicity (translational periodicity). 
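The restriction to rotation axes of order 1, 2, 3, 4 and 6 (developed in the section referred to above) can be made plausible with a few lines of code: a rotation compatible with a lattice must be representable by an integer matrix, so its trace, 1 + 2·cos(360°/n), must be an integer. The sketch below is only a numerical check of that condition, not a full proof.

```python
import math

# Trace test for an n-fold rotation: 1 + 2*cos(360/n) must be an integer
# for the rotation to be compatible with a lattice translation.
for n in range(1, 13):
    trace = 1 + 2 * math.cos(2 * math.pi / n)
    allowed = abs(trace - round(trace)) < 1e-9
    print(f"n = {n:2d}   trace = {trace:6.3f}   allowed in a lattice: {allowed}")
```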
Thus, in crystals, only rotations (symmetry axes) of order 2, 3, 4 and 6 are possible, that is, only rotations of 180º (= 360/2), 120º (= 360/3), 90º (= 360/4) and 60º (= 360/6) are allowed. See also the crystallographic restriction theorem. Therefore, only 32 point groups are allowed in the crystalline state of matter. These 32 point groups are also known in Crystallography as the 32 crystal classes. point group . crystal translational periodicity = 32 crystal classes The motif, represented by a single brick, can also be represented by a lattice point. It shows the point symmetry 2mm The next three tables show animated drawings about the 32 crystal classes, grouped in terms of the so called crystal system (left column), a classification mode in terms of minimal symmetry, as shown below. Links below illustrate the 32 crystal classes using some crystal morphologies These interactive animated drawings need the Java environment and therefore will not run with all browsers Triclinic 1 1 Monoclinic 2 m 2/m Orthorhombic 222 mm2 mmm Tetragonal 4 4 4/m 422 4mm 42m 4/mmm Cubic 23 m3 432 43m m3m Trigonal 3 3 32 3m 3m Hexagonal 6 6 6/m 622 6mm 6m2 6/mmm Links below illustrate the 32 crystal classes using some crystal morphologies These are non-interactive animated gifs obtained from the Java animations appearing in http://webmineral.com. They will run with all browsers Triclinic 1 1 Monoclinic 2 m 2/m Orthorhombic 222 mm2 mmm Tetragonal 4 4 4/m 422 4mm 42m 4/mmm Cubic 23 m3 432 43m m3m Trigonal 3 3 32 3m 3m Hexagonal 6 6 6/m 622 6mm 6m2 6/mmm Links below show animated displays of the symmetry elements in each of the 32 crystal classes: (taken from Marc De Graef) Triclinic 1 1 Monoclinic 2 m 2/m Orthorhombic 222 mm2 mmm Tetragonal 4 4 4/m 422 4mm 42m 4/mmm Cubic 23 m3 432 43m m3m Trigonal 3 3 32 3m 3m Hexagonal 6 6 6/m 622 6mm 6m2 6/mmm Lluis Casas and Eugenia Estop, from the Department of Geology of the University of Barcelona, ​​offer 32 pdf files which, in an interactive way, allow very easily playing with the 32 point groups through the symmetry of crystalline solids. Additionally, the reader can download and run on his own computer this Java application that, as an introduction to the symmetry of the polyhedra, was developed by Gervais Chapuis and Nicolas Schöni (École Polytechnique Fédérale de Lausanne, Switzerland). Alternatively, the interested reader can interactively view some typical polyhedra of the 7 crystal systems, through the Spanish Gemological Institute. Of the 32 crystal classes, only 11 contain the operator center of symmetry, and these 11 centro-symmetric crystal classes are known as Laue groups. crystal class . center of symmetry = 11 Laue groups In addition, the repetition modes by translation in crystals must be compatible with the possible point groups (the 32 crystal classes), and this is why we find only 14 types of translational lattices which are compatible with the crystal classes. These types of lattices (translational repetiton modes) are known as the Bravais lattices (you can see them here). The translational symmetry of an ordered distribution of 3-dimensional objects can be described by many types of lattices, but there is always one of them more suited to the object, ie: the one that best describes the symmetry of the object. As the lattices themselves have their own distribution of symmetry elements, we must fit them to the symmetry elements of the structure. crystal translational periodicity . 
32 crystal classes = 14 Bravais lattices A brick wall can be structured with many different types of lattices, with different origins, and defining reticular points representing the brick. But there is a lattice that is more appropriate to the symmetry of the brick and to the way the bricks build the wall. The adequacy of a lattice to the structure is illustrated in the two-dimensional examples shown below. In all three cases two different lattices are shown, one oblique and primitive and one rectangular and centered. In the first two cases, the rectangular lattices are the most appropriate ones. However, the deformation of the structure in the third example leads to metric relationships that make that the most appropriate lattice, the oblique primitive, hexagonal in this case. Adequacy of the lattice type to the structure. The blue lattice is the best one in each case. Finally, combining the 32 crystal classes (crystallographic point groups) with the 14 Bravais lattices, we find up to 230 different ways to replicate a finite object (motif) in 3-dimensional space. These 230 ways to repeat patterns in space, which are compatible with the 32 crystal classes and with the 14 Bravais lattices, are called space groups, and represent the 230 different ways to fit the Bravais lattices to the symmetry of the objects. The interested reader should also consult the excellent work on the symmetry elements present in the space groups, offered by Margaret Kastner, Timathy Medlock and Kristy Brown through this link of the Bucknell University. 32 crystal classes + 14 Bravais lattices = 230 Space groups A wall of bricks showing the most appropriate lattice which best represents both the brick and its symmetry. Note that in this case the point symmetry of the brick and the point symmetry of the reticular point are coincident. The space group, considering the thickness of the brick, is Cmm2. The 32 crystal classes, the 14 Bravais lattices and the 230 space groups can be classified, according to their hosted minimum symmetry, into 7 crystal systems. The minimum symmetry produces some restrictions in the metric values (distances and angles) which describe the shape and size of the lattice. 32 classes, 14 lattices, 230 space groups / crystal symmetry = 7 crystal systems All this is summarized in the following table: Crystal classes (* Laue) Compatible crystal lattices and their symmetry Number of space groups Minimum symmetry Metric restrictions Crystal system 1 1 * P 1 2 1 or 1 none Triclinic 2 m 2/m * P C (I) 2/m 13 One 2 or 2 α=γ=90 Monoclinic 222 2mm mmm * P C (A,B) I F mmm 59 Three 2 or 2 α=β=γ=90 Orthorhombic 4 4 4/m * 422 4mm 42m 4/mmm * P I 4/mmm 68 One 4 or 4 a=b α=β=γ=90 Tetragonal 3 3 * 32 3m 3m * P (R) 3m 6/mmm 25 One 3 or 3 a=b=c α=β=γ (or Hexagonal) Trigonal 6 6 6m * 622 6mm 6m2 6/mmm * P 6/mmm 27 One 6 or 6 a=b α=β=90 γ=120 Hexagonal 23 m3 * 432 43m m3m * P I F m3m 36 Four 3 or 3 a=b=c α=β=γ=90 Cubic Total: 32, 11 * 14 independent 230     7 The 230 crystallographic space groups are listed and described in the International Tables for X-ray Crystallography, where they are classified according to point groups and crystal systems. Chiral compounds that are prepared as a single enantiomer (for instance, biological molecules) can crystallize in only a subset of 65 space groups, those that do not have mirror and/or inversion symmetry operations. 
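The metric restrictions collected in the last column of the table can be turned into a rough screening function, as sketched below. Note the caveat: the crystal system is really defined by the minimum symmetry, not by the cell metric, so a cell that is metrically cubic, for example, may still belong to a lower-symmetry system; the function only reports what the metric alone suggests, and the example cells are arbitrary.

```python
def crystal_system(a, b, c, alpha, beta, gamma, tol=1e-3):
    """Guess the crystal system from the cell metric alone (a rough sketch:
    the true assignment depends on the symmetry, not only on the metric)."""
    eq = lambda x, y: abs(x - y) < tol
    if eq(alpha, 90) and eq(beta, 90) and eq(gamma, 90):
        if eq(a, b) and eq(b, c):
            return "cubic (metric)"
        if eq(a, b):
            return "tetragonal (metric)"
        return "orthorhombic (metric)"
    if eq(alpha, 90) and eq(beta, 90) and eq(gamma, 120) and eq(a, b):
        return "hexagonal/trigonal (metric)"
    if eq(a, b) and eq(b, c) and eq(alpha, beta) and eq(beta, gamma):
        return "trigonal, rhombohedral setting (metric)"
    if eq(alpha, 90) and eq(gamma, 90):
        return "monoclinic (metric)"
    return "triclinic (metric)"

print(crystal_system(5.43, 5.43, 5.43, 90, 90, 90))   # cubic metric
print(crystal_system(5.0, 6.0, 7.0, 90, 100.2, 90))   # monoclinic metric
```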
A composition of part of the information contained in these tables is shown below, corresponding to the space group Cmm2, where C means that the structure is described in terms of a lattice centered on the pair of faces defined by the a and b axes (the faces separated by the c axis). The first m represents a mirror plane perpendicular to the a axis. The second m means another mirror plane, in this case perpendicular to the second main crystallographic direction, the b axis. The number 2 refers to the two-fold axis parallel to the third crystallographic direction, the c axis.

Summary of the information shown in the International Tables for X-ray Crystallography for the space group Cmm2

And this is another example for the space group P21/c, centrosymmetric and based on a primitive monoclinic lattice, as it appears in the International Tables for X-ray Crystallography (a small numerical sketch of the symmetry operations of this space group is given at the end of this chapter).

Summary of the information shown in the International Tables for X-ray Crystallography for the space group P21/c

Crystallographers never get bored! Try to enjoy the beauty, looking for the symmetry of the objects around you, and particularly in the objects shown below... Look for possible unit cells and symmetry elements in these structures made with bricks (the solution is obtained by clicking on the image).

There is a question that surely the readers will have considered... In this chapter we have shown elements of symmetry that operate inside the crystals, but we have not yet said how we can find out the existence of such operations, when in fact, and in the best of cases, we can only visualize the external habit of the crystals, if they are well formed! Although we will not answer this question here, we can anticipate that the answer will be given by the behavior of the crystals when we illuminate them with that special light that we know as X-rays; but this will be the subject of another chapter. In any case, it doesn't end here! There are many more things to talk about. Go on.
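Before moving on, and as announced above, here is a small numerical sketch of what the symbol P21/c encodes: starting from an arbitrary point, its four general equivalent positions are generated by the identity, the 21 screw axis, the inversion center and the c-glide. The coordinate triplets used below correspond to the standard setting (unique axis b) found in the International Tables.

```python
import numpy as np

# General equivalent positions of space group P2_1/c (standard setting,
# unique axis b), applied to an arbitrary fractional coordinate.
ops = [
    (np.diag([ 1,  1,  1]), np.array([0.0, 0.0, 0.0])),   # x, y, z
    (np.diag([-1,  1, -1]), np.array([0.0, 0.5, 0.5])),   # -x, y+1/2, -z+1/2
    (np.diag([-1, -1, -1]), np.array([0.0, 0.0, 0.0])),   # -x, -y, -z
    (np.diag([ 1, -1,  1]), np.array([0.0, 0.5, 0.5])),   # x, -y+1/2, z+1/2
]

x = np.array([0.12, 0.25, 0.31])
for mat, shift in ops:
    print((mat @ x + shift) % 1.0)   # the four equivalent positions in the cell
```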
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.03%3A_New_Page.txt
Let's start with a summary of several concepts seen in previous chapters... Any repetitive and periodic distribution of a set of objects (or motifs) can be characterized, or described, by translations that repeat the set of objects periodically. The implied translations generate what we call a direct lattice (or real lattice). Left: Fragment of a distribution of a set of objects that produce a direct lattice in 2 dimensions. As an example, one of the infinite sets of motifs (small tiles) that produce the repetitive and periodic distribution is shown inside the yellow squares. The dimensions of the yellow square represent the translations of the direct lattice Right: Fragment of a mosaic in La Alhambra showing a 2-dimensional periodic pattern. These periodic translations can be discovered in the mosaic and produce a 2-dimensional direct lattice. The red square represents the translations of the smallest direct lattice produced by the periodic distributions of the small pieces of this mosaic.The yellow square represents another possible lattice, a bigger one, non primitive. Periodic stacking of balls, producing a 3-dimensional network (direct lattice). The motif being repeated in the three directions of space is the contents of the small box with blue edges, the so called "unit cell". The translations that describe the periodicity in crystals can be expressed as a linear combination of three basic translations, not coplanar, ie independent, known as reticular or lattice axes (or unit cell axes). These axes define a parallelogram (in 2 dimensions), or a parallelepiped (in 3 dimensions) known as a unit cell (or elementary cell). This elementary area (in 2-dimensional cases), or elementary volume (in 3-dimensional cases), which holds the minimum set of the periodic distribution, generates (by translations) the full distribution which, in our atomic 3-dimensional case, we call crystal. In addition to the fact that the unit cell is the smallest repetitive unit as far as translations is concerned, the reader should note that the system of axes defining the unit cell actually defines the reference system to describe the positional coordinates of each atom within the cell. Left: Elementary cell (or unit cell) defined by the 3 non-coplanar reticular translations (cell axes or lattice axes) Right: Crystal formation by stacking of many unit cells in 3 space directions In general, inside the unit cell there is a minimum set of atoms (ions or molecules) which are repeated inside the cell due to the symmetry elements of the crystal structure. This minimum set of atoms (ions or molecules) which generate the whole contents of the unit cell (after applying the symmetry elements to them) is known as the asymmetric unit. The structural motif shown in the left figure is repeated by a symmetry element (symmetry operation), in this case a screw axis The repetition of the motif (asymmetric unit) generates the full content of the unit cell, and the repetition of unit cells generates the entire crystal The lattice, which is a pure mathematical concept, can be selected in various ways in the same real periodic distribution. However, only one of these lattices "fits" best with the symmetry of the periodic distribution of the motifs... Two-dimensional periodic distribution of one motif containing two objects (a triangle and a circle) Left: Unit cells corresponding to possible direct lattices (=real lattices) that can be drawn over the periodic distribution shown above. 
Only one of the unit cells (the red one) is more appropriate because it fits much better with the symmetry of the distribution Right: The red cell on the left figure (a centered lattice) fits better with the symmetry of the distribution, and can be decomposed in two identical lattices, one for each object of the motif. As is shown in the figures above, although especially in the right one, any lattice that describes the repetition of the motif (triangle + circle) can be decomposed into two identical equivalent lattices (one for each object of the motif). Thus, the concept of lattice is independent of the complexity of the motif, so that we can use only one lattice, since it represents all the remaining equivalent ones. Once we have chosen a representative lattice, appropriate to the symmetry of the structure, any reticular point (or lattice node) can be described by a vector that is a linear combination (with integer numbers) of the direct reticular axes: R = m a + n b + p c, where m, n and p are integers. Non-reticular points can be reached using the nearest R vector, and adding to it the corresponding fractions of the reticular axes to reach it: r = R + r' = (m a + n b + p c) + (x a + y b + z c) Position vector for any non-reticular point of a direct lattice where x, y, z represent the corresponding dimensionless fractions of axes X/a, Y/b, Z/c, and X, Y, Z the corresponding lengths. Position vector for a non-reticular point (black circle) The reader should also have a look into the chapters about lattices and unit cells offered by the University of Cambridge. Alternatively, the reader can download and run on his own computer this Java application that illustrates the lattice concept (it is totally virus free and was developed by Gervais Chapuis and Nicolas Schöni, École Polytechnique Fédérale de Lausanne, Switzerland). Let's now see some new concepts on direct lattices (= real lattices) ... From a geometric point of view, on a lattice we can consider some reticular lines and reticular planes which are those passing through the reticular points (or reticular nodes). Just as we did with the lattices (choosing one of them from all the equivalent ones), we do the same with the reticular lines and planes. A reticular line or a reticular plane can be used as a representative of the entire family of parallel lines or parallel planes. Following with the argument given above, each motif in a repetitive distribution generates its own lattice, although all these lattices are identical (red and blue). Of the two families of equivalent lattices shown (red and blue) we can choose only one of them, on the understanding that it also represents the remaining equivalent ones. Note that the distance between the planes drawn on each lattice (interplanar spacing) is the same for the blue or red families. However, the family of red planes is separated from the family of blue planes by a distance that depends on the separation between the objects which produced the lattice. This distance between the planes of different families can be called the geometric out-of-phase distance. Left: Family of reticular planes cutting the vertical axis of the cell in 2 parts and the horizontal axis in 1 part. These planes are parallel to the third reticular axis (not shown in the figure). Right: Family of reticular planes cutting the vertical axis of the cell in 3 parts and the horizontal axis in 1 part. These planes are parallel to the third reticular axis (not shown in the figure). 
The number of parts in which a family of planes cut the cell axes can be associated with a triplet of numbers that identify that family of planes. In the three previous figures, the number of cuts, and therefore the numerical triplets would be (110), (210) and (310), respectively, according to the vertical, horizontal and perpendicular-to-the-figure axes. In this figure, the numerical triplets for the planes drawn are (022), that is, the family of planes does not cut the a axis, but cuts the b and c axes in 2 identical parts, respectively. The plane drawn on the left side of the figure above cuts the a axis in 2 equal parts, the b axis in 2 parts and the c axis in 1 part. Hence, the numerical triplet identifying the plane will be (221). The plane drawn on the right side of the figure cuts the a axis into 2 parts, is parallel to the b axis and cuts the c axis in 1 part. Therefore, the numerical triplet will be (201). A unique plane, as the one drawn in the top right figure, defined by the numerical triplet known as Miller indices, represents and describes the whole family of parallel planes passing through every element of the motif. Thus, in a crystal structure, there will be as many plane families as possible numerical triplets exist with the condition that these numbers are primes, one to each other (not having a common divisor). The Miller indices are generically represented by the triplet of letters hkl. If there are common divisors among the Miller indices, the numerical triplet would represent a single family of planes only. For example, the family with indices (330), which are not strictly reticular, can be regarded as the representative of 3 families of indices (110) with a geometric out-of-phase distance (among the families) of 1/3 of the original (see the figures below). Left: Three families of reticular planes, with indices (110) in three equivalent lattices, showing an out-of-phase distance between them of 1/3 of the interplanar spacing in each family. Right: The same set of planes of the figure on the left drawn over one of the equivalent lattices. Therefore its Miller indices are (330) and its interplanar spacing is 1/3 of the interplanar spacing of the (110) family. Thus, the concept of Miller indices, previously restricted to numerical triplets (being prime numbers), can now be generalized to any triplet of integers. In this way, every family of planes, will "cover" the whole crystal. And therefore, for every point of the crystal we can draw an infinite number of plane families with infinite orientations. Through a point in the crystal (in the example in the center of the cell) we can draw an infinite number of plane families with an infinite number of orientations. In this case only 3 families and 3 orientations are shown. Of course, interplanar spacings can be directly calculated from the Miller indices (hkl) and the values of the reticular parameters (unit cell axes). The table below shows that these relations can be simplified for the corresponding metric of the different lattices. Formula to calculate the interplanar spacings (dhkl) for a family of planes with Miller indices hkl in a unit cell of parameters a, b, c, α, β, γ. Vertical bars (for the triclinic case) mean the function "determinant". In the trigonal case a=b=c=A; α=β=γ. In all cases, obviously, the calculated interplanar spacing also represents the distance between the cell origin and the nearest plane of the family. 
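The general formula referred to in the caption above can be written compactly with the metric tensor G of the cell (Gij = ai·aj), since 1/d² = (h k l)·G⁻¹·(h k l)ᵀ. The following sketch implements it for any cell; the cubic cell used in the example calls is an arbitrary choice, and the (330) call reproduces the statement that its spacing is one third of that of (110).

```python
import numpy as np

def d_spacing(hkl, a, b, c, alpha, beta, gamma):
    """Interplanar spacing d_hkl (same length units as the cell edges)
    for a general (triclinic) cell, computed with the metric tensor."""
    al, be, ga = np.radians([alpha, beta, gamma])
    # Direct metric tensor G: G_ij = a_i . a_j
    G = np.array([[a * a,            a * b * np.cos(ga),  a * c * np.cos(be)],
                  [a * b * np.cos(ga), b * b,             b * c * np.cos(al)],
                  [a * c * np.cos(be), b * c * np.cos(al), c * c]])
    h = np.array(hkl, dtype=float)
    inv_d2 = h @ np.linalg.inv(G) @ h     # 1/d^2 = h . G* . h
    return 1.0 / np.sqrt(inv_d2)

# Hypothetical cubic cell, used only to illustrate the call:
print(d_spacing((1, 0, 0), 5.0, 5.0, 5.0, 90, 90, 90))   # 5.0
print(d_spacing((1, 1, 0), 5.0, 5.0, 5.0, 90, 90, 90))   # 5/sqrt(2) ~ 3.54
print(d_spacing((3, 3, 0), 5.0, 5.0, 5.0, 90, 90, 90))   # one third of d(110)
```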
Interested readers should also have a look into the chapter on lattice planes and Miller indices offered by the University of Cambridge. And now some more concepts on lattices: the so called reciprocal lattice... Any plane can also be characterized by a vector (σhkl) perpendicular to it. Therefore, the projection of the position vector of any point (belonging to the plane), over that perpendicular line is constant and independent of the point. It is the distance of the plane to the origin, ie, the spacing (dhkl). Any plane can be represented by a vector perpendicular to it. Consider the family of planes hkl with the interplanar distance dhkl. From the set of vectors normal to the planes' family, we take the one (σhkl) with length 1/dhkl. The scalar product between this vector and the position vector (d'hkl ) of a point belonging to a plane from the family is an integer (n), and this integer gives us the order of that plane in the hkl family. That is: (σhkl) . (d'hkl) = (1/dhkl) . (n.dhkl) = n (see left figure below) n will be 0 for the plane passing through the origin, 1 for the first plane, 2 for the second, etc. Thus, σhkl represents the whole family of hkl planes having an interplanar spacing given by dhkl. In particular, for the first plane we get: |σhkl| dhkl = 1. If we define 1/dhkl, as the length of the vector σhkl, the product of this vector, times the dhkl spacing of the planes family is the unit. If we take a vector 2 times longer than σhkl, the interplanar spacing of the corresponding new family of planes would be a half. If from this normal vector σhkl of length 1/dhkl, we take another vector, n times (integer) longer (n.σhkl), the above mentioned product (|σhkl| dhkl = 1) would imply that the new vector (n.σhkl) will correspond to a family of planes of indices nh,nk,nl having an interplanar spacing n times smaller. In other words, for instance, the lengths of the following interplanar spacings will bear the relation: d100 = 2.(d200)= 3.(d 300)..., so that σ100 = (1/2).σ200 = (1/3).σ300 ... and similarly for other hkl planes. Therefore, it appears that the moduli (lengths) of the perpendicular vectors (σhkl) are reciprocal to the interplanar spacings. The end points of these vectors (blue arrows in figure below) also produce a periodic lattice that, due to this reciprocal property, is known as the reciprocal lattice of the original direct lattice. The reciprocal points obtained in this way (green points in figure below) are identified with the same numerical triplets hkl (Miller indices) which represent the corresponding plane family. Geometrical construction of some points of a reciprocal lattice (green points) from a direct lattice. To simplify, we assume that the third axis of the direct lattice (c) is perpendicular to the screen. The red lines represent the reticular planes (perpendicular to the screen) and whose Miller indices are shown in blue. As an example: the reciprocal point with indices (3,1,0) will be located on a vector perpendicular to the plane (3,1,0) and its distance to the origin O is inversely proportional to the spacing of that family of planes. Animated example showing how to obtain the reciprocal points from a direct lattice It should now be clear that the direct lattice, and its reticular planes, are directly associated (linked) with the reciprocal lattice. 
Moreover, in this reciprocal lattice we can also define a unit cell (the reciprocal unit cell) whose periodic translations are determined by three reciprocal axes that form reciprocal angles among them. If the unit cell axes and angles of the direct cell are denoted by the letters a, b, c, α, β, γ, the corresponding parameters of the reciprocal cell are written with the same symbols plus an asterisk: a*, b*, c*, α*, β*, γ*. It should also be clear that these reciprocal axes (a*, b*, c*) correspond to the vectors σ100, σ010 and σ001, respectively, so that any reciprocal vector can be expressed as a linear combination of these three reciprocal vectors:

$\sigma_{hkl} = h\,\mathbf{a}^* + k\,\mathbf{b}^* + l\,\mathbf{c}^*$

Position vector of any reciprocal point.

Geometrical relation between direct and reciprocal unit cells. The figure below shows again the strong relationship between the two lattices (direct with blue points, reciprocal in green). In this case, the corresponding third axes (c and c*) are perpendicular to the screen.

Analytically, the relationship between the direct (= real) and reciprocal cells can be written as shown in the figure: metrical relations among the parameters defining the direct and reciprocal cells, where V represents the volume of the direct cell and the symbol x means the cross product between two vectors. The same type of equations can be written by moving the asterisks to the right-hand side of the equations. The volume of the direct cell can be calculated as:

$V = (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} = a\,b\,c\,\left(1 - \cos^2\alpha - \cos^2\beta - \cos^2\gamma + 2\cos\alpha\,\cos\beta\,\cos\gamma\right)^{1/2}$

Note that, in accordance with the definitions given above, the length of a* is the inverse of the interplanar spacing d100 (|a*| = 1/d100), and likewise |b*| = 1/d010 and |c*| = 1/d001. Therefore, the following scalar products (dot products) can be written: a·a* = 1, a·b* = 0, and similarly for the other pairs of axes.

Summarizing:
• Direct space (= real space) is the space where we live..., where atoms are..., where crystals grow..., where we imagine the direct lattices (= real lattices).
• Reciprocal space is a mathematical space constructed on the direct space (= real space). It is the space where the reciprocal lattices are, and it will help us to understand the diffraction phenomena in crystals.
• "Big" in direct space (that is, in real space) means "small" in reciprocal space.
• "Small" in direct space (that is, in real space) means "big" in reciprocal space.

In addition to this, we recommend downloading and running the Java applet by Nicolas Schoeni and Gervais Chapuis of the Ecole Polytechnique Fédéral de Lausanne (Switzerland) to understand the relation between direct and reciprocal lattices and how to build the latter from a direct lattice (free of any kind of virus). See also the pages on reciprocal space offered by the University of Cambridge through this link. And although we are revealing aspects corresponding to the next chapter (see the last paragraph of this page), the reader should also look at the video made by www.PhysicsReimagined.com, showing the geometric relationships between direct and reciprocal lattices, displayed below as an animated gif:

The reader is probably wondering why we need this new concept (the reciprocal lattice). Well, there are reasons which justify it. One of them is that a family of planes can be represented by just one point, which obviously simplifies things. Another important reason is that this new lattice offers us a very simple geometric model that can interpret the diffraction phenomena in crystals.
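As a complement to the metrical relations just quoted, the hedged sketch below (illustrative Python, with an arbitrary monoclinic cell of our own choosing) builds the reciprocal axes directly from the definition a* = (b × c)/V and verifies the reciprocity conditions a·a* = 1, a·b* = 0 and |a*| = 1/d100.

```python
import numpy as np

def reciprocal_axes(a_vec, b_vec, c_vec):
    """Reciprocal axes a*, b*, c* (crystallographic convention, no 2*pi factor)
    from the direct axes given as Cartesian 3-vectors."""
    V = np.dot(a_vec, np.cross(b_vec, c_vec))      # volume of the direct cell
    a_star = np.cross(b_vec, c_vec) / V
    b_star = np.cross(c_vec, a_vec) / V
    c_star = np.cross(a_vec, b_vec) / V
    return a_star, b_star, c_star

# Monoclinic example (illustrative numbers): a = 5, b = 7, c = 9 Å, beta = 110 deg
beta = np.radians(110)
a_vec = np.array([5.0, 0.0, 0.0])
b_vec = np.array([0.0, 7.0, 0.0])
c_vec = np.array([9.0 * np.cos(beta), 0.0, 9.0 * np.sin(beta)])

a_s, b_s, c_s = reciprocal_axes(a_vec, b_vec, c_vec)
print(np.dot(a_vec, a_s), np.dot(a_vec, b_s))      # -> 1.0 and 0.0
# |a*| is the inverse of the (100) interplanar spacing d_100 = a sin(beta)
print(np.linalg.norm(a_s), 1.0 / np.linalg.norm(a_s))   # -> ~0.213 1/Å and ~4.698 Å
```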
But this will be described in another chapter. Go on!
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.04%3A_New_Page.txt
In the context of this chapter, you will also be invited to visit these sections...

Electromagnetic radiations (such as visible light) can interact among themselves and with matter, giving rise to a multitude of phenomena such as reflection, refraction, scattering, polarization...

Left: Reflection and refraction of light at the interface between glass, with a refractive index of 1.5, and air, with a refractive index of 1.0. TIR = "Total Internal Reflection". Center: Refraction of light after passing through a glass prism. Depending on the wavelength (color) of the incident beam (coming from the left), the angle of refraction varies, i.e. the light is dispersed. Right: Polarization of light passing through a polarizer. Depending on the rotation of the polarizer, one of the components of the incident beam (coming from the right) is filtered out. Animations originally taken from physics-animations.com.

X-ray diffraction is the physical phenomenon that expresses the fundamental interaction between X-rays and crystals (ordered matter). However, to describe the phenomenon, it is advisable to first introduce some physical models that (as all models) do not fully explain reality (as they are an idealization of it), but can be used to help understand the phenomenon.

On waves

A wave is an undulatory phenomenon (a disturbance) that propagates through space and time, and is regularly repeated. Waves are usually represented graphically by a sinusoidal function (as shown at right), in which we can determine some general parameters that define it.

Transverse wave, and propagation of longitudinal and circular vibrating movements. Animations originally taken from physics-animations.com.

Undulatory phenomena (waves) propagate at a certain speed (v) and can be modeled by the so-called wave equation, scalar or vectorial depending on the nature of the disturbance. The solutions to this equation are usually combinations of trigonometric terms, each of them characterized by: 1) an amplitude (A), which measures the maximum (or minimum) of the disturbance with respect to an equilibrium value, and 2) a phase $\phi$:

$\phi = 2\pi\,(\mathbf{K}\cdot\mathbf{r} - \nu t + \alpha)$

The intensity of an undulatory disturbance, at any point of the wave, is proportional to the square of the disturbance value at that point; if the disturbance is expressed in terms of complex exponentials, this is equivalent to the product of the disturbance by its complex conjugate. The intensity is a measure of the energy flow per unit of time and per unit of area of the wavefront (spherical or flat, depending on the type of wave).

A wave is a regular phenomenon, i.e. it repeats exactly in time (with a period T) and in space (with a period λ, the wavelength), so that λ = v·T, or λ·ν = v. In the expression of the phase ($\phi$), K is the so-called wave vector, which gives the sense of advance of the wave (the ray) and has modulus 1/λ; thus |K| is the number of repetitions per unit of length. ν is the frequency (the inverse of the period), that is, the number of repetitions (or cycles) per unit of time. The quantity 2$\pi$ν is the angular frequency (the pulsatance), i.e. the phase advance, in radians, per unit of time.

In the full electromagnetic spectrum (i.e. in the distribution of electromagnetic wavelengths), the hard X-rays (the high-energy ones) are located around a wavelength of 1 Angstrom in vacuum (for Cu the average wavelength is 1.5418 Angstrom and for Mo it is 0.7107 Angstrom), while visible light has a wavelength in the range of 4000 to 7000 Angstrom.
t and r are, respectively, the time and the position vector at which we measure the disturbance, and $\alpha$ is the original phase difference relative to the other components of the wave. We say that waves are in phase if the difference between the phases of the components is an integer multiple of 2$\pi$, and that they are in phase opposition if that difference is an odd multiple of $\pi$. To keep track easily of the phase relations between the wave components, these terms are usually written in exponential notation, in which the imaginary unit i represents a phase difference of +$\pi$/2.

Possible states of interference of two waves, shown at the top, having identical amplitude and frequency. The wave drawn at the bottom (bold line) shows the result of the interference, which has maximum amplitude when the interfering waves overlap, i.e. when they are in phase. Complete destructive interference (the resulting wave vanishes) is obtained when the maxima of one of the component waves coincide with the minima of the other, i.e. when the two waves are in phase opposition. Animation taken from The Pennsylvania State University.

Undulatory disturbance corresponding to the combination of two elementary waves (blue and green) of similar wavelengths (λ1, λ2), with the same amplitude (A) and relative phase difference $\alpha$. The disturbance moves from left to right with a velocity v. The sum of these two elementary waves produces the wave depicted in red.

Interference usually refers to the interaction of waves which are correlated or coherent with each other, either because they come from the same source or because they have the same, or nearly the same, frequency.

The solutions of the wave equation whose amplitude does not depend inversely on the distance from the origin are called plane waves, since at a given time all points belonging to the plane K·r = constant have the same phase; the plane is perpendicular to the propagation vector K and advances with speed v, which is therefore the phase velocity. For a wave resulting from the sum of several components, the pulse (wave packet) travels with the so-called group velocity, and interested readers can consult the simulation offered through this link. In the solutions of the equation in which the amplitude depends inversely on the distance, the planes become spheres and thus spherical waves are obtained; however, if the distance of observation is very large, they can be considered similar to plane waves at that observation point.

Taking into account what is shown in the figure above, the principle of superposition states that, for a number of coherent sources (whose mutual phase relationships do not vary), the wave measured at a given time and point is the sum of the individual waves at that time and point, taking their individual phases into account (the process of interference), as shown above. If there is no coherence between the waves, the phase relationships vary over time, and to obtain the total intensity of the resultant wave we just have to add the intensities (see figure below):

The total disturbance of two non-coherent sources is just the sum of the individual intensities.

To model the composition of simple trigonometric waves (of sine or cosine type, or in their imaginary exponential form), the Fresnel representation is normally used.
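Before describing the Fresnel (Argand) representation in detail, a hedged numerical preview of the same idea — adding waves as complex amplitudes — may be useful. This is an illustrative Python fragment of our own, not taken from the text: two coherent waves of equal amplitude give an intensity between 0 and 4A² depending on their phase difference, whereas two incoherent sources simply add their intensities.

```python
import numpy as np

A = 1.0                                   # common amplitude of the two waves

def resultant_intensity(delta_phi):
    """Intensity of the sum of two coherent waves of equal amplitude A
    whose phases differ by delta_phi (radians)."""
    psi = A * np.exp(1j * 0.0) + A * np.exp(1j * delta_phi)   # complex sum
    return np.abs(psi) ** 2                                    # |psi|^2

for dphi in (0.0, np.pi / 2, np.pi):
    print(f"phase difference {dphi:.3f} rad -> intensity {resultant_intensity(dphi):.3f}")
# -> 4.0 (in phase), 2.0, 0.0 (phase opposition)

# Incoherent sources: phase relations average out, so intensities simply add
print("incoherent sum of intensities:", A**2 + A**2)           # -> 2.0
```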
In this representation it is assumed that each wave oscillates around the X axis, as the projection of the circular motion of a vector of length equal to its amplitude and with an angular speed equal to the wave pulse ω. In this way, the resultant wave can be obtained by adding the individual vectors and projecting the resultant vector over the same X axis. Fresnel (or Argand) representation in which is shown the composition of several individual waves (fj). |F| is the amplitude of the resultant wave and Φ its phase. Interaction of X-rays with matter X-ray waves interact with matter through the electrons contained in atoms, which are moving at speeds much slower than light. When the electromagnetic radiation (the X-rays) reaches an electron (a charged particle) it becomes a secondary source of electromagnetic radiation that scatters the incident radiation. According to the wavelength and phase relationships of the scattered radiation, we can refer to elastic processes (or inelastic processes: Compton scattering), depending if the wavelength does not change (or changes), and to coherence (or incoherence) if the phase relations are maintained (or not maintained) over time and space. The exchanges of energy and momentum that are produced during these processes can even lead to the expulsion of an electron out of the atom, followed by the occupation of its energy level by electrons located in higher energy levels. All these types of interactions lead to different processes in the materials such as: refraction, absorption, fluorescence, Rayleigh scattering, Compton scattering, polarization, diffraction, reflection, ... The refractive index of all materials in relation to X-rays is close to 1, so that the phenomenon of refraction of X-rays is negligible. This explains why we are not able to produce lenses for X-rays and why the process of image formation, as in the case of visible light, cannot be carried out with X-rays. It does not explain why reflective optics (catoptric system) cannot be used. Only dioptric system is excluded. Absorption means an attenuation of the transmitted beam, losing its energy through all types of interactions, mainly thermal, fluorescence, inelastic scattering, formation of free radicals and other chemical modifications that could lead to degradation of the material. This intensity decrease follows an exponential model dependent on the distance crossed and on a coefficient of the material (the linear absorption coefficient) which depends on the density and composition of the material. The process of fluorescence, in which an electron is pulled out of an atom's energy level, provides information on the chemical composition of the material. Due to the expulsion of electrons from the different energy levels, sharp discontinuities in the absorption of radiation are produced. These discontinuities allow local analysis around an atom (EXAFS). In the Compton effect, the interaction is inelastic and the radiation loses energy. This phenomenon is always present in the interaction of X-rays with matter, but due to its low intensity, its incoherence and its propagation in all directions, its contribution is only found in the background radiation produced through the interaction. By scattering we will refer here to the changes of direction suffered by the incident radiation, and NOT to dispersion (the phenomenon that causes the separation of a wave into components of varying frequency). 
Left: Variation in the absorption of a material according to the wavelength of the incident radiation. Right: Dispersion of visible light into its nearly monochromatic wavelengths.

Elastic scattering by an electron

Interaction of an X-ray wavefront with an isolated electron, which becomes a new X-ray source, emitting X-ray waves in a spherical mode. The spherical waves produced by two electrons interact with each other, producing positive and negative interferences. Animations originally taken from physics-animations.com.

When a non-polarized X-ray beam (that is, one whose electromagnetic field vibrates at random in all directions perpendicular to the propagation) interacts with an electron, the interaction takes place primarily through its electric field. Thus, in a first approximation, we can neglect both the magnetic and the nuclear interactions. According to Maxwell's electromagnetic theory, the electron scatters electric waves which propagate perpendicular to the electric field, in such a way that the scattered energy (crossing a unit of area perpendicular to the direction of propagation, per unit of time) is:

$I_e(\mathbf{K}_s) = I_0 \, \dfrac{e^4}{R_0^2\, m^2\, c^4} \; \dfrac{1 + \cos^2 2\theta}{2}$

Thomson scattering model. Ks is the scattering vector, R0 is the distance to the observation point, 2θ is the angle between the incident direction and the direction in which the scattering is observed; e and m are the charge and mass of the electron, respectively, and c is the speed of propagation of radiation in vacuum.

The equation above describes the Thomson model, established in 1906 [Joseph John Thomson (1856-1940)], for the spherical wave elastically scattered by a free electron, which is similar to Rayleigh scattering with visible light. The scattered wave is elastic, coherent and spherical. The mass factor (m) in the denominator justifies neglecting the nuclear scattering. The binding forces between atom and electron are not considered in the model; it is assumed that the natural frequencies of vibration of the electron are much smaller than those of the incident radiation. In this "normal" scattering model (in contrast to the anomalous case, in which those frequencies are comparable) the scattered wave is in phase opposition with respect to the incident radiation.

The second factor (in brackets in the equation above), which depends on the angle θ, is known as the polarization factor, because the scattered radiation becomes partially polarized; this creates a certain anisotropy in the vibrational directions of the electron, as well as a reduction of the scattered intensity (depending on the direction). The scattered intensity shows symmetry around the incident direction. As the scattered wave is spherical, the inverse proportionality to the squared distance makes the energy per unit of solid angle a constant.

A solid angle is the angle in three-dimensional space that an object subtends at a point. It is a measure of how big that object appears to an observer looking from that point. Metrically it is the constant ratio between the areas cut by a cone on concentric spheres and the corresponding squared radii of the spheres: $A_1/R_1^2 = A_2/R_2^2 = A_3/R_3^2 = \dots$ = solid angle in steradians.

The factor of the geometric "difference of phase"

With regard to the phenomena of diffraction and interference, it is important to consider the phase relationship between two waves due to their different geometric paths.
This affects the phase difference $\alpha$ of the resultant wave,

$\phi = 2\pi\,(\mathbf{K}_0\cdot\mathbf{r} - \nu t + \alpha)$

in such a way that:

$\alpha = 2\pi\,(\mathbf{K}_s - \mathbf{K}_0)\cdot\mathbf{r}_{ij} + \alpha'$

where K0 is the wave vector of the incident wave, Ks is the wave vector in the direction in which the scattering is observed, and rij is the vector between the two scattering centers which produces the phase difference. If we have several disturbance centers whose phase differences are measured from a common origin, and we consider the position vectors rj of these centers, the phase difference of one of the centers can be written (using unit vectors s, s0 in the directions of propagation, with λK = s) as:

$\alpha_j = 2\pi\,\left[(\mathbf{s} - \mathbf{s}_0)/\lambda\right]\cdot\mathbf{r}_j + \alpha'$

This means that all points rj for which the product (s − s0)·rj has a constant value will have the same phase, given by $\alpha = (\mathrm{constant}\cdot 2\pi/\lambda) + \alpha'$.

Scattering by an atom

An atom, which can be considered as a set of Z electrons (Z being its atomic number), could be expected to scatter Z times what a single electron does. But the distances between the electrons of an atom are of the order of the X-ray wavelength, and therefore we can also expect some partial destructive interference among the scattered waves. In fact, an atom scatters Z times what an electron does only in the direction of the incident beam, and the scattering decreases as the angle θ (the angle between the incident radiation and the direction in which we measure the scattering) increases. The more diffuse the electron distribution around the nucleus, the greater the reduction.

Phase relationships among the electrons in an atom. Diagram showing the variation of the amplitude scattered by an electron, without considering the polarization (left figure), and by an atom (right figure). The amplitude (intensity) scattered by an atom decreases with increasing scattering angle. The intensity of the X-rays scattered by the electrons of an atom decreases with increasing scattering angle. Scheme taken from the School of Crystallography (Birkbeck College, Univ. of London).

The atomic scattering factor is the ratio between the amplitude scattered by an atom and that scattered by a single electron. As the speed of the electrons in the atom is much greater than the rate of variation of the electric vector of the wave, the incident radiation only "sees" an average electron cloud, which is characterized by an electron charge density ρ(r). If this distribution is considered spherically symmetric, it depends only on the distance to the nucleus, so that, with H = 2 sin θ / λ (the length of the scattering vector H = Ks − K0 = (s − s0)/λ):

$f(H) = 4\pi \int_0^{\infty} r^2\, \rho(r)\, \dfrac{\sin(Hr)}{Hr}\, dr$

Thus, the atomic scattering factor represents a number of electrons (the effective number of electrons of a particular atom type) that scatter in phase in that direction; in particular, for θ = 0, f(0) = Z. The hypothesis of isotropy, i.e. that this atomic factor does not depend on the direction of H, is not really suitable for the transition elements, in which d or f orbitals are involved, nor for the valence electrons. From quantum-mechanical calculations we can obtain the values of the atomic scattering factors, and analytical approximations of the following type can be derived:

$f(H) = \sum_{i=1}^{4} a_i\, \exp\left[-b_i H^2\right] + c$

Left: Atomic scattering factors calculated for several ions with the same number of electrons as Ne.
One can observe that O2− has a more diffuse electron cloud than Si4+ and thus shows a faster decay. Right: Atomic scattering factors calculated for atoms and ions with different numbers of electrons. Note that the single electron of the hydrogen atom (H) scatters very little compared with the other elements, especially with increasing θ. Hydrogen will therefore be "difficult to see" among the other scattering effects.

When the frequency of the incident radiation is close to the natural vibration frequency of an electron bound to the atom, some corrections (Δ) have to be made, due to the phase differences that occur between the individual waves scattered by electrons whose vibration (caused by the incident wave) is affected by that binding. Thus:

$f(H) = f_0 + \Delta f' + i\,\Delta f''$, also written as $f(H) = f_0 + f' + i f''$

where f0 is the atomic scattering factor of the unbound (free) atom, as previously defined, and i is the imaginary unit that represents the phase differences between the individual scattered waves. This situation occurs for atoms with large atomic numbers (heavy atoms), or with atomic numbers close to (but smaller than) that of the metal used as the X-ray anode. These corrections, which will be discussed in another chapter, depend only weakly on the angle θ, so the anomalous effect is better seen at larger values of this angle, although this is where the scattered beams have lower intensity due to thermal effects (see below). [These corrections allow us to distinguish the chirality of crystals (Bijvoet, 1951) and provide us with a method for solving the structure of molecules (SAD, MAD).]

Due to the thermal vibration of the atoms within the material, the effective volume of the atom appears larger, leading to an exponential decrease of the scattering power, characterized by a coefficient B (initially isotropic) in the Debye-Waller (1913, 1923) exponential factor:

$f(H)\, \exp\left[-B_{iso}\, \sin^2\theta / \lambda^2\right]$

B is $8\pi^2 \langle u^2\rangle$, where $\langle u^2\rangle$ is the mean square amplitude of thermal vibration in the direction of H. In the isotropic model of vibration, B is considered to be identical in all directions (with normal values between 3 and 6 Angstrom² in crystals of organic compounds). In the anisotropic model, B is considered to follow an ellipsoidal vibration model. Unfortunately, these thermal parameters may reflect not only thermal vibration, as they are affected by other factors such as static atomic disorder, absorption, wrong scattering factors, etc.

Decrease of the atomic scattering factor due to thermal vibration.

If the browser allows it, interested readers can also use this applet made by Steffen Weber, which shows the decrease of the atomic scattering factor of an atom when the temperature increases its thermal vibration state. Just write in the left column of the applet the atomic number of an atom (e.g. 80 for mercury), and the same number in the box shown below. Then activate the box marked with the word "Execute" and note the decrease of the scattering factor as a function of the selected temperature. Now increase the temperature (e.g. 2), and re-activate the "Execute" box.

Scattering by a set of atoms

X-rays scattered by a set of atoms produce radiation in all directions, leading to interferences due to the coherent phase differences associated with the interatomic vectors that describe the relative positions of the atoms. In a molecule or in an aggregate of atoms, this effect is known as internal interference, while we speak of external interference for the effect that occurs between molecules or aggregates.
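Before moving on to sets of atoms, here is a hedged numerical sketch of the two single-atom quantities just introduced: the four-Gaussian approximation to f0(H) and the isotropic Debye-Waller damping. The coefficients are made-up placeholders chosen only so that f0(0) = Z = 8; they are not tabulated values, and the Python fragment is ours, not part of any crystallographic library.

```python
import numpy as np

# Illustrative (made-up) coefficients for the four-Gaussian approximation
# f0(H) = sum_i a_i exp(-b_i H^2) + c, with H = 2 sin(theta) / lambda.
# They are NOT tabulated values; they merely satisfy f0(0) = Z = 8.
a = np.array([3.0, 2.5, 1.5, 0.8])
b = np.array([0.05, 0.15, 0.60, 2.00])     # placeholder widths, in Å^2
c = 0.2

def f0(H):
    """Rest-atom scattering factor (in electrons) for scattering vector length H (1/Å)."""
    H = np.atleast_1d(H).astype(float)
    return (a[:, None] * np.exp(-b[:, None] * H[None, :] ** 2)).sum(axis=0) + c

def f_thermal(H, B_iso):
    """f0(H) damped by the isotropic Debye-Waller factor exp(-B sin^2(theta)/lambda^2).
    Since sin(theta)/lambda = H/2, the exponent is -B_iso * (H/2)^2."""
    H = np.atleast_1d(H).astype(float)
    return f0(H) * np.exp(-B_iso * (H / 2.0) ** 2)

H = np.linspace(0.0, 1.5, 4)               # scattering vector lengths in 1/Å
print(f0(H))                                # decays smoothly from Z = 8 at H = 0
print(f_thermal(H, B_iso=4.0))              # decays faster: thermal vibration "blurs" the atom
```

With these single-atom factors in hand, we can compare the internal and external interference effects produced by sets of atoms.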
The scattering diagrams below show the relative intensity of each of these effects:

Scattering diagrams of a monoatomic material in different states. On the intensity axis we have neglected the background contribution. The figures mainly represent the effect of the external interference, while the internal interference (in this case due to a single atom only) is simply reflected in the relative intensity of the maxima. Note how the thermal movement in the liquid softens and reduces the scattering profile, and how the maxima produced by the glass also decrease. In the crystal, where the phase relations are fixed and repetitive, the scattering profile becomes sharp, with well-defined peaks, whereas in the other diagrams the peaks are broad and somewhat continuous. In the crystal case the scattering effect is known as diffraction. Note how the scattering phenomenon reflects the internal order of the sample -- the positional correlations between atoms.

In the case of monoatomic gases, the effects of interference between atoms m and n lead (in terms of the intensity scattered by an electron) to:

$I(\mathbf{H}) = I_e(\mathbf{H}) \sum_m \sum_n f_m(H)\, f_n(H)\, \exp\left[2\pi i\, (\mathbf{s} - \mathbf{s}_0)\cdot\mathbf{r}_{m,n} / \lambda\right]$

which, when averaged over the duration of the experiment and over all directions of space, gives rise to the Debye formula:

$\langle I(H)\rangle = I_e(H) \sum_m \sum_n f_m(H)\, f_n(H)\, \dfrac{\sin\left(2\pi |\mathbf{H}|\, |\mathbf{r}_{m,n}|\right)}{2\pi |\mathbf{H}|\, |\mathbf{r}_{m,n}|}$

Geometry of the scattering produced by a set of identical atoms.

In the case of monoatomic liquids some additional effects appear at short distances, due to correlations between the atomic positions. If the density of atoms per unit volume at a distance r from any atom (with spherical symmetry) is, on average, ρ(r), then the expression 4$\pi$r²ρ(r) is known as the radial distribution, and the Debye formula becomes:

$\langle I(H)\rangle = I_e(H)\, N f^2(H) \left[\, 1 + \int_0^{\infty} 4\pi r^2 \rho(r)\, \dfrac{\sin\left(2\pi |\mathbf{H}|\, r\right)}{2\pi |\mathbf{H}|\, r}\, dr \right]$

All these relationships allow the analysis of X-ray scattering in amorphous, glassy, liquid and gaseous samples.

However complex the phenomenon of X-ray scattering may appear, the nonspecialist reader need only remember the simple ideas outlined below (drawings taken from the lecture by Stephen Curry)...

• X-rays are scattered by the electrons contained in atoms. This scattering effect (which is produced in the form of waves, scattered in all directions of space) shows different intensities (amplitudes), depending on the number of electrons (the electron density) contributing to the scattered waves...
• Taking an origin in the atomic set and considering a given scattering direction, each of the waves scattered in that direction can be represented by a mathematical function (shown in the figure), whose amplitude depends on the electron density ρ(r) existing at the point where the wave arises. S is a magnitude which depends on the angle at which the scattering occurs.
• The total scattered wave in each direction is the sum of all the individual waves scattered in that same direction, f(S). Its intensity (amplitude) is governed by the phase relationships between the contributing waves, which depend on r (the distance between the points where they originate). This happens for all directions of space...
• If we place a detector (such as a photographic plate) to observe the scattered waves, f(S), we obtain a distribution of intensities as shown in the image below...
• This "map" of scattered waves (shape and intensities) contains information on the distribution of the atoms that are producing the scattering.
Mathematically, this map is represented by the function f(S), which is the Fourier transform of the atomic distribution, that is, of the electron density function…
• We will see later that when the set of atoms is arranged in an orderly fashion, i.e. in the form of a crystal, it behaves as a very effective scattering amplifier...
• In these circumstances, the scattering effects concentrate in certain areas of the detector, very well defined and regularly distributed, and the phenomenon is then known as diffraction... Diffraction allows us to obtain much richer information about the electron distribution than that produced by the scattering of a set of disordered atoms...

Scattering by a monoatomic lattice: Diffraction

When the set of atoms is structured as a regular three-dimensional lattice (so that the atoms are the nodes of the lattice), the precise geometric relationships between the atoms give rise to particular phase differences. In these cases, cooperative effects occur and the sample acts as a three-dimensional diffraction grating. Under these conditions, the effects of external interference produce a scattering structured in terms of peaks of maximum intensity, which can be described in terms of another lattice (reciprocal of the atomic lattice) and which shows typical patterns, such as those you can see when you look at a streetlight through an umbrella or a curtain.

Schematic diagram of diffraction patterns from several two-dimensional point distributions. The repetition parameters in the diffraction patterns (reciprocal space) carry the * superscript and k means a constant scale factor which depends on the experiment. All points of the diffraction pattern have the same intensity, because the wavelength used is assumed to be much larger than the size of the scattering points of the direct lattice (see above, the paragraph on scattering by an atom).

Relationship between two 2-dimensional lattices, the direct lattice (on the left) and the reciprocal lattice (on the right). The repetition parameters in reciprocal space carry the * superscript and k is a scale factor that depends on the experiment. d10 and d01 are the corresponding direct lattice spacings. Note that the figures show only one direct unit cell and one reciprocal unit cell, corresponding to the diffraction patterns shown on the left side of the page. See also direct and reciprocal lattices.

Structured in a lattice, any atom can be defined by a vector referred to a common origin:

$\mathbf{R}_{m_1,m_2,m_3} = m_1\,\mathbf{a} + m_2\,\mathbf{b} + m_3\,\mathbf{c}$

where R represents the position of a node of the lattice, m1, m2, m3 are integers, and a, b and c are the vectors defining the lattice. According to this, the intensity scattered by the material would be:

$I(\mathbf{H}) = I_e(\mathbf{H}) \sum_{m_1}\sum_{m'_1}\sum_{m_2}\sum_{m'_2}\sum_{m_3}\sum_{m'_3} f_j(H)\, f_{j'}(H)\, \exp\left[2\pi i\,(\mathbf{s}-\mathbf{s}_0)\cdot\mathbf{r}_{m,m'} / \lambda\right]$

where:

$\mathbf{r}_{m,m'} = \mathbf{R}_{m_1,m_2,m_3} - \mathbf{R}_{m'_1,m'_2,m'_3} = (m_1-m'_1)\,\mathbf{a} + (m_2-m'_2)\,\mathbf{b} + (m_3-m'_3)\,\mathbf{c}$

And calculating this sum we obtain:

$I(\mathbf{H}) = I_e(\mathbf{H})\; \dfrac{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot M_1\mathbf{a}/\lambda\right]}{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot \mathbf{a}/\lambda\right]} \cdot \dfrac{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot M_2\mathbf{b}/\lambda\right]}{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot \mathbf{b}/\lambda\right]} \cdot \dfrac{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot M_3\mathbf{c}/\lambda\right]}{\sin^2\left[\pi\,(\mathbf{s}-\mathbf{s}_0)\cdot \mathbf{c}/\lambda\right]} = I_e(\mathbf{H})\, I_L(\mathbf{H})$

In this expression, M1, M2 and M3 represent the number of unit cells contained in the crystal along the a, b and c directions, respectively, so that the total number of unit cells in the sample is M = M1·M2·M3 (around 10^15 in crystals with an average thickness of 0.5 mm). IL(H) is the factor of external interference due to the monoatomic lattice. It consists of products of the type sin²(Cx)/sin²(x), where C is a very large number.
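A quick numerical look at one of these products may make this concrete. The hedged Python sketch below (illustrative only; the values of M are arbitrary) evaluates sin²(Mx)/sin²(x) and shows how its peaks sharpen, growing as M², as the number of cells M increases.

```python
import numpy as np

def interference_product(x, M):
    """One factor of I_L(H): sin^2(M x) / sin^2(x), with x = pi (s - s0).a / lambda.
    Near x = n*pi the ratio is replaced by its limiting value M^2 to avoid 0/0."""
    x = np.asarray(x, dtype=float)
    num = np.sin(M * x) ** 2
    den = np.sin(x) ** 2
    out = np.full_like(x, float(M) ** 2)          # limiting value at the maxima
    ok = den > 1e-12
    out[ok] = num[ok] / den[ok]
    return out

x = np.linspace(0.0, np.pi, 20001)
for M in (5, 20, 200):
    g = interference_product(x, M)
    print(f"M = {M:4d}: peak = {g.max():.0f} (= M^2), "
          f"first zero at x = pi/M = {np.pi / M:.4f} rad")
# As M grows, the peaks become dramatically sharper: scattering turns into diffraction.
```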
Each of these products, sin²(Cx)/sin²(x), is almost zero for all values of x, except at the points where x is an integer multiple of $\pi$, where it takes its maximum value, C². The total value will be a maximum only when all three products are simultaneously different from zero, and there it takes the value M². That is, the diffraction diagram of the direct lattice is another lattice that takes non-zero values at its nodes and whose values, due to the Ie(H) factor, vary from one node to another...

Due to the finite size of the samples, the small chromatic spread of the incident radiation, the mosaicity of the sample, etc., the maxima show some spreading around them. Therefore, in order to set the experimental conditions for the measurement, a small oscillation of the sample around the maximum position (rocking) is needed, to integrate all these effects and to collect the total scattered energy.

Graphical representation of one of the products of the IL(H) function between two consecutive maxima. Note the transformation from scattering to diffraction, that is, from broad to very sharp peaks, as the number of cells M1 increases. The maxima are proportional to M1² and the first minimum appears closer to the maximum as M1 increases.

Diffraction by a crystal

When the material is not structured as a monoatomic lattice, but is formed by a group of atoms of the same or of different types, the position of every atom with respect to a common origin is given by:

$\mathbf{R}_{j,m_1,m_2,m_3} = m_1\,\mathbf{a} + m_2\,\mathbf{b} + m_3\,\mathbf{c} + \mathbf{r}_j = \mathbf{T}_{m_1,m_2,m_3} + \mathbf{r}_j$

Reduction, inside a unit cell, of the absolute position of an atom through lattice translations.

That is, to go from the origin to the atom at position R, we first go, through the translation T, to the origin of the unit cell, and from there the vector r takes us to the atom. As the atom is always included within a unit cell, its coordinates referred to the cell are smaller than the cell edges, and they are usually expressed as fractions of them:

$\mathbf{r} = (X/a)\,\mathbf{a} + (Y/b)\,\mathbf{b} + (Z/c)\,\mathbf{c} = x\,\mathbf{a} + y\,\mathbf{b} + z\,\mathbf{c}$

where x, y, z, as fractions of the axes, now lie between −1 and +1.

Then, under the conditions initially stated, i.e. with a monochromatic and unpolarized X-ray beam (a plane wave, formed by parallel rays with a common wavefront perpendicular to the propagation unit vector s0) that completely bathes the sample, the kinematic model of the interaction indicates that the sample produces diffracted beams in the direction s with an intensity given by:

$I(\mathbf{H}) = I_e(\mathbf{H})\; I_F(\mathbf{H})\; I_L(\mathbf{H})$

where Ie is the intensity scattered by an electron, IL is the external interference effect due to the three-dimensional lattice structure, and IF is the square of the so-called structure factor, a magnitude which takes into account the effect of all the internal interferences due to the geometric phase relationships between all the atoms contained in the unit cell. This internal structural effect is:

$I_F(\mathbf{H}) = |F(\mathbf{H})|^2 = F(\mathbf{H})\, F^*(\mathbf{H})$

As a consequence of the complex representation of waves, mentioned at the beginning, the squared modulus of a complex magnitude is obtained by multiplying the complex number by its conjugate. Thus, specifically, we give the name structure factor, F(H), to the wave resulting from all the waves scattered by all the atoms of the unit cell in a given direction:

$F(\mathbf{H}) = \sum_{j=1}^{n} f_j(H)\; \exp\left[2\pi i\,(\mathbf{s}-\mathbf{s}_0)\cdot\mathbf{r}_j / \lambda\right]$

As already stated, the phase differences due to geometric distances R are proportional to (s − s0)·R/λ.
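To make the structure-factor sum concrete, here is a hedged Python sketch with a small, entirely made-up unit-cell content (the atoms, scattering factors and coordinates are illustrative placeholders, not a real structure). Writing H·rj = h·xj + k·yj + l·zj, it evaluates F(hkl) and prints the amplitude and phase; the experiment itself only gives us intensities, proportional to |F|².

```python
import cmath
import numpy as np

# Illustrative contents of one unit cell: (approximate scattering factor f_j,
# fractional coordinates x, y, z).  Made-up values, not a refined structure.
atoms = [
    (8.0, (0.00, 0.00, 0.00)),   # "oxygen-like" atom at the origin
    (6.0, (0.25, 0.25, 0.25)),   # "carbon-like" atoms
    (6.0, (0.75, 0.75, 0.75)),
]

def structure_factor(h, k, l):
    """F(hkl) = sum_j f_j exp[2*pi*i (h x_j + k y_j + l z_j)]."""
    F = 0.0 + 0.0j
    for f_j, (x, y, z) in atoms:
        F += f_j * cmath.exp(2j * np.pi * (h * x + k * y + l * z))
    return F

for hkl in [(1, 0, 0), (1, 1, 1), (2, 0, 0)]:
    F = structure_factor(*hkl)
    print(hkl, f"|F| = {abs(F):6.2f}  phase = {cmath.phase(F):+6.3f} rad"
               f"  I ~ |F|^2 = {abs(F)**2:7.2f}")
```

Note, again, that the geometric positions enter only through the phase factor 2π(s − s0)·rj/λ.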
This means that if we change the origin, the phase differences will be produced according to the geometric changes, in such a way that as the exponential parts of the intensity functions are conjugate complexes, they will affect the intensities in terms of a proportionality constant only. Thus, a change of origin is not relevant to the phenomenon. In the equation of the total intensity, I(H), the conditions to get a maximum lead to the following consequences: • The phenomenon of diffraction in crystal samples is discrete, spectral. • The directions and the periodic repetitions in the reciprocal lattice do not depend on the structure factors. They only depend on the direct lattice. The knowledge of these directions give us the shape and size of the direct unit cell, which actually controls the positions of the diffraction maxima. • The intensity of the diffraction maxima depends on the structure factor in this direction (at that reciprocal point), which only depend on the atomic distribution within the unit cell. In other words, the diffraction intensities are only controlled by the atomic distribution within the cell. Thus, through the intensities we can obtain information about the atomic structure within the unit cell. • The total diffraction pattern is the consequence of the diffraction of the different atomic aggregates within the unit cell, sampled in the diffraction points produced by the crystal lattice (the reciprocal points). • In summary, structural crystallography by X-ray diffraction consists of measuring the intensities of the largest possible amount of diffracted beams in the 3-dimensional diffraction pattern, to get from them the amplitudes of the structure factors, and from these values (through some procedure to allocate the phases for each of these structure factors) to build the electronic distribution in the elementary cell (which can be described in terms of a function whose maxima will give us the atomic positions). Diffraction patterns of: (a) a single molecule, (b) two molecules, (c) four molecules, (d) a periodically distributed linear array of molecules, (e) two linear arrays of molecules, and (f) a two-dimensional lattice of molecules. Note how the pattern of the latter is the pattern of the molecule sampled in the reciprocal points. To clarify what has been said above, the reader can analyze further objects and their corresponding diffraction patterns through this link. Additionally we suggest you to watch the video prepared by the Royal Institution to demonstrate optically the basis of diffraction using a wire coil (representing a molecule) and a laser (representing an X-ray beam). Laue equations, Bragg's interpretation and Ewald's geometric diffraction model We have seen that the diffraction diagram of a direct lattice defined by three translations, a, b and c, can be expressed in terms of another lattice (the reciprocal lattice) with its reciprocal translations: a*, b* and c*, and these translation vectors (direct and reciprocal) meet the conditions of reciprocity: a a* = b b* = c c* = 1 and a b* = a c* = b c* = 0 and they also meet that (for instance): a* = (b x c) / V (x means vectorial or cross product) where V is the volume of the direct unit cell defined by the 3 vectors of the direct cell, and therefore: a* = N100 / d100 where N100 is a unit vector perpendicular to the planes of indices h=1, k=0, l=0, and where d100 is the corresponding interplanar spacing. And similarly with b* and c*. 
In this way, any vector of the reciprocal lattice will be given by:

$\mathbf{H}^*_{hkl} = h\,\mathbf{a}^* + k\,\mathbf{b}^* + l\,\mathbf{c}^* = \mathbf{N}_{hkl}/d_{hkl}$

in such a way that:

$|\mathbf{H}^*_{hkl}|\; d_{hkl} = 1$

On the other hand, we have seen that the maxima in the diffraction diagram of a crystal correspond to the maxima of the function IL(H), meaning that each of the products that define this function must individually be different from zero, as a sufficient condition to obtain a maximum of the diffracted intensity. If we remember that H = (s − s0)/λ, this also means that the three so-called Laue equations must be fulfilled [Max von Laue (1879-1960)]:

$\mathbf{H}\cdot\mathbf{a} = h$, $\mathbf{H}\cdot\mathbf{b} = k$, $\mathbf{H}\cdot\mathbf{c} = l$, where h, k, l are integers (Laue equations)

There is also a less formal way to derive and/or to understand the Laue equations, and therefore we invite interested readers to visit this link...

These three Laue conditions are met if the vector H is a vector of the reciprocal lattice, so that H = h a* + k b* + l c*, since, due to the properties of the reciprocal lattice, it can be stated that:

$\mathbf{H}_{hkl}\cdot\mathbf{a} = h$, $\mathbf{H}_{hkl}\cdot\mathbf{b} = k$, $\mathbf{H}_{hkl}\cdot\mathbf{c} = l$

In other words: the three Laue conditions (Nobel Prize in Physics in 1914) are sufficient to establish that the vector H is a vector of the reciprocal lattice (H = H*hkl). If these three conditions are fulfilled, and taking into account some relationships explained above, we can write:

$|\mathbf{H}| = \dfrac{2\sin\theta_{hkl}}{\lambda} = \dfrac{|\mathbf{s}-\mathbf{s}_0|}{\lambda} = |\mathbf{H}^*_{hkl}| = \dfrac{1}{d_{hkl}}$

And this is Bragg's Law [William L. Bragg (1890-1971)], which can be rewritten in its usual form as:

$\lambda = 2\, d_{hkl}\, \sin\theta_{hkl}$

But taking into account that geometrically we can consider spacings of the type dhkl/2, dhkl/3, and in general dhkl/n (i.e. dnh,nk,nl, where n is an integer), Bragg's equation (Nobel Prize in Physics in 1915) takes the form:

$\lambda = 2\,(d_{hkl}/n)\, \sin\theta_{nh,nk,nl}$, that is, $n\,\lambda = 2\, d_{hkl}\, \sin\theta_{nh,nk,nl}$, where n is an integer (Bragg's Law)

There is also a less formal way to derive and/or to understand Bragg's Law, and therefore we invite interested readers to visit this link...

Moreover, if the Laue conditions are fulfilled (as explained in the following figure), all atoms located on the sequence of planes parallel to the one with indices hkl, at a given distance DP from the origin (DP being an integer multiple of dhkl), will diffract in phase, and their geometric phase-difference factor will be:

$(\mathbf{s}-\mathbf{s}_0)\cdot\mathbf{r} = n\,\lambda$

and consequently a diffraction maximum will be produced in the direction:

$\mathbf{s} = \mathbf{s}_0 + \lambda\,\mathbf{H}^*_{hkl}$

Since $\mathbf{N}_{hkl} = \mathbf{H}^*_{hkl}\, d_{hkl}$, the plane equation can therefore be written as:

$\mathbf{H}^*_{hkl}\cdot\mathbf{r} = \mathbf{H}^*_{hkl}\cdot\mathbf{r}_i = |\mathbf{H}^*_{hkl}|\,|\mathbf{r}_i|\,\cos(\mathbf{H}^*_{hkl},\mathbf{r}_i) = (1/d_{hkl})\, DP = n$

Bragg's equation has a very simple interpretation... When a diffraction maximum occurs in the crystal-radiation interaction, it is equivalent to saying that the incident beam is reflected by the crystal planes of indices hkl and interplanar spacing dhkl. That is why, when talking about diffraction maxima, we sometimes use the expression Bragg reflection. Moreover, this equation contains all the traditional reciprocity relations of diffraction, between spacing and direction, or position and momentum: the shorter the spacing, the larger the angle, and vice versa; direct lattices with large unit cells produce very closely spaced diffracted beams, and vice versa.

The figure geometrically describes the direction of the diffracted beam due to the constructive interference between atoms located on the planes with interplanar spacing d(hkl). The figure depicts a description of Bragg's model when different types of atoms are located on their respective parallel planes with Δd spacing.
The separation between blue and green planes creates interferences and differences of phases (between the reflected beams) giving rise to changes in intensity (depending of the direction). These intensity changes allow us to get information on the structure of atoms that form the crystal). Readers with installed Java Runtime tools can play with Bragg's model using this applet. On the other hand, we have seen that, in general: H = (s - s0) / λ = -s0/λ + s/λ and this means that the vectors H can be considered as belonging to a sphere of radius 1/λ centered at a point defined by the vector -s0/λ with respect to the origin where the crystal is. This is known as Ewald's sphere (Ewald, 1921), which provides a very easy geometric interpretation of the directions of the diffracted beams. When the H vectors belong to the reciprocal lattice and the end of the vector (a reciprocal point) lies on that spherical surface, diffracted beams are produced, and obviously the crystal planes are in Bragg's position. It's amazing how quickly Paul Peter Ewald (1888-1985) developed this interpretation only some months after Max von Laue experiments. His original article, published in 1913 (in German), is available through this link. The advanced reader can also consult the article published by Ewald in Acta Crystallographica (1969) A25, 103-108. This figure describes Ewald's geometric model. When a reciprocal point , P*(hkl), touches the surface of Ewald's sphere, a diffracted beam is produced starting in the centre of the sphere and passing through the point P*(hkl). Actually the origin of the reciprocal lattice, O*, coincides with the position of the crystal and the diffracted beam will start from this common origin, but being parallel to the one drawn in this figure, exactly as it is depicted in the figure below. This figure shows the whole reciprocal volume that can give rise to diffracted beams when the sample rotates. Changing the orientation of the reciprocal lattice, one can collect all the beams corresponding to the reciprocal points contained in a sphere of radius 2/λ known as the limit sphere. Reciprocal points are shown as small gray spheres . To obtain all possible diffracted beams that a sample can provide, using a radiation of wavelength λ, it is sufficient to conveniently orient the crystal and make it turn, so that its reciprocal points will have the opportunity to lay on the surface of Ewald's sphere. In these circumstances, diffracted beams will originate as described above. With larger wavelengths, the volume of the reciprocal space that can be explored will be smaller, but the diffracted beams will appear more separated. Ewald's model showing how diffraction occurs. The incident X-ray beam, with wavelength λ, shown as a white line, "creates" an imaginary Ewald's sphere of diameter 2/λ (shown in green). The reciprocal lattice (red points) rotate as the crystal rotates, and every time that a reciprocal point cuts the sphere surface a diffracted beam is produced from the center of the sphere (yellow arrows). This Java application can be downloaded from this link. It is totally virus free, and based on the concept of the reciprocal lattice. It allows playing with the Ewald's model to understand the diffraction. Original by Nicolas Schoeni and Gervais Chapuis of the Ecole Polytechnique Fédéral de Lausanne (Switzerland). According to Bragg's Law, the maximum angle at which one can observe diffraction will correspond to the angle where the sin function is maximum (=1). 
This also means that the theoretical maximum resolution that can be achieved is λ/2. In practice, due to the decrease of the atomic scattering factors at increasing Bragg angles, appreciable intensities will appear only up to a maximum angular value θmax < 90º, and the real maximum resolution reached will be dmin = λ/(2 sin θmax). Considering that the interplanar spacings dhkl are a characteristic of the sample, Bragg's Law indicates that, by reducing the wavelength, the diffraction angles (θ) will decrease; the pattern shrinks but, on the other hand, more diffraction data will be obtained, and therefore a better structural resolution will be achieved. According to Ewald's model, the amount of reciprocal space that can be measured is increased by reducing the wavelength, that is, by increasing the radius of the Ewald sphere.

It is also very helpful to visit the pages on reciprocal space offered by the University of Cambridge through this link, as well as to look at the video made by www.PhysicsReimagined.com, showing the geometric relationships between direct and reciprocal lattices, displayed below as an animated gif:

Once the foundations of the theoretical model describing the phenomenon of diffraction are set, we encourage the reader to visit the pages dedicated to the different experimental methods used to measure the diffraction intensities.
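As a closing numerical illustration of these last remarks (a hedged sketch: the spacing of 3.0 Å and the θmax of 70° are arbitrary example values, and the helper functions are ours), the fragment below applies Bragg's law with the Cu and Mo wavelengths quoted earlier and evaluates the resolution limit dmin = λ/(2 sin θmax).

```python
import numpy as np

def bragg_angle(d, wavelength, n=1):
    """Bragg angle theta (degrees) for order n, spacing d and wavelength (same units),
    or None when n*lambda > 2d and no diffraction is possible."""
    s = n * wavelength / (2.0 * d)
    return None if s > 1.0 else np.degrees(np.arcsin(s))

def d_min(wavelength, theta_max_deg):
    """Resolution limit d_min = lambda / (2 sin(theta_max))."""
    return wavelength / (2.0 * np.sin(np.radians(theta_max_deg)))

for lam, label in [(1.5418, "Cu"), (0.7107, "Mo")]:
    print(label,
          "theta for d = 3.0 Å :", round(bragg_angle(3.0, lam), 2), "deg;",
          "d_min at theta_max = 70 deg :", round(d_min(lam, 70.0), 3), "Å")
# The shorter Mo wavelength compresses the pattern (smaller angles) and
# reaches a smaller d_min, i.e. a better structural resolution.
```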
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.05%3A_New_Page.txt
In the context of this chapter, you will also be invited to visit these sections... Regardless of the huge improvements that have occurred for X-ray generation, the techniques used to measure the intensities and angles of diffraction patterns have evolved over time. In the first diffraction experiment, Friedrich and Knipping (1912) used a film sensitive to X-rays, but even in the same year, Bragg used an ionization chamber mounted on a rotating arm that, in general, could more accurately determine angles and intensities. However, the film technique had the advantage of being able to collect many diffracted beams at the same time, and thus during the first years of structural Crystallography (from 1920 to 1970) an extensive use of photographic methods was made. Among them the following techniques should be highlighted: Laue, Weissenberg, precession and oscillation. Since the mid-1970's, photographic methods have been gradually replaced by goniometers coupled with point detectors which subsequently have been replaced by area detectors. The Laue method For his first experiments, Max von Laue (1879-1960 (Nobel Prize in Physics in 1914) used continuous radiation (with all possible wavelengths) to impact on a stationary crystal. With this procedure the crystal generates a set of diffracted beams that show the internal symmetry of the crystal. In these circumstances, and taking into account Bragg's Law, the experimental constants are the interplanar spacing d and the crystal position referred to the incident beam. The variables are the wavelength λ and the integer number n: n λ = 2 dhkl sin θnh,nk,nl Thus, for the same interplanar spacing d, the diffraction pattern will contain the diffracted beams corresponding to the first order of diffraction (n=1) of a certain wavelength, the second order (n=2) of half the wavelength (λ/2), the third order (n=3) with wavelength λ/3, etc. Therefore, the Laue diagram is simply a stereographic projection of the crystal. See also the Java simulation offered through this link. Laue diagram of a crystal There are two different geometries in the Laue method, depending on the crystal position with regard to the photographic plate: transmission or reflection: Left: The Laue method in transmission mode Right: The Laue method in reflection mode The Weissenberg method The Weissenberg method is based on a camera with the same name, developed in 1924 by the Austrian scientist Karl Weissenberg (1893-1976). In order to understand Weissenberg’s contribution to X-ray crystallography one should read the two following articles that some years ago were offered to the British Society of Rheology: "Weissenberg’s Influence on Crystallography" (by H. Lipson) (use this link in case of problems) and "Karl Weissenberg and the development of X-ray crystallography" (by M.J. Buerger). The camera consists of a metallic cylinder that contains a film sensitive to X-rays. The crystal is mounted on a shaft (coaxial with the cylinder) that rotates. According to Ewald's model, the reciprocal points will intersect the surface of Ewald's sphere and diffracted beams will be produced. The diffracted beams generate black spots on the photographic film, which when removed from the metallic cylinder, appears as shown below. Left: Scheme and example of a Weissenberg camera. This camera type was used in crystallographic laboratories until about 1975. Right: Camera developed by K. 
Weissenberg in 1924 Two types of diffraction diagrams can be easily obtained with the Weissenberg cameras, depending on the amount of crystal rotation: oscillation diagrams (rotation of approx. +/-20 degrees) or full rotation diagrams (360 degrees) respectively. Oscillation diagrams are used to center the crystal, that is, to ensure that the rotation of axis coincides exactly with a direct axis, which is equivalent to saying that reciprocal planes (which by geometric construction are perpendicular to a direct axis ) generate lines of spots on the photographic film. Once centering is achieved, the full rotation diagrams are used to evaluate the direct axis of the crystal, which coincides with the spacing between the dot lines on the diagram. Scheme explaining the production of a Weissenberg diagram of the rotation or oscillation variety. When the reciprocal points, belonging to the same reciprocal plane, touch the surface of Ewald's sphere, they produce diffracted beams arranged in cones. As shown in the diagram above, each horizontal line of points represents a reciprocal plane perpendicular to the axis of rotation as projected on the photographic plate. The figure on the left shows the real appearance of a Weissenberg diagram of this type, rotation-oscillation. As explained below, the distance between the horizontal spot lines provides information on the crystal repetition period in the vertical direction of the film. These diagrams were also used to align mounted crystals... This technique requires that the crystal rotation axis is coincident with an axis of its direct lattice, so that the reciprocal planes are collected as lines of spots as is shown on the left. The crystal must be mounted in such a way that the rotation axis coincides with a direct axis of the unit cell. Thus, by definition of the reciprocal lattice, there will be reciprocal planes perpendicular to that axis. The reciprocal points (lying on these reciprocal planes) rotate when the crystal rotates and (after passing through the Ewald sphere) produce diffracted beams that arranged in cones, touch the cylindrical film and appear as aligned spots (photograph on the left). It seems obvious that these diagrams immediately provide information about the repetition period of the direct lattice in the direction perpendicular to the horizontal lines (reciprocal planes). However, those reciprocal planes (two dimensional arrays of reciprocal points) are represented as projections (one dimension) on the film and therefore a strong spot overlapping is to be expected. The problem with spot overlap was solved by Weissenberg by adding a translation mechanism to the camera, in such a way that the cylinder containing the film could be moved in a "back-and-forth" mode (in the direction parallel to the axis of rotation) coupled with the crystal rotation. At the same time, he introduced two internal cylinders (as is shown in the left figure, and also below). In this way, only one of the diffracted cones (those from a reciprocal layer) is "filtered" and therefore allowed to reach the photographic film. Thus, a single reciprocal plane (a 2-dimensional array of reciprocal points) is distributed on the film surface (two dimensions) and therefore the overlap effect is avoided. 
However, as a consequence of the back and forth translation of the camera during the rotation of the crystal, a deformation is originated in the distribution of the spots (diffraction intensities) The appearance of such a diagram, which produces a geometrical deformation of the collected reciprocal plane, is shown below. Taking into account this deformation, one can easily identify every spot of the selected reciprocal plane and measure its intensity. To select the remaining reciprocal planes one just has to shift the internal cylinders and collect their corresponding diffracted beams (arranged in cones). Left: Details of the Weissenberg camera used to collect a cone of diffracted beams. Two internal cylinders showing a slit, through which a cone of diffracted beams is allowed to reach the photographic film. The outer cylinder, containing the film, moves back-and-forth while the crystal rotates, and so the spots that in the previous diagram type were in a line (see above) are now distributed on the film surface (see the figure on the right). Right: Weissenberg diagram showing the reciprocal plane of indices hk2 of the copper metaborate. The precession method The precession method was developed by Martin J. Buerger (1903-1986) at the beginning of the 1940's as a very clever alternative to collect diffracted intensities without distorting the geometry of the reciprocal planes. As in the Weissenberg technique, precession methodology is also based on a moving crystal, but here the crystal moves (and so does the coupled reciprocal lattice) as the planets do, and hence its name. In this case the film is placed on a planar cassette that moves following the crystal movements. In the precession method the crystal has to be oriented so that the reciprocal plane to be collected is perpendicular to the X-rays' direct beam, ie a direct axis coincides with the direction of the incident X-rays. Two schematic views showing the principle on which the precession camera is based. μ is the precession angle around which the reciprocal plane and the photographic film move. During this movement the reciprocal plane and the film are always kept parallel. The camera designed for this purpose and the appearance of a precession diagram showing the diffraction pattern of an inorganic crystal are shown in the figures below. Left: Scheme and appearance of a precession camera Right: Precession diagram of a perovskite showing cubic symmetry Precession diagrams are much simpler to interpret than those of Weissenberg, as they show the reciprocal planes without any distortion. They show a single reciprocal plane on a photographic plate (picture above) when a circular slit is placed between the crystal and the photographic film. As in the case of Weissenberg diagrams, we can readily measure distances and diffraction intensities. However, with these diagrams it is much easier to observe the symmetry of the reciprocal space. The only disadvantage of the precession method is a consequence of the film, which is flat instead of cylindrical, and therefore the explored solid angle is smaller than in the Weissenberg case. The precession method has been used successfully for many years, even for protein crystals: Left: Precession diagram of a lysozyme crystal. One can easily distinguish a four-fold symmetry axis perpendicular to the diagram. According to the relationships between direct and reciprocal lattices, if the axes of the unit cell are large (as in this case), the separation between reciprocal points is small. 
Right: Precession diagram of a simple organic compound, showing mm symmetry (two mirror planes perpendicular to the diagram). Note that the distances between reciprocal points are much larger (smaller direct unit cell axes) than in the case of proteins (see the figure on the left). The oscillation method Originally, the methods of rotating the crystal through a wide rotation angle were used very successfully. However, when they were applied to crystals with larger direct cells (i.e., small reciprocal cells), the collecting time increased. Therefore, these methods were replaced by methods using small oscillation angles, allowing multiple parts of different reciprocal planes to be collected at once. Collecting this type of diagram at different starting positions of the crystal is sufficient to obtain enough data in a reasonable time. The geometry of collection is described in the figures shown below. Nowadays, with rotating anode generators, synchrotrons, and area detectors (image plate or CCD, see below), this is the method most widely used, especially for proteins. Outline of the geometrical conditions for diffraction in the oscillation method. The crystal, and therefore its reciprocal lattice, oscillates through a small angle around an axis (perpendicular to the plane of the figure) which passes through the center. In the figure on the right, the reciprocal region that passes through diffraction conditions, inside the limiting sphere of radius $2/\lambda$ (that is, $2 \sin 90^\circ / \lambda$), is shown in yellow. The maximum resolution which can be obtained in the experiment is given by $2 \sin \theta_{max} / \lambda$ (in reciprocal-space units). When the reciprocal lattice is oscillated through a small angle around the rotation axis, small areas of different reciprocal planes will cross the surface of Ewald's sphere, reaching the diffraction condition. Thus, the detector screen will show diffraction spots from the different reciprocal planes forming small "lunes" on the diagram (figure on the right). A "lune" is a plane figure bounded by two circular arcs of unequal radii, i.e., a crescent. Four-circle goniometers The introduction of digital computers in the late 1970s led to the design of the so-called automatic four-circle diffractometers. These goniometers, with very precise mechanics and by means of three rotation axes, allow crystal samples to be brought to any orientation in space, fulfilling Ewald's requirements to produce diffraction. Once the crystal is oriented, a fourth axis of rotation, which supports the electronic detector, is placed in the right position to collect the diffracted beam. All these movements can be programmed in an automatic mode, with minimal operator intervention. Two different goniometric geometries have been used very successfully for many years. In the Eulerian goniometer (see the figure below) the crystal is oriented through the three Euler angles (three circles): Φ represents the rotation around the goniometer head axis (where the crystal is mounted), χ allows the crystal to roll over the closed circle, and ω allows the full goniometer to rotate around a vertical axis. The fourth circle represents the rotation of the detector, 2θ, which is coaxial with ω. This geometry has the advantage of high mechanical stability, but presents some restrictions for external devices (for instance, low- or high-temperature attachments) to access the crystal.
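Before looking at the figures below, it may help to see how the three Eulerian circles compose into a single orientation. The sketch below is a minimal illustration only: the choice of rotation axes (Φ and ω about the laboratory z axis, χ about x) is an assumed convention for the example, since real instruments differ in their axis definitions and sense of rotation.

```python
import numpy as np

def rot_z(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

def orientation_matrix(phi, chi, omega):
    """Combined setting produced by the three Eulerian circles (angles in degrees)."""
    return rot_z(omega) @ rot_x(chi) @ rot_z(phi)

# a hypothetical reciprocal-lattice vector expressed in the laboratory frame
v = np.array([0.10, 0.00, 0.05])
print(orientation_matrix(phi=30.0, chi=45.0, omega=10.0) @ v)
```

The point of the example is simply that three successive circle settings amount to one rotation matrix, which is what brings a chosen reciprocal vector onto the Ewald sphere.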
Left: Scheme and appearance of a four-circle goniometer with Eulerian geometry Right: Rotations in a four-circle goniometer with Eulerian geometry An alternative to the Eulerian geometry is the so-called Kappa geometry, which does not have an equivalent to the closed χ circle. The role of the Eulerian χ rotation is fulfilled by means of two new axes: κ (kappa) and ωκ (see the figure below), in such a way that with a combination of both new angles one can obtain Eulerian χ angles in the range -90 to +90 degrees. The main advantage of this Kappa geometry is the wide accessibility to the crystal. The angles Φ and 2θ are identical to those in Eulerian geometry: Scheme and appearance of a four-circle goniometer with Kappa geometry The detection system widely used during many years for both geometries (Euler and Kappa) was based on small-area counters or point detectors. With these detectors the intensity of the diffracted beams must be measured individually, one after the other, and therefore all angles had to be changed automatically according to previously calculated values. Typical measurement times for such detector systems are around 1 minute per reflection. One of the point detectors more widely used for many years is the scintillation counter, whose scheme is shown below: Scheme of a scintillation counter Area detectors As an alternative to the point detectors, the development of electronic technology has led to the emergence of so-called area detectors which allow the detection of many diffraction beams simultaneously, thereby saving time in the experiment. This technology is particularly useful for proteins and generally for any material that can deteriorate over its exposure to X-rays, since the detection of every collected image (with several hundreds of reflections) is done in a minimum time, on the order of minutes (or seconds if the X-ray source is a synchrotron). One of the area detectors most commonly used is based on the so-called CCD's (Charge Coupled Device) whose scheme is shown below: Schematic view of a CCD with its main components. The X-ray converter, in the figure shown as Phosphor, can also be made with other materials, such as GdOS, etc. The CCD converts X-ray photons at high speed, but its disadvantage is that it operates at very low temperatures (around -70 C). Image taken from ADSC Products CCD-type detectors are usually mounted on Kappa goniometers and their use is widespread in the field of protein crystallography, with rotating anode generators or synchrotron sources. Left: Goniometer with Kappa geometry and CCD detector (Image taken from Bruker-AXS) Right: Details of a Kappa goniometer (in this case with a fixed κ angle) Another type of detector widely used today, especially in protein crystallography, are the Image Plate Scanners, which are usually mounted on a relatively rudimentary goniometer, whose only freedom is a rotation axis parallel to the crystal mounting axis. The sensor itself is a circular plate of material sensitive to X-rays. After exposure, a laser is used to scan the plate and read out the intensities. Left: Image Plate Scanner. (image taken from Marxperts) Right: Components of an Image Plate Scanner The latest technology involves the use of area detectors based on CMOS (complementary metal-oxide semiconductor) technology that has very short readout time, allowing for increased frame rates during the data collection. 
Area detectors XALOC, the beamline for macromolecular crystallography (left) at the Spanish synchrotron ALBA (right) In summary, a complete data collection with this type of detectors consists of multiple images such as the ones shown below. The collected images are subsequently analyzed in order to obtain the crystal unit cell data, symmetry (space group) and intensities of the diffraction pattern (reciprocal space). This process is explained in more detail in another section. Left: Diffraction image of a protein, obtained with the oscillation method in an Image Plate Scanner. During the exposure time (approx. 5 minutes with a rotating anode generator, or approx. 5 seconds at a synchrotron facility) the crystal rotates about 0.5 degrees around the mounting axis. The read-out of the image takes about 20 seconds (depending on the area of the image plate). This could also be the appearance of an image taken with a CCD detector. However, with a CCD the exposure time would be shorter. Right: A set of consecutive diffraction images obtained with an Image Plate Scanner or a CCD detector. After several images two concentric dark circles appear, corresponding to an infinite number of reciprocal points. They correspond to two consecutive diffraction orders of randomly oriented ice microcrystals that appear due to some defect of the cryoprotector or to some humidity of the cold nitrogen used to cool down the sample. Images are taken from Janet Smith Lab. See also the example published by Aritra Pal and Georg Sheldrick. In all of these described experimental methodologies (except for the Laue method), the radiation used is usually monochromatic (or nearly monochromatic), which is to say, radiation with a single wavelength. Monochromatic radiations are usually obtained with the so-called monochromators, a system composed by single crystals which, based on Bragg's Law, are able to "filter" the polychromatic input radiation and select only one of its wavelengths (color), as shown below: Scheme of a monochromator. A polychromatic radiation (white) coming from the left is "reflected", according to Bragg's Law, "filtering" the input radiation that is reflected again on a secondary crystal. Image taken from ESRF. At present, in crystallographic laboratories or even in the synchrotron lines, the traditional monochromators are being replaced by new optical components that have demonstrated superior efficacy. These components, usually known as "focusing mirrors", can be based on the following phenomena: • total reflection (mirrors, capillaries and wave guides), • refraction (refraction lenses) and • diffraction (crystal systems based on monochromators, multilayer materials, etc.) It can also be very instructive to look at this animated diagram showing the path of each X-ray photon in a given diffraction system: • the photon leaves the source where X-rays are produced, • goes through the various optical elements that channel it in the right direction (mirrors, slits and collimators) • diffracts inside the single crystal, and • finally generates the diffraction spots on a detector The original video can be seen in https://vimeo.com/52155723 In order to get the largest and best collection of diffraction data, crystal samples are usually maintained at a very low temperature (about 100 K, that is, about -170 C) using a dry nitrogen stream. At low temperatures, crystals (and especially those of macromolecules) are more stable and resist the effects of X-ray radiation much better. 
At the same time, the low temperature further reduces the atomic thermal vibration factors, facilitating their subsequent location within the crystal structure. Cooling system using dry liquid nitrogen. Image taken from Oxford Cryosystems To mount the crystals on the goniometer head, in front of the cold nitrogen stream, crystallographers use special loops (like the one depicted in the left figure) which fix the crystal in a matrix transparent to X-rays. This is especially useful for protein crystals, where the matrix also acts as cryo-protectant (anti-freeze). The molecules of the cryo-protectant spread through the crystal channels replacing the water molecules with the cryo-protectant ones, thus avoiding crystal rupture due to frozen water. Left: Detail of a mounted crystal using a loop filled with an antifreeze matrix Right: Checking the position of the crystal in the goniometric optical center. Video courtesy of Ed Berry In any case, the crystal center must be coincident with the optical center of the goniometer, where the X-ray beam is also passing through. In this way, when the crystal rotates, it will always be centered on that point, and in any of its positions will be bathed by the X-ray beam. Cryo-protection system mounted on a goniometer The nitrogen flow at -170 º C (coming through the upper tube) cools the crystal mounted on the goniometer head. The collimator of the X-ray beam points toward the crystal from the left of the image. Note the slight steam generated by the cold nitrogen when mixed with air humidity. Visually analyzing the quality of the diffraction pattern In summary, all of these methodologies can be used to obtain a data collection, consisting of three Miller indices and an intensity for each diffracted beam, which is to say, the largest number of reciprocal points of the reciprocal lattice. All these data, crystal unit cell dimensions, crystal symmetry (space group) and intensities associated with the reciprocal points (diffraction pattern), will allow us to "see" the internal structure of the crystal, but this issue will be shown in another chapter...
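As a small numerical footnote to the geometry used throughout this chapter: Bragg's law fixes both the take-off angle of a monochromator crystal and the resolution limit recorded at the edge of a flat detector. In the sketch below the Si(111) d-spacing, the detector dimensions and the crystal-to-detector distance are illustrative assumptions, not values tied to any particular instrument.

```python
import math

wavelength = 1.5418            # Cu K-alpha, in Angstrom

# take-off angle of a monochromator crystal (Bragg's law), here for Si(111)
d_Si111 = 3.135                # d-spacing of the Si(111) planes, Angstrom
theta_mono = math.degrees(math.asin(wavelength / (2.0 * d_Si111)))
print(f"Si(111) Bragg angle for Cu K-alpha: {theta_mono:.2f} degrees")

# resolution limit recorded at the edge of a flat detector
detector_distance = 150.0      # crystal-to-detector distance, mm (hypothetical)
detector_half_width = 100.0    # half-width of the detector, mm (hypothetical)
theta_max = 0.5 * math.atan(detector_half_width / detector_distance)
d_min = wavelength / (2.0 * math.sin(theta_max))
print(f"d_min at the detector edge: {d_min:.2f} Angstrom")
```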
In the context of this chapter, you will also be invited to visit these sections... In previous chapters, we have seen how X-rays interact with periodically structured matter (crystals), and the implicit question that we have raised from these earlier chapters is: can we "see" the internal structure of crystals? Or, in other words, can we "see" the atoms and molecules that build crystals? The answer is definitely yes! Left: Molecular structure of a pneumococcal surface enzyme Center: Molecular packing in the crystal of a simple organic compound, showing its crystallographic unit cell Right: Geometric details showing several molecular interactions in a fragment of the molecular structure of a protein As the examples above demonstrate, crystallography can show us the structures of very large and complicated molecules (left figure) and how molecules pack together in a crystal structure (center figure). We can also see every geometric detail, as well as the different types of interactions, among molecules or parts of them (right figure). However, for a better understanding of the fundamentals on which this answer is based, it is necessary to introduce some new concepts or refresh some of the previously seen ones... In previous chapters we have seen that crystals represent organized and ordered matter, consisting of associations of atoms and/or molecules that correspond to a natural state of minimum energy. We also know that crystals can be described by repeating units in the three directions of space, and that this space is known as direct or real space. These repeating units are known as unit cells (which also serve as a reference system to describe the atomic positions). This direct or real space, the same in which we live, can be described by the electron density, ρ(xyz), a function defined at each point of coordinates (xyz) within the unit cell, where, in addition, symmetry elements operate which repeat atoms and molecules within the cell. Unit cell (left) whose three-dimensional stacking builds a crystal (right) Motifs (atoms, ions or molecules) repeat themselves through the symmetry operators inside the unit cell. Unit cells are stacked in three dimensions, following the rules of the lattice, building the crystal. We have also learned that X-rays interact with the electrons of the atoms in the crystals, resulting in a diffraction pattern, also known as reciprocal space, with the properties of a lattice (reciprocal lattice) with a certain symmetry, and where we can also define a repeating cell (reciprocal cell). The "points" of this reciprocal lattice contain the information on the diffraction intensity. Left: Interaction between two waves scattered by electrons. The resulting waves show areas of darkness (destructive interference), depending on the angle considered. Image originally taken from physics-animations.com. Right: One of the hundreds of diffraction images of a protein crystal. The black spots on the image are the result of the cooperative scattering (diffraction) from the electrons of all atoms contained in the crystal.
Through this cooperative scattering (diffraction), scattered waves interact with each other, producing a single diffracted beam in each direction of space, so that, depending on the phase differences (advance or delay) among the individual scattered waves, they add or subtract, as shown in the two figures below: Interference of two waves with the same amplitude and frequency (animation taken from The Pennsylvania State University) Composition of two scattered waves. A = resultant amplitude; I = resultant intensity (~ A2) (a) totally in phase (the total effect is the sum of both waves) (b) with a certain difference of phase (they add, but not totally) (c) out of phase (the resultant amplitude is zero) Between the two mentioned spaces (direct and reciprocal) there is a holistic relationship (every detail of one of the spaces affects the whole of the other, and vice versa). Mathematically speaking this relationship is a Fourier transform that cannot directly be solved, since the diffraction experiment does not allow us to know one of the fundamental magnitudes of the equation, the relative phases (Φ) of the diffraction beams. Left: Holistic relationship between direct space (left) and reciprocal space (right). Every detail of the direct space (left) depends on the total information contained in the reciprocal space (right), and vice versa... Every detail of the reciprocal space (right) depends on the total information contained in the direct space (left). Right: Graphical representation of the out-of-phase between two waves. Relative phase between waves The diagram below, with the help of the following paragraph, summarizes what the resolution of a crystalline structure through X-ray diffraction implies ... Atoms, ions, and molecules are packed into units (elemental cells) that are stacked in three dimensions to form a crystal in space that we call direct or real space. The diffraction effects of the crystal can be represented as points of a lattice mathematical space that we call the reciprocal lattice. The diffraction intensities, that is, the blackening of these points of the reciprocal lattice, represent the moduli of some fundamental vector quantities, which we call structure factors. If we get to know not only the moduli of these vectors (the intensities), but their relative orientations (that is, their relative phases), we will be able to obtain the value of the electron density function at each point of the elementary cell, providing thus the positions of the atoms that make up the crystal. Outline on basic crystallographic concepts: direct and reciprocal spaces. The issue is to obtain information on the left side (direct space) from the diffraction experiment (reciprocal space). ELECTRON DENSITY In order to know (or to see) the internal structure of a crystal we have to solve a mathematical function known as the "electron density;" a function that is defined at every point in the unit cell (a basic concept of the crystal structure introduced in another chapter). The function of electron density, represented by the letter ρ, has to be solved at each point within the unit cell given by the coordinates (x, y, z), referred to the unit cell axes. At those points where this function takes maximum values (estimated in terms of electrons per cubic Angstrom) is where atoms are located. That means that if we are able to calculate this function, we will "see" the atomic structure of the crystal. Formula 1. 
Function defining the electron density at a point of the unit cell given by the coordinates (x, y, z) • F(hkl) represents the resultant of the beams diffracted by all atoms contained in the unit cell in a given direction. These magnitudes (actually waves), one for each diffracted beam, are known as structure factors. Their moduli are directly related to the diffracted intensities. • h, k, l are the Miller indices of the diffracted beams (the reciprocal points) and Φ(hkl) are the phases of the structure factors. V represents the volume of the unit cell. The function has limitations due to the extent to which the diffraction pattern is observed. The number of observed structure factors is finite, and therefore the synthesis will only be approximate and may show some truncation effects. Left: Appearance of a zone of the electron density map of a protein crystal, before it is interpreted. Right: The same electron density map after its interpretation in terms of a peptidic fragment. The equation above (Formula 1) represents the Fourier transform between the real or direct space (where the atoms are, represented by the function ρ) and the reciprocal space (the X-ray pattern), represented by the structure factor amplitudes and their phases. Formula 1 also shows the holistic character of diffraction, because in order to calculate the value of the electron density at a single point of coordinates (xyz) it is necessary to use the contributions of all structure factors produced by the crystal diffraction. The structure factors F(hkl) are waves and therefore can be represented as vectors by their amplitudes, |F(hkl)|, and phases Φ(hkl), measured with respect to a common origin of phases. When the unit cell is centrosymmetric, for each atom at coordinates (xyz) there is an identical one located at (-x,-y,-z). This implies that Friedel's law holds, F(h,k,l) = F(-h,-k,-l), and the expression of the electron density (Formula 1) is simplified, becoming Formula 1.1. The phases of the structure factors are also simplified, becoming 0° or 180°... Formula 1.1. Electron density function at a point of coordinates (x, y, z) in a centrosymmetric unit cell. It is important to realize that the quantity and quality of information provided by the electron density function, ρ, is very dependent on the quantity and quality of the data used in the formula: the structure factors F(hkl) (amplitudes and phases!). We will see later on that the amplitudes of the structure factors are directly obtained from the diffraction experiment. If your browser is Java enabled, as a practical exercise on Fourier transforms we recommend visiting the following links: • or, even better, the Java applet kindly provided by Nicholas Schöni and Gervais Chapuis (École Polytechnique Fédérale de Lausanne, Switzerland), which you can download (free of any virus) from the link shown and execute on your own computer. This applet calculates the Fourier transform of a two-dimensional density function ρ(x), yielding the complex magnitude G(S), the reciprocal space. The applet is also able to calculate the inverse Fourier transform of G(S). The density function can be either periodic or non-periodic. Numerous tools, including drawing tools, can be applied in order to understand the role of amplitudes and phases, which are of particular importance in diffraction phenomena. As an illustration, the Patterson function of a periodic structure can be simulated.
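In the same spirit, the role of amplitudes and phases in Formula 1 can be illustrated with a few lines of code. The sketch below builds a toy one-dimensional "crystal" of point atoms (all positions and scattering powers are invented), computes its structure factors by direct summation over the atoms, and then recovers the density by the Fourier sum of Formula 1; it finally repeats the synthesis with the correct amplitudes but random phases. It is a minimal illustration only, not a realistic calculation.

```python
import numpy as np

# a toy one-dimensional "crystal": point atoms at fractional coordinates x_j
# with scattering powers f_j (roughly, numbers of electrons); values invented
atoms = [(0.10, 8.0), (0.35, 6.0), (0.70, 16.0)]
h_max = 20                                   # truncation of the Fourier series
h = np.arange(-h_max, h_max + 1)

# structure factors by direct summation over the atoms
F = np.zeros(h.size, dtype=complex)
for x, f in atoms:
    F += f * np.exp(2j * np.pi * h * x)

# electron density by the Fourier sum of Formula 1 (one-dimensional version)
x_grid = np.linspace(0.0, 1.0, 200, endpoint=False)
rho = np.real(np.exp(-2j * np.pi * np.outer(x_grid, h)) @ F)

for x, f in atoms:
    i = int(np.argmin(np.abs(x_grid - x)))
    print(f"rho at x = {x:.2f} (atom, f = {f:4.1f}): {rho[i]:8.1f}")
print(f"rho at x = 0.50 (no atom): {rho[np.argmin(np.abs(x_grid - 0.50))]:8.1f}")

# same amplitudes, random phases: the atomic peaks disappear (the phase problem)
rng = np.random.default_rng(0)
F_random = np.abs(F) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, F.size))
rho_random = np.real(np.exp(-2j * np.pi * np.outer(x_grid, h)) @ F_random)
print("with random phases, rho at the atom at x = 0.70:",
      round(float(rho_random[np.argmin(np.abs(x_grid - 0.70))]), 1))
```

With the true phases the density should show clear maxima at the three input positions; with random phases the same amplitudes give an uninterpretable map, which is precisely the difficulty discussed in the next sections.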
The analytic expression of the structure factors, F(hkl), is simple and involves a new magnitude (ƒj), called the atomic scattering factor (defined in a previous chapter), which takes into account the different scattering powers with which the electrons of the j atoms scatter the X-rays: Formula 2. Structure factor for each diffracted beam. This equation is the Fourier transform of the electron density (Formula 1). The expression takes into account the scattering factors ƒ of all j atoms contained in the crystal unit cell. From the experimental point of view, it is relatively simple to measure the amplitudes |F(hkl)| of all diffracted waves produced by a crystal. We just need an X-ray source, a single crystal of the material to be studied and an appropriate detector. With these conditions fulfilled we can then measure the intensities, I(hkl), of the diffracted beams in terms of: Formula 3. Relationship between the amplitudes of the structure factors |F(hkl)| and their intensities I(hkl). K is a factor that puts the experimental structure factors, F(rel), measured on a relative scale (which depends on the power of the X-ray source, crystal size, etc.), onto an absolute scale, which is to say, the scale of the calculated (theoretical) structure factors (if we could know them from the real structure, Formula 2 above). As the structure is unknown at this stage, this factor can be roughly evaluated from the experimental data by means of the so-called Wilson plot. Wilson plot. I(rel) represents the average intensity (on a relative scale) collected in a given interval of θ (the Bragg angle); fj are the atomic scattering factors in that angular range, and λ is the X-ray wavelength. By plotting the magnitudes shown in the left figure (green dots), a straight line is obtained from which the following information can be derived: • The value of the y-axis intercept is the natural (Napierian) logarithm of C, a magnitude related to the scale factor K (= 1/√C), described above. • The slope is equivalent to -2B, where B is the overall isotropic atomic thermal vibration factor. A is an absorption factor, which can be estimated from the dimensions and composition of the crystal. L is known as the Lorentz factor, responsible for correcting the different angular velocities with which the reciprocal points cross the surface of Ewald's sphere. For four-circle goniometers this factor can be calculated as $1/\sin 2\theta$, where θ is the Bragg angle of the reflections. p is the polarization factor, which corrects for the polarization effect of the incident beam, and is given by the expression $(1+\cos^2 2\theta)/2$, where θ again represents the Bragg angle of the reflections (the reciprocal points). THE PHASE PROBLEM However, in order to calculate the electron density (ρ(xyz) in Formula 1, above), and therefore to know the atomic positions inside the unit cell, we also need to know the phases of the different diffracted beams (Φ(hkl) in Formula 1 above). Unfortunately, this valuable information is lost during the diffraction experiment (there is no experimental technique available to measure the phases!). Thus, we must face the so-called phase problem if we want to solve Formula 1. The phase problem can be very easily understood if we compare the diffraction experiment (as a procedure to see the internal structure of crystals) with a conventional optical microscope... Illustration of the phase problem. Comparison between an optical microscope and the "impossible" X-ray microscope.
There are no optical lenses able to combine diffracted X-rays to produce a zoomed image of the crystal contents (atoms and molecules). In a conventional optical microscope the visible light illuminates the sample and the scattered beams can be recombined (with intensity and phase) using a system of lenses, leading to an enlarged image of the sample under observation. In what we might call the impossible X-ray microscope (the process of viewing inside the crystals to locate the atomic positions), the visible light is replaced by X-rays (with wavelengths close to 1 Angstrom) and the sample (the crystal) also scatters this "light" (the X-rays). However, we do not have any system of lenses that could play the role of the optical lenses, to recombine the diffracted waves providing us with a direct "picture" of the internal structure of the crystal. The X-ray diffraction experiment just gives us a picture of the reciprocal lattice of the crystal on a photographic plate or detector. The only thing we can do at this stage is to measure the positions and intensities of the spots collected on the detector. These intensities are proportional to the structure factor amplitudes, [F(hkl)]. But regarding the phases, Φ(hkl), nothing can be concluded for the moment, preventing us from obtaining a direct solution of the electron density function (Formula 1 above). We therefore need some alternatives in order to retrieve the phase values, lost during the diffraction experiment... STRUCTURAL RESolution Once the phase problem is known and understood, let's now see the general steps (see the scheme below) that a crystallographer must face in order to solve the structure of a crystal and therefore locate the positions of atoms, ions or molecules contained in the unit cell... General diagram illustrating the process of resolution of molecular and crystal structures by X-ray diffraction The process consists of different steps that have been treated previously or are described below: • Getting a crystal suitable for the experiment, with adequate quality and size. Something related will be seen in another section. • Obtaining the diffraction pattern with the appropriate wavelength. This has been described in another chapter. • Evaluating the diffraction pattern to get the lattice parameters (unit cell), symmetry (space group) and diffraction intensities. • Solving the electron density function, obtaining any information about the phases of the diffracted beams. This is a key point for the structural resolution that will be discussed below. • Building an initial structural model to explain the values of the electron density function and completing the model locating the remaining atomic positions. This will be seen below. • Refining the model, adjusting all atomic positions to get the calculated diffraction pattern as similar as possible to the experimental diffraction pattern, and finally validate and show the total structural model obtained. This will be seen in another chapter. For the study to be successful, some important aspects must be taken into account, such as: • The compound under study must be pure to be crystallized (if not already, as in the case of natural minerals). • Crystals can be obtained using different techniques, from the most simple evaporation or slow cooling method up to the more complex: vapor (or solvent) diffusion, sublimation, convection, etc. There is enough literature available.. 
See, for example, the pages of the LEC, Laboratory of Crystallographic Studies, for additional information on specific crystallization techniques. For proteins, the procedure most extensively used is based on vapor diffusion experiments, usually with the "hanging drop" technique, described elsewhere in these pages. In this sense it is very relevant to note the recent advances introduced in the field of femtosecond X-ray protein nanocrystallography, which will mean a giant step to practically eliminate most difficulties in the crystallization process, and in particular for proteins (see the small paragraph dedicated to the X-ray free electron laser). • If appropriate crystals are obtained, they are exposed to X-rays and their diffraction intensities measured using the methods and equipment described in a previous chapter. A careful data evaluation will provide us with the dimensions of the unit cell, the symmetry and, directly from the intensities, the amplitudes of the structure factors [F(hkl)] . Of all these subjects at this stage, the most difficult one concerns the determination of the crystal symmetry, a key question for the successful resolution of the structure. To obtain crystal symmetry, a visual study of the crystal would make no sense and therefore it must be deduced from the symmetry of the diffraction pattern, as indicated in a specific section of these pages. • At this stage, the question about the unknown phases, Φ(hkl), arises, so that they must be somehow evaluated, as we will see below... • If the evaluated phases are correct, the electron density function ρ(xyz) will show a distribution of maxima (atomic positions) consistent and meaningful from the stereochemical point of view. Once an initial structure is known, some additional steps (construction of the detailed model, mathematical refinement and validation) must be carried out. This will lead us to the so-called final model of the structure. But let us come back to the most important issue: how do we solve the phase problem? THE PATTERSON FUNCTION The very first solution to the phase problem was introduced by Arthur Lindo Patterson (1902-1966). Basing his work on the inability to directly solve the electron density function (Formula 1 above or below), and after his training (under the U.S. mathematician Norbert Wiener) on Fourier transforms convolution, Patterson introduced a new function P(uvw) (Formula 4, below) in 1934. This formula, which defines a new space (the Patterson space), can be considered as the most important single development in crystal-structure analysis since the discovery of X-rays by Röntgen in 1895 or X-ray diffraction by Laue in 1914. His elegant formula, known as the Patterson function (Formula 4, below), introduces a simplification of the information contained in the electron density function. The Patterson function removes the term containing the phases, and the amplitudes of the structure factors are replaced by their squares. It is thus a function that can be calculated immediately from the available experimental data (intensities, which are related to the amplitudes of the structure factors). Formally, from the mathematical point of view, the Patterson function is equivalent to the convolution of the electron density (Formula 1, below) with its inverse: ρ(x,y,z) * ρ(-x,-y,-z). Formula 1. The electron density function calculated at the point of coordinates (x,y,z). Formula 4. The Patterson function calculated at the point (u, v, w). 
This is a simplification of Formula 1, since the summation is done on $F^2(hkl)$ and all phases are assumed to be zero. It seems obvious that, after omitting the crucial information contained in the phases [Φ(hkl) in Formula 1], the Patterson function will no longer show the direct positions of the atoms in the unit cell, as the electron density function would do. In fact, the Patterson function only provides a map of interatomic vectors (relative atomic positions), the height of its maxima being proportional to the number of electrons of the atoms involved. We will see that this feature is an advantage in detecting the positions of "heavy" atoms (with many electrons) in structures where the remaining atoms have lower atomic numbers. Once the Patterson map is calculated, it has to be correctly interpreted (at least partially) to get the absolute positions (x,y,z) of the heavy atoms within the unit cell. These atomic positions can then be used to obtain the phases Φ(hkl) of the diffracted beams by inverting Formula 1, and therefore this will allow the calculation of the electron density function ρ(xyz); but this will be the object of another section of these pages. THE DIRECT METHODS The phase problem for crystals formed by small and medium size molecules was solved satisfactorily by several authors throughout the twentieth century, with special mention of Jerome Karle (1918-2013) and Herbert A. Hauptman (1917-2011), who shared the Nobel Prize in Chemistry in 1985 (without forgetting the role of Isabella Karle, 1921-2017). The methodology introduced by these authors, known as the direct methods, generally exploits constraints or statistical correlations between the phases of different Fourier components. Center: Jerome Karle (1918-2013) The atomicity of molecules, and the fact that the electron density function should be zero or positive at any point of the unit cell, creates certain limitations in the distribution of phases associated with the structure factors. In this context, the direct methods establish systems of equations that use the intensities of diffracted beams to describe these limitations. The resolution of these systems of equations provides direct information on the distribution of phases. However, since the validity of each of these equations is established in terms of probability, it is necessary to have a large number of equations to overdetermine the phase values of the unknowns (phases Φ(hkl)). The direct methods use equations that relate the phase of a reflection (hkl) with the phases of other neighbouring reflections (h',k',l' and h-h',k-k',l-l'), assuming that these relationships are "probably true" (P)... where E(hkl), E(h'k'l') and E(h-h',k-k',l-l') are the so-called "normalized structure factors", that is, structure factors corrected for thermal motion, brought to an absolute scale and assuming that structures are made of point atoms. In other words, structure factor normalization converts measured |F| values into "point atoms at rest" coefficients known as |E| values. At present, direct methods are the preferred ones for phasing structure factors produced by small or medium sized molecules having up to 100 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins. The interested reader should look into an excellent introduction to direct methods through this link offered by the International Union of Crystallography.
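The flavour of these probabilistic relationships can be reproduced with a toy model. The sketch below builds an invented centrosymmetric one-dimensional structure (for which all phases are 0° or 180°, i.e. signs), applies a crude point-atom normalization, and counts how often the product of the signs of three related strong reflections is positive. In a small example like this the statistics are rough, but the relation should hold for a clear majority of the strong triplets; none of the numbers refer to a real structure.

```python
import numpy as np

# toy centrosymmetric structure: equal point atoms at +/- x_j (positions invented)
x = np.array([0.083, 0.207, 0.331, 0.436])
coords = np.concatenate([x, -x])
n_atoms = coords.size

h = np.arange(1, 41)
F = np.cos(2.0 * np.pi * np.outer(h, coords)).sum(axis=1)   # real for a centric structure
E = F / np.sqrt(n_atoms)                                     # crude point-atom normalization

strong = set(h[np.abs(E) > 1.0])                             # reflections with large |E|
sign = {int(k): np.sign(F[k - 1]) for k in h}

agree = total = 0
for h1 in strong:
    for h2 in strong:
        h3 = h1 - h2
        if h3 in strong and h3 not in (h1, h2):
            total += 1
            agree += int(sign[h1] * sign[h2] * sign[h3] > 0)

print(f"sign relation s(h) ~ s(h')s(h-h') satisfied for {agree} of {total} strong triplets")
```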
METHODS OF STRUCTURAL RESolution FOR MACROMOLECULES For crystals composed of large molecules, such as proteins and enzymes, the phase problem can be solved successfully with three main methods, depending of the case: (i) introducing atoms in the structure with high scattering power. This methodology, known as MIR (Multiple Isomorphous Replacement) is therefore based on the Patterson method. (ii) introducing atoms that scatter X-rays anomalously, also known as MAD (Multi-wavelength Anomalous Diffraction), and (iii) by means of the method known as MR (Molecular Replacement), which uses the previously known structure of a similar protein. MIR (Multiple Isomorphous Replacement) This technique, based on the Patterson method, was introduced by David Harker, but was successfully applied for the first time by Max F. Perutz and John C. Kendrew who received the Nobel Prize in Chemistry in 1962, for solving the very first structure of a protein, hemoglobin. The MIR method is applied after introducing "heavy" atoms (large scatterers) in the crystal structure. However, the difficulty of this methodology lies in the fact that the heavy atoms should not affect the crystal formation or unit cell dimensions in comparison to its native form, hence, they should be isomorphic This method is conducted by soaking the crystal of the sample to be analyzed with a heavy atom solution or by co-crystallization with the heavy atom, in the hope that the heavy atoms go through the channels of the crystal structure and remain linked to amino acid side chains with the ability to coordinate metal atoms (eg SH groups of cysteine). In the case of metalloproteins, one can replace their endogenous metals by heavier ones (for instance Zn by Hg, Ca by Sm, etc.). Heavy atoms (with a large number of electrons) show a higher scattering power than the normal atoms of a protein (C, H, N, O and S), and therefore they appreciably change the intensities of the diffraction pattern when compared with the native protein. These differences in intensity between the two spectra (heavy and native structures) are used to calculate a map of interatomic vectors between the heavy atom positions (Patterson map), from which it is relatively easy to determine their coordinates within the unit cell. Scheme of a Patterson function derived from a crystal containing three atoms in the unit cell. To obtain this function graphically from a known crystal structure (left figure) all possible interatomic vectors are plotted (center figure). These vectors are then moved parallel to themselves to the origin of the Patterson unit cell (right figure). The calculated function will show maximum values at the end of these vectors, whose heights are proportional to the product of the atomic numbers of the involved atoms. The positions at these maxima (with coordinates u, v, w) represent the differences between the coordinates of each pair of atoms in the crystal, ie u=x1-x2, v=y1-y2, w=z1-z2. With the known positions of the heavy atoms, the structure factors are now calculated using Formula 2 (see also the diagram below), that is their amplitudes |Fc(hkl)| and phases Φc(hkl), where the c subscript means "calculated". By using Formula 1, an electron density map, ρ(xyz), is now calculated using the amplitudes of the structure factors observed in the experiment, |Fo(hkl)| (containing the contribution of the whole structure) combined with the calculated phases Φc(hkl). 
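The vector scheme just described can be mimicked with a toy calculation: for N atoms the Patterson map contains N x N interatomic vectors, with heights proportional to the product of the atomic numbers, so that vectors between heavy atoms stand out. In the minimal sketch below the coordinates and atomic numbers are invented, coincident vectors are simply listed rather than summed, and the three u = (0, 0) entries correspond to the large origin peak of the map.

```python
import itertools

# three atoms in a toy two-dimensional cell: fractional coordinates and atomic number Z
atoms = [((0.10, 0.20), 30), ((0.40, 0.15), 8), ((0.70, 0.60), 6)]   # invented values

peaks = []
for (r_i, Z_i), (r_j, Z_j) in itertools.product(atoms, repeat=2):
    u = tuple(round((a - b) % 1.0, 2) for a, b in zip(r_i, r_j))     # u = r_i - r_j (mod 1)
    peaks.append((u, Z_i * Z_j))

for u, height in sorted(peaks, key=lambda p: -p[1]):
    print(f"Patterson peak at u = {u}, height ~ {height}")
```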
If these phases are good enough, the calculated electron density map will show not only the known heavy atoms, but will also yield additional information on further atomic positions (see diagram below). In summary, the MIR methodology steps are: • Prepare one or several heavy atom derivatives that must be isomorphic with the native protein. A first test of isomorphism is done in terms of the unit cell parameters. • Collect diffraction data from both native and heavy atom derivative(s). • Apply the Patterson method to get the heavy atom positions. • Refine these atomic positions and calculate the phases for all diffracted beams. • Obtain an electron density map with those calculated phases. MAD (Multi-wavelength Anomalous Diffraction) The changes in the intensity of the diffraction data produced by introducing heavy atoms in the protein crystals can be regarded as a chemical modification of the diffraction experiment. Similarly, we can cause changes in the intensity of diffraction by modifying the physical properties of atoms. Thus, if the incident X-ray radiation has a frequency close to the natural vibration frequency of the electrons in a given atom, the atom behaves as an "anomalous scatterer". This produces some changes in the atomic scattering factor, ƒj (see Formula 2), so that its expression is modified by two terms, ƒ' and ƒ'' which account for its real and imaginary components, respectively. For atoms which behave anomalously, its scattering factor is given by the expression shown below (Formula 5). Formula 5. In the presence of anomalous scattering, the atomic scattering factor, ƒ0 , has to be modified adding two new terms, a real and an imaginary part. The advanced reader should also read the section about the phenomenon of anomalous dispersion. The ƒ' and ƒ'' corrections vs. X-ray energy (see below for the case of Cu Kα) can be calculated taking into account some theoretical considerations... Real and imaginary components of the Selenium scattering factor vs. the energy of the incident X-rays. The vertical line indicates the wavelength for CuKα. For X-ray energy values where resonance exists, ƒ' increases dramatically, while the value of ƒ'' decreases. This has practical importance considering that many heavy atoms used in crystallography show absorption peaks at energies (wavelengths) which can be easily obtained with synchrotron radiation. Diffraction data collected in these conditions will show a normal component, mainly due to the light atoms (nitrogen, carbon and hydrogen), and an anomalous part produced by the heavy atoms, which will produce a global change in the phase of each reflection. All this leads to an intensity change between those reflections known as Friedel pairs (pairs of reflections which under normal conditions should have the same amplitudes and identical phases, but with opposite signs). The detectable change in intensity between these reflection pairs (Friedel pairs) is what we call anomalous diffraction. The MAD method, developed by Hendrickson and Kahn, involves diffraction data measurement of the protein crystal (containing a strong anomalous scatterer) using X-ray radiations with different energies (wavelengths): one that maximizes ƒ'', another which minimizes ƒ' and a third measurement at an energy value distinct from these two. Combining these diffractions data sets, and specifically analyzing the differences between them, it is possible to calculate the distribution of amplitudes and phases generated by the anomalous scatterers. 
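The Friedel-pair effect underlying this measurement can be checked with a toy model: when the scattering factor of one atom acquires an imaginary component f'', the amplitudes |F(hkl)| and |F(-h,-k,-l)| are no longer equal. In the sketch below the one-dimensional atomic arrangement and the f', f'' values are purely illustrative; they are not meant to reproduce any real element or wavelength.

```python
import numpy as np

def structure_factor(h, atoms):
    """F(h) for a one-dimensional toy structure; each atom is (x, f),
    where f may be complex, f = f0 + f' + i f''."""
    return sum(f * np.exp(2j * np.pi * h * x) for x, f in atoms)

light_atoms = [(0.11, 6.0), (0.26, 7.0), (0.58, 8.0)]       # invented acentric arrangement
h = 5
for label, f_heavy in (("f'' = 0 ", 34.0),
                       ("f'' != 0", 34.0 - 3.0 + 4.0j)):    # f' and f'' purely illustrative
    atoms = light_atoms + [(0.83, f_heavy)]
    F_plus = abs(structure_factor(+h, atoms))
    F_minus = abs(structure_factor(-h, atoms))
    print(f"{label}:  |F(+h)| = {F_plus:6.2f}   |F(-h)| = {F_minus:6.2f}")
```

With f'' = 0 the two amplitudes printed are identical (Friedel's law); with a non-zero imaginary component they differ, and it is this small, measurable difference that the MAD experiment exploits.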
The subsequent use of the phases generated by these anomalous scatterers, as a first approximation, can be used to calculate an electron density map for the whole protein. In general, there is no current need to introduce individual atoms as anomalous scatterers in protein crystals. It is relatively easy to obtain recombinant proteins in which methionine residues are replaced by selenium-methionine. Selenium (and even sulfur) atoms of methionine (or cysteine), behave as suitable anomalous scatterers for carrying out a MAD experiment. The MAD method presents some advantages vs. the MIR technique: • As the MAD technique uses data collected from a single crystal, the problems derived from lack of isomorphism, common in the MIR method, do not apply. • While in the absence of anomalous dispersion, the atomic scattering factor (ƒ0) decreases dramatically with the angle of dispersion, its anomalous component (ƒ' + iƒ'' ) is independent of that angle, so that this relative signal increases at a higher resolution of the spectrum, which is to say, at high Bragg angles. Thus, the estimates of phases by MAD are generally better at high resolution. On the contrary, with the MIR method, the lack of isomorphism is larger at high resolution angles and therefore the high resolution intensities (> 3.5 Angstrom) are not suitable for phasing. Argand diagram showing the scattering contribution from an anomalous scatterer in a matrix of normal scatterers. This effect implies that Friedel's law fails. Image taken from "Crystallography 101". • Fp represents the contribution from the normal scatterers to the structure factor (of indices hkl). • Fa and Fa''represent the real (ƒ0 + ƒ' ) and imaginary (ƒ'' ) parts, respectively, of the scattering factor from the anomalous scatterers. • -Fp, -Fa and -Fa" represent the same as Fp, Fa and Fa'', but for the reflection with indices -h, -k, -l. The anomalous behavior of the atomic scattering factor only produces small differences between the intensities (and therefore among the amplitudes of the structure factors) of the reflections that are related by a centre of symmetry or a mirror plane (such as for instance, I(h,k,l) vs. I(-h,-k,-l), or I(h,k,l) vs. I(h,-k,l). Therefore, to estimate these small differences between the experimental intensities, additional precautions must be taken into account. Thus, it is recommended that reflections expected to show these differences are collected on the same diffraction image, or alternatively, after each collected image, rotate the crystal 180 degrees and collect a new image. Moreover, since changes in ƒ' and ƒ'' occur by minimum X-ray energy variations, it is necessary to have good control of the energy values (wavelengths). Therefore, it is essential to use a synchrotron radiation facility, where wavelengths can be tuned easily. The advanced reader should also have a look into the web pages on anomalous scattering, prepared by Bernhard Rupp, as well as the practical summary prepared by Georg M. Sheldrick. MR (Molecular Replacement) If we know the structural model of a protein with a homologous amino acid sequence, the phase problem can be solved by using the methodology known as molecular replacement (MR). The known structure of the homologous protein is regarded as the protein to be determined and serves as a first model to be subsequently refined. This procedure is obviously based on the observation that proteins with similar peptide sequences show a very similar folding. 
The problem in this case is transferring the molecular structure of the known protein from its own crystal structure to a new crystal packing of the protein with an unknown structure. The positioning of the known molecule into the unit cell of the unknown protein requires determining its correct orientation and position within the unit cell. Both operations, rotation and translation, are calculated using the so-called rotation and translation functions (see below). Scheme of the molecular replacement (MR) method. The molecule with known structure (A) is rotated through the [R] operation and shifted through T to bring it over the position of the unknown molecule (A’). The rotation function. If we consider the case of two identical molecules, oriented in a different way, then the Patterson function will contain three sets of vectors. The first one will contain the Patterson vectors of one of the molecules, ie all interatomic vectors within molecule one (also called eigenvectors). The second set will contain the same vectors but for the second molecule, identical to the first one, but rotated due to their different orientation. The third set of vectors will be the interatomic cross vectors between the two molecules. While the eigenvectors are confined to the volume occupied by the molecule, the cross vectors will extend beyond this limit. If both molecules (known and unknown) are very similar in structure, the rotation function R(α,β,γ) would try to bring the Patterson vectors of one of the molecules to be coincident with those of the other, until they are in good agreement. This methodology was first described by Rossman and Blow. R(α,β,γ) = u P1(u) x P2(ur) du Formula 6. Rotation function P1 is the Patterson function and P2 is the rotated Patterson function, where u is the volume of the Patterson map, where interatomic vectors are calculated. The quality of the solutions of these functions is expressed by the correlation coefficient between both Patterson functions: the experimental one and the calculated one (with the known protein). A high correlation coefficient between these functions is equivalent to a good agreement between the experimental diffraction pattern and the diffraction pattern calculated with the known protein structure. Once the known protein structure is properly oriented and translated (within the unit cell of the unknown protein), an electron density map is calculated using these atomic positions and the experimental structure factors. It is worth consulting the article published on this methodology by Eleanor Dodson. Probably it is valuable for the advanced reader to consult a nice article that, despite having been published in 2010, has not lost its validity in relation to the description of the different methodologies for the determination of the relative phases of the diffraction beams. COMPLETING THE STRUCTURE All these methods (Patterson, direct methods, MIR, MAD, MR) provide (directly or indirectly) knowledge about approximate phases which must be upgraded. As indicated above, the calculated initial phases, Φc(hkl), together with the observed experimental amplitudes, |Fo(hkl)|, allow us to calculate an electron density map, also approximate, over which we can build the structural model. The overall process is summarized in the cyclic diagram shown below. The initial phases, Φc(hkl), are combined with the amplitudes of the experimental (observed) structure factors, |Fo(hkl)|, and an electron density map is calculated (shown at the bottom of the scheme). 
Alternatively, if the initial known data are the coordinates (xyz) of some atoms, they will provide the initial phases (shown at the top of the scheme), and so on in a cyclic way until the process does not produce any new information. Scheme showing a cyclic process to calculate electron density maps ρ(xyz) which produce further structural information. From several known atomic positions we can always calculate the structure factors: their amplitudes, |Fc(hkl)|, and their phases, Φc(hkl),as shown at the top of the scheme. Obviously, the calculated amplitudes can be rejected, because they are calculated from a partial structure and the experimental ones represent the whole and real structure. Therefore, the electron density map (shown at the bottom of the scheme) is calculated with the experimental (or observed) amplitudes, |Fo(hkl)|, and the calculated phases, Φc(hkl). This function is now evaluated in terms of possible new atomic positions that are added to the previously known ones, and the cycle repeated. Historically this process was known as "successive Fourier syntheses", because the electron density is calculated in terms of a Fourier sum. In any case, from atomic positions or directly from phases, if the information is correct, the function of electron density will be interpretable and will contain additional information (new atomic coordinates) that can be injected into the cyclic procedure shown above until structure completion, which is to say until the calculated function ρ(xyz) shows no changes from the last calculation. The lighter atoms of the structure (those with lower atomic number, ie, usually hydrogen atoms) are the most difficult ones to find on an electron density map. Their scattering power is almost obscured by the scattering of the remaining atoms . For this reason, the location of H atoms is normally done via a somewhat modified electron density function (the difference electron density), whose coefficients are the differences between the observed and calculated structure factors of the model known so far: Formula 7. Function of "difference" electron density In practice, if the structural model obtained is good enough, if the experiment provided precise structure factors, and there are no specific errors such as X-ray absorption, the difference map Δρ will contain enough signal (maxima) where H atoms can be located. Additionally, to get an enhanced signal from the light atoms scattering, this function is usually calculated with the structure factors appearing at lower diffraction angles only, usually with those appearing at sin θ / λ < 0.4, that is, using the region where the scattering factors for hydrogens are still "visible".
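A minimal one-dimensional sketch of such a difference synthesis is given below: a toy "true" structure containing two heavy atoms and one lighter atom is "measured" (only amplitudes are kept), the model contains only the two heavy atoms, and the map built with coefficients (|Fo| - |Fc|) and the calculated phases reveals the missing atom. All positions and scattering powers are invented.

```python
import numpy as np

h = np.arange(-15, 16)

def structure_factors(atoms):
    return sum(f * np.exp(2j * np.pi * h * x) for x, f in atoms)

true_structure = [(0.20, 16.0), (0.65, 16.0), (0.42, 6.0)]   # invented toy structure
partial_model  = [(0.20, 16.0), (0.65, 16.0)]                # the atom at x = 0.42 is missing

F_obs  = np.abs(structure_factors(true_structure))           # the experiment keeps only amplitudes
F_calc = structure_factors(partial_model)
phi_calc = np.angle(F_calc)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
coefficients = (F_obs - np.abs(F_calc)) * np.exp(1j * phi_calc)
delta_rho = np.real(np.exp(-2j * np.pi * np.outer(x, h)) @ coefficients)

# the highest difference peak should appear close to the missing atom (x ~ 0.42)
print("highest difference peak near x =", round(float(x[np.argmax(delta_rho)]), 2))
```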
The analysis and interpretation of the electron density function, ie the resolution of a crystal structure (molecular or non-molecular) leads to an initial distribution of atomic positions within the unit cell which can be represented by points or small spheres: Once the structural model is completed, having stereochemical sense and including its crystal packing, it is necessary to make use of all the information we can extract from the experimental data, since the diffraction pattern generally contains much more data (intensities) than needed to locate the atoms at their 3-dimensional coordinates. For instance, for a medium sized structure, with 50 independent atoms in the asymmetric unit (in the structural unit which is repeated by the symmetry operations), the diffraction pattern usually contains around 2500 structure factors, which implies approximately 50 observations per atom (each atom needs 3 coordinates). However, for more complex structures, as in the case of macromolecules, the amount of experimental data available normally does not reach these limits. REFINING THE FINAL MODEL The basic parameters associated with a three-dimensional structure are, obviously, the three positional coordinates (x, y, z) for each atom, given in terms of unit cell fractions. But, in general, given the experimental overdetermination mentioned above, the atomic model can become more complex. For instance, associating each atom with an additional parameter reflecting its thermal vibrational state, in a first approach as an isotropic (spherical) thermal vibration around its position of equilibrium. This new parameter is normally shown in terms of different radius of the sphere representing the atom. Thus an isotropic structural model would be represented by 4 variables per atom: 3 positional + 1 thermal. However, for small and medium-sized structures (up to several hundred of atoms), the diffraction experiment usually contains enough data to complete the thermal vibration model, associating a tensor (6 variables) to each atom which expresses the state of vibration in an anisotropic manner, ie distinguishing between different directions of vibration in the form of an ellipsoid (which resembles the shape of a baseball). Therefore, a crystallographic anisotropic model will require 9 variables per atom (3 positional + 6 vibrational). Left: Three bonded atoms represented with the isotropic thermal vibration model Right: The same three atoms shown on the left, but represented using the anisotropic thermal vibration model Left: Anisotropic model of the 3-dimensional structure of a molecule, showing some atoms from neighboring molecules. Right: Anisotropic model of the 3-dimensional structure of a molecule showing its crystal packing. Regardless of the model type, isotropic or anisotropic, the above-mentioned overabundance of experimental data allows a description of the structural model in terms of very precise atomic parameters (positional and vibrational) which lead to very precise geometrical parameters of the whole structure (interatomic distances, bond angles, etc.). This refined model is obtained by the analytical method of least-squares. Using this technique, atoms are allowed to "move" slightly from their previous positions and thermal factors are applied to each atom so that the diffraction pattern calculated with this model is essentially the same as the experimental one (observed), ie minimizing the differences between the calculated and observed structure factors. 
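A back-of-the-envelope version of this data-to-parameter bookkeeping, using the same illustrative numbers quoted above (50 independent atoms and about 2500 reflections), is sketched below; the function and its arguments are hypothetical helpers written only for this example.

```python
def observations_per_parameter(n_reflections, n_atoms, anisotropic):
    """Rough data-to-parameter ratio: 3 coordinates per atom plus 1 (isotropic)
    or 6 (anisotropic) displacement parameters per atom."""
    n_parameters = n_atoms * (3 + (6 if anisotropic else 1))
    return n_reflections / n_parameters

# the illustrative numbers quoted above: 50 independent atoms, ~2500 reflections
print(round(observations_per_parameter(2500, 50, anisotropic=False), 1))   # isotropic model
print(round(observations_per_parameter(2500, 50, anisotropic=True), 1))    # anisotropic model
```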
This process is carried out by minimizing the function: $\sum w \, \left( |F_o| - |F_c| \right)^2 \rightarrow \min$ Least-squares function used to refine the final model of a crystal structure, where $w$ represents a "weight" factor assigned to each observation (intensity), weighting the effects of the less precise observations vs. the more accurate ones and avoiding possible systematic errors in the experimental observations which could bias the model. $F_o$ and $F_c$ are the observed and calculated structure factors, respectively. Although the experimental overdetermination mentioned above usually ensures the success of this analytical refinement process, it must always be controlled through stereochemical considerations, ie, ensuring that the positional shifts of the atoms are reasonable and therefore generate interatomic distances within the expected values. Similarly, the thermal vibration factors (isotropic or anisotropic) associated with the atoms must always show reasonable values. In addition to the aforementioned control of the model changes during the refinement process, it seems obvious that (if everything goes well) the diffraction pattern calculated ($F_c$) with the refined model (coordinates + thermal vibration factors) will show increasing similarity to the observed pattern ($F_o$). The comparison between both patterns (observed vs. calculated) is done via the so-called $R$ parameter, which defines the "disagreement" factor between the two patterns: $R = \dfrac{\sum \left| |F_o| - |F_c| \right|}{\sum |F_o|}$ Disagreement factor of a structural model, calculated in terms of the differences between the observed structure factors and those calculated with the final model. The value of the disagreement factor (R) is usually quoted as a percentage (%), ie, multiplied by 100, so that "well" solved structures, with an appropriate degree of precision, will show an R factor below 0.10 (10%), which implies that the calculated pattern differs from the observed (experimental) one by less than 10%. The diffraction patterns of macromolecules (enzymes, proteins, etc.) usually do not show such a large overdetermination of experimental data and therefore it is difficult to reach an anisotropic final model. Moreover, in these cases the values of the R factor are greater than those for small and medium-sized molecules, so that values around or below 20% are usually acceptable. In addition, as a result of this relative scarcity of experimental data, the analytical refinement procedure (least-squares) must be combined with an interactive stereochemical modeling process and with the imposition of certain "soft restraints" on the molecular geometry. MODEL VALIDATION The reliability of a structural model has to be assessed in terms of several tests, a procedure known as model validation. Thus, the structural model should be continuously checked and validated using consistent stereochemical criteria (for example, bond lengths and bond angles must be acceptable). For instance, a C---O distance of 0.8 Angstrom would not be acceptable for a carbonyl group (C=O). Similarly, the bond angles must also be consistent with an acceptable geometry. These criteria are very restrictive for small or medium-sized structures, but even the structures of macromolecules must meet some minimum criteria. Maximum dispersion values generally accepted for interatomic distances and bond angles in the structural model of a macromolecule.
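Simple geometric checks of this kind are easy to script. The sketch below computes one bond length and one bond angle from hypothetical orthogonal coordinates of a carbonyl fragment and flags the distance if it deviates from an illustrative target value; the target and tolerance are assumptions chosen only for the example.

```python
import numpy as np

def bond_length(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def bond_angle(p, q, r):
    """Angle at q (degrees) for three bonded atoms p-q-r."""
    u = np.asarray(p) - np.asarray(q)
    v = np.asarray(r) - np.asarray(q)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# hypothetical orthogonal coordinates (Angstrom) of a carbonyl fragment O=C-N
C = [0.00, 0.00, 0.00]
O = [1.23, 0.00, 0.00]
N = [-0.69, 1.15, 0.00]

d = bond_length(C, O)
target, tolerance = 1.23, 0.05                 # illustrative target value and tolerance
status = "OK" if abs(d - target) <= tolerance else "CHECK"
print(f"C=O distance: {d:.3f} Angstrom ({status})")
print(f"O=C-N angle : {bond_angle(O, C, N):.1f} degrees")
```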
In the case of proteins, the peptide bond (the bond between two consecutive amino acids) must also satisfy some geometrical restrictions. The torsional angles of this bond should not deviate much from the acceptable values of the usual conformations shown by the amino acid chains, as is shown in the so-called Ramachandran plot:

Left: Schematic representation of the peptide bond, showing the two torsional angles (Ψ and Φ) defining it. Right: Ramachandran plot showing the different allowed (acceptable) areas for the torsional angles of the peptide bonds in a macromolecule. The different areas depend on the different structural arrangements (α-helices, β-sheets, etc.)

Similarly, the values of the thermal factors associated with each atom should be physically acceptable. These parameters account for the thermal vibrational mobility of the different structural parts. Thus, in the structure of a macromolecule, these values should be consistent with the internal or external location of the chain, being generally lower for the internal parts and higher for the external parts near the solvent.

DEGREE OF RELIABILITY OF THE MODEL

A model that has been "validated" according to the criteria described above, that is, one which demonstrates:
• a reasonable agreement between observed and calculated structure factors,
• bond distances, bond angles and torsional angles that meet stereochemical criteria, and
• physically reasonable thermal vibration factors,
is a reliable model. However, the concept of reliability is not a quantitative parameter which can be expressed as a single number. Therefore, to interpret a structural model up to its logical consequences, one has to bear in mind that it is just a simplified representation, extracted from an electron density function:

$\rho(xyz)=\frac{1}{V} \sum_{hkl}^{+\infty}|F(hkl)| \cdot e^{-2 \pi i[h x+k y+l z-\phi(h k l)]}$

on which the atoms have been positioned, and which is affected by some conditions described in another section, which we invite you to read. But, in any case, well-done crystallographic work always provides atomic parameters (positional and vibrational) along with their associated precision estimates. This means that any direct crystallographic parameter (atomic coordinates and vibration factors) or derived one (distances, angles, etc.) is usually expressed by a number followed by its standard deviation (in parentheses) affecting the last digit. For example, an interatomic distance expressed as 1.541(2) Angstrom means a distance of 1.541 Angstrom with a standard deviation of 0.002 Angstrom.

THE ABSOLUTE CONFIGURATION (OR ABSOLUTE STEREOCHEMISTRY)

As stated in a previous chapter, all molecules or structures in which neither mirror planes nor centres of symmetry are present have an absolute configuration, that is, they are different from their mirror images (they cannot be superimposed).

Structural models showing two enantiomers of a compound (the two molecules are mirror images)

These particular structural differences, very important as far as the molecular properties are concerned, can be unambiguously determined through the diffraction experiment (without using any external standard). This can be carried out using the so-called anomalous scattering effect which atoms show when appropriate X-ray wavelengths are used. This feature is also very successfully used as a method to solve the phase problem for macromolecular crystals. It is not difficult to understand that molecular enantiomers have different properties, as in the end they are different molecules, but regarding their biological activity (if any) the situation is particularly striking.
The enantiomeric molecules represented in the figure on the left were introduced to the market by a pharmaceutical company and, obviously, they showed different properties. The properties of DARVON (Dextropropoxyphene Napsylate) are available through this link, while production of NOVRAD (Levopropoxyphene Napsylate) was discontinued.

The experimental diffraction signal that allows this structural differentiation is a consequence of the fact that the atomic scattering factor does not behave as a real number when the frequency of the X-rays is similar to the natural absorption frequency of the atom. See also the chapter dedicated to anomalous dispersion. Under these conditions, Friedel's Law is no longer fulfilled and therefore structure factors such as $|F_{hkl}|$ and $|F_{\bar{h}\bar{k}\bar{l}}|$ will be slightly different. These differences are evaluated in terms of the so-called Bijvoet estimators, which compare the ratios of observed structure factors for such reflection pairs with the corresponding ratios of the structure factors calculated using the two possible absolute models. Only one of these two comparisons will maintain the same type of bias:

$\frac{|F(h k l)|_{o}}{|F(\bar{h} \bar{k} \bar{l})|_{o}} \text{ vs. } \frac{|F(h k l)|_{c}}{|F(\bar{h} \bar{k} \bar{l})|_{c}}$

Comparison of Bijvoet ratios - Johannes Martin Bijvoet (1892-1980)

Thus, if the quotient between the observed structure factors is <1, the same quotient for the calculated structure factors should also be <1. Or, on the contrary, both quotients should be >1. If this holds for a large number of reflection pairs, it indicates that the absolute model is the right one. If it is not so, the structural model has to be inverted. The interested reader should also have a look at the web pages on anomalous scattering prepared by Ethan A. Merritt.

THE FINAL RESULT

The information describing a final crystallographic model is composed of:
• Data from the diffraction experiment: wavelength and diffraction pattern (the intensity of thousands or even hundreds of thousands of diffracted waves with their hkl indices),
• Unit cell dimensions as derived from the diffraction pattern (from the reciprocal cell),
• The symmetry present in the crystal, derived from the reciprocal lattice (from the diffraction pattern), and
• Atomic positions (coordinates and thermal vibration factors) and, if needed, the so-called population factor, as indicated in the table below.

The atomic positions are usually given as fractional coordinates (fractions of the unit cell axes), but sometimes, especially for macromolecules where the information usually refers to the isolated molecule, they are given as absolute coordinates, i.e., expressed in Angstrom and referred to a system of orthogonal axes independent of the crystallographic ones (see below).

Information about several atoms of a protein structure using the so-called PDB format (Protein Data Bank), i.e., atomic coordinates in Angstrom on a system of orthogonal axes, different from the crystallographic ones. For clarity, the estimated standard deviations have been omitted.
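The conversion between the two coordinate conventions just mentioned is a straightforward piece of trigonometry. The following Python sketch is a minimal illustration (not the routine used by any particular program; the cell parameters and coordinates are hypothetical) of the usual orthogonalization, with the a axis along X and the b axis in the XY plane:

```python
import math

def frac_to_cart(frac, cell):
    """Convert fractional coordinates to Cartesian coordinates in Angstrom.

    frac: (x, y, z) as fractions of the unit cell axes.
    cell: (a, b, c, alpha, beta, gamma), lengths in Angstrom, angles in degrees.
    Convention: a along X, b in the XY plane (PDB-style orthogonalization).
    """
    a, b, c, alpha, beta, gamma = cell
    ca, cb, cg = (math.cos(math.radians(v)) for v in (alpha, beta, gamma))
    sg = math.sin(math.radians(gamma))
    # "Reduced volume" of the cell (cell volume divided by a*b*c)
    v = math.sqrt(1.0 - ca**2 - cb**2 - cg**2 + 2.0 * ca * cb * cg)
    x, y, z = frac
    return (a * x + b * cg * y + c * cb * z,
            b * sg * y + c * (ca - cb * cg) / sg * z,
            c * v / sg * z)

# Hypothetical atom at (0.25, 0.50, 0.10) in a monoclinic cell
print(frac_to_cart((0.25, 0.50, 0.10), (10.0, 12.0, 15.0, 90.0, 105.0, 90.0)))
# -> approximately (2.11, 6.00, 1.45) Angstrom
```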
The population factor is the fraction of an atom located at a specific position, although this factor is usually 1. The meaning of this parameter requires an explanation for the beginner, since it could be understood that atoms could be divided into parts, which obviously has no physical meaning. Due to atomic vibrations, and to the fact that the diffraction experiment has a duration in time, it is possible that in some of the unit cells atoms are missing. Thus, instead of a complete occupancy (population factor = 1), the corresponding site, in an average unit cell, will contain only a fraction of the atom. In these cases it is said that the crystal lattice has defects, and population factors smaller than 1 reflect the fraction of unit cells where a specific atomic position is occupied. Obviously, the fraction of unit cells where the same position is empty complements the population factor to unity. Therefore, the crystallographic model reflects the average structure of all unit cells during the experiment time.

The atomic coordinates, and in general all the information collected from a crystallographic study, are stored in accessible databases. There are different databases, depending on the type of compound or molecule, but this will be discussed in another chapter of these pages.

GRAPHICAL REPRESENTATIONS OF THE MODEL

The final structural model (atomic coordinates, thermal factors and, possibly, population factors) directly provides additional information which leads to a detailed knowledge of the structure itself, including bond lengths, bond angles, torsional angles, molecular planes, dipole moments, etc., and any other structural detail that might be useful for understanding the functionality and/or properties of the material under study. In the case of complex biological molecules, the use of high-quality graphic processors and relatively simple models greatly facilitates the understanding of the relationship between structure and function, as shown in the figure on the left. At present the available computational and graphic techniques allow us to obtain beautiful and very descriptive models which help to visualize and understand structures, as is shown in the examples below:

Left: Model of balls and sticks to represent the structure of a simple inorganic compound. Right: Representation of an inorganic compound, in which a partial polyhedral representation has been added.

Left: Animated stick model to represent the packing and molecular structure of a simple organic compound. Right: Given the complexity of biological molecules, the models which represent them are usually simple, showing the overall folding and the different structural motifs (α-helices, β-strands, loops, etc.) with the ribbon model. The example also shows a stick representation of a cofactor linked to the enzyme.

Left: Combined model of ribbons and sticks to represent the dimeric structure of a protein, which also shows a sulfate ion in the middle (represented with balls). Right: Representation of the surface of a biological molecule where the colours represent different degrees of hydrophobicity. The arrow represents the dipole moment of the molecule.

Finally, using additional information from other techniques (such as cryo-electron microscopy), or combining two different crystal conformations of a molecule, other models are available, as shown below. Moreover, using the ultrashort exposure times of X-rays produced by free electron lasers (European XFEL), crystallographers are able to collect diffraction data of macromolecules in different conformations, that is, during the course of performing their respective tasks. In this manner, using a huge number of X-ray snapshots, we can produce something like a film in which we are able to follow the molecular modifications and therefore understand their function.
Left: Combined model of the molecular structure of a protein and an envelope (as obtained by high-resolution electron microscopy), showing a pore formed by the association of four protein molecules. Right: Simplified animated model showing the backbone folding of an enzyme and the structural changes between two molecular states: active (open) and inactive (closed). The structures of both states were determined by crystallography.
Readers who have arrived at this chapter in a sequential manner will notice that, apart from the phase problem, the relationship between the diffraction pattern (reciprocal space) and the crystal structure (direct space) is mediated by a Fourier transform represented by the electron density function \(ρ(xyz)\) (see the drawing on the left). Readers will also know that the relationship between these two spaces is "holistic", meaning that the value of this function, at each point of coordinates \((xyz)\) in the unit cell, is the result of "adding" the contribution of "all" structure factors [i.e., diffracted waves in terms of their amplitudes \(|F(hkl)|\) and phases \(Φ(hkl)\)] contained in the diffraction pattern. They will also remember that the diffraction pattern contains many structure factors (several thousand for a simple structure, and hundreds of thousands for a protein structure).

The "jump" between direct and reciprocal spaces, mediated by a Fourier transform represented by the electron density function

Moreover, the number of points in the unit cell where the ρ function has to be calculated is very high. In a cell of about 100 x 100 x 100 Angstrom³, it would be necessary to calculate at least 1000 points in every unit cell direction to obtain a resolution of 100/1000, which equals 0.1 Angstrom in each direction. This means calculating at least 1000 x 1000 x 1000 = 1,000,000,000 points (one billion points) and, at each point, "adding" several thousand (or hundreds of thousands of) structure factors F(hkl). It should therefore be clear that, regardless of the difficulties of the phase problem, solving a crystal structure implies the use of computers. Finally, the analysis of a crystal or molecular structure also implies calculating many geometric parameters that define interatomic distances, bond angles, torsional angles, molecular surfaces, etc., using the atomic coordinates (xyz).

The "hardware"

For the reasons described above, since the beginning of the use of Crystallography as a discipline to determine molecular and crystal structures, crystallographers have devoted special attention to the development of calculation tools to facilitate crystallographic work. With this aim, and even before the early computers appeared, crystallographers introduced the so-called "Beevers-Lipson strips," which were widely used in all Crystallography laboratories.

The Beevers-Lipson strips

The Beevers-Lipson strips (strips of paper containing the values of some trigonometric functions) were used in laboratories to speed up the calculation (by hand) of the Fourier transforms (see above: the electron density function, for example). These strips were introduced in 1936 by C.A. Beevers and H. Lipson. In the 1960s, more than 300 boxes were distributed to nearly all the laboratories in the world. You can also have a look at the description made by the International Union of Crystallography. The nightmare was keeping this box upright; it had a very narrow base, and otherwise it was impossible to keep the strips correctly stored!
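To make the size of this calculation concrete, here is a deliberately naive Python sketch of the direct summation that the Beevers-Lipson strips (and later the first computers) were meant to accelerate. It is only an illustration of the formula for \(ρ(xyz)\): real programs exploit symmetry and fast Fourier transforms rather than summing term by term, and the reflection list assumed here is hypothetical.

```python
import math

def rho(x, y, z, reflections, volume):
    """Electron density at the fractional point (x, y, z).

    reflections: iterable of (h, k, l, amplitude, phase) tuples, one per measured
    structure factor, with the phase expressed in radians; each term contributes
    |F(hkl)| * cos(2*pi*(hx + ky + lz) - phase), the real part of the exponential
    in the rho(xyz) formula.
    """
    total = sum(amp * math.cos(2.0 * math.pi * (h * x + k * y + l * z) - phase)
                for h, k, l, amp, phase in reflections)
    return total / volume

# With ~10,000 reflections and a 1000 x 1000 x 1000 grid, this direct summation
# would require on the order of 10**13 cosine evaluations -- which is why
# crystallography has always pushed the computing resources of its time.
```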
As expected, the introduction of early computers (or electro-mechanical calculators) inspired great hope in crystallographers...

ENIAC (Electronic Numerical Integrator and Computer, 1945) -- the very first electronic computer. Some pictures of the rooms where it was installed.

ENIAC, short for Electronic Numerical Integrator And Computer, was the first general-purpose electronic computer, whose design and construction were financed by the United States Army during the Second World War. It was the first digital computer capable of being reprogrammed to solve a full range of computing problems, especially calculating artillery firing tables for the U.S. Army's Ballistic Research Laboratory. The ENIAC had immediate importance. When it was announced in 1946, it was heralded in the press as a "Giant Brain". It boasted speeds one thousand times faster than those of electro-mechanical machines, a leap in computing power that no single machine has matched. This mathematical power, coupled with general-purpose programmability, excited scientists and industrialists. Besides its speed, the most remarkable thing about ENIAC was its size and complexity. ENIAC had 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed 27 tons, was roughly 2.6 m by 0.9 m by 26 m, took up 63 m², and consumed 150 kW of power.

Later, with the development of Electronics and Microelectronics, which introduced integrated circuits, computers became accessible to crystallographers, who flocked to these facilities with large boxes of "punched cards" (the only means of data storage at that time), containing the diffraction intensities and their own computer programs.

A punch card or punched card (or punchcard or Hollerith card or IBM card) is a piece of stiff paper which contains digital information represented by the presence or absence of holes in predefined positions. It was used by crystallographers until the end of the 1970s.

Punched paper tape (shown in yellow) and different magnetic tapes (as well as some small disks) used for data storage during the 1970s and 1980s.

Around the early 1970s, and for over a decade, crystallographers became a nightmare for the managers and operators of the so-called "computing centers" running in some universities and research centers. In the 1980s the laboratories of Crystallography became "flooded" with computers, which for the first time gave crystallographers independence from the large computing centers. The VAX series of computers (sold by the company Digital Equipment Corporation) marked a splendid era for crystallographic calculations. They allowed the use of magnetic tapes and the first hard disk drives, with limited capacity (only a few hundred MB) -- very big and heavy, but they eliminated the need for the tedious punched cards. Nostalgics should have a look at this link!

A typical computer (of the VAX series) used in many Crystallography laboratories during the 1980s.

Over the years, crystallographic computing has become easy and affordable thanks to personal computers (PCs), which meet nearly all the needs of most conventional crystallographic calculations, at least concerning crystals of low and medium complexity (up to hundreds of atoms). Their relatively low price and their ability to be assembled into "farms" (for distributed calculation) provide crystallographers with the best solution for almost any type of calculation.

Left: A typical personal computer (PC) used in the 2000s. Right: A typical PC-farm used in the 2000s.

However, the crystallography applied to macromolecules needs more than what we could call "hard" computing.
The management of large electron density maps, which are used to build the molecular structure of proteins, as well as the subsequent structural analysis, requires more sophisticated computers with powerful graphic processors and, if possible, with the capability of displaying 3-dimensional images using specialized glasses...

A Silicon Graphics computer used to visualize 3-dimensional electron density maps and structures. The processor and the screen are complemented by an infrared transmitter (black box on the screen) and the glasses used by the crystallographer.

The current computing facilities represent a big jump with respect to the capabilities available during the mid-twentieth century, as is shown in the representation of the structural model used for the structural description of penicillin, based on three 2-dimensional electron density maps... And even 3D maps were also used!

Left: Three-dimensional model of the structure of penicillin, based on the use of three 2-dimensional electron density maps, as used by Dorothy C. Hodgkin, Nobel laureate in 1964. Right: Representation of 3D electron density maps used until the middle of the 1970s. The contours are lines of electron density and show the positions of individual atoms in the structure.

A typical personal computer commonly used since 2010 for crystallographic calculations and also for its graphic capabilities

The software

At present there are enough personal, institutional or commercial computer program developments, or even computing facilities available through remote servers, to fulfill nearly all the needs of crystallographic computing, as well as many sources from which one can download most of those programs. In this context, it could be useful to check the following links:

Crystallographic computer programs

Specifically for compounds of small and medium size (molecular or not) we recommend using the WinGX package, which can be freely downloaded by courtesy of Louis J. Farrugia (University of Glasgow, UK). It is easy to install on a PC and contains an interface which includes the most important programs for small and medium size crystallographic problems. Also, for these types of compounds there is a very useful computer program (Mercury), user-friendly and free, which includes powerful graphics and some other analytical tools to analyze crystal structures. It can be downloaded from the Cambridge Crystallographic Data Centre, UK. Protein crystallographers need more specific programs, and in this context we recommend using the link offered by CCP4, Collaborative Computational Project No. 4, Software for Macromolecular X-Ray Crystallography.

On the other hand, crystallographic work is currently unimaginable without access to crystallographic databases, which contain all the structural information that is being published and which have a clear added value for the researcher. The type of structure is what determines its inclusion in any of the existing databases. Thus, metals and intermetallic compounds are made available in the database CRYSTMET; inorganic compounds are centralized in the ICSD database (Inorganic Crystal Structure Database); organic and organometallic compounds in the CSD (Cambridge Structural Database); and proteins in the PDB (Protein Data Bank), which is a databank (not a database). Other databases, databanks, etc., do not necessarily contain structural information in the most precise sense, but they can also be very helpful for crystallographers.
This is the case of WebCite, published by the Cambridge Crystallographic Data Centre (CCDC), containing over 2000 articles with very important information for structural chemistry research in its broadest sense, and in particular for pharmaceutical drug discovery, materials design or drug development, among others.

Structural databases and databanks
• CRYSTMET: Metals and intermetallic compounds (license required)
• ICSD: Inorganic compounds (license required)
• CSD: Organic and organometallic compounds (license required)
• glycoSCIENCES.de: Carbohydrates
• LipidBank: Lipids
• PDB: Proteins, Nucleic acids and large complexes
• NDB: Nucleic acids

As indicated, some of these databases (or databanks) are public (glycoSCIENCES.de, LipidBank, PDB and NDB), and therefore can be searched online. However, others (CRYSTMET, ICSD and CSD) require a license or even a local installation. During the period 1990-2012, CRYSTMET, ICSD and CSD were licensed free of charge to all CSIC research institutes (CRYSTMET and ICSD) and to all academic institutions in Spain and Latin American countries (CSD). However, due to economic constraints, the CSIC's authorities decided to drastically reduce this program, which was managed through the Department of Crystallography and Structural Biology (at the Institute of Physical Chemistry "Rocasolano"). Nowadays this program is maintained in a reduced manner, only for Spanish institutions, as can be seen through this link.
As mentioned in the introduction, Crystallography is one of the scientific disciplines that has most clearly influenced the development of Chemistry, Biology, Biochemistry and Biomedicine. Although on other pages we made some reference to the scientists directly involved at the early stages, this chapter is aimed at presenting short biographical outlines. As a supplement to the biographical notes presented in this chapter, the reader can also consult the early historical notes about crystals and Crystallography offered in another section. The biographical outlines that are the object of the present chapter (shown below) have been arranged in groups, in chronological order, using the terminology of some musical sections and tempos, trying to convey their relevance, at least from a historical perspective.

1901 "Prelude", by Wilhelm Conrad Röntgen

Wilhelm Conrad Röntgen (1845-1923). None of this would have been possible without the contribution of Wilhelm Conrad Röntgen, who won the first Nobel Prize in Physics (1901) for his discovery of X-rays. Although many other biographical references to Röntgen can be found on the internet, we recommend visiting the site prepared by Jose L. Fresquet (in Spanish). In the following paragraphs we summarize the most relevant details and add a few others.

Wilhelm Conrad Röntgen was born on March 27, 1845, at Lennep in the Lower Rhine Province of Germany, the only child of a manufacturer and merchant of cloth. His mother was Charlotte Constanze Frowein of Amsterdam, a member of an old Lennep family which had settled in Amsterdam. When he was 3 years old his family moved to Holland. From 16 to 20 years old he studied at the Technical School in Utrecht, and he then moved to Zurich, where he obtained an academic degree in mechanical engineering. After some years in Zurich as an assistant in physics under August Kundt, in 1872 (at 27 years old) he moved to the University of Würzburg. However, as he could not find a position there (he had previously been unable to pass his exams in Latin and Greek), he moved to Strasbourg, where he finally obtained a position as professor in 1874. Five years later he accepted a teaching position at the University of Giessen and finally, at 45 years old, he obtained a professorship in physics at Würzburg, where he became Rector. His work on cathode rays led him to the discovery of a new and different kind of rays. On the evening of November 8, 1895, working with an enclosed and sealed discharge tube (to exclude all light), he found that a paper plate (covered on one side with barium platinocyanide and placed accidentally in the path of the rays) became unexpectedly fluorescent, even when it was as far as two metres from the discharge tube.
It took Röntgen a month to understand the importance of this new radiation, and he then sent a scientific communication to the Society for Physics and Medicine in Würzburg... Specifically, the first sentences of his official statement (written in elegant German) read:

Lässt man durch eine Hittorf’sche Vacuumröhre, oder einen genügend evacuirten Lenard’schen, Crookes’schen oder ähnlichen Apparat die Entladungen eines grösseren Ruhmkorff’s gehen und bedeckt die Röhre mit einem ziemlich eng anliegenden Mantel aus dünnem, schwarzem Carton, so sieht man in dem vollständig verdunkelten Zimmer einen in die Nähe des Apparates gebrachten, mit Bariumplatincyanür angestrichenen Papierschirm bei jeder Entladung hell aufleuchten, fluoresciren, gleichgültig ob die angestrichene oder die andere Seite des Schirmes dem Entladungsapparat zugewendet ist. Die Fluorescenz ist noch in 2 m Entfernung vom Apparat bemerkbar. Man überzeugt sich leicht, dass die Ursache der Fluorescenz vom Entladungsapparat und von keiner anderen Stelle der Leitung ausgeht.

After producing an electrical discharge with a Ruhmkorff coil through a Hittorf vacuum tube, or a sufficiently evacuated Lenard, Crookes or similar apparatus, covered with a fairly tight-fitting jacket made of thin, black cardboard, one sees that a paper screen coated with barium platinocyanide, located in the vicinity of the apparatus, lights up brightly at every discharge in the completely darkened room, regardless of whether or not the coated side of the screen is facing the tube. This fluorescence is still noticeable up to 2 metres away from the apparatus. One can easily be convinced that the cause of the fluorescence proceeds from the discharge apparatus and not from any other point along the line.

Röntgen's discovery quickly produced a social commotion... "Incredible light!". However, almost at the same speed, his public celebrity dropped to a minimum... "his high-flying stopped...". It was during the first months of 1896, after sending to the British Medical Journal an X-ray photograph of a broken arm, that Röntgen began to regain the public's confidence, demonstrating the diagnostic capacity of his discovery. However, it still took many years until his "incredible light" was recognized as being of medical interest. He was awarded the first Nobel Prize for Physics in 1901. Wilhelm Conrad Röntgen died in Munich on 10 February 1923 from carcinoma of the intestine. It is not believed that his carcinoma was a result of his work with ionizing radiation, because of the brief time spent on those investigations and because he was one of the few pioneers in the field who used protective lead shields routinely. If you can read Spanish, there is also an extensive chapter dedicated to both the historical details around Röntgen and his discovery.

1914 "Overture", by Max von Laue, with accompaniment by Paul P. Ewald

Max von Laue (1879-1960). If Röntgen's discovery was important for the development of Crystallography, the second qualitative step forward was due to another German, Max von Laue, Nobel Prize for Physics in 1914, who, trying to demonstrate the undulatory nature of X-rays, discovered the phenomenon of X-ray diffraction by crystals. A complete biographical description can also be found through this link. Max von Laue was born on October 9, 1879 at Pfaffendorf, a little town near Koblenz.
He was the son of Julius von Laue, an official in the German military administration, who was raised to hereditary nobility in 1913 and who was often sent to various towns, so that von Laue spent his youth in Brandenburg, Altona, Posen, Berlin and Strassburg, going to school in the three last-named cities. At the Protestant school at Strassburg he came under the influence of Professor Goering, who introduced him to the exact sciences. He began his university studies in Mathematics, Physics and Chemistry at Strassburg, but he soon moved to the University of Göttingen and in 1902 to the University of Berlin, where he began working with Max Planck. A year later, after obtaining his doctorate, he returned to Göttingen, and in 1905 he went back to Berlin as assistant to Max Planck, who also won the Nobel Prize for Physics, in 1918, i.e., four years after von Laue. Between 1909 and 1919 he went through the Universities of Munich, Zurich, Frankfurt and Würzburg, and he finally returned to Berlin, where he earned a position as a professor.

Paul Peter Ewald (1888-1985). It was during this last period, namely in 1912, that he met Paul Peter Ewald in Munich. Ewald was then finishing his doctoral thesis under Arnold Sommerfeld (1868-1951), and he got Laue interested in his work on the interference between radiations with large wavelengths (practically visible light) in a "crystalline" model based on resonators. Note that at that time the question of wave-particle duality was also under discussion. The idea then came to Laue that the much shorter electromagnetic rays, which X-rays were supposed to be, would cause some kind of diffraction or interference phenomena in a medium, and that a crystal could provide this medium. An excellent historical description of these facts and the corresponding experiments, conducted by Walter Friedrich and Paul Knipping under the direction of Max von Laue, can be found in an article by Michael Eckert. The original article describing that experiment, signed by Friedrich, W., Knipping, P. and Laue, M., was published with the reference Sitzungsberichte der Kgl. Bayer. Akad. der Wiss. (1912) 303–322, although it was later collected by Annalen der Physik (1913) 346, 971-988. It is amazing how quickly Ewald developed the interpretation of Max von Laue's experiments, as can be seen in his original article, published in 1913 (in German), available through this link. Recognizing the role played by Ewald in the development of Crystallography, the International Union of Crystallography grants the Prize and Medal that carry the name of Paul Peter Ewald.

And so it was that, placing a crystal of copper sulfate, and later some crystals of zinc blende, in front of an X-ray beam, Laue obtained confirmation of the undulatory nature of the rays discovered by Röntgen (see images below). For this discovery, and its interpretation, Max von Laue received the Nobel Prize for Physics in 1914. But at the same time, his experiment created many questions on the nature of crystals...

Left: First X-ray diffraction pattern obtained by Laue and his collaborators using a crystal of copper sulfate. Right: One of the first X-ray diffraction patterns obtained by Laue and his collaborators using some crystals of the mineral blende.

Laue was always opposed to National Socialism, and after the Second World War he was brought to England for a short time with several other German scientists, contributing there to the International Union of Crystallography.
He returned to Germany in 1946 as director of the Max Planck Institute and professor at the University of Göttingen. He retired in 1958 as director of the Fritz Haber Institute of Physical Chemistry in Berlin, a position to which he had been elected in 1951. On 8 April 1960, while driving to his laboratory, Laue's car was struck by a motorcyclist in Berlin. The cyclist, who had received his license only two days earlier, was killed, and Laue's car flipped. Max von Laue (80 years old) died from his injuries sixteen days later, on April 24.

1915 "Allegro, ma non troppo", by Bragg (father & son)

Left: William Henry Bragg (1862-1942). This time it did not happen as with Röntgen. Max von Laue's discovery became immediately known, at least by the British William Henry Bragg (1862-1942) and his son William Lawrence Bragg (1890-1971), who in 1915 shared the Nobel Prize for Physics for demonstrating the usefulness of the phenomenon discovered by von Laue (X-ray diffraction) in studying the internal structure of crystals. They showed that X-ray diffraction can be described as specular reflection by a set of parallel planes through all lattice elements, in such a way that a diffracted beam is obtained if:

$2 d \sin \theta = n \lambda$

where d is the distance between the planes, θ is the angle of incidence, n is an integer and λ is the wavelength. Through this simple approach the determination of crystal structures was made possible.

William Henry Bragg studied Mathematics at Trinity College, Cambridge, and subsequently Physics at the Cavendish Laboratory. At the end of 1885, he was appointed professor at the University of Adelaide (Australia), where his son (William Lawrence Bragg) was born. W. Henry Bragg became successively Cavendish Professor of Physics at Leeds (1909-1915), Quain Professor of Physics at University College London (1915-1925), and Fullerian Professor of Chemistry in the Royal Institution. His son, William Lawrence, studied Mathematics at the University of Adelaide. In 1909, the family returned to England and W. Lawrence Bragg entered as a fellow at Trinity College in Cambridge. In the autumn of 1912, during the same year that Max von Laue made public his experiment, the young W. Lawrence Bragg started examining the phenomenon that occurs when putting a crystal in front of the X-rays, presenting his first results (The diffraction of short electromagnetic waves by a crystal) at the headquarters of the Cambridge Philosophical Society at its meeting on November 11th, 1912. In 1914, W. Lawrence Bragg was appointed Professor of Natural Sciences at Trinity College, and that same year he was awarded the Barnard Medal. The two years (1912-1914) he worked with his father on the experiments of refraction and diffraction by crystals led to a lecture by W.H. Bragg (Bakerian Lecture: X-Rays and Crystal Structure) and to the famous article X-rays and Crystal Structure, also published in 1915. That same year, he (25 years old!) and his father shared the Nobel Prize in Physics. Father and son were able to explain the phenomenon of X-ray diffraction in crystals through crystallographic planes acting as special mirrors for X-rays (Bragg's Law), and showed that crystals of substances such as sodium chloride (NaCl, or common salt) do not contain molecules of NaCl, but simply ions of Na+ and Cl-, both regularly ordered. These ideas revolutionized Theoretical Chemistry and caused the birth of a new science: X-ray Crystallography.
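As a quick numerical illustration of the Bragg condition quoted above (the numbers are only illustrative and are not taken from the Braggs' original experiments): using Cu Kα radiation with λ ≈ 1.54 Angstrom and the spacing d ≈ 2.82 Angstrom of the (200) planes of NaCl, the first-order (n = 1) reflection is expected at

$\theta = \arcsin\left(\frac{n\lambda}{2d}\right) = \arcsin\left(\frac{1 \times 1.54}{2 \times 2.82}\right) \approx 15.8^{\circ}$

so simply measuring the angles at which reflections appear gives the interplanar spacings and, from them, the arrangement of the atoms in the crystal.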
Unfortunately, after the First World War, some difficulties arose between William Lawrence and his father when the general public did not directly credit W. Lawrence with his contributions to their discoveries. Lawrence Bragg desperately wanted to make his own name in research, but he sensed the triumph of their discoveries passing to his father, as the senior man. W. Henry Bragg tried his best to remedy the situation, always pointing out which aspects of their work were his son's ideas; however, much of their work was in the form of joint papers, which made the situation more difficult. Sadly, they never discussed the problem, and the trouble lingered for many years. The close collaboration between father and son ended, but it was natural that their work would continue to overlap. They decided to divide up the available work, and agreed to focus on separate areas of X-ray crystallography. W. Lawrence was to focus on inorganic compounds, metals and silicates, whereas William H. Bragg was to focus on organic compounds. In 1919, William Lawrence was made Langworthy Professor of Physics at Victoria University, Manchester, where he married and remained until 1937. There, in 1929, he published an excellent article on the use of the Fourier series to determine crystal structures, The Determination of Parameters in Crystal Structures by means of Fourier Series. In 1941 William Lawrence was knighted (his father had received the same honour years earlier), and a year later (1942) William Henry died. In subsequent years, William Lawrence was interested in the structure of silicates, metals, and especially in the chemistry of proteins. He was appointed Director of the National Physical Laboratory in Teddington and professor of Experimental Physics at the Cavendish Laboratory (Cambridge). In 1954, he was appointed Director of the Royal Institution in London, establishing his own research group aimed at studying the structure of proteins using X-rays. William Lawrence Bragg died in 1971, aged 81. The IUCr published an obituary that you can reach through this link. The year 2012 represents the centennial of the first single-crystal X-ray experiments, performed at the Ludwig Maximilian Universität, Munich (Germany), by Paul Knipping and Walter Friedrich under the supervision of Max von Laue, and especially of the experiments done by the Braggs. The interested reader can enjoy reading the chapters published as a reminder by the International Union of Crystallography, to be found through the links shown below.

1934-1935 "Allegro molto", by Arthur Lindo Patterson, and David Harker as soloist

Arthur Lindo Patterson (1902-1966). It is inexplicable how the name of Arthur Lindo Patterson is slowly fading and entering history almost as a stranger, at least since the last decade of the Twentieth Century. Probably his name remains associated only with some crystallographic calculation subroutine. However, as mentioned in another chapter, the contribution of Patterson to Crystallography can be seen as the single most important development after the discovery of X-rays by Röntgen in 1895. Arthur Lindo Patterson was born in the early years of the Twentieth Century in New Zealand, but his family soon emigrated to Canada, where he spent his youth. For some unknown reason, he went to school in England before returning to Montreal (Canada) to study Physics at McGill University, where he obtained his master's degree with a thesis on the production of hard X-rays (with small wavelengths) using the interaction of β radiation from radium with solids.
He performed his first experiments on X-ray diffraction during a period of two years at the laboratory of W.H. Bragg at the Royal Institution in London. At that time he was aware that, although in small crystal structures the location of atoms in the unit cell was a relatively simple problem, the situation was virtually unfeasible in the case of molecular compounds, or in general with more complex compounds. After his stay in the lab of W.H. Bragg, Lindo Patterson spent a very productive year at the Kaiser-Wilhelm Institute in Berlin, with a grant from the National Research Council of Canada to work under Hermann Mark. With his work, he contributed decisively to the determination of particle size using X-ray diffraction, and started to become interested in the theory of the Fourier transform, an idea that some years later would become his obsession in connection with the resolution of crystal structures. In 1927, he returned to Canada and a year later completed his PhD at McGill University. After two years with R.W.G. Wyckoff at the Rockefeller Institute in New York, he accepted a position at the Johnson Foundation for Medical Physics in Philadelphia, which gave him the chance to learn X-ray diffraction applied to biological materials. In 1931 he published two articles on Fourier series as a tool to interpret X-ray diffraction data: Methods in Crystal Analysis: I. Fourier Series and the Interpretation of X-ray Data and Methods in Crystal Analysis: II. The Enhancement Principle and the Fourier Series of Certain Types of Function. In 1933, he moved to MIT (Massachusetts Institute of Technology) where, through his friendship with the mathematician Norbert Wiener, he started learning Fourier theory, and especially the properties of the Fourier transform and convolution. That was how, in 1934, his equation (the Patterson Function) was formulated in an article entitled A Fourier Series Method for the Determination of the Components of Interatomic Distances in Crystals, opening enormous expectations for the resolution of crystal structures. However, due to the technological precariousness of those days in addressing the large number of sums involved in his function, it took some years until his discovery became effective in indirectly resolving the phase problem. Patterson's death, in November 1966, resulted from a massive cerebral hemorrhage.

In addition to the technical difficulties existing at that time in solving complex mathematical equations, the function introduced by Arthur L. Patterson clearly presented significant difficulties in the case of complex structures. At least it was so until, in 1935, David Harker (1906-1991), a "trainee", realized the existence of special circumstances that significantly facilitated the interpretation of the Patterson Function, and of which Arthur L. Patterson had not been aware. David Harker was born in California, and graduated in 1928 as a chemist at Berkeley. In 1930, he accepted a job as a technician in the laboratory of the Atmospheric Nitrogen Corp. in New York, where, through the reading of articles related to crystal structures, his interest in crystallography increased. Due to the great economic depression, in 1933 he lost the job and returned to California. Using some savings, he was able to enter the California Institute of Technology. There, supervised by Linus Pauling, he began to experiment with the resolution of some simple crystal structures. During one of the weekly talks in Pauling's lab, the function recently introduced by Arthur L.
Patterson was described, and Harker was immediately aware of the difficulties implied in the many calculations needed to obtain the Patterson map, but especially of the difficulty in interpreting it in structures with many atoms. However, a few nights after the talk, he woke up suddenly and said: it has to work! Indeed, it became clear to Harker that the Patterson map contains regions where the interatomic vectors (between atoms related by symmetry elements) are concentrated. Therefore, in order to look for interatomic vectors, one only has to explore certain areas of the map, and not the entire Patterson unit cell, which simplifies the interpretation qualitatively. From 1936 until 1941, Harker held a position as professor of Physical Chemistry at Johns Hopkins University, where he learned classical Crystallography and Mineralogy. During the remaining years of the 1940s, he held a research position at the General Electric Company and from there, together with his colleague John S. Kasper, made another important contribution to Crystallography: the Harker-Kasper inequalities, the first contribution to the so-called direct methods for solving the phase problem. During the 1950s, Harker accepted the offer, promoted by Irving Langmuir, of joining the Brooklyn Polytechnic Institute to solve the structure of ribonuclease. This opportunity helped him to establish the methodology that, years later (1962), was used by Max Perutz and John Kendrew to solve the structure of hemoglobin. In 1959, Harker moved his team and project to the Roswell Park Cancer Institute and completed the ribonuclease structure in 1967. He retired officially in 1976, but remained somewhat active at the Medical Foundation of Buffalo (today the Hauptman-Woodward Institute) until his death in 1991 from pneumonia. There is a nice obituary of Harker written by William Duax.

1940-1960 "Andante", score by John D. Bernal

John Desmond Bernal (1901-1971). Following the findings and developments by Arthur Lindo Patterson and David Harker, interest was directed to the structure of molecules, especially those related to life: proteins. And in this movement an Irishman settled in England, John Desmond Bernal, played a crucial role in the further development of crystallography. John Desmond Bernal was born in Nenagh, Co. Tipperary, in 1901. The Bernals were originally Sephardic Jews who came to Ireland in 1840 from Spain via Amsterdam and London. They converted to Catholicism and John was Jesuit-educated. John enthusiastically supported the Easter Rising and, as a boy, organized a Society for Perpetual Adoration. He moved away from religion as an adult, becoming an atheist. Bernal was strongly influenced by the Russian Revolution of 1917 and became a very active member of the Communist Party of Britain. John graduated in 1919 in Mineralogy and Mathematics (applied to symmetry) at the University of Cambridge. In 1923, he obtained a position as assistant in the laboratory of W.H. Bragg at the Royal Institution in London, and in 1927 he returned as a professor to Cambridge. His fellow students in Cambridge nicknamed him 'Sage' because of his great knowledge. From there, he attracted many young researchers from Birkbeck College and King's College to the field of macromolecular crystallography. In 1937, he obtained a professor position in London at Birkbeck College, from where he trained many crystallographers (Rosalind Franklin, Dorothy Hodgkin, Aaron Klug and Max Perutz, among others). Undoubtedly, John D.
Bernal has earned a prominent position in the Science of the Twentieth Century. He showed that, under appropriate conditions, a protein crystal can maintain its crystallinity under exposure to X-rays. Some of his students were able to solve complex structures such as hemoglobin and other biological materials of importance, such that crystallographic analysis started to revolutionize Biology. John Bernal, who died at the age of 70, was also the engine of crystallographic studies on viruses, together with his collaborator, Isadore Fankuchen. The developments of the Braggs, based on the previous discovery of Laue and the work by Patterson and Harker, raised the expectations of structural biology. Due to the Second World War, England became an attractive center, especially around John D. Bernal.

Max Ferdinand Perutz (1914-2002) was born in Vienna, on May 19th, 1914, into a family of textile manufacturers. They had made their fortune in the 19th Century by the introduction of mechanical spinning and weaving to the Austrian monarchy. Max was sent to school at the Theresianum, a grammar school derived from an officers' academy at the time of the empress Maria Theresia. His parents suggested that he should study law in preparation for entering the family business. However, a good schoolmaster awakened his interest in chemistry and he entered the University of Vienna where he, in his own words, "wasted five semesters in an exacting course of inorganic analysis". His curiosity was aroused, however, by organic chemistry, and especially by a course of organic biochemistry, given by F. von Wessely, in which Sir F.G. Hopkins' work at Cambridge was mentioned. It was here that Perutz decided that Cambridge was the place where he wanted to work on his Ph.D. thesis. With financial help from his father, in September 1936, Perutz became a research student at the Cavendish Laboratory in Cambridge under John D. Bernal. His relationship with Lawrence Bragg was also crucial, and in 1937 he conducted the first diffraction experiments with hemoglobin crystals which had been crystallized in Keilin's Molteno Institute. Thus, from 1938 until the early fifties, protein chemistry was done at Keilin's Molteno Institute and the X-ray work at the Cavendish, with Perutz busily bridging the gap between biology and physics on his bicycle. After the invasion of Austria by Hitler, the family business was expropriated, his parents became refugees, and his own funds were soon exhausted. Max Perutz was saved by being appointed research assistant to Lawrence Bragg, under a grant from the Rockefeller Foundation, on January 1st, 1939. The grant continued, with various interruptions due to the war, until 1945, when Perutz was given an Imperial Chemical Industries Research Fellowship. In October 1947, he was made head of the newly constituted Medical Research Council Unit for Molecular Biology. His collaboration with Sir Lawrence Bragg continued through many years. As a memorial to Perutz you may consult the obituary published in Nature on the occasion of his death in 2002 (or you may download this obituary written in Spanish).

John Cowdery Kendrew (1917-1997) was born on 24th March, 1917, in Oxford. He graduated in Chemistry in 1939 from Trinity College. He spent the first few months of the war doing research on reaction kinetics in the Department of Physical Chemistry at Cambridge under the supervision of E.A. Moelwyn-Hughes. The personal influence of John D.
Bernal led him to work on the structure of proteins, and in 1946 he joined the Cavendish Laboratory, working with Max Perutz under the direction of Lawrence Bragg, where he received his Ph.D. in 1949. Kendrew and Perutz formed the entire staff of the recently established (1947) Medical Research Council Unit for Molecular Biology. Although the work of Kendrew focused on myoglobin, Max Ferdinand Perutz and John Cowdery Kendrew received the Nobel Prize in Chemistry in 1962 for their work on the structures of hemoglobin and myoglobin, and both were the first to successfully implement the MIR methodology introduced by David Harker.

Rosalind Elsie Franklin (1920-1958). One of the great scientists of those years who also emerged under the direct influence of John D. Bernal was the controversial and unfortunate Rosalind Franklin. There are many texts concerning Rosalind, but perhaps it is worthwhile to read the detailed pages (in Spanish) prepared by Miguel Vicente: La dama ausente: Rosalind Franklin y la doble hélice and Jaque a la dama: Rosalind Franklin en King's College, both of which do justice to her personality and to her short but fruitful work in the science of the mid-twentieth century. In the summer of 1938, Rosalind Franklin went to Newnham College, Cambridge. She passed her finals in 1941, but was only awarded a titular degree, as women were not entitled to degrees from Cambridge at the time. In 1945, Franklin received her PhD from Cambridge University. After the war Franklin accepted an offer to work in Paris at the Laboratoire de Services Chimiques de L'Etat with Jacques Mering, where she learned X-ray diffraction techniques on coal and related materials. In January 1951, Franklin started working as a research associate at King's College, London, in the Biophysics Unit of the Medical Research Council, directed by John Randall. Although originally she was to have worked on X-ray diffraction of proteins and lipids in solution, Randall redirected her work to DNA fibers before she started working at King's, as Franklin was to be the only experienced experimental diffraction researcher at King's in 1951. In Randall's laboratory, Rosalind's trajectory crossed with that of Maurice Wilkins (1916-2004), as both were dedicated to DNA research. Unfortunately, unfair competition led to a conflict with Wilkins which finally "took its toll". In Rosalind's absence, Wilkins showed the diffraction diagrams, which Rosalind had taken from DNA fibers, to two young scientists lacking excessive scruples... James Watson and Francis Crick. John Bernal called her DNA X-ray photographs "the most beautiful X-ray photographs of any substance ever taken." Rosalind's DNA diagrams provided the basis for the establishment of the double helical structure of DNA. It might be interesting for the reader to see the short video prepared by "My Favourite Scientist" (also available through this link). Using a laser pen and some bent wire, Andrew Marmery from the Royal Institution in London demonstrates the principles of diffraction and reproduces the characteristic diffraction pattern of the helical structure of DNA (use this other link in case of problems). The interested reader can also access the original manuscripts prepared by Rosalind Franklin on the structure of DNA. Rosalind Franklin died very young, at age 37, from ovarian cancer.

Maurice Wilkins (1916-2004) was born in New Zealand. He graduated as a physicist in 1938 from St. John's College, Cambridge, and joined John Randall at the University of Birmingham.
After obtaining his PhD in 1940, he joined the Manhattan Project in California. After World War II, in 1945, he returned to Europe when John Randall was organizing the study of biophysics at the University of St. Andrews in Scotland. A year later, he obtained a position at King's College, London, in the newly created Medical Research Council Biophysics Unit, where he became deputy director in 1950.

James Dewey Watson (1928-), born in Chicago, obtained a PhD in Zoology in 1950 at Indiana University. He spent a year in Copenhagen as a Merck Fellow and, during a symposium held in 1951 in Naples, met Maurice Wilkins, who awoke his interest in the structure of proteins and nucleic acids. Thanks to the intervention of his director (Salvador E. Luria), Watson in the same year got a position to work with John Kendrew at the Cavendish Laboratory, where he also met Francis Crick. After two years at the California Institute of Technology, Watson returned to England in 1955 to work one more year in the Cavendish Laboratory with Crick. In 1956 he joined the Department of Biology at Harvard.

Francis Crick (1916-2004) was born in England and studied Physics at University College London. During the war, he worked for the British Admiralty and later went to the laboratory of W. Cochran to study biology and the principles of crystallography. In 1949, through a grant from the Medical Research Council, he joined the laboratory of Max Perutz, where, in 1954, he completed his doctoral thesis. There he met James Watson, who later would determine his career. He spent his last years at the Salk Institute for Biological Studies in California. In connection with the unfortunate story of Rosalind Franklin, Maurice Wilkins, James Watson and Francis Crick received the Nobel Prize in Physiology or Medicine in 1962 for the discovery of the right-handed double helix structure of DNA. The decisive role of Rosalind Franklin was forgotten. It is very instructive to watch the video that HHMI BioInteractive offers about this discovery.

Dorothy C. Hodgkin (1910-1994) was born in Cairo, but she also spent part of her youth in Sudan and Israel, where her father became director of the British School of Archeology in Jerusalem. From 1928 to 1932 she settled in Oxford thanks to a grant from Somerville College, where she learned the methods of crystallography and diffraction, and soon was attracted by the character and work of John D. Bernal. In 1933, she moved to Cambridge where she spent two happy years, making many friends and exploring a variety of problems with Bernal. In 1934, she returned to Oxford, which she never left, except for short periods. In 1946, she obtained a position as Associate Professor for Crystallography and, although she was initially linked to Mineralogy, her work soon pointed towards the area which had always interested her and which she had learned under John D. Bernal: sterols and other interesting biological molecules. Dorothy Hodgkin took part in the meetings in 1946 which led to the foundation of the International Union of Crystallography, and she visited many countries for scientific purposes, including China, the USA and the USSR. She was elected a Fellow of the Royal Society in 1947, a foreign member of the Royal Netherlands Academy of Sciences in 1956, and of the American Academy of Arts and Sciences (Boston) in 1958. In 1964 she was awarded the Nobel Prize in Chemistry.

1970-1980... "Finale", with an unfinished melody...
Although what happened in the first 60 years of the Twentieth Century is astonishing and somewhat unique, the "crystallographic melody" continued, and in this sense it is still worthwhile to mention other scientists who made Crystallography go further. William Nunn Lipscomb (1919-2011) was born in Cleveland, Ohio, USA, but moved to Kentucky in 1920, and lived in Lexington throughout his university years. After his bachelor's degree at the University of Kentucky, he entered graduate school at the California Institute of Technology in 1941, first in physics. Under the influence of Linus Pauling, he returned to chemistry in early 1942. From then until the end of 1945 he was involved in research and development related to the war. After completing his Ph.D., he joined the University of Minnesota in 1946, and moved to Harvard University in 1959. Harvard recognitions include the Abbott and James Lawrence Professorship in 1971, and the George Ledlie Prize, also in 1971. In 1976 Lipscomb was awarded the Nobel Prize in Chemistry for his contributions to the structural chemistry of boranes. This chapter cannot be concluded without mentioning the efforts made by other crystallographers, who during many years tried to solve the phase problem with approaches different from those provided by the Patterson method, i.e., trying to solve the problem directly from the intensities of the diffraction pattern, based on probability equations: the direct methods. Herbert A. Hauptman (1917-2011), born in New York, graduated in 1939 as a mathematician from Columbia University. His collaboration with Jerome Karle began in 1947 at the Naval Research Laboratory in Washington DC. He earned his PhD in 1954 from the University of Maryland. In 1970, he joined the group of crystallographers at the Medical Foundation in Buffalo, where he became research director in 1972. Hauptman was the second non-chemist to win a Chemistry Nobel Prize (the first one was the physicist Ernest Rutherford). Jerome Karle (1918-2013), also from New York, studied mathematics, physics, chemistry and biology, obtaining his master's degree in Biology from Harvard University in 1938. In 1940, he moved to the University of Michigan, where he met and married Isabella Lugosky. He worked on the Manhattan Project at the University of Chicago and earned a doctoral degree in 1944. Finally, in 1946, he moved to the Naval Research Laboratory in Washington DC, where he met Herbert Hauptman. The monograph published in 1953 by Hauptman and Karle, Solution of the Phase Problem I. The Centrosymmetric Crystal, already contained the most important ideas on probabilistic methods which, applied to the phase problem, made them worthy of the Nobel Prize in Chemistry in 1985. However, it would be unfair not to mention Jerome's wife, Isabella Karle (1921-2017), who played an important role in putting the theory into practice. In memory of these important persons, we show this photograph taken in 1994, during the XIII Iberoamerican Congress of Crystallography (Montevideo, Uruguay). Left (front to back): Jerome Karle, Isabella Karle and Martin Martinez-Ripoll (author of these pages). Right (front to back): Herbert A. Hauptman and Ray A. Young (neutron expert and one of the pioneers of the Rietveld method). Crystallography is (and has been) one of the most inter- and multidisciplinary sciences. It links together frontier areas of research and has, directly or indirectly, produced the largest number of Nobel Laureates throughout history. 
In 1986 the International Union of Crystallography (IUCr) established the Ewald Prize, awarded every three years for outstanding contributions to the science of Crystallography. This chapter is dedicated to the many scientists who have made Crystallography one of the most powerful and competitive branches of Science for looking into the "tiny" world of atoms and molecules. It could definitely have been more extensive and detailed, because we cannot forget the participation and effort of many other scientists, past and present, but the important issue is that, after our "finale", "crystallographic music" plays on ... The United Nations, in its General Assembly resolution A/66/L.51 (issued on 15 June 2012) and after considering the relevant role of Crystallography in Science, decided to proclaim 2014 the International Year of Crystallography. We send our congratulations to Gautam R. Desiraju, President of the IUCr, and to Sine Larsen, who was President of the IUCr when this initiative was launched! In this context, 11 November 2012 marked the centenary of the presentation of the paper by a young William Lawrence Bragg (1890-1971) where the foundations of X-ray crystallography were outlined. For this reason, the International Union of Crystallography (IUCr) published a fascinating set of commemorative articles. The first 50 years of X-ray diffraction were commemorated in 1962 by the International Union of Crystallography (IUCr) with the publication of an interesting book entitled Fifty Years of X-Ray Diffraction, edited by Paul Peter Ewald. Bart Kahr and Alexander G. Shtukenberg wrote an interesting chapter, Histories of Crystallography by Shafranovskii and Schuh (included in Recent Advances in Crystallography), where they offer a short summary of the two volumes on the History of Crystallography written by Ilarion Ilarionovich Shafranovskii (1907-1994), a Russian crystallographer who held the E.S. Fedorov (1853-1919) Chair of Crystallography at the Leningrad Mining Institute. The chapter by Kahr and Shtukenberg also includes many other references, especially those taken from Curtis P. Schuh, author of a remarkable book entitled Mineralogy & crystallography: an annotated bio-bibliography of books published 1469 through 1919. M.A. Cuevas-Diarte and S. Alvarez Reverter are the authors of an extensive annotated chronology of crystallography and structural chemistry, starting in the IV century BC. Also noteworthy are the exhibition offered by the University of Illinois (Vera V. Mainz and Gregory S. Girolami, Crystallography - Defining the Shape of Our Modern World, University of Illinois at Urbana-Champaign), commemorating the 100th Anniversary of the Discovery of X-ray Diffraction, and a lecture by Prof. Seymour Mauskopf of Duke University, also available directly in PowerPoint or pdf format. It is also very interesting to read the articles collected in the special issue of Nature (2014) dedicated to Crystallography, among others those from the archive included in the same special issue. In much the same context, Nature has also released an interesting article, entitled Structural biology: More than a crystallographer, about the training currently expected of crystallographers working in the field of structural biology. 
Science, the journal, also joined the celebration of the International Year of Crystallography, devoting a special issue that you can find via this link.
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.10%3A_Biographical_outlines.txt
The following table shows links to some scientific associations of crystallographic interest, distributed around the world and alphabetically ordered.
1.12: Crystallography in Spain
Crystallography is one of the branches of Science whose importance has been critical to the development of Chemistry around the world. Its influence, which was spectacular in Spain during the last third of the twentieth century, led (through many efforts) to the establishment of several groups of crystallographers whose relevance is nowadays beyond any doubt. However, contrary to what happened in other developed countries, Crystallography in Spain, and especially in academic institutions, seems in general to remain an unresolved matter. This is probably due to the fact that it has erroneously been considered a minor technical issue whose application and interpretation are trivial. No scientific discipline has so profoundly influenced the field of structural Biochemistry as that of X-ray diffraction analysis when applied to the crystals of macromolecules. It is among the most prolific techniques available for providing significant new data. In theory there is no limit on molecular size, and thus the technique covers, in addition to proteins, viruses, ribo- and deoxyribonucleic acids, and protein complexes. Its influence has also affected the development of Biology and Biomedicine, leading to the so-called structural genomics. The detailed knowledge of the structure of biological macromolecules enables us not only to understand the relationship between structure and function, but also to make rational proposals for functional improvement. In contrast with the importance of these issues, and with the rather large number of Spanish research groups that are very competitive in Cellular and Molecular Biology, the lack of resources dedicated to the few Spanish laboratories active in macromolecular Crystallography is very apparent. In a separate part of this chapter the reader will find a brief historical outline on the initial development of Crystallography in Spain. Most crystallographers working in Spain are associated with the Specialized Group for Crystallography and Crystal Growth (Grupo Especializado de Cristalografía y Crecimiento Cristalino, GE3C), a group associated with the Spanish Royal Society of Chemistry. Similarly, European crystallographers are associated with the European Crystallographic Association, ECA. Further, the Spanish Committee of Crystallography is the Spanish association responsible for coordinating the official Spanish representation to the International Union of Crystallography, IUCr. There are also some other Spanish associations related to Crystallography, organized according to the various radiation sources of interest in the field. Crystallographers working in Spain were responsible for organizing the XXII Congress and General Assembly of the International Union of Crystallography, IUCr, held in Madrid (August 22-30, 2011). This type of congress, held every three years in a different country, brought over 2,800 participants from around the world to Madrid and represented an explicit recognition of Spanish crystallography by the IUCr. The event was officially supported by the IUCr, the Spanish Ministry of Science and Innovation, the Spanish National Research Council (CSIC), and several Spanish universities (especially Alcalá, Complutense of Madrid, Autonomous of Madrid, UIMP, Oviedo, Cantabria and Barcelona). 
Special mention must be made of the support received from the BBVA Foundation (which specifically funded the participation of the three 2009 Chemistry Nobel Laureates). AECID and Metro-Madrid supported the participation of a large number of young researchers from less developed countries. The "Madrid Convention Bureau" supported, in 2005, the preparation of the candidature of Spain to organize this unique event. Crystallographers working in Spain, and especially those involved in the Local Organizing Committee, appreciate the support obtained from all these organizations. The most relevant research groups located in Spain and using Crystallography as a main research tool can be found below. See also the following link. Nine of the groups listed below formed an association in the context of a joint project called The Factory of Crystallization, a collaborative project to create an integrated platform for research and services in crystallization and crystallography. The project was conceived as a setting that combines advanced research on crystallization and crystallography with the delivery of services in these fields to companies and research groups in the biomedical, pharmacological, biotechnological, nanotechnological, natural or material sciences. The aim was that any group extracting, synthesizing or, in general, developing a new molecule or potentially interesting material could have access, with an adequate level of confidentiality, to the knowledge and technology needed for crystallization, diffraction data collection and structure solution. The project was funded with €5.0 M by the former Spanish Ministry of Education and Science (now Ministry of Science and Innovation), as part of the Consolider-Ingenio/2010 program. The following information corresponds to a relatively splendid stage of Crystallography in Spain (during the first decade of the 21st century). Unfortunately, with the passage of time the situation has worsened, so many of the links shown below may no longer be operational. The list of groups shown below may contain involuntary errors or omissions. Groups that would like to be included here should get in contact through this link.
Andalusia: Division for X-ray Diffraction, University of Cádiz, Campus Universitario del Río San Pedro, E-11510 Puerto Real; Americo Vespuccio 49, Isla de la Cartuja, E-41092 Sevilla; Avda. Fuentenueva s/n, E-18071 Granada; Américo Vespucio 49, E-41092 Sevilla; Edificio Científico Técnico de Química, Ctra. Sacramento, La Cañada de San Urbano, E-04120 Almería; Edifício Inst. López Neyra, Avenida del Conocimiento s/n, PT Ciencias de la Salud, E-18100 Armilla; Bulevar Louis Pasteur 33, Campus de Teatinos, Edificio SCAI, Planta 1, B1-04, E-29071 Málaga
Aragon: Institute of Chemical Synthesis and Homogeneous Catalysis (iSQCH), CSIC-University of Zaragoza, Pedro Cerbuna 12, E-50009 Zaragoza; Plaza de San Francisco s/n, E-50009 Zaragoza
Asturias: Department of Physics, University of Oviedo, Julián Clavería 8, E-33006 Oviedo; Jesús Arias de Velasco, E-33005 Oviedo; Julián Clavería 8, E-33006 Oviedo
Balearic Islands: No data available
Cantabria: Group of High Pressure and Spectroscopy, Faculty of Sciences, University of Cantabria, Avda. de los Castros s/n, E-39005 Santander; Avda. de los Castros s/n, E-39005 Santander
Canary Islands: Laboratory of X-Rays and Molecular Materials, Department of Fundamental Physics II, University of La Laguna, Avda. Astrofísico Francisco Sánchez s/n, E-38204 La Laguna; some other groups (no web link) from the Dept. of Fundamental Physics II, making use of the Integrated Service for X-Ray Diffraction, University of La Laguna, Avda. Astrofísico Francisco Sánchez s/n, E-38206 La Laguna
Castile and León: E-47002 Valladolid; Campus Miguel de Unamuno, E-37007 Salamanca; Plaza de Misael Bañuelos s/n, E-09001 Burgos; Plaza de la Merced s/n, E-37008 Salamanca
Castile - La Mancha: Grupo de Química Organometálica y Catálisis, Facultad de Ciencias Químicas, Universidad de Castilla-La Mancha, Avenida de Camilo José Cela 10, E-13071 Ciudad Real
Catalonia: Department of Crystallography, Institute of Material Sciences of Barcelona, CSIC, Campus de la Universidad Autónoma de Barcelona, E-08193 Bellaterra; Escola d'Enginyeria Barcelona Est, Campus Diagonal Besos, Building (EEBE) I 2.21, c/ Eduard Maristany 10-14, E-08019 Barcelona; c/ Martí i Franqués s/n, E-08028 Barcelona; Parque Científico de Barcelona, Baldiri i Reixach 15-21, E-08028 Barcelona; Marcel.li Domingo s/n, Campus Sescelades, E-43007 Tarragona; Campus de la Universidad Autónoma de Barcelona, E-08193 Bellaterra; Avgda. Països Catalans 16, E-43007 Tarragona
Extremadura: No data available
Galicia: Metallosupramolecular Chemistry Group (QI5), Universidad de Vigo, Facultad de Química, E-36310 Vigo; Campus Universitario Sur, E-15782 Santiago de Compostela; E-15001 A Coruña; Facultad de Química, E-36310 Vigo; Edificio CACTUS, Campus Universitario Sur s/n, E-15782 Santiago de Compostela
La Rioja: Central X-ray Diffraction Unit (no web site), apparently dependent on the Department of Chemistry, University of La Rioja, Avda. de La Paz 93, E-26006 Logroño
Madrid: Crystal Growth Laboratory, Department of Material Physics, Faculty of Sciences, Autonomous University of Madrid, Campus de Cantoblanco, E-28049 Madrid; Campus Universitario, E-28871 Alcalá de Henares; Senda del Rey 9, E-28080 Madrid; José Antonio Novais 2, E-28040 Madrid; Serrano 119, E-28006 Madrid; some group (with no web link) at the Department of Inorganic Chemistry I, University Complutense of Madrid, Ciudad Universitaria, E-28040 Madrid; Department of Macromolecular Structures, National Center for Biotechnology, CSIC, Darwin 3, Campus de Cantoblanco, E-28049 Madrid; Ramiro de Maeztu 9, E-28040 Madrid; Institute of Material Sciences of Madrid, CSIC, Cantoblanco, Ctra. de Colmenar Km. 15, E-28049 Madrid;
National Center for Cancer Research, CNIO, Melchor Fernández Almagro 3, E-28029 Madrid; Edificio C (Aulario), Planta Sótano, Ciudad Universitaria, E-28040 Madrid
Murcia: Department of Mining, Geological and Cartographic Engineering, Area of Chemistry, Technical University of Cartagena, Campus Muralla del Mar, E-30202 Cartagena
Navarre: Group of Physical Properties and Applications of Materials, Public University of Navarre, Edificio Departamental de los Acebos, Campus Arrosadía, E-31006 Pamplona
The autonomous city of Ceuta: No data available
The autonomous city of Melilla: No data available
The Basque Country: Biophysics Unit, CSIC-University of the Basque Country, Campus de Leioa, E-48940 Leioa; Campus de Leioa, E-48940 Leioa; Campus de Leioa, E-48940 Leioa; Campus de Leioa, E-48940 Leioa; Campus de Leioa, E-48940 Leioa; Ed. 801 A, Parque Tecnológico de Vizcaya, E-48160 Derio; Campus de Leioa, E-48940 Leioa
Valencia: Department of Geology, University of Valencia, Doctor Moliner 50, E-46100 Burjassot; some group (with no web link) at the Department of Inorganic Chemistry, University of Valencia, Doctor Moliner 50, E-46100 Burjassot; some group (with no web link) at the Department of Organic Chemistry, University of Valencia, Doctor Moliner 50, E-46100 Burjassot; some group (with no web link) at the Department of Inorganic and Organic Chemistry, University Jaume I, Campus del Riu Sec, E-12071 Castellón de la Plana; Jaime Roig 11, E-46010 Valencia; Campus del Riu Sec, E-12071 Castellón de la Plana
textbooks/chem/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/1.11%3A_Crystallographic_Associations.txt
Analytical chemistry is the science of how to make good measurements that we can use to solve a chemical problem. Many problems in analytical chemistry begin with the need to identify what is present in a sample. This is the scope of a qualitative analysis, examples of which include identifying the products of a chemical reaction, screening an athlete’s urine for a performance-enhancing drug, or determining the spatial distribution of Pb on the surface of an airborne particulate. An early challenge for analytical chemists was developing simple chemical tests to identify inorganic ions and organic functional groups. The classical laboratory courses in inorganic and organic qualitative analysis, still taught at some schools, are based on this work. Perhaps the most common analytical problem is a quantitative analysis, examples of which include the elemental analysis of a newly synthesized compound, measuring the concentration of glucose in blood, or determining the difference between the bulk and the surface concentrations of Cr in steel. Much of the analytical work in clinical, pharmaceutical, environmental, and industrial labs involves developing new quantitative methods to detect trace amounts of chemical species in complex samples. Most of the examples in this text are of quantitative analyses. Another important area of analytical chemistry, which receives more limited attention in this text, is the characterization of physical and chemical properties. The determination of chemical structure, particle size, and surface structure are examples of a characterization analysis. The purpose of a qualitative, a quantitative, or a characterization analysis is to solve a problem associated with a particular sample. The purpose of a fundamental analysis, on the other hand, is to improve our understanding of the theory that supports an analytical method and to understand better an analytical method’s limitations. Like all areas of chemistry, analytical chemistry is so broad in scope and so much in flux that it is difficult to find a simple definition more revealing than this quote attributed to C. N. Reilley (1925-1981), who was a professor of chemistry at the University of North Carolina at Chapel Hill and one of the most influential analytical chemists of the last half of the twentieth century: "Analytical chemistry is what analytical chemists do." In this chapter we expand upon this simple definition by introducing approaches to making analytical measurements and by developing a shared language for discussing analytical chemistry, more generally, and instrumentation, more specifically. • 1.1: Classification of Analytical Methods Analytical methods often are divided into two classes: classical methods of analysis and instrumental methods of analysis. • 1.2: Types of Instrumental Methods It is useful to organize instrumental methods of analysis into groups based on the chemical or physical properties that we use to generate a signal that we can measure and relate to the analyte of interest to us. • 1.3: Instruments For Analysis The basic components of an instrument include a probe that interacts with the sample, an input transducer that converts the sample's chemical and/or physical properties into an electrical signal, a signal processor that converts the electrical signal into a form that an output transducer can convert into a numerical or visual output that we can understand. In this section we develop a common vocabulary that we can use in later chapters. 
• 1.4: Selecting an Analytical Method Choosing an analytical method requires matching the method's strengths and weaknesses—its performance characteristics—to the needs of your analysis. • 1.5: Calibration of Instrumental Methods To standardize an analytical method we need to determine its sensitivity, which relates the signal to the analyte's concentration. There are three general calibration strategies that are outlined here: external standards, standard additions, and internal standards. 01: Introduction Analytical chemistry has a long history. On the bookshelf of my office, for example, there is a copy of the first American edition of Fresenius's A System of Instruction in Quantitative Chemical Analysis, which was published by John Wiley & Sons in 1886. Nearby are many newer texts, such as Bard and Faulkner's Electrochemical Methods: Fundamentals and Applications, the most recent edition of which was published by Wiley in 2000. In 883 pages, Fresenius's text covers essentially all that was known in the 1880s about analytical chemistry and what we now call classical methods of analysis. Bard and Faulkner's text, which is 864 pages, covers just one category of what we now call modern instrumental methods of analysis. Whether a classical method of analysis or a modern instrumental method of analysis, the species of interest, which we call the analyte, is probed in a way that provides qualitative or quantitative information. Classical Methods of Analysis The distinguishing feature of a classical method of analysis is that the principal measurements are observations of reactions (Did a precipitate form? Did the solution change color?) or the measurement of one of a small number of physical properties, such as mass or volume. Because these measurements are not selective for a single analyte, a classical method of analysis usually required extensive work to isolate the analyte of interest from other species that would interfere in the analysis. As we see in Figure \(1\), Fresenius's method for determining the amount of nickel in ores required 58 hours, most of which was spent bringing the ore into solution and then isolating the analyte from interferents by a sequence of precipitations and filtrations. The final determination of the amount of nickel in the ore was derived from two measurements of mass: the combined mass of Co and Ni, and the mass of Co. Although of historic interest, classical methods of analysis are not considered further in this text. Modern Instrumental Methods of Analysis The distinguishing feature of modern instrumental methods of analysis is that they extend measurements to many more physical properties, such as current, potential, the absorption or emission of light, and mass-to-charge ratios, to name a few. Instrumental methods for separating analytes, such as chromatographic separations, and instrumental methods that allow for the simultaneous analysis of multiple analytes make for a much more rapid analysis. By the 1970s, flame atomic absorption spectrometry (FAAS) replaced gravimetry as the standard method for analyzing nickel in ores [see, for example, Van Loon, J. C. Analytical Atomic Absorption Spectroscopy, Academic Press: New York, 1980]. Because FAAS is much more selective than precipitation, there is less need to chemically isolate the analyte; as a result, the time to analyze a single sample decreased to a few hours and the throughput of samples increased to hundreds per day. 
1.02: Types of Instrumental Methods It is useful to organize instrumental methods of analysis into several groups based on the chemical or physical properties that we use to generate a signal that we can measure and relate to the analyte of interest to us. One group of instrumental methods is based on the interaction of photons of electromagnetic radiation with matter, which we call collectively spectroscopy. We can divide spectroscopy into two broad classes of techniques. In one class of techniques there is a transfer of energy between the photon and the sample. Table $1$ provides a list of several representative examples.
Table $1$. Examples of Spectroscopic Instrumental Methods That Involve an Exchange of Energy Between a Photon and the Sample
type of energy transfer | region of electromagnetic spectrum | spectroscopic technique
absorption | $\gamma$-ray | Mossbauer spectroscopy
absorption | X-ray | X-ray absorption spectroscopy
absorption | UV/Vis | UV/Vis spectroscopy
absorption | IR | infrared spectroscopy, Raman spectroscopy
absorption | microwave | microwave spectroscopy
absorption | radio wave | electron spin resonance, nuclear magnetic resonance
emission (thermal excitation) | UV/Vis | atomic emission spectroscopy
photoluminescence | X-ray | X-ray fluorescence
photoluminescence | UV/Vis | fluorescence spectroscopy, phosphorescence spectroscopy, atomic fluorescence spectroscopy
chemiluminescence | UV/Vis | chemiluminescence spectroscopy
In the second broad class of spectroscopic techniques, the electromagnetic radiation undergoes a change in amplitude, phase angle, polarization, or direction of propagation as a result of its refraction, reflection, scattering, diffraction, or dispersion by the sample. Several representative spectroscopic techniques are listed in Table $2$.
Table $2$. Examples of Other Spectroscopic Instrumental Methods
region of electromagnetic spectrum | type of interaction | spectroscopic technique
X-ray | diffraction | X-ray diffraction
UV/Vis | refraction | refractometry
UV/Vis | scattering | nephelometry, turbidimetry
UV/Vis | dispersion | optical rotatory dispersion
A second group of instrumental methods is based on the measurement of current, charge, or potential at the surface of an electrode, sometimes while controlling one or both of the other two variables, and sometimes while stirring the solution. Figure $1$ provides a visual introduction to these methods. Our third group of instrumental methods gathers together a variety of other measurements that can provide a useful analytical signal; these are summarized in Table $3$.
Table $3$. Additional Examples of Instrumental Methods of Analysis
type of measurement or phenomenon | instrumental method
piezoelectric effect | quartz crystal microbalance
mass-to-charge ratio | mass spectrometry
rate of chemical reaction or physical process | kinetic methods, flow injection analysis, neutron activation analysis, isotope dilution analysis
thermal energy | thermal gravimetry, differential thermal analysis, differential scanning calorimetry
Our last group of instrumental methods is used to separate mixtures based on either the equilibrium partitioning of species between two phases or the migration of species in response to an applied electrical field. These methods usually are paired with a suitable instrumental method from Table $1$, Table $2$, Table $3$, or Figure $1$ to provide a way to follow the separation.
Table $4$. Examples of Instrumental Methods for Separating Mixtures.
basis of separation | instrumental method
equilibrium partitioning between two phases | gas chromatography, liquid chromatography, supercritical fluid chromatography
migration in response to applied electrical field | electrophoresis
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.01%3A_Classification_of_Analytical_Methods.txt
An early example of a colorimetric analysis is Nessler’s method for ammonia, which was introduced in 1856. Nessler found that adding an alkaline solution of HgI2 and KI to a dilute solution of ammonia produced a yellow-to-reddish brown colloid whose color depended on the concentration of ammonia. In addition to the sample, Nessler prepared a series of standard solutions, each containing a known amount of ammonia, and placed each in a glass tube with a flat bottom. Allowing sunlight to pass through the tubes from bottom-to-top, Nessler observed them from above, as seen in Figure $1$. By visually comparing the color of the sample to the colors of the standards, Nessler was able to estimate the concentration of ammonia in the sample. Nessler's method converts a sample's chemical and/or physical properties—the color that forms when NH3 reacts with HgI2 and KI—into a signal that we can detect, process, and report as a relative measure of the amount of NH3 in the sample. Although we might not think of a Nessler tube as an instrument, the process of probing a sample in a way that converts its chemical or physical properties into a form of information that we can report is the essence of any instrument. The basic components of an instrument include a probe that interacts with the sample, an input transducer that converts the sample's chemical and/or physical properties into an electrical signal, a signal processor that converts the electrical signal into a form that an output transducer can convert into a numerical or visual output that we can understand. We can represent this as a sequence of actions that take place within the instrument $\text{probe} \rightarrow \text{sample} \rightarrow \text{input transducer} \rightarrow \text{raw data} \rightarrow \text{signal processor} \rightarrow \text{output transducer} \nonumber$ and as a general flow of information $\text{chemical and/or physical information} \rightarrow \text{electrical information} \rightarrow \text{numerical or visual response} \nonumber$ In Nessler’s method, the probe is sunlight, the analyst’s eye is the input transducer, the raw data is the response of the eye's optic nerve to the attenuation of light, the signal processor is the brain, and the output is a visual report of the sample's color relative to the standards. $\text{sunlight} \rightarrow \text{sample} \rightarrow \text{eye} \rightarrow \text{response of optic nerve} \rightarrow \text{brain} \rightarrow \text{visual report of color} \nonumber$ Ways to Encode Information As suggested above, information is encoded in two broad ways: as electrical information (such as currents and potentials) and as information in other, non-electrical forms (such as chemical and physical properties). Non-electrical Information Nessler's method begins and ends with non-electrical forms of information: the sample has a color and we use that color to report that the concentration of NH3 in our sample is greater than 0.50 mg/L and less than 1.00 mg/L. Other non-electrical ways to encode information are the observation that a precipitate forms when we add Ag+ to a solution of NaCl, the balance beam scale that my doctor uses to measure my weight, the percentage of light that passes through a sample, and the volume and moles of Cu(NO3)2 in a graduated cylinder. 
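Before turning to electrical information, here is a minimal R sketch of the instrument chain described above for a hypothetical colorimetric measurement, converting non-electrical chemical information into an electrical signal and then into a numerical readout. Every constant and function name is an invented illustration, not a description of any real instrument.

```r
# Toy model of the instrument information chain: probe -> sample ->
# input transducer -> signal processor -> output transducer.
# All numbers below are invented for illustration only.

conc <- 0.75                                     # hypothetical analyte concentration (mg/L)

# probe + sample: assume more analyte means less transmitted light (linear response)
transmitted_light <- function(conc) 1.0 - 0.40 * conc

# input transducer: converts the transmitted light into an electrical signal (volts)
input_transducer <- function(light) 2.5 * light

# signal processor: rescales the raw voltage onto a convenient 0-100 scale
signal_processor <- function(volts) round(100 * volts / 2.5, 1)

# output transducer / readout: presents the processed value to the analyst
readout <- function(value) cat("relative signal:", value, "\n")

raw_data <- input_transducer(transmitted_light(conc))
readout(signal_processor(raw_data))
```

Each function stands in for one stage of the chain; in a real instrument the transducer's response and the signal processing are, of course, far more involved.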
Electrical Information Although my doctor's balance beam scale encodes my mass by the position of two movable weights along a signal arm–a decidedly non-electrical means of encoding information—the electronic analytical balance that is found in almost all chemistry labs encodes the mass in the form of electrical information (Figure $1$). An electromagnet levitates the sample pan above a permanent cylindrical magnet. When we place an object on the sample pan, it displaces the sample pan downward with a force equal to the product of the sample’s mass and its acceleration due to gravity. The balance detects this downward movement and generates a counterbalancing force by increasing the current to the electromagnet. The current needed to return the balance to its original position is proportional to the object’s mass. Although we tend to use the terms “weight” and “mass” interchangeably, there is an important distinction between them. Mass is the absolute amount of matter in an object, measured in grams. Weight, W, is a measure of the gravitational force acting on that mass, m, where g is the acceleration due to gravity: $W = m \times g \nonumber$ An object has a fixed mass but its weight depends upon the acceleration due to gravity, which varies subtly from location to location. A balance measures an object’s weight, not its mass. Because weight and mass are proportional to each other, we can calibrate a balance using a standard weight whose mass is traceable to the standard prototype for the kilogram. A properly calibrated balance gives an accurate value for an object’s mass. Electrical information comes in three domains: analog, time, and digital. In the analog domain, the signal shows the amplitude of the electrical signal—say current or potential—as a function of an independent variable, which might be wavelength when recording a spectrum, applied potential in a cyclic voltammetry experiment, or time when separating a mixture by gas chromatography. A time domain signal shows the frequency with which the electrical signal rises above or below a threshold value, as when counting the rate at which ionizing radiation, such as alpha or beta particles, is detected by a Geiger counter. Finally, in the digital domain, the signal is a count of discrete events, such as counting the number of drops dispensed by an autotitrator by allowing the drops to disrupt a beam of light. Input Transducers, Detectors, and Sensors As defined above, a transducer is a device that converts information from a non-electrical form to an electrical form (the input transducer) or from an electrical form to a non-electrical form (the output transducer). Detector is a much broader term that includes all aspects of the instrument from the input transducer to the output transducer; thus, a visible spectrometer is a detector that uses an input transducer to convert the attenuation of the source radiation to a reported absorbance. A sensor is a detector designed to monitor a particular analyte, such as a pH electrode. Output Transducers and Readout Devices An instrument's output transducer converts the information carried in electrical form into a non-electrical form that we can understand. Common examples of output transducers, or readout devices, are a simple meter, a digital display, a physical trace of the signal as a function of an independent variable, such as a spectrum or a chromatogram, or a photographic plate. 
Computers in Instruments Many instruments include a computer that provides us with the ability to control the instrument and, perhaps of greater importance, to process the data both by modifying the electrical signal as it passes from the input transducer to the output transducer, and by providing tools for processing the data after it leaves the output transducer.
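As a small illustration of the kind of post-acquisition processing such a computer can apply, the following R sketch smooths a noisy, simulated analog-domain signal with a simple moving average. The peak shape and noise level are invented purely for illustration.

```r
# Simulated noisy analog-domain signal: a Gaussian peak on a flat baseline.
set.seed(1)
x      <- seq(0, 10, by = 0.05)
signal <- exp(-(x - 5)^2 / 0.5) + rnorm(length(x), sd = 0.05)

# Simple moving-average filter: each point becomes the mean of a window of
# 2k + 1 neighboring points; the first and last k points are left unsmoothed.
moving_average <- function(y, k = 5) {
  out <- y
  for (i in (k + 1):(length(y) - k)) {
    out[i] <- mean(y[(i - k):(i + k)])
  }
  out
}

smoothed <- moving_average(signal, k = 5)

# Compare the raw and the smoothed signal.
plot(x, signal, type = "l", col = "gray", xlab = "time", ylab = "signal")
lines(x, smoothed, lwd = 2)
```

A moving average is only one of many possible choices for this step; the same idea extends to background subtraction, peak integration, and other routine processing tasks.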
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.03%3A_Instruments_For_Analysis.txt
The analysis of a sample generates a chemical or a physical signal that is proportional to the amount of analyte in the sample. This signal may be anything we can measure, such as the examples described in Section 1.2. It is convenient to divide analytical techniques into two general classes based on whether the signal is directly proportional to the mass or moles of analyte, or is directly proportional to the analyte’s concentration. Consider the two graduated cylinders in Figure $1$, each of which contains a solution of 0.010 M Cu(NO3)2. The cylinder on the left contains 10 mL, or $1.0 \times 10^{-4}$ moles of Cu2+, and the cylinder on the right contains 20 mL, or $2.0 \times 10^{-4}$ moles of Cu2+. If a technique responds to the absolute amount of analyte in the sample, then the signal due to the analyte, SA, is given as $S_A = k_A n_A \label{totalanalysis}$ where nA is the moles or grams of analyte in the sample, and kA is a proportionality constant. Because the cylinder on the right contains twice as many moles of Cu2+ as the cylinder on the left, analyzing its contents gives a signal twice as large as that for the other cylinder. A second class of analytical techniques comprises those that respond to the analyte’s concentration, CA $S_A = k_A C_A \label{concanalysis}$ In this case, an analysis of the contents of the two cylinders gives the same result. As most instruments respond to the analyte's concentration, we will limit ourselves to using Equation \ref{concanalysis} for the remainder of this section. Defining the Problem To select an appropriate analytical method for a particular problem we need to consider our needs and compare them to the strengths and weaknesses of the available analytical methods. If we are screening samples on a production line to determine if an analyte exceeds a threshold so that we can set them aside for a more careful analysis, then we may wish to give more consideration to speed than to accuracy or precision. On the other hand, if our analyte is part of a complex mixture, then we may wish to give more consideration to analytical methods that provide for greater selectivity. Or, if we expect that our samples will vary substantially in the concentration of analyte, then we may give more consideration to an analytical method for which Equation \ref{concanalysis} applies over a wide range of concentrations. Performance Characteristics of Instruments As suggested above, when we choose an analytical method, we match its performance characteristics (or figures of merit) to our needs. Some of these characteristics are quantitative (accuracy, precision, sensitivity, detection limit, dynamic range, and selectivity) and others are more qualitative (robustness, ruggedness, scale of operation, time, and cost). Accuracy Accuracy, or bias, is a measure of how close the result of an experiment is to the “true” or expected result. We can express accuracy as an absolute error, e $e = x - \mu \nonumber$ where $x$ is the experimental result and $\mu$ is the expected result, or as a percentage relative error, %er $\% e_r = \frac {x - \mu} {\mu} \times 100 \nonumber$ A method’s accuracy depends on many things, including the signal’s source, the value of kA in Equation \ref{concanalysis}, and the ease of handling samples without loss or contamination. Because it is unlikely that we know the true result, we can use an expected or accepted result to evaluate accuracy. 
For example, we might use a standard reference material, which has an accepted value for our analyte, to establish the analytical method’s accuracy. You will find a more detailed treatment of accuracy, including a discussion of sources of errors, in Appendix 1. Precision When we analyze a sample several times, the individual results vary from trial-to-trial. Precision is a measure of this variability. The closer the agreement between individual analyses, the more precise the results. For example, the results shown in the upper half of Figure $2$ for the concentration of potassium in a sample of serum are more precise than those in the lower half of Figure $2$. It is important to understand that precision does not imply accuracy. That the data in the upper half of Figure $2$ are more precise does not mean that the first set of results is more accurate. In fact, neither set of results may be accurate. A method’s precision depends on several factors, including the uncertainty in measuring the signal and the ease of handling samples reproducibly, and is reported as an absolute standard deviation, s $s = \sqrt{\frac {\sum_{i = 1}^{n} (X_i - \overline{X})^{2}} {n - 1}} \label{sd}$ or a relative standard deviation, sr $s_r = \frac {s} {\overline{X}} \label{rsd}$ where $\overline{X}$ is the average, or mean value of the individual measurements. $\overline{X} = \frac {\sum_{i = 1}^n X_i} {n} \label{mean}$ Confusing accuracy and precision is a common mistake. See Ryder, J.; Clark, A. U. Chem. Ed. 2002, 6, 1–3, and Tomlinson, J.; Dyson, P. J.; Garratt, J. U. Chem. Ed. 2001, 5, 16–23 for discussions of this and other common misconceptions about the meaning of error. You will find a more detailed treatment of precision in Appendix 1, including a discussion of sources of errors. Sensitivity The ability to demonstrate that two samples have different amounts of analyte is an essential part of many analyses. A method’s sensitivity is a measure of its ability to establish that such a difference is significant. Sensitivity is often confused with a method’s detection limit, which is the smallest amount of analyte we can determine with confidence. See Pardue, H. L. Clin. Chem. 1997, 43, 1831-1837 for an explanation for why a method's sensitivity is not the same as its detection limit. Sensitivity is equivalent to the proportionality constant, kA, in Equation \ref{concanalysis} [IUPAC Compendium of Chemical Terminology, Electronic version]. If $\Delta S_A$ is the smallest difference we can measure between two signals, then the smallest detectable difference in the analyte's concentration is $\Delta C_A = \frac {\Delta S_A} {k_A} \nonumber$ Suppose, for example, that our analytical signal is a measurement for which the smallest detectable increment is ±0.001 (arbitrary units). If our method’s sensitivity is $0.200 \text{ M}^{-1}$, then our method can conceivably detect a difference in concentration of as little as $\Delta C_A = \frac {\pm 0.001 } {0.200 \text{ M}^{-1}} = \pm 0.005 \text{ M} \nonumber$ For two methods with the same $\Delta S_A$, the method with the greater sensitivity—that is, the method with the larger kA—is better able to discriminate between smaller amounts of analyte. Detection Limit The International Union of Pure and Applied Chemistry (IUPAC) defines a method’s detection limit as the smallest concentration or absolute amount of analyte that has a signal significantly larger than the signal from a suitable blank [IUPAC Compendium of Chemical Terminology, Electronic Version]. 
Although our interest is in the amount of analyte, in this section we will define the detection limit in terms of the analyte’s signal. Knowing the signal, we can calculate the analyte’s concentration, CA, using Equation \ref{concanalysis}, $S_A = k_A C_A$ where kA is the method’s sensitivity. Let’s translate the IUPAC definition of the detection limit into a mathematical form by letting Smb represent the average signal for a method blank, and letting $\sigma_{mb}$ represent the method blank’s standard deviation. To detect the analyte, its signal must exceed Smb by a suitable amount; thus, $(S_A)_{DL} = S_{mb} + z \sigma_{mb} \label{detlimit}$ where $(S_A)_{DL}$ is the analyte’s detection limit. The value we choose for z depends on our tolerance for reporting the analyte’s concentration even if it is absent from the sample (what is called a type 1 error). Typically, z is set to three, which corresponds to a probability, $\alpha$, of 0.00135, or 0.135%. As shown in Figure $3$a, there is only a 0.135% probability of detecting the analyte in a sample that actually is analyte-free. A detection limit also is subject to a type 2 error in which we fail to find evidence for the analyte even though it is present in the sample. Consider, for example, the situation shown in Figure $3$b where the signal for a sample that contains the analyte is exactly equal to (SA)DL. In this case the probability of a type 2 error is 50% because half of the sample’s possible signals are below the detection limit. We correctly detect the analyte at the IUPAC detection limit only half the time. The IUPAC definition for the detection limit is the smallest signal for which we can say, at a significance level of $\alpha$, that an analyte is present in the sample; however, failing to detect the analyte does not mean it is not present in the sample. The detection limit often is represented, particularly when discussing public policy issues, as a distinct line that separates detectable concentrations of analytes from concentrations we cannot detect. This use of a detection limit is incorrect [Rogers, L. B. J. Chem. Educ. 1986, 63, 3–6]. As suggested by Figure $3$, for an analyte whose concentration is near the detection limit there is a high probability that we will fail to detect the analyte. An alternative expression for the detection limit, the limit of identification, minimizes both type 1 and type 2 errors [Long, G. L.; Winefordner, J. D. Anal. Chem. 1983, 55, 712A–724A]. The analyte’s signal at the limit of identification, (SA)LOI, includes an additional term, $z \sigma_A$, to account for the distribution of the analyte’s signal. $(S_A)_\text{LOI} = (S_A)_\text{DL} + z \sigma_A = S_{mb} + z \sigma_{mb} + z \sigma_A \label{loi}$ As shown in Figure $4$, the limit of identification provides an equal probability of a type 1 and a type 2 error at the detection limit. When the analyte’s concentration is at its limit of identification, there is only a 0.135% probability that its signal is indistinguishable from that of the method blank. The ability to detect the analyte with confidence is not the same as the ability to report with confidence its concentration, or to distinguish between its concentration in two samples. For this reason the American Chemical Society’s Committee on Environmental Analytical Chemistry recommends the limit of quantitation, (SA)LOQ [“Guidelines for Data Acquisition and Data Quality Evaluation in Environmental Chemistry,” Anal. Chem. 1980, 52, 2242–2249]. 
$(S_A)_\text{LOQ} = S_{mb} + 10 \sigma_{mb} \label{loq}$ Dynamic Range A method's dynamic range (or linear range) runs from its limit of quantitation (Equation \ref{loq}) to the highest concentration for which the sensitivity, kA, remains constant, resulting in a straight-line relationship between $S_A$ and $C_A$. This upper limit is called the limit of linearity, LOL. Between the LOQ and the LOL we can use Equation \ref{concanalysis} to convert a measured signal into the corresponding concentration of the analyte. Above the LOL the relationship between the signal and the analyte's concentration no longer is a straight line. Selectivity An analytical method is specific if its signal depends only on the analyte [Persson, B-A; Vessman, J. Trends Anal. Chem. 1998, 17, 117–119; Persson, B-A; Vessman, J. Trends Anal. Chem. 2001, 20, 526–532]. Although specificity is the ideal, few analytical methods are free from interferences. When an interferent, I, contributes to the signal, we expand Equation \ref{totalanalysis} and Equation \ref{concanalysis} to include its contribution to the sample’s signal, Ssamp $S_{samp} = S_A + S_I = k_A C_A + k_I C_I \label{concsamp}$ where SI is the interferent’s contribution to the signal, kI is the interferent’s sensitivity, and CI is the concentration of interferent in the sample. Selectivity is a measure of a method’s freedom from interferences [Valcárcel, M.; Gomez-Hens, A.; Rubio, S. Trends Anal. Chem. 2001, 20, 386–393]. A method’s selectivity for an interferent relative to the analyte is defined by a selectivity coefficient, KA,I $K_{A,I} = \frac {k_I} {k_A} \label{selectcoef}$ which may be positive or negative depending on the signs of kI and kA. The selectivity coefficient is greater than +1 or less than –1 when the method is more selective for the interferent than for the analyte. Although kA and kI usually are positive, they can be negative. For example, some analytical methods work by measuring the concentration of a species that remains after it reacts with the analyte. As the analyte’s concentration increases, the concentration of the species that produces the signal decreases, and the signal becomes smaller. If the signal in the absence of analyte is assigned a value of zero, then the subsequent signals are negative. Determining the selectivity coefficient’s value is easy if we already know the values for kA and kI. As shown by Example $1$, we also can determine KA,I by measuring Ssamp in the presence of and in the absence of the interferent. Example $1$ A method for the analysis of Ca2+ in water suffers from an interference in the presence of Zn2+. When the concentration of Ca2+ is 100 times greater than that of Zn2+, an analysis for Ca2+ has a relative error of +0.5%. What is the selectivity coefficient for this method? Solution Since only relative concentrations are reported, we can arbitrarily assign absolute concentrations. To make the calculations easy, we will let CCa = 100 (arbitrary units) and CZn = 1. A relative error of +0.5% means the signal in the presence of Zn2+ is 0.5% greater than the signal in the absence of Zn2+. Again, we can assign values to make the calculation easier. If the signal for Ca2+ in the absence of Zn2+ is 100 (arbitrary units), then the signal in the presence of Zn2+ is 100.5. 
The value of kCa is determined using Equation \ref{concanalysis} $k_\text{Ca} = \frac {S_\text{Ca}} {C_\text{Ca}} = \frac {100} {100} = 1 \nonumber$ In the presence of Zn2+ the signal is given by Equation \ref{concsamp}; thus $S_{samp} = 100.5 = k_\text{Ca} C_\text{Ca} + k_\text{Zn} C_\text{Zn} = (1 \times 100) + k_\text{Zn} \times 1 \nonumber$ Solving for kZn gives its value as 0.5. The selectivity coefficient is $K_\text{Ca,Zn} = \frac {k_\text{Zn}} {k_\text{Ca}} = \frac {0.5} {1} = 0.5 \nonumber$ If you are unsure why, in the above example, the signal in the presence of zinc is 100.5, note that the percentage relative error for this problem is given by $\frac {\text{obtained result} - 100} {100} \times 100 = +0.5 \% \nonumber$ Solving gives an obtained result of 100.5. A selectivity coefficient provides us with a useful way to evaluate an interferent’s potential effect on an analysis. Solving Equation \ref{selectcoef} for kI $k_I = K_{A,I} \times k_A \label{ki}$ and substituting in Equation \ref{concsamp} and simplifying gives $S_{samp} = k_A \{ C_A + K_{A,I} \times C_I \} \label{S_samp}$ An interferent will not pose a problem as long as the term $K_{A,I} \times C_I$ in Equation \ref{S_samp} is significantly smaller than CA. Example $2$ Barnett and colleagues developed a method to determine the concentration of codeine (structure shown below) in poppy plants [Barnett, N. W.; Bowser, T. A.; Geraldi, R. D.; Smith, B. Anal. Chim. Acta 1996, 318, 309–317]. As part of their study they evaluated the effect of several interferents. For example, the authors found that equimolar solutions of codeine and the interferent 6-methoxycodeine gave signals, respectively, of 40 and 6 (arbitrary units). (a) What is the selectivity coefficient for the interferent, 6-methoxycodeine, relative to that for the analyte, codeine? (b) If we need to know the concentration of codeine with an accuracy of ±0.50%, what is the maximum relative concentration of 6-methoxycodeine that we can tolerate? Solution (a) The signals due to the analyte, SA, and the interferent, SI, are $S_A = k_A C_A \quad \quad S_I = k_I C_I \nonumber$ Solving these equations for kA and for kI, and substituting into Equation \ref{selectcoef} gives $K_{A,I} = \frac {S_I / C_I} {S_A / C_A} \nonumber$ Because the concentrations of analyte and interferent are equimolar (CA = CI), the selectivity coefficient is $K_{A,I} = \frac {S_I} {S_A} = \frac {6} {40} = 0.15 \nonumber$ (b) To achieve an accuracy of better than ±0.50% the term $K_{A,I} \times C_I$ in Equation \ref{S_samp} must be less than 0.50% of CA; thus $K_{A,I} \times C_I \le 0.0050 \times C_A \nonumber$ Solving this inequality for the ratio CI/CA and substituting in the value for KA,I from part (a) gives $\frac {C_I} {C_A} \le \frac {0.0050} {K_{A,I}} = \frac {0.0050} {0.15} = 0.033 \nonumber$ Therefore, the concentration of 6-methoxycodeine must be less than 3.3% of codeine’s concentration. Problems with selectivity also are more likely when the analyte is present at a very low concentration [Rogers, L. B. J. Chem. Educ. 1986, 63, 3–6]. Robustness and Ruggedness For a method to be useful it must provide reliable results. Unfortunately, methods are subject to a variety of chemical and physical interferences that contribute uncertainty to the analysis. If a method is relatively free from chemical interferences, we can use it to analyze an analyte in a wide variety of sample matrices. Such methods are considered robust. 
Random variations in experimental conditions introduce uncertainty. If a method’s sensitivity, k, is too dependent on experimental conditions, such as temperature, acidity, or reaction time, then a slight change in any of these conditions may give a significantly different result. A rugged method is relatively insensitive to changes in experimental conditions. Scale of Operation Another way to narrow the choice of methods is to consider three potential limitations: the amount of sample available for the analysis, the expected concentration of analyte in the samples, and the minimum amount of analyte that will produce a measurable signal. Collectively, these limitations define the analytical method’s scale of operations. We can display the scale of operations visually (Figure $5$) by plotting the sample’s size on the x-axis and the analyte’s concentration on the y-axis. For convenience, we divide samples into macro (>0.1 g), meso (10 mg–100 mg), micro (0.1 mg–10 mg), and ultramicro (<0.1 mg) sizes, and we divide analytes into major (>1% w/w), minor (0.01% w/w–1% w/w), trace ($10^{-7}$% w/w–0.01% w/w), and ultratrace (<$10^{-7}$% w/w) components. Together, the analyte’s concentration and the sample’s size provide a characteristic description for an analysis. For example, in a microtrace analysis the sample weighs between 0.1 mg and 10 mg and contains a concentration of analyte between $10^{-7}$% w/w and $10^{-2}$% w/w. The diagonal lines connecting the axes show combinations of sample size and analyte concentration that contain the same absolute mass of analyte. As shown in Figure $5$, for example, a 1-g sample that is 1% w/w analyte has the same amount of analyte (10 mg) as a 100-mg sample that is 10% w/w analyte, or a 10-mg sample that is 100% w/w analyte. We can use Figure $5$ to establish limits for analytical methods. If a method’s minimum detectable signal is equivalent to 10 mg of analyte, then it is best suited to a major analyte in a macro or meso sample. Extending the method to an analyte with a concentration of 0.1% w/w requires a sample of 10 g, which rarely is practical due to the complications of carrying such a large amount of material through the analysis. On the other hand, a small sample that contains a trace amount of analyte places significant restrictions on an analysis. For example, a 1-mg sample that is $10^{-4}$% w/w in analyte contains just 1 ng of analyte. If we isolate the analyte in 1 mL of solution, then we need an analytical method that reliably can detect it at a concentration of 1 ng/mL. Equipment, Time, and Cost Finally, we can compare analytical methods with respect to their equipment needs, the time needed to complete an analysis, and the cost per sample. Methods that rely on instrumentation are equipment-intensive and may require significant operator training. For example, the graphite furnace atomic absorption spectroscopic method for determining lead in water requires a significant capital investment in the instrument and an experienced operator to obtain reliable results. Other methods, such as titrimetry, require less expensive equipment and less training. The time to complete an analysis for one sample often is fairly similar from method-to-method. This is somewhat misleading, however, because much of this time is spent preparing samples, preparing reagents, and gathering together equipment. Once the samples, reagents, and equipment are in place, the sampling rate may differ substantially. 
For example, it takes just a few minutes to analyze a single sample for lead using graphite furnace atomic absorption spectroscopy, but several hours to analyze the same sample using gravimetry. This is a significant factor in selecting a method for a laboratory that handles a high volume of samples. The cost of an analysis depends on many factors, including the cost of equipment and reagents, the cost of hiring analysts, and the number of samples that can be processed per hour. In general, methods that rely on instruments cost more per sample than other methods. Making the Final Choice Unfortunately, the design criteria discussed in this section are not mutually independent [Valcárcel, M.; Ríos, A. Anal. Chem. 1993, 65, 781A–787A]. Working with smaller samples or improving selectivity often comes at the expense of precision. Minimizing cost and analysis time may decrease accuracy. Selecting a method requires carefully balancing the various design criteria. Usually, the most important design criterion is accuracy, and the best method is the one that gives the most accurate result. When the need for a result is urgent, as is often the case in clinical labs, analysis time may become the critical factor. In some cases it is the sample’s properties that determine the best method. A sample with a complex matrix, for example, may require a method with excellent selectivity to avoid interferences. Samples in which the analyte is present at a trace or ultratrace concentration usually require a concentration method. If the quantity of sample is limited, then the method must not require a large amount of sample.
To standardize an analytical method we also must determine the analyte’s sensitivity, kA, in the following equation $S_{total} = k_A C_A + S_{blank} \label{s_total}$ where $S_{total}$ is the measured signal, $C_A$ is the analyte's concentration, and $S_{blank}$ is the signal in the absence of the analyte. In principle, it is possible to derive the value of kA for any analytical method if we understand fully all the chemical reactions and physical processes responsible for the signal. Unfortunately, such calculations are not feasible if we lack a sufficiently developed theoretical model of the physical processes or if the chemical reactions evince non-ideal behavior. In such situations we must determine the value of kA by analyzing one or more standard solutions, each of which contains a known amount of analyte. In this section we consider several approaches for determining the value of kA. For simplicity we assume that $S_{blank}$ is accounted for by a proper reagent blank, allowing us to replace $S_{total}$ in Equation \ref{s_total} with the analyte’s signal, SA. $S_A = k_A C_A \label{sa}$ Single-Point Versus Multiple-Point Standardization The simplest way to determine the value of kA in Equation \ref{sa} is to use a single-point standardization in which we measure the signal for a standard, Sstd, that contains a known concentration of analyte, Cstd. Substituting these values into Equation \ref{sa} and rearranging $k_A = \frac {S_{std}} {C_{std}} \label{ka}$ gives us the value for kA. Having determined kA, we can calculate the concentration of analyte in a sample by measuring its signal, Ssamp, and calculating CA as $C_A = \frac {S_{samp}} {k_A} \label{ca}$ A single-point standardization is the least desirable method for standardizing a method. There are two reasons for this. First, any error in our determination of kA carries over into our calculation of CA. Second, our experimental value for kA is based on a single concentration of analyte. To extend this value of kA to other concentrations of analyte requires that we assume a linear relationship between the signal and the analyte’s concentration, an assumption that often is not true [Cardone, M. J.; Palmero, P. J.; Sybrandt, L. B. Anal. Chem. 1980, 52, 1187–1191]. Figure $1$ shows how assuming a constant value of kA leads to a determinate error in CA if kA becomes smaller at higher concentrations of analyte. Despite these limitations, single-point standardizations find routine use when the expected range for the analyte’s concentrations is small. Under these conditions it often is safe to assume that kA is constant (although you should verify this assumption experimentally). This is the case, for example, in clinical labs where many automated analyzers use only a single standard. The better way to standardize a method is to prepare a series of standards, each of which contains a different concentration of analyte. Standards are chosen such that they bracket the expected range for the analyte’s concentration. A multiple-point standardization should include at least three standards, although more are preferable. A plot of Sstd versus Cstd is called a calibration curve. The exact standardization, or calibration relationship, is determined by an appropriate curve-fitting algorithm. Linear regression, which also is known as the method of least squares, is one such algorithm. Its use is covered in Appendix 1. There are two advantages to a multiple-point standardization. 
First, although a determinate error in one standard introduces a determinate error into the analysis, its effect is minimized by the remaining standards. Second, because we measure the signal for several concentrations of analyte, we no longer must assume kA is independent of the analyte’s concentration. Instead, we can construct a calibration curve similar to the “actual relationship” in Figure $1$. External Standards The most common method of standardization uses one or more external standards, each of which contains a known concentration of analyte. We call these standards “external” because they are prepared and analyzed separately from the samples. Appending the adjective “external” to the noun “standard” might strike you as odd at this point, as it seems reasonable to assume that standards and samples are analyzed separately. As we will soon learn, however, we can add standards to our samples and analyze both simultaneously. Single External Standard With a single external standard we determine kA using Equation \ref{ka} and then calculate the concentration of analyte, CA, using Equation \ref{ca}. Example $1$ A spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Sstd of 0.474 for a single standard for which the concentration of lead is 1.75 ppb. What is the concentration of Pb2+ in a sample of blood for which Ssamp is 0.361? Solution Equation \ref{ka} allows us to calculate the value of kA using the data for the single external standard. $k_A = \frac {S_{std}} {C_{std}} = \frac {0.474} {1.75 \text{ ppb}} = 0.2709 \text{ ppb}^{-1} \nonumber$ Having determined the value of kA, we calculate the concentration of Pb2+ in the sample of blood using Equation \ref{ca}. $C_A = \frac {S_{samp}} {k_A} = \frac {0.361} {0.2709 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber$ Multiple External Standards Figure $2$ shows a typical multiple-point external standardization. The volumetric flask on the left contains a reagent blank and the remaining volumetric flasks contain increasing concentrations of Cu2+. Shown below the volumetric flasks is the resulting calibration curve. Because this is the most common method of standardization, the resulting relationship is called a normal calibration curve. When a calibration curve is a straight line, as it is in Figure $2$, the slope of the line gives the value of kA. This is the most desirable situation because the method’s sensitivity remains constant throughout the analyte’s concentration range. When the calibration curve is not a straight line, the method’s sensitivity is a function of the analyte’s concentration. In Figure $1$, for example, the value of kA is greatest when the analyte’s concentration is small and it decreases continuously for higher concentrations of analyte. The value of kA at any point along the calibration curve in Figure $1$ is the slope at that point. In either case, a calibration curve allows us to relate Ssamp to the analyte’s concentration. Example $2$ A second spectrophotometric method for the quantitative analysis of Pb2+ in blood has a normal calibration curve for which $S_{std} = (0.296 \text{ ppb}^{-1} \times C_{std}) + 0.003 \nonumber$ What is the concentration of Pb2+ in a sample of blood if Ssamp is 0.397? Solution To determine the concentration of Pb2+ in the sample of blood, we replace Sstd in the calibration equation with Ssamp and solve for CA. 
$C_A = \frac {S_{samp} - 0.003} {0.296 \text{ ppb}^{-1}} = \frac {0.397 - 0.003} {0.296 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber$ It is worth noting that the calibration equation in this problem includes an extra term that does not appear in Equation \ref{ca}. Ideally we expect our calibration curve to have a signal of zero when CA is zero. This is the purpose of using a reagent blank to correct the measured signal. The extra term of +0.003 in our calibration equation results from the uncertainty in measuring the signal for the reagent blank and the standards. An external standardization allows us to analyze a series of samples using a single calibration curve. This is an important advantage when we have many samples to analyze. Not surprisingly, many of the most common quantitative analytical methods use an external standardization. There is a serious limitation, however, to an external standardization. When we determine the value of kA using Equation \ref{ka}, the analyte is present in the external standard’s matrix, which usually is a much simpler matrix than that of our samples. When we use an external standardization we assume the matrix does not affect the value of kA. If this is not true, then we introduce a proportional determinate error into our analysis. This assumption fails for the data in Figure $3$, for instance, where we show calibration curves for an analyte in the sample’s matrix and in the standard’s matrix. In this case, using the calibration curve for the external standards leads to a negative determinate error in the analyte’s reported concentration. If we expect that matrix effects are important, then we try to match the standard’s matrix to that of the sample, a process known as matrix matching. If we are unsure of the sample’s matrix, then we must show that matrix effects are negligible or use an alternative method of standardization. Both approaches are discussed in the following section. The matrix for the external standards in Figure $2$, for example, is dilute ammonia. Because the $\ce{Cu(NH3)4^{2+}}$ complex absorbs more strongly than Cu2+, adding ammonia increases the signal’s magnitude. If we fail to add the same amount of ammonia to our samples, then we will introduce a proportional determinate error into our analysis. Standard Additions We can avoid the complication of matching the matrix of the standards to the matrix of the sample if we carry out the standardization in the sample. This is known as the method of standard additions. Single Standard Addition The simplest version of a standard addition is shown in Figure $4$. First we add a portion of the sample, Vo, to a volumetric flask, dilute it to volume, Vf, and measure its signal, Ssamp. Next, we add a second identical portion of sample to an equivalent volumetric flask along with a spike, Vstd, of an external standard whose concentration is Cstd. After we dilute the spiked sample to the same final volume, we measure its signal, Sspike. The following two equations relate Ssamp and Sspike to the concentration of analyte, CA, in the original sample. $S_{samp} = k_A C_A \frac {V_o} {V_f} \label{sa_samp1}$ $S_{spike} = k_A \left( C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f} \right) \label{sa_spike1}$ As long as Vstd is small relative to Vo, the effect of the standard’s matrix on the sample’s matrix is insignificant. Under these conditions the value of kA is the same in Equation \ref{sa_samp1} and Equation \ref{sa_spike1}. 
Solving both equations for kA and equating gives $\frac {S_{samp}} {C_A \frac {V_o} {V_f}} = \frac {S_{spike}} {C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f}} \label{method_one}$ which we can solve for the concentration of analyte, CA, in the original sample. Example $3$ A third spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.193 when a 1.00 mL sample of blood is diluted to 5.00 mL. A second 1.00 mL sample of blood is spiked with 1.00 μL of a 1560-ppb Pb2+ external standard and diluted to 5.00 mL, yielding an Sspike of 0.419. What is the concentration of Pb2+ in the original sample of blood? Solution We begin by making appropriate substitutions into Equation \ref{method_one} and solving for CA. Note that all volumes must be in the same units; thus, we first convert Vstd from 1.00 μL to $1.00 \times 10^{-3} \text{ mL}$. $\frac {0.193} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}}} = \frac {0.419} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}} + 1560 \text{ ppb} \frac {1.00 \times 10^{-3} \text{ mL}} {5.00 \text{ mL}}} \nonumber$ $\frac {0.193} {0.200C_A} = \frac {0.419} {0.200C_A + 0.3120 \text{ ppb}} \nonumber$ $0.0386C_A + 0.0602 \text{ ppb} = 0.0838 C_A \nonumber$ $0.0452 C_A = 0.0602 \text{ ppb} \nonumber$ $C_A = 1.33 \text{ ppb} \nonumber$ The concentration of Pb2+ in the original sample of blood is 1.33 ppb. It also is possible to add the standard addition directly to the sample, measuring the signal both before and after the spike (Figure $5$). In this case the final volume after the standard addition is Vo + Vstd and Equation \ref{sa_samp1}, Equation \ref{sa_spike1}, and Equation \ref{method_one} become $S_{samp} = k_A C_A \label{sa_samp2}$ $S_{spike} = k_A \left( C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}} \right) \label{sa_spike2}$ $\frac {S_{samp}} {C_A} = \frac {S_{spike}} {C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}}} \label{method_two}$ Example $4$ A fourth spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.712 for a 5.00 mL sample of blood. After spiking the blood sample with 5.00 μL of a 1560-ppb Pb2+ external standard, an Sspike of 1.546 is measured. What is the concentration of Pb2+ in the original sample of blood? Solution $\frac {0.712} {C_A} = \frac {1.546} {C_A \frac {5.00 \text{ mL}} {5.005 \text{ mL}} + 1560 \text{ ppb} \frac {5.00 \times 10^{-3} \text{ mL}} {5.005 \text{ mL}}} \nonumber$ $\frac {0.712} {C_A} = \frac {1.546} {0.9990C_A + 1.558 \text{ ppb}} \nonumber$ $0.7113C_A + 1.109 \text{ ppb} = 1.546C_A \nonumber$ $C_A = 1.33 \text{ ppb} \nonumber$ The concentration of Pb2+ in the original sample of blood is 1.33 ppb. Multiple Standard Additions We can adapt a single-point standard addition into a multiple-point standard addition by preparing a series of samples that contain increasing amounts of the external standard. Figure $6$ shows two ways to plot a standard addition calibration curve based on Equation \ref{sa_spike1}. In Figure $6$a we plot Sspike against the volume of the spikes, Vstd. If kA is constant, then the calibration curve is a straight line. It is easy to show that the x-intercept is equivalent to –CAVo/Cstd. Example $5$ Beginning with Equation \ref{sa_spike1} show that the equations in Figure $6$a for the slope, the y-intercept, and the x-intercept are correct. 
Solution We begin by rewriting Equation \ref{sa_spike1} as $S_{spike} = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times V_{std} \nonumber$ which is in the form of the equation for a straight line $y = y\text{-intercept} + \text{slope} \times x \nonumber$ where y is Sspike and x is Vstd. The slope of the line, therefore, is kACstd/Vf and the y-intercept is kACAVo/Vf. The x-intercept is the value of x when y is zero, or $0 = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times x\text{-intercept} \nonumber$ $x\text{-intercept} = - \frac {k_A C_A V_o / V_f} {k_A C_{std} / V_f} = - \frac {C_A V_o} {C_{std}} \nonumber$ Because we know the volume of the original sample, Vo, and the concentration of the external standard, Cstd, we can calculate the analyte’s concentration from the x-intercept of a multiple-point standard addition. Example $6$ A fifth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses a multiple-point standard addition based on Equation \ref{sa_spike1}. The original blood sample has a volume of 1.00 mL and the standard used for spiking the sample has a concentration of 1560 ppb Pb2+. All samples were diluted to 5.00 mL before measuring the signal. A calibration curve of Sspike versus Vstd has the following equation $S_{spike} = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber$ What is the concentration of Pb2+ in the original sample of blood? Solution To find the x-intercept we set Sspike equal to zero. $0 = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber$ Solving for Vstd, we obtain a value of $-8.526 \times 10^{-4} \text{ mL}$ for the x-intercept. Substituting the x-intercept’s value into the equation from Figure $6$a $-8.526 \times 10^{-4} \text{ mL} = - \frac {C_A V_o} {C_{std}} = - \frac {C_A \times 1.00 \text{ mL}} {1560 \text{ ppb}} \nonumber$ and solving for CA gives the concentration of Pb2+ in the blood sample as 1.33 ppb. Since we construct a standard additions calibration curve in the sample, we cannot use the calibration equation for other samples. Each sample, therefore, requires its own standard additions calibration curve. This is a serious drawback if you have many samples. For example, suppose you need to analyze 10 samples using a five-point calibration curve. For a normal calibration curve you need to analyze only 15 solutions (five standards and ten samples). If you use the method of standard additions, however, you must analyze 50 solutions (each of the ten samples is analyzed five times, once before spiking and after each of four spikes). Using a Standard Addition to Identify Matrix Effects We can use the method of standard additions to validate an external standardization when matrix matching is not feasible. First, we prepare a normal calibration curve of Sstd versus Cstd and determine the value of kA from its slope. Next, we prepare a standard additions calibration curve using Equation \ref{sa_spike1}, plotting the data as shown in Figure $6$b. The slope of this standard additions calibration curve provides an independent determination of kA. If there is no significant difference between the two values of kA, then we can ignore the difference between the sample’s matrix and that of the external standards. When the values of kA are significantly different, then using a normal calibration curve introduces a proportional determinate error. 
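Although the text itself does not use software, the x-intercept calculation in Example $6$ is easy to script. The following Python sketch is our own illustration, not part of the original method; it assumes the slope, y-intercept, $V_o$, and $C_{std}$ from Example $6$, and the function name is hypothetical.

```python
# A sketch of the multiple-point standard additions calculation in Example 6.
# The slope, y-intercept, V_o, and C_std are taken from that example; the
# function and variable names are our own.

def standard_additions(slope, intercept, V_o, C_std):
    """Return the x-intercept (mL) and C_A (ppb) for a S_spike vs V_std line."""
    x_intercept = -intercept / slope       # V_std when S_spike = 0
    C_A = -x_intercept * C_std / V_o       # from x-intercept = -C_A * V_o / C_std
    return x_intercept, C_A

x_int, C_A = standard_additions(slope=312, intercept=0.266, V_o=1.00, C_std=1560)
print(f"x-intercept = {x_int:.3e} mL")     # approximately -8.526e-04 mL
print(f"[Pb2+] = {C_A:.2f} ppb")           # approximately 1.33 ppb
```

The printed values match Example $6$: an x-intercept of approximately $-8.5 \times 10^{-4}$ mL and a Pb2+ concentration of 1.33 ppb.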
Internal Standards To use an external standardization or the method of standard additions, we must be able to treat identically all samples and standards. When this is not possible, the accuracy and precision of our standardization may suffer. For example, if our analyte is in a volatile solvent, then its concentration will increase if we lose solvent to evaporation. Suppose we have a sample and a standard with identical concentrations of analyte and identical signals. If both experience the same proportional loss of solvent, then their respective concentrations of analyte and signals remain identical. In effect, we can ignore evaporation if the samples and the standards experience an equivalent loss of solvent. If an identical standard and sample lose different amounts of solvent, however, then their respective concentrations and signals are no longer equal. In this case a simple external standardization or standard addition is not possible. We can still complete a standardization if we reference the analyte’s signal to a signal from another species that we add to all samples and standards. The species, which we call an internal standard, must be different from the analyte. Because the analyte and the internal standard receive the same treatment, the ratio of their signals is unaffected by any lack of reproducibility in the procedure. If a solution contains an analyte of concentration CA and an internal standard of concentration CIS, then the signals due to the analyte, SA, and the internal standard, SIS, are $S_A = k_A C_A \nonumber$ $S_{IS} = k_{IS} C_{IS} \nonumber$ where $k_A$ and $k_{IS}$ are the sensitivities for the analyte and the internal standard, respectively. Taking the ratio of the two signals gives the fundamental equation for an internal standardization. $\frac {S_A} {S_{IS}} = \frac {k_A C_A} {k_{IS} C_{IS}} = K \times \frac {C_A} {C_{IS}} \label{sa_sis}$ Because K is a ratio of the analyte’s sensitivity and the internal standard’s sensitivity, it is not necessary to determine independently values for either kA or kIS. Single Internal Standard In a single-point internal standardization, we prepare a single standard that contains the analyte and the internal standard, and use it to determine the value of K in Equation \ref{sa_sis}. $K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} \label{K}$ Having standardized the method, the analyte’s concentration is given by $C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} \nonumber$ Example $7$ A sixth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses Cu2+ as an internal standard. A standard that is 1.75 ppb Pb2+ and 2.25 ppb Cu2+ yields a ratio of (SA/SIS)std of 2.37. A sample of blood spiked with the same concentration of Cu2+ gives a signal ratio, (SA/SIS)samp, of 1.80. What is the concentration of Pb2+ in the sample of blood? 
Solution Equation \ref{K} allows us to calculate the value of K using the data for the standard $K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {1.75 \text{ ppb } \ce{Pb^{2+}}} \times 2.37 = 3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}} \nonumber$ The concentration of Pb2+, therefore, is $C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}}} \times 1.80 = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber$ Multiple Internal Standards A single-point internal standardization has the same limitations as a single-point normal calibration. To construct an internal standard calibration curve we prepare a series of standards, each of which contains the same concentration of internal standard and a different concentration of analyte. Under these conditions a calibration curve of (SA/SIS)std versus CA is linear with a slope of K/CIS. Although the usual practice is to prepare the standards so that each contains an identical amount of the internal standard, this is not a requirement. Example $8$ A seventh spectrophotometric method for the quantitative analysis of Pb2+ in blood gives a linear internal standards calibration curve for which $\left( \frac {S_A} {S_{IS}} \right)_{std} = (2.11 \text{ ppb}^{-1} \times C_A) - 0.006 \nonumber$ What is the ppb Pb2+ in a sample of blood if (SA/SIS)samp is 2.80? Solution To determine the concentration of Pb2+ in the sample of blood we replace (SA/SIS)std in the calibration equation with (SA/SIS)samp and solve for CA. $C_A = \frac {\left( \frac {S_A} {S_{IS}} \right)_{samp} + 0.006} {2.11 \text{ ppb}^{-1}} = \frac {2.80 + 0.006} {2.11 \text{ ppb}^{-1}} = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber$ The concentration of Pb2+ in the sample of blood is 1.33 ppb. In some circumstances it is not possible to prepare the standards so that each contains the same concentration of internal standard. This is the case, for example, when we prepare samples by mass instead of volume. We can still prepare a calibration curve, however, by plotting $(S_A / S_{IS})_{std}$ versus CA/CIS, giving a linear calibration curve with a slope of K. You might wonder if it is possible to include an internal standard in the method of standard additions to correct for both matrix effects and uncontrolled variations between samples; well, the answer is yes as described in the paper “Standard Dilution Analysis,” the full reference for which is Jones, W. B.; Donati, G. L.; Calloway, C. P.; Jones, B. T. Anal. Chem. 2015, 87, 2321-2327.
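A similar sketch, again ours rather than the text's, turns Equation \ref{K} and the expression for $C_A$ into a short calculation using the values from Example $7$; the function and variable names are hypothetical.

```python
# Single-point internal standardization (values taken from Example 7).

def internal_standard(C_A_std, C_IS, ratio_std, ratio_samp):
    """Return C_A in the sample from a single internal-standard calibration.

    C_A_std    : analyte concentration in the standard (ppb)
    C_IS       : internal standard concentration (ppb), same in standard and sample
    ratio_std  : (S_A / S_IS) measured for the standard
    ratio_samp : (S_A / S_IS) measured for the sample
    """
    K = (C_IS / C_A_std) * ratio_std       # K from the standard
    return (C_IS / K) * ratio_samp         # C_A in the sample

C_A = internal_standard(C_A_std=1.75, C_IS=2.25, ratio_std=2.37, ratio_samp=1.80)
print(f"[Pb2+] = {C_A:.2f} ppb")           # approximately 1.33 ppb
```

The result, 1.33 ppb, matches the worked solution for Example $7$.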
• 2.1: Basic Terminology and Laws of Electricity Current, \(I\), is a movement of charge over time and is expressed in amperes, \(A\), where 1 ampere is equivalent to 1 coulomb/sec. In this section we review the convention used to describe currents in electrical circuits and review four laws of electricity. • 2.2: Direct Current (DC) Circuits A direct current, which is the focus of this section, is one that flows in one direction. There are two basic direct current circuits of importance to us: those with two or more resistors connected in a series, and those with two or more resistors arranged parallel to each other. Other direct current circuits can be understood in terms of these two basic circuits. • 2.3: Alternating Current Circuits A direct current has a fixed value that is independent of time. An alternating current, on the other hand, has a value that changes with time. This change in current follows a pattern that we can characterize by its period—the time for one complete cycle—or by its frequency, which is the reciprocal of its period. Frequency is reported in hertz (Hz), which is equivalent to one cycle per second. • 2.4: Semiconductors A semiconductor is a material whose resistivity to the movement of charge falls somewhere between that of a conductor, through which we can move a charge easily, and an insulator, which resists the movement of charge. Some semiconductors are elemental, such as silicon and germanium (both of which we examine more closely in this section) and some are multielemental, such as silicon carbide. 02: Electrical Components and Circuits Current, $I$, is a movement of charge over time and is expressed in amperes, $A$, where 1 ampere is equivalent to 1 coulomb/sec. In this section we review the convention used to describe currents in electrical circuits and review four laws of electricity. Conventional Currents If we connect one end of a wire to the positive terminal of a battery and connect the other end to the negative terminal of the same battery, then electrons will move through the wire and a current will flow. The electrons move from the battery's negative terminal through the wire to the battery's positive terminal. The direction of the current, however, runs from the battery's positive terminal to the battery's negative terminal; that is, current is treated as if it is the movement of positive charge. This probably strikes you as odd, but it simply reflects the original understanding of current from a time before the electron was identified. Figure $1$ shows the difference between these two ways of thinking about current. Laws of Electricity There are four basic laws of electricity that are important to us in this chapter: Ohm's law, Kirchhoff's laws, and the power law. Let's take a brief look at each. Ohm's Law Ohm's law explains the relationship between current, $I$, measured in amps ($A$), resistance, $R$, measured in ohms ($\Omega$), and potential, $V$, measured in volts ($V$), and is written as $V = I \times R \label{ohm}$ The voltage is measured between any two points in a circuit using a voltmeter. Kirchhoff's Two Laws The first of Kirchhoff's two laws states that the sum of the currents at any point in a circuit must equal zero. $\sum{I} = 0 \label{kirch1}$ The second law states that the sum of the voltages in a closed loop must equal zero. $\sum{V} = 0 \label{kirch2}$ Power Law When a current passes through a resistor, the temperature of the resistor increases and power (energy per unit time) is lost. 
The amount of power lost, $P$, is the product of current and voltage, with units of joules/sec $P = I \times V \label{power1}$ or, substituting in Ohm's law (Equation \ref{ohm}), we can express power as $P = I^2 \times R = \frac{V^2}{R} \label{power}$ Note An excellent resource for this section and other sections in this chapter is Principles of Electronic Instrumentation by A. James Diefenderfer and published by W. B. Saunders Company, 1972.
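As a quick numerical illustration of Ohm's law and the power law, the following Python sketch is ours and uses an arbitrary 10 V source and a 500 Ω resistor; it simply confirms that the three expressions for power agree.

```python
# Ohm's law and the power law for a single resistor (hypothetical values).
V = 10.0      # volts
R = 500.0     # ohms

I = V / R                 # Ohm's law: I = V / R
P_iv = I * V              # power law: P = I x V
P_ir = I**2 * R           # equivalent form: P = I^2 x R
P_vr = V**2 / R           # equivalent form: P = V^2 / R

print(f"I = {I * 1000:.1f} mA")                                   # 20.0 mA
print(f"P = {P_iv:.3f} W = {P_ir:.3f} W = {P_vr:.3f} W")          # all 0.200 W
```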
A direct current, which is the focus of this section, is one that flows in one direction. An alternating current, which is the focus of the next section, is one that periodically switches direction. Basic Direct Current (DC) Circuits There are two basic direct current circuits of importance to us: those with two or more resistors connected in a series, and those with two or more resistors arranged parallel to each other. Other direct current circuits can be understood in terms of these two basic circuits. Resistors in Series Figure $1$ shows an example of a simple DC circuit in which three resistors, with resistances of R1, R2, and R3, are connected in series to the two ends of a battery that has a potential of V. A switch is included in the circuit that is used to close the loop and allow a current to flow from the battery's positive terminal to its negative terminal. Kirchhoff's first law requires that the sum of the currents at any point in the circuit is zero. Consider point b. If the current that arrives from point a is $I$, then the current that leaves b is $-I$, where the sign tells us about the direction of the current with respect to the point. This requires that $I_a - I_c = 0$, which means that the current has the same absolute value at all points in the circuit. Application of Kirchhoff's second law requires that the sum of the voltages in this circuit equal 0, which is true if the sum of the voltage across each of the three resistors is equal to the voltage of the battery. Using Ohm's law to express the voltage across each resistor gives $V = IR_1 + IR_2 + IR_3 \label{series1}$ If we factor the current out of Equation \ref{series1}, then we have $V = I \times (R_1 + R_2 + R_3) = IR_s \label{series2}$ where $R_s$ is the circuit's effective resistance, which is the sum of the resistances of the individual resistors. One of the useful properties of this circuit is that the voltage drop across an individual resistor is proportional to that resistor's contribution to $R_s$. Consider the points in Figure $1$ labeled $a$ and $b$ that are on opposite sides of the first resistor in this series. The drop in voltage across this resistor, $V_{ab}$, is $V_{ab} = I R_1 \label{series3}$ Dividing Equation \ref{series3} by Equation \ref{series2} $\frac{V_{ab}}{V} = \frac{IR_1}{IR_s} = \frac{R_1}{R_s} \label{series4}$ and $V_{ab} = V \times \frac{R_1}{R_s} \label{series5}$ The circuit in Figure $1$ is an example of a simple voltage divider in that it divides the battery's voltage into parts and allows us to use a single battery to select one of several possible voltages. For example, the voltage between points a and b is $V_{ab} = V \times \frac{R_1}{R_s} \label{series6}$ the voltage between points a and c is $V_{ac} = V \times \frac{R_1 + R_2}{R_s} \label{series7}$ and the voltage between points a and d is $V_{ad} = V \times \frac{R_1 + R_2 + R_3}{R_s} = V \times \frac{R_s}{R_s} = V \label{series8}$ Parallel Circuits Figure $2$ shows an example of a simple DC circuit in which three resistors, with resistances of R1, R2, and R3, are connected parallel to each other. A switch is included in the circuit that is used to close the loop and allow a current to flow from the battery's positive terminal to its negative terminal. 
If we apply Kirchhoff's first law to the current at the point identified as a, then the sum of the currents must equal zero and $\sum{I} = 0 = I - I_1 - I_2 - I_3 \label{parallel1}$ where I is the current entering point a and $I_1$, $I_2$, and $I_3$ are the currents passing through the three resistors. Rearranging Equation \ref{parallel1} and substituting in Ohm's law gives $\frac{V}{R_p} = \frac{V}{R_1} + \frac{V}{R_2} + \frac{V}{R_3} \label{parallel2}$ where $R_p$ is the circuit's effective resistance, which is equivalent to $\frac{1}{R_p} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} \label{parallel3}$ or to $G_p = G_1 + G_2 + G_3 \label{parallel4}$ where $G$ is a resistor's conductance, which is the inverse of its resistance. One of the useful properties of this circuit is that it serves as a current divider. The current passing through the resistor $R_1$ is $\frac{I_1}{I} = \frac{V/R_1}{V/R_p} = \frac{1/R_1}{1/R_p} = \frac{G_1}{G_p} \label{parallel5}$ Multiplying through by the total current gives $I_1 = I \times \frac{G_1}{G_p} \label{parallel6}$ More Complex Circuits The treatment above considers circuits that contain only resistors in series or resistors in parallel. A circuit that contains both resistors in series and resistors in parallel can often be simplified to an equivalent circuit that has only resistors in series or in parallel, or that consists of a single resistor. Figure $3$ provides an example. The circuit at the far left shows two parallel resistors, $R_2$ and $R_3$, that, together, are in series with a third resistor, $R_1$. Using Equation \ref{parallel3} we can replace the two resistors in parallel with a single resistor, $R_4$, where $\frac{1}{R_4} = \frac{1}{R_2} + \frac{1}{R_3} \label{complex1}$ giving the equivalent circuit shown in the middle. Finally, we can use Equation \ref{series2} to replace the two resistors in series with the single resistor, $R_5$, as shown on the far right, where $R_5 = R_1 + R_4 \label{complex2}$ Measuring Voltage and Current Figure $4$ shows a digital multimeter that is used to measure voltage or current (amongst other possible measurements that we will not consider here). The measurement of voltages and currents always contains some error, the magnitude of which we consider here. Errors in Measuring Voltage To measure an unknown voltage, $V_x$, from a source that has an internal resistance of $R_x$, we include the meter with its internal resistance of $R_m$ as part of a voltage divider circuit. We read the voltage displayed on the meter, $V_m$, and use Equation \ref{series5} to determine $V_x$ $V_m = V_x \times \frac{R_m}{R_m + R_x} \label{meter1}$ If we do not know the value of $R_x$, which is often the case, then we can still report an accurate value for $V_x$ if $R_m >> R_x$, as we can then write $V_m = V_x \times \frac{R_m}{R_m + R_x} \approx V_x \times \frac{R_m}{R_m} \approx V_x \label{meter2}$ The percent error, $E_x$, in $V_x$ is $E_x = \frac{V_m - V_x}{V_x} \times 100 = - \frac{R_x}{R_m + R_x} \times 100 \label{meter3}$ For example, suppose that $R_m = 10^3 \times R_x$, then the measurement error is $E = - \frac{R_x}{(10^3 \times R_x) + R_x} \times 100 = - \frac{1}{10^3 + 1} \times 100 = -0.0999\% \label{meter4}$ or approximately –0.1%. Errors in Measuring Current To measure an unknown current, $I_x$, we include the meter in a current divider circuit in which some of $I_x$ is drawn through a load resistor, $R_l$, of known value, and the remaining current is drawn through a known standard resistance set by the meter, $R_m$. 
Using Equation \ref{parallel5} for a current divider, the fraction of $I_x$ that passes through the meter is $\frac{I_m}{I_x} = \frac{R_l}{R_m + R_l} \label{meter5}$ Solving for $I_x$ gives $I_x = I_m \times \left( \frac{R_m + R_l}{R_l} \right) = I_m \times \left(1 + \frac{R_m}{R_l}\right) \label{meter6}$ If the resistors are selected such that $\frac{R_m}{R_l} << 1$, then the current displayed on the meter, $I_m$, is an accurate measure of $I_x$. The percent error in the reported current is $E_x = - \frac{R_m}{R_m + R_l} \times 100 \label{meter7}$ For example, suppose that $R_m = 10^{-3} \times R_l$, then the measurement error is $E = - \frac{10^{-3} \times R_l}{(10^{-3} \times R_l) + R_l} \times 100 = -\frac{10^{-3}}{10^{-3} + 1} \times 100 = -0.0999\%\label{meter8}$ or approximately $-0.1\%$.
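The loading errors described above are easy to verify numerically. The following Python sketch is our own illustration; it evaluates the expressions for the voltage and current errors using the same resistance ratios quoted in the text ($R_m = 10^3 \times R_x$ for voltage and $R_m = 10^{-3} \times R_l$ for current), with the absolute resistances chosen arbitrarily.

```python
# Loading errors when measuring voltage and current with a meter.

def voltage_error(R_m, R_x):
    """Percent error in V_x when the meter resistance R_m loads the source."""
    return -R_x / (R_m + R_x) * 100

def current_error(R_m, R_l):
    """Percent error in I_x when the meter resistance R_m shunts the load R_l."""
    return -R_m / (R_m + R_l) * 100

R_x = 1.0e3      # hypothetical source resistance, ohms
R_l = 1.0e3      # hypothetical load resistance, ohms

print(f"voltage error: {voltage_error(R_m=1e3 * R_x, R_x=R_x):.4f}%")    # ~ -0.0999%
print(f"current error: {current_error(R_m=1e-3 * R_l, R_l=R_l):.4f}%")   # ~ -0.0999%
```

Both results reproduce the approximately –0.1% errors quoted in the text.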
A direct current has a fixed value that is independent of time. An alternating current, on the other hand, has a value that changes with time. This change in current follows a pattern that we can characterize by its period—the time, $t_p$, for one complete cycle—or by its frequency, $f$, which is the reciprocal of its period $f = \frac{1}{t_p} \label{sine1}$ Frequency is reported in hertz (Hz), which is equivalent to one cycle per second. Sinusoidal Currents and Voltages Although we can draw many periodic signals—and will do so in later chapters—the simplest periodic signal is a sine wave: as shown on the right side of Figure $1$, the sine wave is a propagating wave whose amplitude, $A$, is a function of time, $t$, which we write as $a(t)$. The left side of Figure $1$ provides a rotating vector representation of the sine wave (a representation we will encounter again in Chapter 19 on NMR spectroscopy). The vector is the arrow that extends from the center of the circle to the circle's edge. It rotates to the left with an angular velocity, $\omega$, sweeping out $2 \pi$ radians during each period, $t_p$, of the sine wave; thus $\omega = \frac{2 \pi}{t_p} = 2 \pi f \label{sine2}$ where $f$ is the frequency. The amplitude of the sine wave as a function of time, $a(t)$, is equivalent to the projection of the rotating vector onto the x-axis; thus $a(t) = A\sin{\omega t} = A\sin{2 \pi f t} \label{sine3}$ In the context of this chapter, the amplitude is either a current, $i$, or a voltage, $v$. $i(t) = I\sin{\omega t} = I\sin{2 \pi f t} \label{sine4}$ $v(t) = V\sin{\omega t} = V\sin{2 \pi f t} \label{sine5}$ where $I$ is the maximum, or peak current, and $V$ is the maximum, or peak voltage. Equations \ref{sine4} and \ref{sine5} require that a sine wave's time-dependent amplitude, $a(t)$, has a value of zero when $\omega t = n \pi$, where $n$ is an integer. There is no reason to insist on this and two sine waves can be separated from each other in time, as shown in Figure $2$, by a phase angle, $\Phi$. The equation for the sine wave when $\Phi \ne 0$ becomes $a(t) = A\sin{(\omega t + \Phi)} = A\sin{(2 \pi f t + \Phi)} \label{sine6}$ One complication of an alternating current is that the net current over the course of a single cycle is zero. This is a problem for us because the equation for power in a resistor is $P = I^2 R \ne 0 \label{sine7}$ Figure $3$ shows several ways to report current in AC circuits. The root-mean-square current, $I_{rms}$, is defined as $I_{rms} = \sqrt{\frac{I_p^2}{2}} = \frac{I_p}{\sqrt{2}} = 0.707 \times I_p \label{sine8}$ and yields the same power in an AC circuit as a direct current of equal value in a DC circuit. The average current, $I_{avg}$, is $I_{avg} = \frac{1}{\pi} \int_{0}^{\pi}I_p \sin{\omega t}\, d(\omega t) = \frac{2 I_p}{\pi} = 0.637 \times I_p \label{sine9}$ Capacitors A capacitor is a component of circuits that is capable of storing charge. Figure $4$ shows the design of a typical capacitor and its symbol when constructing an electrical circuit. The capacitor consists of two conducting plates separated by a thin layer of an insulating, or dielectric material. The plates have areas of $A$ and are separated by a distance, $d$. The dielectric material has a dielectric constant, $\epsilon$. A simple capacitor might consist of two pieces of a metal foil separated by air, which serves as the dielectric material. 
A capacitor's ability to store charge, $Q$, is given by $Q = C \times V \label{cap1}$ where $V$ is the voltage applied across the two plates and where $C$ is the capacitor's capacitance, which, in turn, is defined as $C = \frac{\epsilon A}{d} \label{cap2}$ Capacitance is measured in units of farads, where one farad is equal to one coulomb per volt. Resistor and Capacitor in Series Figure $5$ shows a resistor, with a resistance of $R$, and a capacitor, with a capacitance of $C$, in series with a voltage source, with a voltage of $V$. When the switch is closed, current flows as the capacitor builds up a charge. From Kirchhoff's laws, we know that $V = v_R + v_C = iR + \frac{Q}{C} \label{cap3}$ where $v_R$ and $v_C$ are, respectively, the time-dependent voltages across the resistor and the capacitor. Because $V$ has a fixed value, any increase in $v_C$ as the capacitor is charged is offset by a decrease in $v_R$. Given that the values of $v_C$ and $v_R$—and the associated currents—are time-dependent, we can differentiate Equation \ref{cap3} with respect to time $\frac{dV}{dt} = 0 = \left( R \times \frac{di}{dt} \right) + \left( \frac{1}{C} \times \frac{dq}{dt} \right) = \left(R \times \frac{di}{dt}\right) + \frac{i}{C} \label{cap4}$ Rearranging Equation \ref{cap4} gives $\frac{di}{i} = - \frac{1}{RC}dt \label{cap5}$ Integrating both sides of this equation $\int_{I_{0}}^{i} \frac{1}{i} di = -\frac{1}{RC} \int_{0}^{t} dt \label{cap6}$ leads to the following relationship between the current at time $t$ and the initial current, $I_0$ $i_t = I_0 \times e^{-t/RC} \label{cap7}$ which tells us that the current decreases exponentially as the capacitor becomes fully charged. Recognizing that the initial current, $I_0$, is equal to $\frac{V}{R}$ and substituting Equation \ref{cap7} back into Equation \ref{cap3} $v_C = V \left( 1 - e^{-t/RC} \right) \label{cap8}$ shows us that during the time the capacitor is being charged, the current flowing through the capacitor is decreasing exponentially to its limit of zero, and the voltage across the capacitor is increasing exponentially to its limit of the applied voltage. Time Constant The value $RC$ in Equation \ref{cap7} and in Equation \ref{cap8} is the circuit's time constant. It takes approximately five time constants for the capacitor to fully charge or fully discharge. Figure $6$ shows the voltage across the capacitor, $v_C$, as it is allowed to charge and to discharge. Time is shown in terms of the number of elapsed time constants, and voltage is expressed as a fraction of the maximum voltage. The dashed line shows that, after one time constant, $RC$, the voltage across the charging capacitor reaches $0.63 \times$ the maximum voltage. Response of a Series RC Circuit to a Sinusoidal Input If we replace the DC voltage source in Figure $5$ with an AC source, then the capacitor will undergo a continuous fluctuation in its voltage and current as a function of time. We know, from Equation \ref{cap1}, that charge, $Q$, is the product of capacitance, $C$, and voltage, $V$, which we can write as a derivative with respect to time. $\frac{dq}{dt} = C \times \frac{dv}{dt} \label{ac1}$ Phase Shift in an AC Circuit In an AC circuit, as we learned earlier in Equation \ref{sine4}, the current, which is equivalent to $dq/dt$, is $i = I_p \sin{2 \pi f t} \label{ac2}$ where $I_p$ is the peak current. 
Substituting into Equation \ref{ac1} gives $i = I_p \sin{2 \pi f t} = C \times \frac{dv}{dt} \label{ac3}$ Rearranging this equation and integrating over time gives the time-dependent voltage across the capacitor, $v_C$, as $v_C = \frac{I_p}{2 \pi f C} \left( -\cos{2 \pi f t} \right) \label{ac4}$ We can rewrite this equation in terms of a sine function instead of a cosine function by recognizing that the two are 90° out of phase with each other; thus $v_C = \frac{I_p}{2 \pi f C} \sin{(2 \pi f t - 90^\circ)} = V_p \sin{(2 \pi f t - 90^\circ)} \label{ac5}$ Comparing Equation \ref{ac2} and Equation \ref{ac5}, we see that the current and the voltage are 90° out-of-phase with each other; Figure $7$ shows this visually. Capacitive Reactance, Resistance, and Impedance From Equation \ref{ac5} we see that $V_p = \frac{I_p}{2 \pi f C} \label{ac6}$ Dividing both sides by $I_p$ gives $\frac{V_p}{I_p} = X_C = \frac{1}{2 \pi f C} \label{ac7}$ where $X_C$ is the capacitor's reactance, which, like a resistor's resistance, has units of ohms. Unlike a resistor, however, a capacitor's reactance is frequency dependent and, given the reciprocal relationship between $X_C$ and $f$, it becomes smaller at higher frequencies. In an RC circuit, both the resistor and the capacitor contribute to the circuit's impedance to the alternating current. Because the contribution of the capacitor is 90° out-of-phase to the contribution from the resistor, the net impedance, $Z$, is $Z = \sqrt{R^2 + X_C^2} \label{ac8}$ as shown in Figure $8$ where the vector that represents $Z$ is the hypotenuse of a right triangle defined by the resistor's resistance and the capacitor's reactance. Substituting in Equation \ref{ac7} shows the effect of frequency on impedance. $Z = \sqrt{R^2 + \left( \frac{1}{2 \pi f C} \right)^2} \label{ac9}$ Writing Ohm's law in terms of impedance, $V_p = I_p \times Z$, and substituting it into Equation \ref{ac9} defines $I_p$ and $V_p$ in terms of impedance. $V_p = I_p \times \sqrt{R^2 + \left( \frac{1}{2 \pi f C} \right)^2} \label{ac10}$ $I_p = \frac{V_p}{\sqrt{R^2 + \left( \frac{1}{2 \pi f C} \right)^2}} \label{ac11}$ Filters Based on RC Circuits The frequency dependence of an RC circuit provides us with the ability to attenuate some frequencies and to pass other frequencies. This allows for the selective filtering of an input signal. Here we consider the design of a low-pass filter that removes higher frequency signals, and the design of a high-pass filter that removes lower frequency signals. Figure $9$ shows that (a) a low-pass filter places the resistor before the capacitor and measures the output voltage, $V_{out}$, across the capacitor, and that (b) a high-pass filter places the capacitor before the resistor and measures the output voltage, $V_{out}$, across the resistor. Low-Pass Filter For the low-pass filter in Figure $9a$, the ratio of the voltage across the capacitor, $(V_p)_{out}$, to the peak input voltage, $(V_p)_{in}$, is equal to the fraction of the circuit's impedance, $Z$, attributed to the capacitor's reactance, $X_C$, as expected for a voltage divider that consists of elements in series. $\frac{(V_p)_{out}}{(V_p)_{in}} = \frac{X_C}{Z} = \frac{(2 \pi f C)^{-1}}{\sqrt{R^2 + \left( \frac{1}{2 \pi f C}\right)^2}} \label{lowpass1}$ Figure $10a$ shows the frequency response for a low-pass filter with a $1 \times 10^6 \ \Omega$ resistor and a $1 \times 10^{-6} \text{ F}$ capacitor, removing all frequencies greater than approximately $10^1$ Hz. 
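Equation \ref{lowpass1} is easy to evaluate numerically. The short Python sketch below is ours, not part of the text; it uses the same component values as Figure $10a$ ($R = 1 \times 10^6 \ \Omega$ and $C = 1 \times 10^{-6}$ F) to show how the output falls off as the frequency increases.

```python
import math

def low_pass_ratio(f, R, C):
    """(V_p)_out / (V_p)_in for the series RC low-pass filter of Figure 9a."""
    X_C = 1 / (2 * math.pi * f * C)          # capacitive reactance, ohms
    Z = math.sqrt(R**2 + X_C**2)             # impedance of the series RC circuit
    return X_C / Z

R, C = 1e6, 1e-6                              # values from Figure 10a
for f in (0.01, 0.1, 1, 10, 100):             # frequencies in Hz
    print(f"f = {f:>6} Hz  ->  (Vp)out/(Vp)in = {low_pass_ratio(f, R, C):.3f}")
```

The printed ratios fall from nearly 1 at 0.01 Hz to less than 0.02 at 10 Hz, consistent with the description of Figure $10a$.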
High-Pass Filter For the high-pass filter in Figure $9b$, the ratio of the voltage across the resistor, $(V_p)_{out}$, to the peak input voltage, $(V_p)_{in}$, is equal to the fraction of the circuit's impedance, $Z$, attributed to the resistor's resistance, $R$, as expected for a voltage divider that consists of elements in series. $\frac{(V_p)_{out}}{(V_p)_{in}} = \frac{R}{Z} = \frac{R}{\sqrt{R^2 + \left( \frac{1}{2 \pi f C}\right)^2}} \label{lowpass2}$ Figure $10b$ shows the frequency response for a high-pass filter with a $1 \times 10^5 \ \Omega$ resistor and a $1 \times 10^{-7} \text{ F}$ capacitor, removing all frequencies less than approximately $10^{-1}$ Hz.
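A matching sketch, again ours, evaluates Equation \ref{lowpass2} for the high-pass filter using the component values from Figure $10b$ ($R = 1 \times 10^5 \ \Omega$ and $C = 1 \times 10^{-7}$ F).

```python
import math

def high_pass_ratio(f, R, C):
    """(V_p)_out / (V_p)_in for the series RC high-pass filter of Figure 9b."""
    X_C = 1 / (2 * math.pi * f * C)          # capacitive reactance, ohms
    return R / math.sqrt(R**2 + X_C**2)

R, C = 1e5, 1e-7                              # values from Figure 10b
for f in (0.1, 1, 10, 100, 1000):             # frequencies in Hz
    print(f"f = {f:>6} Hz  ->  (Vp)out/(Vp)in = {high_pass_ratio(f, R, C):.3f}")
```

Here the ratios are small at the lowest frequencies and approach 1 above about 100 Hz, so the filter passes high frequencies and attenuates low frequencies, as described for Figure $10b$.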
A semiconductor is a material whose resistivity to the movement of charge falls somewhere between that of a conductor, through which we can move a charge easily, and an insulator, which resists the movement of charge. Some semiconductors are elemental, such as silicon and germanium (both of which we examine more closely in this section) and some are multielemental, such as silicon carbide. Properties of Silicon and Germanium Semiconductors A conductor, such as aluminum or copper, has a resistivity on the order of $10^{-8} - 10^{-6} \ \Omega \cdot m$, which means that its resistance to the movement of electrons is sufficiently small that it carries a current without much effort. An insulator, such as the mineral quartz, $\ce{SiO2}$, has a resistivity on the order of $10^{15} - 10^{19} \ \Omega \cdot m$. Silicon, on the other hand, has a resistivity of approximately $640 \ \Omega \cdot m$ and germanium has a resistivity of about $0.46 \ \Omega \cdot m$. Note The inverse of resistivity is conductivity. Silicon and germanium are in the same group as carbon. If we use a simplified view of atoms, we can treat silicon and germanium as having four valence electrons and an effective nuclear charge, $Z_{eff}$, of \begin{align*} (Z_{eff})_\ce{Si} &= Z - \text{ number of core electrons} \\[4pt] &= 14 - 10 \\[4pt] &= +4 \\[4pt] (Z_{eff})_\ce{Ge} &= Z - \text{ number of core electrons} \\[4pt] &= 32 - 28 \\[4pt] &= +4 \end{align*} We can increase the conductivity of silicon and germanium by adding to them—this is called doping—a small amount of an impurity. Adding a small amount of In or Ga, which have three valence electrons instead of four valence electrons, leaves a small number of vacancies, or holes, in which an electron is missing. Adding a small amount of As or Sb, which have five valence electrons instead of four valence electrons, leaves a small number of extra electrons. Figure $1$ shows all three possibilities. If we apply a potential across the semiconductor doped with As or Sb, the extra electron moves toward the positive pole, creating a small current, and leaving behind a vacancy, or hole. If we apply a potential across the semiconductor doped with In or Ga, electrons enter the semiconductor from the negative pole, occupying the vacancies, or holes, and creating a small current. In both cases, electrons move toward the positive pole and holes move toward the negative pole. We call an As or Sb doped semiconductor an n-type semiconductor because the primary carrier of charge is an electron; we call an In or Ga doped semiconductor a p-type semiconductor because the primary carrier of charge is the hole. Semiconductor Diodes A diode is an electrical device that is more conductive in one direction than in the opposite direction. A diode takes advantage of the properties of the junction between a p-type and an n-type semiconductor. Properties of pn Junctions Let's use Figure $2$ to make sense of how a semiconductor diode works. The figure is divided into two parts: the left side of the figure, parts (a), (b), and (c), shows the behavior of the semiconductor diode when a forward bias is applied, and the right side of the figure, parts (d), (e), and (f), shows its behavior when a reverse bias is applied. For both, the semiconductor diode consists of a junction between an n-type semiconductor, which has an excess of electrons and carries a negative charge, and a p-type semiconductor, which has an excess of holes and, thus, a positive charge; this is shown in (a) and (d). 
How the semiconductor is manufactured is not important to us. To effect a forward bias, we apply a positive potential to the p-type region and apply a negative potential to the n-type region. As we see in (b), the holes in the p-region move toward the junction and the electrons in the n-region move toward the junction as well. When holes and electrons meet they combine and are eliminated, which is why we see fewer holes and electrons in (c). Additional electrons flow into the n-region and electrons are pulled away from the p-region, as seen in (c), resulting in a current. To effect a reverse bias, we switch the applied potentials so that the p-region has the negative potential and the n-region has the positive potential. The result, as seen in (e), is a brief current as the holes and electrons move away from each other, leaving behind, in (f), a depletion zone that has essentially no electrons or holes. Current-Voltage Curves for Semiconductor Diodes Figure $3$ shows a plot of current as a function of voltage for a semiconductor diode. In forward bias mode the current increases exponentially with an increase in applied voltage, but remains at essentially zero when operated under a reverse bias. The use of a sufficiently large negative potential, however, does result in a sudden and dramatic increase in current; the potential at which this happens is called the breakdown voltage.
• 3.1: Operational Amplifiers An operational amplifier (or op amp, for short) is an electrical circuit that has a variety of uses, a few of which we consider in this section: how to amplify and measure the signal from a transducer (detector), and how to perform mathematical operations on signals. In this section we will provide a basic overview of operational amplifiers without worrying about the specific internal details of its electrical circuit. • 3.2: Operational Amplifier Circuits In the last section we noted that an operational amplifier magnifies the difference between two voltage inputs where the gain is typically between 10,000 and 1,000,000. To better control the gain—that is, to make the gain something we can adjust to meet our needs—the operational amplifier is incorporated into a circuit that allows for feedback between the output and the inputs. In this section, we examine two feedback circuits. • 3.3: Amplification and Measurement of Signals The basic components of an instrument are a probe that interacts with the sample, an input transducer that converts the sample's chemical and/or physical properties into an electrical signal, a signal processor that converts the electrical signal into a form we can understand. Information is encoded in two broad ways: as electrical information and as information in other, non-electrical forms. In this section we will consider how we can measure electrical signals. • 3.4: Mathematical Operations Using Operational Amplifiers The circuit for comparing two voltages is an example of using an operational amplifier to complete a mathematical operation. In this section we will examine several additional examples of mathematical operations completed using operational amplifiers. 03: Operational Amplifiers in Chemical Instrumentation (TBD) An operational amplifier (or op amp, for short) is an electrical circuit that has a variety of uses, a few of which we consider in this section: how to amplify and measure the signal from a transducer (detector), and how to perform mathematical operations on signals. In this section we will provide a basic overview of operational amplifiers without worrying about the specific internal details of its electrical circuit. Note An excellent resource for this section and other sections in this chapter is Principles of Electronic Instrumentation by A. James Diefenderfer and published by W. B. Saunders Company, 1972. Symbolic Representation of an Operational Amplifier Figure $1$ provides a symbolic representation of an operational amplifier. The large triangular shape is the operational amplifier, which is an extensive circuit whose exact design is not of interest to us; thus, the simple shape. The operational amplifier has two voltage inputs that are identified as $v_{-}$ and as $v_{+}$ and labeled as $-$ and $+$ on the op amp. The difference between $v_{-}$ and $v_{+}$ is defined as $v_{s}$. The operational amplifier also has a single voltage output that is identified as $v_{out}$. All voltages are measured relative to a circuit common of 0 V, represented by the small triangle at the bottom of the figure, that provides a shared reference; the circuit common is understood to be present even when it is not shown. Not included in this figure are the connections to a power supply, which are necessary for its operation. 
Inverting and Noninverting Inputs The minus sign and the plus sign that appear as labels on the op amp in Figure $1$ do not mean that one input has a positive value and that the other input has a negative value. Instead, an input to the lead with a negative sign is inverted: if $v_{-}$ is a negative DC voltage, then the output voltage, $v_{out}$, is a positive DC voltage, and if $v_{-}$ is a positive DC voltage, then the output voltage, $v_{out}$, is a negative DC voltage. For an AC input to $v_{-}$, the output is 180° out-of-phase, which implies a reversal in sign. The other input to the op amp is noninverting, which means that applying a positive voltage to $v_{+}$ results in a positive signal at $v_{out}$. Key Properties of Operational Amplifiers The ideal operational amplifier has several important properties that derive from its internal circuitry. The first of these properties is that the op amp's gain, $A$, which is defined as the ratio of the output voltage to the input voltage $A_{op} = - \frac{v_{out}}{v_{s}} = - \frac{v_{out}}{v_{-} - v_{+}} \label{prop1}$ is very large, typically on the order of $10^4$ – $10^6$. We need to be careful when we use the term gain as there can be a significant difference between the gain of the operational amplifier and the gain of the circuit that contains the operational amplifier. The gain of the operational amplifier, which is what we mean by Equation \ref{prop1}, is called the open-loop gain. The gain of a circuit that contains an operational amplifier is called a closed-loop gain. Where there is ambiguity, we will be careful to refer to the op amp's gain, $A_{op}$, or to the circuit's gain, $A_c$, as these are more descriptive. A second property of an operational amplifier is that regardless of the specific values of $v_{-}$ and $v_{+}$, the op amp's internal circuitry is designed such that the current between the two inputs is effectively zero; in essence, the impedance, $Z$, between the two inputs is so large that from Ohm's law, $V = I \times Z$, the current between these two inputs is $I \approx 0$. A large input impedance means we can connect our op amp to a high voltage source and know that it will draw a small current instead of overloading the circuit that includes the op amp. A third property of an operational amplifier is that its output impedance is very small, which means we can draw a current from the circuit that meets our needs—this current is drawn from the op amp's power supply—even if the current into the op amp is zero. For example, if the circuit's gain is small, we can use the operational amplifier to provide a large gain in current.
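To get a feel for how large the open-loop gain is, the following short Python sketch, which is ours and uses hypothetical values, rearranges Equation \ref{prop1} to show that even a several-volt output requires only a microvolt-scale difference between the two inputs.

```python
# How small is (v_minus - v_plus) for a typical op amp output?
A_op = 1.0e5          # hypothetical open-loop gain (typical range 1e4 to 1e6)
v_out = 5.0           # hypothetical output voltage, volts

# From A_op = -v_out / (v_minus - v_plus):
v_s = -v_out / A_op   # difference between the inverting and noninverting inputs
print(f"v_s = {v_s * 1e6:.1f} microvolts")   # -50.0 microvolts
```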
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/03%3A_Operational_Amplifiers_in_Chemical_Instrumentation_(TBD)/3.01%3A_Properties_of_Operational_Amplifiers.txt
In the last section we noted that an operational amplifier magnifies the difference between two voltage inputs $A_{op} = - \frac{v_{out}}{v_{-} - v_{+}} \label{prop1}$ where the gain, $A_{op}$, is typically between $10^4$ and $10^6$. To better control the gain—that is, to make the gain something we can adjust to meet our needs—the operational amplifier is incorporated into a circuit that allows for feedback between the output and the inputs. In this section, we examine two feedback circuits. The Inverting Amplifier Circuit Figure $1$ is an example of an operational amplifier circuit with a negative feedback loop that consists of a resistor, $R_f$, that connects the op amp's output to its input at a summing point, $S$. Because the feedback loop is connected to the op amp's inverting input, the effect is called negative feedback. We can analyze this circuit using the laws of electricity from Chapter 2. Let's begin by rearranging Equation \ref{prop1} to solve for $v_{out}$ $v_{out} = - A_{op} \times (v_{-} - v_{+}) \label{negfb0}$ and then expand the right side of this equation $v_{out} = -A_{op} \times (v_{-} - v_{+}) = -A_{op} \times v_{-} + A_{op} \times v_{+} \label{negfb1}$ and solve for $v_{-}$ $v_{-} = v_{+} - \frac{v_{out}}{A_{op}} \label{negfb2}$ Because the op amp's gain $A_{op}$ is so large—recall that it is typically in the range $10^4$ to $10^6$—we can simplify Equation \ref{negfb2} to $v_{-} \approx v_{+} \label{negfb3}$ One consequence of Equation \ref{negfb3} is that for this circuit $v_{-} \approx 0 \text{ V}$ because $v_{+}$ is held at the circuit common. From Kirchhoff's laws, we know that the total current that enters the summing point must equal the total current that leaves the summing point, or $I_{in} = I_s + I_f \label{negfb4}$ where $I_s$ is the current between the op amp's two inputs. As we noted in Chapter 3.1, an operational amplifier's internal circuitry is designed such that $I_s \approx 0$; thus $I_{in} = I_f \label{negfb5}$ Substituting in Ohm's law ($V = I \times R$) gives $\frac{v_{in} - v_{-}}{R_{in}} = \frac{v_{-} - v_{out}}{R_f} \label{negfb6}$ From Equation \ref{negfb3}, we know that $v_{-} \approx 0$, which allows us to simplify Equation \ref{negfb6} to $\frac{v_{in}}{R_{in}} = -\frac{v_{out}}{R_f} \label{negfb7}$ Rearranging, we find that the gain for the circuit, $A_c$, is $A_c = \frac{v_{out}}{v_{in}} = - \frac{R_f}{R_{in}} \label{negfb8}$ Equation \ref{negfb8} shows us that the circuit in Figure $1$ returns a voltage, $v_{out}$, that has the opposite sign of $v_{in}$ with a gain for the circuit that depends on only the relative values of the two resistors, $R_f$ and $R_{in}$. The Voltage-Follower Circuit Figure $2$ shows another operational amplifier with a feedback loop. In this case the input to the op amp, $v_{in}$, is made to the noninverting lead and the output is fed back into the op amp's inverting lead. From Kirchhoff's voltage law, we know that the op amp's output voltage is equal to the sum of the input voltage and the difference, $v_s$, between the voltages applied to the op amp's two leads; thus $v_{out} = v_{in} + v_{s} \label{follow1}$ The op amp's gain, $A_{op}$, is defined in terms of $v_s$ and $v_{out}$ $- A_{op} = \frac{v_{out}}{v_{s}} \label{follow2}$ where the minus sign is due to the change in sign between the output voltage and the voltage applied to the inverting lead.
Substituting Equation \ref{follow2} into Equation \ref{follow1} gives $v_{in} - \frac{v_{out}}{A_{op}} = v_{out} \label{follow3}$ Because the operational amplifier's gain—which is not the same thing as the circuit's gain—is large, Equation \ref{follow3} becomes $v_{in} = v_{out} \label{follow4}$ Our analysis of this circuit shows that it returns the original voltage without any gain. It does, however, allow us to draw more current from the circuit's output than the original voltage source may be able to supply on its own.
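The closed-loop gain in Equation \ref{negfb8} is easy to explore numerically. The resistor and voltage values in this short R sketch are assumptions chosen for illustration, not values from the text.

```r
# Ideal inverting-amplifier gain, A_c = -R_f / R_in (Equation negfb8).
inverting_gain <- function(R_f, R_in) -R_f / R_in

R_in <- 10e3      # assumed 10 kOhm input resistor
R_f  <- 100e3     # assumed 100 kOhm feedback resistor
A_c  <- inverting_gain(R_f, R_in)    # circuit gain of -10

v_in  <- 0.25                        # input voltage, V
v_out <- A_c * v_in                  # -2.5 V: amplified and inverted
c(gain = A_c, v_out = v_out)
```

Swapping the two resistor values gives a gain of −0.1, which is the division-by-a-constant case that appears again in Section 3.4.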
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/03%3A_Operational_Amplifiers_in_Chemical_Instrumentation_(TBD)/3.02%3A_Operational_Amplifier_Circuits.txt
In Chapter 1.3 we identified the basic components of an instrument as a probe that interacts with the sample, an input transducer that converts the sample's chemical and/or physical properties into an electrical signal, a signal processor that converts the electrical signal into a form that an output transducer can convert into a numerical or a visual output that we can understand. We can represent this as a sequence of actions that take place within the instrument $\text{probe} \rightarrow \text{sample} \rightarrow \text{input transducer} \rightarrow \text{raw data} \rightarrow\text{signal processor} \rightarrow \text{output transducer} \nonumber$ creating the following general flow of information $\text{chemical and/or physical information} \rightarrow \text{electrical information} \rightarrow \text{numerical or visual response} \nonumber$ As suggested above, information is encoded in two broad ways: as electrical information (such as currents and potentials) and as information in other, non-electrical forms (such as chemical and physical properties). In this section we will consider how we can measure electrical signals. Current Measurements In Chapter 7 we will introduce the phototube, seen here in Figure $1$, as a transducer for converting photons of light into an electrical current that we can measure. A photon of light strikes a photoemissive cathode and ejects an electron that is then drawn to an anode that is held at a positive potential. The resulting current is our analytical signal. As each photon generates a single electron, the resulting current is small and needs amplifying if it is to be useful to us. Operational amplifiers provide a way to accomplish this amplification. Figure $2$ shows a simple electrical circuit that we can use to amplify and measure a small current. If you compare this to the inverting amplifier circuit in Chapter 3.2, you will see that we are replacing the input voltage and input resistor with the input current, $I_x$, that we wish to measure. From Kirchhoff's current law, we know that at the summing point, $S$, the current from the transducer is equal to the sum of the current through the feedback loop, $I_f$, and the current to the op amp's inverting input, $I_s$. $I_x = I_s + I_f \label{iv1}$ As we learned in the last section, $I_s \approx 0$, which means $I_x \approx I_f$. From Ohm's law, we have $V_{out} = - I_f \times R_f = -I_x \times R_f \label{iv2}$ Rearranging to solve for $I_x$ $I_x = - \frac{V_{out}}{R_f} = k V_{out} \label{iv3}$ shows us that there is a linear relationship between the voltage we measure from the circuit's output and the current that enters the circuit. By choosing to make $R_f$ large, a small current is converted into a voltage that is easy to measure. In addition, we know from Chapter 2 that the error in measuring the current from the transducer, $E_x$, is $E_x = - \frac{R_m}{R_m + R_l} \times 100 \label{iv4}$ where $R_m$ is the resistance of the measuring circuit and $R_l$ is the resistance of the source, which generally is large. The resistance of the measuring circuit is $R_{m} = \frac{R_f}{A_{op}} \label{iv5}$ If we choose $R_f$ such that it is similar in magnitude to the op amp's gain, $A_{op}$, then $R_m$ is small and the relative error is small as well. Potential Measurements From Chapter 2 we know that the error in measuring voltage, $E_x$, is a function of the resistance of the measuring circuit, $R_m$, and the resistance of the source, $R_x$.
$E_x = \frac{V_m - V_x}{V_x} \times 100 = - \frac{R_x}{R_m + R_x} \times 100 \label{volt1}$ To maintain a small measurement error requires that $R_x << R_m$. This creates a complication when the voltage source has a high internal resistance, as is the case, for example, when we measure pH using a glass electrode where the internal resistance is on the order of $10^7 - 10^8 \Omega$ (see Chapter 23 for details about glass electrodes). The inverting amplifier circuit discussed in Chapter 3.2 has an input resistance of perhaps $10^5 \Omega$. To increase $R_m$, the voltage we wish to measure, $V_x$, is first run through a voltage follower circuit, where the input resistance is on the order of $10^{12} \Omega$, and the output is then run through the inverting amplifier, as seen in Figure $3$. The result is an amplified output voltage measured under conditions where the relative error is small. Comparison of Transducer Outputs In Chapter 13 we will cover molecular absorption spectroscopy in which we measure the absorbance of a sample relative to the absorbance of a reference. A difference amplifier, such as that shown in Figure $4$, allows us to amplify and measure the difference between two voltages. In this circuit, the two voltages, $v_1$ and $v_2$, are fed into the op amp's two inputs, $v_{-}$ and $v_{+}$, passing through identical resistors. A feedback loop with a resistor connects $v_1$ to $v_{out}$ and a resistor identical to that in the feedback loop connects $v_2$ to the circuit common. We can use Ohm's law to define the currents $I_1$ and $I_f$ as $I_1 = \frac{v_1 - v_{-}}{R_i} \label{comp1}$ $I_f = \frac{v_{-} - v_{out}}{R_f} \label{comp2}$ By now you should see that the currents $I_1$ and $I_f$ are approximately the same because the op amp's high internal impedance prevents current from flowing into the op amp. Combining Equation \ref{comp1} and Equation \ref{comp2} gives $\frac{v_1 - v_{-}}{R_i} = \frac{v_{-} - v_{out}}{R_f} \label{comp3}$ which we can solve for the voltage at the op amp's inverting input $R_f v_1 - R_f v_{-} = R_i v_{-} - R_i v_{out} \label{comp4}$ $R_i v_{-} + R_f v_{-} = R_f v_1 + R_i v_{out} \label{comp5}$ $v_{-} = \frac{R_f v_1 + R_i v_{out}}{R_i + R_f} \label{comp6}$ The input to the op amp's noninverting lead is the output of a voltage divider (see Chapter 2) acting on $v_2$ $v_{+} = v_2 \times \left( \frac{R_f}{R_i + R_f} \right) \label{comp7}$ The feedback loop works to ensure that the voltages $v_{-}$ and $v_{+}$ are identical; thus $\frac{R_f v_1 + R_i v_{out}}{R_i + R_f} = v_2 \times \left( \frac{R_f}{R_i + R_f} \right) \label{comp8}$ which we can simplify to $v_1 R_f + v_{out} R_i = v_2 R_f \label{comp9}$ $v_{out} = \frac{R_f}{R_i} \times (v_2 - v_1) \label{comp10}$ The output voltage from the circuit is equal to the difference between the two input voltages, but amplified by the ratio of the resistance of $R_f$ to $R_i$.
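Equation \ref{comp10} is easy to check numerically. In this short sketch the resistor values, the two input voltages, and their identification with a reference and a sample channel are assumptions made only for illustration.

```r
# Difference amplifier output, v_out = (R_f / R_i) * (v2 - v1) (Equation comp10).
diff_amp <- function(v1, v2, R_f, R_i) (R_f / R_i) * (v2 - v1)

R_i <- 10e3      # assumed 10 kOhm input resistors
R_f <- 50e3      # assumed 50 kOhm feedback resistor, so the difference is amplified 5x
v1  <- 0.210     # e.g. voltage from a reference channel, V (assumed)
v2  <- 0.185     # e.g. voltage from a sample channel, V (assumed)
diff_amp(v1, v2, R_f, R_i)    # -0.125 V: five times the -0.025 V difference
```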
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/03%3A_Operational_Amplifiers_in_Chemical_Instrumentation_(TBD)/3.03%3A_Amplification_and_Measurement_of_Signals.txt
The circuit for comparing two voltages is an example of using an operational amplifier to complete a mathematical operation. In this section we will examine several additional examples of mathematical operations completed using operational amplifiers. Multiplication and Division by a Constant The inverting amplifier that we considered earlier, and that is reproduced here in Figure $1$, returns an output voltage, $v_{out}$ that multiplies the input voltage by an amount that depends on the ratio of the resistors $R_{in}$ and $R_f$. $v_{out} = - v_{in} \times \frac{R_f}{R_{in}} \label{math1}$ Multiplication takes place when $R_f > R_{in}$ and division takes place when $R_{in} > R_f$. Note that there is a reversal in the sign of the voltage. Addition or Subtraction Figure $2$ shows an operational amplifier circuit that adds together four separate input voltages. From our earlier analysis of circuits, you should see that $I_f = I_1 + I_2 + I_3 + I_4 \label{math2}$ We can replace $I_f$ in this equation using Ohm's law; thus $v_{out} = - R_f \times \left( \frac{v_1}{R_1} + \frac{v_2}{R_2} + \frac{v_3}{R_3} + \frac{v_4}{R_4} \right) \label{math3}$ If all five of the resistors are identical, then $v_{out}$ is a simple summation of the four input voltages. $v_{out} = - (v_1 + v_2 + v_3 + v_4) \label{math4}$ If we choose $R_f$ such that it is $0.25 \times R_1$ and set $R_1 = R_2 = R_3 = R_4$, then the output voltage is the average of the input voltages $v_{out} = - \frac{v_1 + v_2 + v_3 + v_4}{4} \label{math5}$ The voltage comparator covered in the last section subtracts one voltage from another. When more than two voltages are involved, then we can adapt the voltage adder circuit in Figure $2$ to include subtraction by first running the voltage we wish to subtract through the inverting amplifier introduced in Chapter 3.2. Figure $3$ shows this where $v_{out} = - (v_4 + v_3 + v_2 - v_1)$. Integration Figure $4$ shows an operational amplifier circuit that we can use to integrate a time-dependent signal. The circuit has a feedback loop, but it is built around a capacitor instead of a resistor because it stores charge over time. The circuit also has two switches that allow us to use the circuit over a specific period of time. When the hold switch is open, the input voltage cannot enter the circuit. Closing the hold switch sets $t = t_0$. As long as the reset switch is open, current moves through the feedback loop. Opening the hold switch sets $t = t_f$, where $t_f$ is the total elapsed time. When this cycle is over, closing the reset switch drains the capacitor so that it is ready for its next use. As we have seen several times, the current into the summing point, $I_{in}$ is equal to the current in the feedback loop. $I_{in} = I_f \label{int1}$ From Chapter 2, we know that the current in the feedback loop is $I_f = - C_f \frac{d v_{out}}{dt}$ and, we know from Ohm's law that $i_{in} = \frac{V_{in}}{R_{in}}$. 
Substituting both relationships into Equation \ref{int1} gives $\frac{v_{in}}{R_{in}} = - C_f \frac{d v_{out}}{dt} \label{int2}$ Rearranging this equation $d v_{out} = -\frac{v_{in}}{R_{in} C_f} dt \label{int3}$ and integrating over time gives $\int_{v_{out,1}}^{v_{out,2}} d v_{out} = -\frac{1}{R_{in} C_f} \int_{t_1}^{t_2} v_{in} dt \label{int4}$ If we begin the integration having previously discharged the capacitor and define $t_1$ as the moment we close the hold switch and define $t_2$ as the moment we reopen the hold switch, then Equation \ref{int4} becomes $v_{out} = -\frac{1}{R_{in} C_f} \int_{0}^{t} v_{in} dt \label{int5}$ and the output voltage is the integral of the input voltage multiplied by $(-R_{in} C_f)^{-1}$. Differentiation Reversing the capacitor and the resistor in the circuit in Figure $4$ converts the circuit from one that returns the integral of the input signal, into one that returns the derivative of the input signal; Figure $5$ shows the resulting circuit. For this circuit we have $I_{in} = C \times \frac{d v_{in}}{dt}$ and $I_f = - \frac{v_{out}}{R}$. Given that $I_{in} = I_f$, we are left with $- \frac{v_{out}}{R} = C \times \frac{d v_{in}}{dt} \label{deriv1}$ $v_{out} = - R C \times \frac{d v_{in}}{dt} \label{deriv2}$
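A quick numerical check of Equation \ref{int5} may help make the integrator concrete. The component values, input waveform, and time step below are assumptions chosen for illustration; the integral is approximated by a simple running sum.

```r
# Numerical illustration of the integrator: v_out = -(1 / (R_in * C_f)) * integral of v_in dt.
R_in <- 1e6                          # assumed 1 MOhm
C_f  <- 1e-6                         # assumed 1 uF, so R_in * C_f = 1 s
dt   <- 0.001                        # time step, s
t    <- seq(0, 2, by = dt)           # 2 s of integration
v_in <- rep(0.5, length(t))          # constant 0.5 V input (assumed)

v_out <- -(1 / (R_in * C_f)) * cumsum(v_in) * dt   # running-sum approximation of the integral
tail(v_out, 1)    # about -1 V, i.e. -(0.5 V)(2 s)/(1 s), as Equation int5 predicts
```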
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/03%3A_Operational_Amplifiers_in_Chemical_Instrumentation_(TBD)/3.04%3A_Application_of_Operational_Amplifiers_to_Mathematical_Operations.txt
• 4.1: Analog and Digital Data The analog trace of an experiment—such as a spectrum—provides a permanent record of an experiment; it is not, however, in a form that gives us access to the raw data. We can take the image and use digitizing software to extract a digital version of the data, or we can design our instruments to collect the data in digital form by sampling the analog signal at preset intervals and then saving the data. • 4.2: Working With Binary Numbers Humans are comfortable working with numbers expressed using a decimal notation that relies on 10 unique digits (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9), but computers work with information using a binary notation that is limited to just two unique digits (0 and 1). It is useful, therefore, to be familiar with how we represent numbers in both decimal and binary form. • 4.3: Cleaning Up Signals and Counting Events How an instrument handles signals depends on what is being measured. Broadly speaking, an instrument is likely to include one or more of the following: the ability to clean up the raw signal and convert it into a form that we can analyze, the ability to count events in binary form, the ability to convert binary information into digital information, and the ability to convert between digital and analog signals. In this section we will consider the first two of these topics. 04: Digital Electronics and Microcomputers (TBD) Figure \(1\) shows an xy-recorder that we can use to provide a permanent record of a cyclic voltammetry experiment. In this particular experiment, we apply a variable potential to an electrochemical cell and measure the current that flows in response to this potential (see Chapter 25 for a discussion of cyclic voltammetry). The potential and the current, which is converted into a voltage for the purpose of recording the cyclic voltammogram, are fed into the recorder using the cables on the right side of the recorder. The Y1 Range and the X Range controls allow us to adjust the scales of the axes. The vertical bar on the xy-recorder moves toward the recorder's left or right based on the applied potential, and a pen attached to the vertical bar moves toward the recorder's top or bottom based on the measured current. The applied potential and the current are continuous variables within the instrument's range; the resulting cyclic voltammogram in Figure \(2\) is an analog record of the experiment. Although the analog trace in Figure \(2\) provides a permanent record of an experiment, it is not in a form that gives us access to the raw data. We can take the image and use digitizing software (see here for an open-source digitizer) to extract a digital version of the data, or we can design our instruments to collect the data in digital form by sampling the analog signal at preset intervals and then saving the data. Such files often are in a format that includes metadata that explains how to extract the data from the file. For example, xy-coordinate data for a wide variety of spectroscopy experiments is often stored digitally using a format established by the Joint Committee on Atomic and Molecular Physical Data (JCAMP). Such files have the extension .jdx and can be opened using a variety of different software programs. Figure \(3\) is a screenshot that illustrates how we can work with digitized data using data analysis software, such as R and RStudio. The upper left panel shows some of the contents of a .jdx file that contains the IR spectrum of methanol (in this case, digitized by NIST from an analog hard copy).
The lines preceded by double hashtags (##) are metadata that provide information about the x-axis scale (minimum and maximum limits and increments between values), the y-axis scale (minimum and maximum values), and the number of data points. This is followed by multiple lines of digitized data. Each line of data contains one value of x and five values of y. The R package readJDX was used to extract the information from the .jdx file and to store it in a variable given the name methanol (see upper right panel). Code written in R (see lower left panel) was used to plot (see lower right panel) the spectrum. Although the spectrum for methanol in Figure \(4\)—with its smooth, continuous line—looks like an analog spectrum, this is a result of choosing to plot the data as a sequence of lines that connect individual points without actually displaying the individual points themselves. Figure \(4\), in which we plot only the individual data points, shows us that the spectrum actually consists of discrete, digitized data.
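The workflow just described is straightforward to reproduce. The sketch below is an illustration only: the file name methanol.jdx stands in for a hypothetical local copy of the NIST file, and the position of the x,y data within the list returned by readJDX() is an assumption—inspect the result with str() to confirm the actual structure before extracting it.

```r
# A minimal sketch of reading and plotting a JCAMP-DX file with the readJDX package.
library(readJDX)

methanol <- readJDX("methanol.jdx")   # hypothetical local copy of the .jdx file
str(methanol, max.level = 1)          # identify which list element holds the x,y data

spec <- methanol[[4]]                 # assumption: the 4th element is a data frame of x,y pairs
plot(spec$x, spec$y, type = "l",      # connecting the points gives the smooth, analog-looking trace
     xlab = "wavenumber (cm^-1)", ylab = "transmittance")
plot(spec$x, spec$y, pch = 19, cex = 0.3,   # plotting only the points shows the discrete, digitized data
     xlab = "wavenumber (cm^-1)", ylab = "transmittance")
```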
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/04%3A_Digital_Electronics_and_Microcomputers_(TBD)/4.01%3A_Analog_and_Digital_Signals.txt
In Chapter 7 we will examine several transducers for counting photons. The transducers are made of an array—some use a linear array and some use a two-dimensional array—of individual detecting units. We will worry about the details of how these transducers work in Chapter 7, but if you take a quick look at Figure 7.5.4 – 7.5.6 you will see that the numbers of individual detecting units are interesting: a linear array of 1024 individual units; another linear array, but with 2048 units; and a two-dimensional array that has $1024 \times 1024 = 1,048,576$ individual units. What is interesting about these numbers is that each is a power of two: $1024 = 2^{10}$, $2048 = 2^{11}$, and $1,048,576 = 2^{20}$. Humans are comfortable working with numbers expressed using a decimal notation that relies on 10 unique digits (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9), but computers work with information using a binary notation that is limited to just two unique digits (0 and 1). Although we will not complete calculations using binary numbers, you will see examples of instrumental methods, such as FT-NMR, where the data analysis algorithms (the Fourier transform in this case) require that the number of data points be a power of two. It is useful, therefore, to be familiar with how we represent numbers in both decimal and binary form. Decimal Representation of Numbers My university was founded in 1837, which is a decimal expression of the year. Each of these four digits represents a power of 10, a fact that is clear when we read the number out loud: one thousand—eight hundred—thirty—seven, or, when we write it out this way $(1 \times 1000) + (8 \times 100) + (3 \times 10) + (7 \times 1) = 1837 \nonumber$ or this way $(1 \times 10^3) + (8 \times 10^2) + (3 \times 10^1) + (7 \times 10^0) = 1837 \nonumber$ We refer to the 7 being in the ones place ($10^0 = 1$), the 3 in the tens place ($10^1 = 10$), the 8 in the hundreds place ($10^2 = 100$), and the 1 in the thousands place ($10^3 = 1000$). Figure $1a$ shows these three ways of representing a number using a decimal notation. Binary Representation of Numbers The decimal number 1837 is 11100101101 in binary notation. We can see that this is true if we follow the pattern for decimal numbers in reverse. There are eleven binary digits, so we begin by expressing the number as multiples of the powers of two from $2^{10}$ to $2^{0}$, beginning with the digit furthest to the left and moving to the right $(1 \times 2^{10}) + (1 \times 2^{9}) + (1 \times 2^{8}) + (0 \times 2^{7}) + (0 \times 2^{6}) + (1 \times 2^{5}) + (0 \times 2^{4}) + (1 \times 2^{3}) + (1 \times 2^{2}) + (0 \times 2^{1}) + (1 \times 2^{0}) = 1837 \nonumber$ Each power of two has a decimal equivalent—$2^4$ is the same as $2 \times 2 \times 2 \times 2 = 16$, for example—which we can express here as $(1 \times 1024) + (1 \times 512) + (1 \times 256) + (0 \times 128) + (0 \times 64) + (1 \times 32) + (0 \times 16) + (1 \times 8) + (1 \times 4) + (0 \times 2) + (1 \times 1) = 1837 \nonumber$ Each power of two represents a place as well; thus, the second 0 from the right is in the sixteens place ($2^4 = 16$). Figure $1b$ provides a visual representation of these ways of expressing a binary number. Converting Between Decimal and Binary Representations of Numbers There are lots of on-line calculators that you can use to convert between decimal and binary representations of numbers, such as the one here. Still, it is useful to be comfortable with converting numbers by hand.
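The repeated-division procedure worked through in the next paragraph also is easy to express in R. This short sketch is an illustration, not part of the text; the function name dec_to_bin is arbitrary, and R's built-in strtoi() is used only to check the result by converting the binary string back to decimal.

```r
# Convert a decimal integer to its binary representation by repeated division by 2.
dec_to_bin <- function(n) {
  digits <- integer(0)
  while (n > 0) {
    digits <- c(n %% 2, digits)   # the remainder is the next binary digit
    n <- n %/% 2                  # the quotient becomes the new dividend
  }
  paste(digits, collapse = "")
}

dec_to_bin(1837)                   # "11100101101"
strtoi("11100101101", base = 2L)   # 1837, converting back as a check
```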
Converting a binary number into its decimal equivalent is straightforward, as we showed above for the binary representation of the year in which my university was founded $11100101101 = (1 \times 1024) + (1 \times 512) + (1 \times 256) + (0 \times 128) + (0 \times 64) + (1 \times 32) + (0 \times 16) + (1 \times 8) + (1 \times 4) + (0 \times 2) + (1 \times 1) = 1837 \nonumber$ Converting a decimal number, such as 1837, into its binary equivalent requires a bit more work; Table $1$ will help us organize the conversion. We begin by writing the dividend, which is 1837, in the left-most column and divide it by 2, writing the quotient of 918 in the second column and the remainder of 1 in the third column; note that dividing by 2 gives a remainder of 0 if the dividend is even or a remainder of 1 if the dividend is odd. The remainder is the digit for the first place in the binary notation; in this case, the digit in the $2^0$ place is 1. The quotient becomes the dividend for the next cycle, with the process continuing until we achieve a quotient of 0. The binary equivalent of the original decimal is given by reading the remainders from bottom-to-top as 11100101101.
Table $1$. Converting a decimal number into its binary equivalent.
dividend   quotient   remainder   binary notation
1837   918   1   $2^0 = 1$
918   459   0   $2^1 = 0$
459   229   1   $2^2 = 1$
229   114   1   $2^3 = 1$
114   57   0   $2^4 = 0$
57   28   1   $2^5 = 1$
28   14   0   $2^6 = 0$
14   7   0   $2^7 = 0$
7   3   1   $2^8 = 1$
3   1   1   $2^9 = 1$
1   0   1   $2^{10} = 1$
4.03: Basic Digital Circuit Components How an instrument handles signals depends on what is being measured, so we cannot develop here a single model that applies to all instruments. Broadly speaking, however, an instrument is likely to include one or more of the following: the ability to clean up the raw signal and convert it into a form that we can analyze; the ability to count events in binary form; the ability to convert binary information into digital information; and the ability to convert between digital and analog signals. In this section we will cover the first two of these topics. Cleaning Up a Signal Suppose our instrument is designed to count discrete events, perhaps a Geiger counter that detects the emission of $\beta$ particles, or a photodiode that detects photons. Even though a time-dependent count of particles is a digital signal, the raw signal (a voltage) likely consists of digital pulses superimposed on a background signal that contains noise, as seen in Figure $1$. The total signal, therefore, is in analog form. To clean up this signal we want to accomplish two things: remove the noise and ensure that each pulse is counted. A simple way to accomplish this is to set a threshold signal and use a voltage comparator built around an operational amplifier (see Chapter 3) to set all voltages below the threshold to a logical value of 0 and all voltages above the threshold to a logical value of 1. As seen in Figure $2$, the threshold voltage must be chosen carefully if we are to resolve closely spaced pulses and discriminate against noise. Note that the peak-shaped pulses become rectangular pulses. Binary Pulse Counter To count the pulses in Figure $2$ we can send them through a binary pulse counter (BPC). Figure $3$ shows how such a counter works. In this case, the BPC has three registers, each of which can be in a logical state of 0 or 1. With three registers, we are limited to counting no more than $2^3 = 8$ pulses; a more useful BPC would have more registers. We can treat the pulses as entering the BPC from the right.
When a pulse enters a register, it flips each register from 1 to 0 or from 0 to 1, stopping after it first flips a register from 0 to 1. For example, the second pulse flips the right-most register from 1 to 0 and the middle register from 0 to 1; because the middle register initially was at 0, the counting of this pulse comes to an end.
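A toy simulation may help make the flipping behavior concrete. The sketch below is an assumption-laden illustration, not a description of real counter hardware: the function name and the choice of three registers simply mirror the example above.

```r
# Simulate a three-register binary pulse counter: each pulse flips registers from
# the right, stopping after the first 0 -> 1 flip.
count_pulses <- function(n_pulses, n_registers = 3) {
  reg <- integer(n_registers)          # registers, most significant digit first
  for (p in seq_len(n_pulses)) {
    for (i in n_registers:1) {         # the pulse enters from the right-most register
      if (reg[i] == 1) {
        reg[i] <- 0                    # flip 1 -> 0 and carry to the next register
      } else {
        reg[i] <- 1                    # flip 0 -> 1 and stop counting this pulse
        break
      }
    }
  }
  reg
}

count_pulses(2)   # 0 1 0, the binary representation of two pulses
count_pulses(5)   # 1 0 1, the binary representation of five pulses
```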
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/04%3A_Digital_Electronics_and_Microcomputers_(TBD)/4.02%3A_Working_With_Binary_Numbers.txt
When we try to calibrate an analytical method or to optimize an analytical system, our ability to do so successfully is limited by the uncertainty, or noise, in our measurements and by background signals that interfere with our ability to measure the signal of interest to us. In this chapter we will consider how we characterize noise, examples of sources of noise, and ways to clean up our data by decreasing the contribution of noise to our measurements and by correcting for the presence of background signals. • 5.1: The Signal-to-Noise Ratio When we make a measurement it is the sum of two parts, a determinate, or fixed contribution that arises from the analyte and an indeterminate, or random, contribution that arises from uncertainty in the measurement process. We call the first of these the signal and we call the latter the noise. There are two broad categories of noise: that associated with obtaining samples and that associated with making measurements. Our interest here is in the latter. • 5.2: Sources of Instrumental Noise Noise is a random fluctuation in the signal that limits our ability to detect the presence of the underlying signal. There are a variety of ways in which noise can enter into our measurements. In this chapter, we consider sources of noise that arise from the instruments we use to make measurements. We call these sources of instrumental noise. • 5.3: Signal-to-Noise Enhancement There are two broad approaches we can use to improve the signal-to-noise ratio: hardware and software. Hardware approaches are built into the instrument and include decisions on how the instrument is set-up for making measurements and how the signal is processed by the instrument. Software solutions are computational approaches in which we manipulate the data either while we are collecting it or after data acquisition is complete. 05: Signals and Noise (TBD) When we make a measurement it is the sum of two parts, a determinate, or fixed contribution that arises from the analyte and an indeterminate, or random, contribution that arises from uncertainty in the measurement process. We call the first of these the signal and we call the latter the noise. There are two broad categories of noise: that associated with obtaining samples and that associated with making measurements. Our interest here is in the latter. What is Noise? Noise is a random event characterized by a mean and standard deviation. There are many types of noise, but we will limit ourselves for now to noise that is stationary, in that its mean and its standard deviation are independent of time, and that is homoscedastic, in that its mean and its variance (and its standard deviation) are independent of the signal's magnitude. Figure $\PageIndex{1a}$ shows an example of a noisy signal that meets these criteria. The x-axis here is shown as time—perhaps a chromatogram—but other units, such as wavelength (spectroscopy) or potential (electrochemistry), are possible. Figure $\PageIndex{1b}$ shows the underlying noise and Figure $\PageIndex{1c}$ shows the underlying signal. Note that the noise in Figure $\PageIndex{1b}$ appears consistent in its central tendency (mean) and its spread (variance) along the x-axis and is independent of the signal's strength. How Do We Characterize the Signal and the Noise?
Although we characterize noise by its mean and its standard deviation, the most important benchmark is the signal-to-noise ratio, $S/N$, which we define as $S/N = \frac{S_\text{analyte}}{s_\text{noise}} \nonumber$ where $S_\text{analyte}$ is the signal's value at a particular location on the x-axis and $s_\text{noise}$ is the standard deviation of the noise determined using a signal-free portion of the data. As general rules-of-thumb, we can measure the signal with some confidence when $S/N \ge 3$ and we can detect the signal with some confidence when $3 \ge S/N \ge 2$. For the data in Figure $1$, and using the information in the figure caption, the signal-to-noise ratios are, from left-to-right, 10, 6, and 3. Note To measure the signal with confidence implies we can use the signal's value in a calculation, such as constructing a calibration curve. To detect the signal with confidence means we are certain that a signal is present (and that an analyte responsible for the signal is present) even if we cannot measure the signal with sufficient confidence to allow for a meaningful calculation.
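The definition above is easy to apply to simulated data. In this sketch the synthetic peak, its height of 10, the noise level, and the choice of a signal-free region are all assumptions made for illustration; they are not the data behind Figure $1$.

```r
# Estimate S/N as the signal's value at the peak divided by the standard
# deviation of the noise from a signal-free portion of the data.
set.seed(1)
x      <- seq(0, 100, by = 0.1)
signal <- 10 * exp(-(x - 60)^2 / (2 * 2^2))      # Gaussian peak of height 10 at x = 60
noise  <- rnorm(length(x), mean = 0, sd = 1)     # stationary, homoscedastic noise
y      <- signal + noise

S_analyte <- max(y[x > 55 & x < 65])   # signal (plus noise) at the peak
s_noise   <- sd(y[x < 40])             # standard deviation from a signal-free region
S_analyte / s_noise                    # close to the true value of 10
```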
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/05%3A_Signals_and_Noise_(TBD)/5.01%3A_The_Signal-to-Noise_Ratio.txt
When we make an analytical measurement, we are interested in both the accuracy and the precision of our results. Noise, as we learned in the previous section, is a random fluctuation in the signal that limits our ability to detect the presence of the underlying signal. There are a variety of ways in which noise can enter into our measurements. Some of these sources of noise are related to the process of collecting and processing samples for analysis; these sources of noise, which we might collectively call chemical sources of noise, are important and receive consideration in those sections of this textbook that consider the application of analytical methods. In this chapter, we will limit ourselves to considering sources of noise that arise from the instruments we use to make measurements. We call these sources of instrumental noise. Thermal Noise Even when an external voltage is not applied to an electrical circuit, a small current is present due to the random motion of electrons that arises from the temperature of the surroundings; we call this thermal (or, sometimes, Johnson) noise. The magnitude of this noise in any electrical element increases with temperature, of course, but it also is affected by its resistance, and by how quickly it responds to a change in the signal. Mathematically, we express this as the root-mean-square voltage, $\nu_{\text{rms}}$, which is given as $\nu_{\text{rms}} = \sqrt{4 k T R \Delta f} \label{thermal}$ where $k$ is Boltzmann's constant, $T$ is the temperature in Kelvin, $R$ is the resistance in ohms, and $\Delta f$ is the bandwidth. The latter term is related to how quickly the electrical element responds to a change in its input; the time needed for the output to change from 10% to 90% of its final value is called the rise time, $t_r$, where $\Delta f = \frac{1}{3 t_r} \nonumber$ For example, if a change in the input increases the output by 1, then the rise time is how long it takes the output to increase from 0.1 to 0.9. A close look at Equation \ref{thermal} shows that we can reduce thermal noise by decreasing the temperature, by decreasing the resistance of the electrical circuit, and by decreasing the bandwidth; the latter, of course, comes at the cost of an increase in the response time, which means the instrument responds more slowly to a change in the signal. Of these, it is often easiest to reduce the temperature by cooling, for example, the instrument's detector. Shot Noise As its name implies, shot noise arises from discrete, random events, such as the movement of an electron through the space between two surfaces of opposite charge. These events are random and quantized, and generate random fluctuations in the current that have a root-mean-square value, $i_{\text{rms}}$, which is given by $i_{\text{rms}} = \sqrt{2 I e \Delta f} \label{shot}$ where $I$ is the average current, $e$ is the charge on the electron in Coulombs, and $\Delta f$ is the bandwidth. Of these terms, the only one under our control is the bandwidth; again, decreasing the bandwidth comes at the cost of an instrument that responds more slowly to a change in the signal. Flicker Noise Unlike thermal noise or shot noise, flicker noise is related to the frequency of the signal being measured, $f$, instead of the signal's bandwidth. The sources of flicker noise are not well understood, but it is known that it is inversely proportional to the signal's frequency; thus, flicker noise is sometimes called $1/f$ noise.
Because of the inverse relationship, flicker noise is more important at low frequencies, where it appears as a long-term drift in the signal. It is less important at higher frequencies where thermal noise and shot noise are more important. Environmental Noise Our instruments normally do not operate in an environment free from external signals, each of which has a frequency that can be picked up by the instrument. Television signals, cell-phone signals, radio signals, and power lines are obvious examples of high-to-moderate frequency sources that can appear as noise. Less obvious are lower frequency sources of noise, such as the change in temperature during the day or through the year.
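A quick calculation shows the magnitudes involved in Equation \ref{thermal}. The resistance and rise time below are assumed values chosen only to illustrate the arithmetic and the effect of cooling a detector.

```r
# Thermal (Johnson) noise: v_rms = sqrt(4 k T R Δf), with Δf = 1 / (3 t_r).
k   <- 1.381e-23      # Boltzmann's constant, J/K
T   <- 298            # temperature, K
R   <- 1e6            # assumed resistance, 1 MOhm
t_r <- 1e-3           # assumed rise time, s
df  <- 1 / (3 * t_r)  # bandwidth, about 333 Hz

v_rms <- sqrt(4 * k * T * R * df)
v_rms                              # about 2.3e-6 V at room temperature
sqrt(4 * k * 77 * R * df)          # cooling to 77 K cuts v_rms by roughly a factor of 2
```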
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/05%3A_Signals_and_Noise_(TBD)/5.02%3A_Sources_of_Noise.txt
There are two broad approaches we can use to improve the signal-to-noise ratio: hardware and software. Hardware approaches are built into the instrument and include decisions on how the instrument is set-up for making measurements (for example, the choice of a scan rate or a slit width), and how the signal is processed by the instrument (for example, using electronic filters). A few approaches are briefly considered here; others are included with the discussion of individual instruments. Software solutions are computational approaches in which we manipulate the data either while we are collecting it or after data acquisition is complete. Hardware Solutions One way to reduce noise is to focus on the circuitry, or hardware, used to measure the signal. Shielding One way to reduce environmental noise is to prevent it from entering into the instrument's electronic circuitry. One approach is to use a Faraday cage in which the instrument sits within a room or space covered with a conductive material. Electromagnetic radiation from the environment is absorbed by the conductive material and then shunted away to the ground. Rather than encasing the entire instrument in a Faraday cage, particularly sensitive portions of the circuitry can be shielded. Differential Amplifier A difference amplifier (see Chapter 3) is an electrical circuit used to determine the difference between two input voltages or currents and to return that difference as a larger voltage or current. As the magnitude of the noise in the two input signals is generally similar in value—that is, it is in phase—while the signal of interest is not, much of the noise's contribution to the signal is subtracted out. Filtering When the frequency of the noise is quite different from the frequency of the signal, a simple electrical circuit can be used to remove the high frequency noise and pass the low frequency signal; this is called a low-pass filter. See Chapter 2 for details on low-pass filters. Modulation When the signal of interest has a low frequency, the effect of flicker noise becomes significant because a technique that removes low frequency noise will remove the signal as well. Modulation is a process of increasing the frequency of the signal. When complete, a high-pass filter is used to remove the noise. Reversing the modulation returns the original signal, but with much of the noise removed. Software Solutions In this section we will consider three common computational tools for improving the signal-to-noise ratio: signal averaging, digital smoothing, and Fourier filtering. Signal Averaging The most important difference between the signal and the noise is that a signal is determinate (fixed in value) and the noise is indeterminate (random in value). If we measure a pure signal several times, we expect its value to be the same each time; thus, if we add together n scans, we expect that the net signal, $S_n$, is defined as $S_n = n S \nonumber$ where $S$ is the signal for a single scan. Because noise is random, its value varies from one run to the next, sometimes with a value that is larger and sometimes with a value that is smaller, and sometimes with a value that is positive and sometimes with a value that is negative. On average, the standard deviation of the noise increases as we make more scans, but it does so at a slower rate than for the signal $s_n = \sqrt{n} s \nonumber$ where $s$ is the standard deviation for a single scan and $s_n$ is the standard deviation after n scans. 
Combining these two equations shows us that the signal-to-noise ratio, $S/N$, after n scans increases as $(S/N)_n = \frac{S_n}{s_n} = \frac{nS}{\sqrt{n}s} = \sqrt{n}(S/N)_{n = 1} \nonumber$ where $(S/N)_{n = 1}$ is the signal-to-noise ratio for the initial scan. Thus, when $n = 4$ the signal-to-noise ratio improves by a factor of 2, and when $n = 16$ the signal-to-noise ratio increases by a factor of 4. Figure $1$ shows the improvement in the signal-to-noise ratio for 1, 2, 4, and 8 scans. Signal averaging works well when the time it takes to collect a single scan is short and when the analyte's signal is stable with respect to time both because the sample is stable and the instrument is stable; when this is not the case, then we risk a time-dependent change in $S_\text{analyte}$ and/or $s_\text{noise}$. Because the equation for $(S/N)_n$ is proportional to $\sqrt{n}$, the relative improvement in the signal-to-noise ratio decreases as $n$ increases; for example, 16 scans gives a $4 \times$ improvement in the signal-to-noise ratio, but it takes an additional 48 scans (for a total of 64 scans) to achieve an $8 \times$ improvement in the signal-to-noise ratio. Digital Smoothing Filters One characteristic of noise is that its magnitude fluctuates rapidly in contrast to the underlying signal. We see this, for example, in Figure $1$ where the underlying signal either remains constant or steadily increases or decreases while the noise fluctuates chaotically. Digital smoothing filters take advantage of this by using a mathematical function to average the data for a small range of consecutive data points, replacing the range's middle value with the average signal over that range. Moving Average Filters For a moving average filter, also called a boxcar filter, we replace each point by the average signal for that point and an equal number of points on either side; thus, a moving average filter has a width, $w$, of 3, 5, 7, ... points. For example, suppose the first five points in a sequence are 0.80 0.30 0.80 0.20 1.00 then a three-point moving average ($w = 3$) returns values of NA 0.63 0.43 0.67 NA where, for example, 0.63 is the average of 0.80, 0.30, and 0.80. Note that we lose $(w - 1)/2 = (3 - 1)/2 = 1$ point at each end of the data set because we do not have a sufficient number of data points to complete a calculation for the first and the last point. Figure $2$ shows the improvement in the $S/N$ ratio when using moving average filters with widths of 5, 9, and 13. One limitation to a moving average filter is that it distorts the original data by removing points from both ends, although this is not a serious concern if the points in question are just noise. Of greater concern is the distortion in a signal's height if we use a range that is too wide; for example, Figure $3$ shows how a 23-point moving average filter (shown in blue) applied to the noisy signal in the upper left quadrant of Figure $2$ reduces the height of the original signal (shown in black). Because the filter's width—shown by the red bar—is similar to the peak's width, as the filter passes through the peak it systematically reduces the signal by averaging together values that are mostly smaller than the maximum signal. Savitzky-Golay Filters A moving average filter weights all points equally; that is, points near the edges of the filter contribute to the average at the same level as points near the filter's center.
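Before turning to how a Savitzky-Golay filter assigns its weights, here is a minimal R sketch that reproduces the three-point moving average worked through above. Using base R's stats::filter() is a choice made for this illustration; it is not code from the text.

```r
# Centered three-point moving average of the example data.
y <- c(0.80, 0.30, 0.80, 0.20, 1.00)
w <- 3                                         # filter width
y_smooth <- stats::filter(y, rep(1 / w, w), sides = 2)
round(y_smooth, 2)                             # NA 0.63 0.43 0.67 NA, matching the text
```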
A Savitzky-Golay filter uses a polynomial model that weights each point differently, placing more weight on points near the center of the filter and less weight on points at the edge of the filter. Specific values depend on the size of the window and the polynomial model; for example, a five-point filter using a second-order polynomial has weights of $-3/35 \quad \quad 12/35 \quad \quad 17/35 \quad \quad 12/35 \quad \quad -3/35 \nonumber$ For example, suppose the first five points in a sequence are 0.80 0.30 0.80 0.20 1.00 then this Savitzky-Golay filter returns values of NA NA 0.41 NA NA where, for example, the value for the middle point is $0.80 \times \frac{-3}{35} + 0.30 \times \frac{12}{35} + 0.80 \times \frac{17}{35} + 0.20 \times \frac{12}{35} + 1.00 \times \frac{-3}{35} = 0.406 \approx 0.41 \nonumber$ Note that we lose $(w - 1)/2 = (5 - 1)/2 = 2$ points at each end of the data set, where $w$ is the filter's width, because we do not have a sufficient number of data points to complete the calculations. For other Savitzky-Golay smoothing filters, see Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627–1639. Figure $4$ shows the improvement in the $S/N$ ratio when using Savitzky-Golay filters using a second-order polynomial with 5, 9, and 13 points. Because a Savitzky-Golay filter weights points differently than does a moving average smoothing filter, a Savitzky-Golay filter introduces less distortion to the signal, as we see in the following figure. Fourier Filtering This approach to improving the signal-to-noise ratio takes advantage of a mathematical technique called a Fourier transform (FT). The basis of a Fourier transform is that we can express a signal in two separate domains. In the first domain the signal is characterized by one or more peaks, each defined by its position, its width, and its area; this is called the frequency domain. In the second domain, which is called the time domain, the signal consists of a set of oscillations, each defined by its frequency, its amplitude, and its decay rate. The Fourier transform—and the inverse Fourier transform—allow us to move between these two domains. Note The mathematical details behind the Fourier transform are beyond the level of this textbook; for a more in-depth treatment, consult this series of articles from the Journal of Chemical Education: • Glasser, L. “Fourier Transforms for Chemists: Part I. Introduction to the Fourier Transform,” J. Chem. Educ. 1987, 64, A228–A233. • Glasser, L. “Fourier Transforms for Chemists: Part II. Fourier Transforms in Chemistry and Spectroscopy,” J. Chem. Educ. 1987, 64, A260–A266. • Glasser, L. “Fourier Transforms for Chemists: Part III. Fourier Transforms in Data Treatment,” J. Chem. Educ. 1987, 64, A306–A313. Figure $\PageIndex{6a}$ shows a single peak in the frequency domain and Figure $\PageIndex{6b}$ shows its equivalent time domain signal. There are correlations between the two domains: • the further a peak in the frequency domain is from the origin, the greater its corresponding oscillation frequency in the time domain • the broader a peak's width in the frequency domain, the faster its decay rate in the time domain • the greater the area under a peak in the frequency domain, the higher its initial intensity in the time domain We can use a Fourier transform to improve the signal-to-noise ratio because the signal is a single broad peak and the noise appears as a multitude of very narrow peaks.
As noted above, a broad peak in the frequency domain has a fast decaying signal in the time domain, which means that while the beginning of the time domain signal includes contributions from the signal and the noise, the latter part of the time domain signal includes contributions from noise only. The figure below shows how we can take advantage of this to reduce the noise and improve the signal-to-noise ratio for the noisy signal in Figure $\PageIndex{7a}$, which has 256 points along the x-axis and has a signal-to-noise ratio of 5.1. First, we use the Fourier transform to convert the signal from its original domain into the new domain, the first 128 points of which are shown in Figure $\PageIndex{7b}$ (note: the first half of the data contains the same information as the second half of the data, so we only need to look at the first half of the data). The points at the beginning are dominated by the signal, which is why there is a systematic decrease in the intensity of the oscillations; the remaining points are dominated by noise, which is why the variation in intensity is random. To filter out the noise we retain the first 24 points as they are and set the intensities of the remaining points to zero (the choice of how many points to retain may require some adjustment). As shown in Figure $\PageIndex{7c}$, we repeat this for the remaining 128 points, retaining the last 24 points as they are. Finally, we use an inverse Fourier transform to return to our original domain, with the result in Figure $\PageIndex{7d}$, with the signal-to-noise ratio improving from 5.1 for the original noisy signal to 11.2 for the filtered signal.
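The same retain-and-zero strategy is easy to sketch with base R's fft(). The signal below is simulated for illustration—it is not the data in Figure $\PageIndex{7}$—and the choice of 24 retained points simply mirrors the discussion above and may need adjustment for other data.

```r
# A generic Fourier filter: transform, zero the noise-dominated points, invert.
set.seed(2)
n <- 256
x <- seq_len(n)
signal <- exp(-(x - 128)^2 / (2 * 15^2))        # one broad peak (assumed)
y <- signal + rnorm(n, sd = 0.2)                 # add random noise

Y <- fft(y)                                      # move to the other domain
keep <- 24                                       # points to retain at each end
Y[(keep + 1):(n - keep)] <- 0                    # discard the noise-dominated points
y_filtered <- Re(fft(Y, inverse = TRUE)) / n     # inverse transform back

# compare the noise before and after filtering using a signal-free region
c(before = sd(y[1:50]), after = sd(y_filtered[1:50]))
```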
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/05%3A_Signals_and_Noise_(TBD)/5.03%3A_Signal-to-Noise_Enhancement.txt
• 6.1: General Properties of Electromagnetic Radiation Electromagnetic radiation—light—is a form of energy whose behavior is described by the properties of both waves and particles. Some properties of electromagnetic radiation, such as its refraction when it passes from one medium to another, are explained best when we describe light as a wave. Other properties, such as absorption and emission, are better described by treating light as a particle. • 6.2: Wave Properties of Electromagnetic Radiation Electromagnetic radiation consists of oscillating electric and magnetic fields that propagate through space along a linear path and with a constant velocity. The oscillations in the electric field and the magnetic field are perpendicular to each other and to the direction of the wave’s propagation. In this section we consider the wave model of electromagnetic radiation. • 6.3: Quantum Mechanical Properties of Electromagnetic Radiation In the last section, we considered properties of electromagnetic radiation that are consistent with identifying light as a wave. Other properties of light, however, cannot be explained by a model that treats it as a wave; instead, we need to consider a model that treats light as a system of discrete particles, which we call photons. • 6.4: Emission and Absorbance Spectra When an atom, ion, or molecule absorbs a photon it undergoes a transition from a lower-energy state to a higher-energy, or excited state, we obtain an absorbance spectrum. The result of the reverse process, in which an atom, ion, or molecule emits a photon as it moves from a higher-energy state to a lower energy state, is an emission spectrum. In this section we consider the characteristics of each. • 6.5: Quantitative Considerations An important part of the chapters that follow is a consideration of how we can use the emission or absorbance of photons to determine the concentration of an analyte in a sample. Here we provide a brief summary of quantitative spectroscopic methods of analysis, leaving more specific details for later chapters. 06: An Introduction to Spectrophotometric Methods Electromagnetic radiation—light—is a form of energy whose behavior is described by the properties of both waves and particles. Some properties of electromagnetic radiation, such as its refraction when it passes from one medium to another (\(Figure 1\)), are explained best when we describe light as a wave. Other properties, such as absorption and emission, are better described by treating light as a particle. The exact nature of electromagnetic radiation remains unclear, as it has since the development of quantum mechanics in the first quarter of the 20th century [Home, D.; Gribbin, J. New Scientist 1991, 2 Nov. 30–33]. Nevertheless, this dual model of wave and particle behavior provides a useful description for electromagnetic radiation. The Electromagnetic Spectrum The frequency and the wavelength of electromagnetic radiation vary over many orders of magnitude. For convenience, we divide electromagnetic radiation into different regions—the electromagnetic spectrum—based on the type of atomic or molecular transitions that gives rise to the absorption or emission of photons (Figure \(2\)). The boundaries between the regions of the electromagnetic spectrum are not rigid and overlap between spectral regions is possible.
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/06%3A_An_Introduction_to_Spectrophotometric_Methods/6.01%3A_General_Properties_of_Electromagnetic_Radiation.txt
Ways to Characterize a Wave Electromagnetic radiation consists of oscillating electric and magnetic fields that propagate through space along a linear path and with a constant velocity. The oscillations in the electric field and the magnetic field are perpendicular to each other and to the direction of the wave’s propagation. Figure $1$ shows an example of plane-polarized electromagnetic radiation, which consists of a single oscillating electric field and a single oscillating magnetic field. Measurable Properties An electromagnetic wave is characterized by several fundamental properties, including its velocity, amplitude, frequency, phase angle, polarization, and direction of propagation [Ball, D. W. Spectroscopy 1994, 9(5), 24–25]. Focusing on the oscillations in the electric field, amplitude is the maximum displacement of the electrical field. The wave's frequency, $\nu$, is the number of oscillations in the electric field per unit time. Wavelength, $\lambda$, is defined as the distance between successive maxima. Figure $1$ shows the initial amplitude as 0; the phase angle $\Phi$ accounts for the fact that the initial amplitude need not be zero, which we can accomplish by shifting the wave along the direction of propagation. There is a relationship between wavelength and frequency, which is $\lambda = \frac {c} {\nu} \nonumber$ where $c$ is the speed of light in a vacuum. Another useful unit is the wavenumber, $\overline{\nu}$, which is the reciprocal of the wavelength $\overline{\nu} = \frac {1} {\lambda} \nonumber$ Wavenumbers frequently are used to characterize infrared radiation, with the units given in cm–1. Power, $P$, and intensity, $I$, are two additional properties of light, both related to the square of the amplitude; power is the energy transferred per second and intensity is the power transferred per unit area. In a vacuum, electromagnetic radiation travels at the speed of light, c, which is $2.99792 \times 10^8$ m/s. When electromagnetic radiation moves through a medium other than a vacuum, its velocity, v, is less than the speed of light in a vacuum. The difference between v and c is sufficiently small (<0.1% in air) that the speed of light to three significant figures, $3.00 \times 10^8$ m/s, is accurate enough for most purposes. When electromagnetic radiation moves between different media—for example, when it moves from air into water—its frequency, $\nu$, remains constant. Because its velocity depends upon the medium in which it is traveling, the electromagnetic radiation’s wavelength, $\lambda$, changes. If we replace the speed of light in a vacuum, c, with its speed in the medium, $v$, then the wavelength is $\lambda = \frac {v} {\nu} \nonumber$ This change in wavelength as light passes between two media explains the refraction of electromagnetic radiation seen in the photograph of light passing through a rain drop that was included in the previous section. This is discussed in more detail later in this section. Example 6.2.1 In 1817, Josef Fraunhofer studied the spectrum of solar radiation, observing a continuous spectrum with numerous dark lines. Fraunhofer labeled the most prominent of the dark lines with letters. In 1859, Gustav Kirchhoff showed that the D line in the sun’s spectrum was due to the absorption of solar radiation by sodium atoms. The wavelength of the sodium D line is 589 nm. What are the frequency and the wavenumber for this line?
Solution The frequency and wavenumber of the sodium D line are $\nu=\frac{c}{\lambda}=\frac{3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}}{589 \times 10^{-9} \ \mathrm{m}}=5.09 \times 10^{14} \ \mathrm{s}^{-1} \nonumber$ $\overline{\nu}=\frac{1}{\lambda}=\frac{1}{589 \times 10^{-9} \ \mathrm{m}} \times \frac{1 \ \mathrm{m}}{100 \ \mathrm{cm}}=1.70 \times 10^{4} \ \mathrm{cm}^{-1} \nonumber$ Exercise 6.2.1 Another historically important series of spectral lines is the Balmer series of emission lines from hydrogen. One of its lines has a wavelength of 656.3 nm. What are the frequency and the wavenumber for this line? Answer The frequency and wavenumber for the line are $\nu=\frac{c}{\lambda}=\frac{3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}}{656.3 \times 10^{-9} \ \mathrm{m}}=4.57 \times 10^{14} \ \mathrm{s}^{-1} \nonumber$ $\overline{\nu}=\frac{1}{\lambda}=\frac{1}{656.3 \times 10^{-9} \ \mathrm{m}} \times \frac{1 \ \mathrm{m}}{100 \ \mathrm{cm}}=1.524 \times 10^{4} \ \mathrm{cm}^{-1} \nonumber$ Polarization Figure $1$ shows a single oscillating electrical field and, perpendicular to that, a single oscillating magnetic field. This is an example of plane polarized light in which oscillation of the electrical field occurs at just one angle. Normally electromagnetic radiation oscillates simultaneously at all possible angles. Figure $2$ shows the difference in these two cases. If we observe the plane polarized light as it oscillates toward us, we see the single line at the top of the figure where blue indicates a positive amplitude and red indicates a negative amplitude, and where the opacity of the shading shows the change in the amplitudes. The vertical dashed lines show nodes where the amplitude is zero and where no light is seen. With ordinary light, we see a circular beam of radiation because the electrical field is oscillating at all angles. The amplitude's sign and magnitude, and the presence of nodes where the amplitude is zero, remain evident to us. Note that if we observe the source's intensity, then each of the lines and circles in Figure $2$ will appear blue (positive values, as intensity is proportional to the square of the amplitude); we continue to observe fluctuations in the intensity and the presence of the nodes. Mathematical Representation of Waves We can describe the oscillations in the electric field as a sine wave $A_{t}=A_{e} \sin (2 \pi \nu t+\Phi) \nonumber$ where $A_t$ is the magnitude of the electric field at time t, $A_e$ is the field’s maximum amplitude, $\nu$ is the wave's frequency, and $\Phi$ is a phase angle that accounts for the fact that $A_t$ need not have a value of zero at time $t = 0$. The identical equation for the magnetic field is $A_{t}=A_{m} \sin (2 \pi \nu t+\Phi) \nonumber$ where $A_m$ is the magnetic field’s maximum amplitude. One of the important features of waves is that adding or subtracting two (or more) waves gives a new wave. Figure $3$ shows one example. The superposition of waves explains why two identical waves that are completely out-of-phase with each other produce a signal in which the amplitude is zero at all points. Another important consequence of the superposition of waves is that if we can add together a series of waves to produce a new wave, then there is a corresponding mathematical process that takes a complex wave and determines the underlying set of sine waves of which it is composed. This process is called a Fourier transform, which we will revisit in later chapters.
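The sine-wave equation and the idea of superposition are easy to visualize in R. The frequencies, amplitudes, and phase angles below are arbitrary values chosen only for illustration.

```r
# Sketch of A_t = A_e * sin(2*pi*nu*t + Phi) and the superposition of two waves.
t   <- seq(0, 2e-14, length.out = 500)        # time, s
nu1 <- 2.0e14                                  # assumed frequency of wave 1, s^-1
nu2 <- 3.0e14                                  # assumed frequency of wave 2, s^-1
w1  <- 1.0 * sin(2 * pi * nu1 * t)             # amplitude 1.0, phase angle 0
w2  <- 0.5 * sin(2 * pi * nu2 * t + pi / 4)    # amplitude 0.5, phase angle pi/4

plot(t, w1 + w2, type = "l",
     xlab = "time (s)", ylab = "amplitude",
     main = "superposition of two sine waves")
lines(t, w1, lty = 2)                          # the individual waves, for comparison
lines(t, w2, lty = 3)
```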
Interactions of Waves With Matter

When light encounters matter—perhaps a particle, a solution, or a thin film—it can interact with that matter in several ways. In this section we consider two such interactions: refraction and reflection. Three additional types of interactions—the scattering of light, the diffraction of light, and the transmission of light—are considered in later chapters where they play an important role in specific instrumental methods of analysis.

Refraction

When light passes from one medium (perhaps air) into another medium (perhaps water) that has a different density, the light experiences a change in direction that is a consequence of a difference in its velocity in the two media. This bending of light is called refraction, the extent of which is given by Snell's law $\frac{\text{sin } \theta_1} {\text{sin } \theta_2} = \frac {\eta_2} {\eta_1} = \frac {v_1} {v_2} \nonumber$ where $\eta_i$ is the refractive index of a medium and $v_i$ is the velocity in a medium, and where the angles, $\theta_i$, are shown in Figure $4$.

Reflection

In addition to refraction, when light crosses an interface that separates media with different refractive indexes, some of the light is reflected back. When the angle of incidence is 0° (that is, the light is perpendicular to the interface), then the fraction of light that is reflected is given by $\frac{I_r}{I_0} = \frac{(\eta_2 - \eta_1)^2}{(\eta_2 + \eta_1)^2} \nonumber$ where $I_r$ is the intensity of light that is reflected, $I_0$ is the intensity of light from the source that enters the interface, and $\eta_i$ is the refractive index of the media. If light crosses more than one interface—as is the case when light passes through a sample cell—then the total fraction of reflected light is the sum of the fraction of light reflected at each interface.
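The two relationships in this section lend themselves to a quick calculation. The sketch below (a simple Python illustration; the refractive indexes of 1.00 for air and 1.33 for water are typical handbook values used only as an example) applies Snell's law to find the angle of refraction and then estimates the fraction of light reflected at normal incidence for an air-to-water interface.

```python
# A minimal sketch of refraction (Snell's law) and reflection at normal
# incidence; the refractive indexes are typical values for air and water.
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of refraction in degrees from n1*sin(theta1) = n2*sin(theta2)."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

def fraction_reflected(n1, n2):
    """Fraction of light reflected at a single interface for a 0 degree angle of incidence."""
    return (n2 - n1) ** 2 / (n2 + n1) ** 2

n_air, n_water = 1.00, 1.33
print(refraction_angle(45.0, n_air, n_water))  # approximately 32 degrees
print(fraction_reflected(n_air, n_water))      # approximately 0.02, or 2% reflected
```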
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/06%3A_An_Introduction_to_Spectrophotometric_Methods/6.02%3A_Wave_Properties_of_Electromagnetic_Radiation.txt
In the last section, we considered properties of electromagnetic radiation that are consistent with identifying light as a wave. Other properties of light, however, cannot be explained by a model that treats it as a wave; instead, we need to consider a model that treats light as a system of discrete particles, which we call photons.

The Photoelectric Effect

As shown in Figure $1$, in a photoelectric cell, a metal, such as sodium, is held under vacuum and exposed to electromagnetic radiation, which enters the cell through an optical window. If the frequency of the radiation is sufficient, electrons escape from the metal with a kinetic energy that we can measure; we call these photoelectrons. If the photocell's anode is held at a potential that is positive relative to the potential applied to the cathode, the photoelectrons move from the cathode to the anode, generating a current that is measured by an ammeter. If the voltage applied to the anode is made sufficiently negative, the electrons eventually fail to reach the anode and the current decreases to zero. The voltage needed to stop the flow of electrons is called the stopping voltage.

In a photoelectric effect experiment we vary the frequency and intensity of the electromagnetic radiation and observe their effect on either the number of photoelectrons released (measured as a current) or the energy of the photoelectrons released (measured by their kinetic energy). A typical set of experiments is shown in Figure $2a$ using Na and in Figure $2b$ using Na, Zn, and Cu. The data show several interesting features. First, we see in Figure $2a$ that the intensity of the light source has no effect on the minimum frequency of light needed to eject a photoelectron from Na—we call this the threshold frequency—but that a high intensity source of electromagnetic radiation results in the release of a greater number of photoelectrons and, therefore, a greater current than for a lower intensity source. Second, we see in Figure $2b$ that different metals have different threshold frequencies, but that once we exceed each metal's threshold frequency, the change in the kinetic energy of the photoelectrons with increasing frequency yields lines of equal slopes.

We can explain these experimental observations if we assume that the source of electromagnetic energy has an energy, $E_\text{ER}$, that does two things: it overcomes the energy that binds the photoelectron to the metal, $E_\text{BE}$, and it imparts the remaining energy into the photoelectron's kinetic energy, $E_\text{KE}$, where ER means electromagnetic radiation, BE means binding energy, and KE means kinetic energy. $E_\text{KE} = E_\text{ER} - E_\text{BE} \nonumber$ A wave model for electromagnetic radiation is insufficient to explain the photoelectric effect because when it strikes the metal the radiation's energy would be distributed across all atoms on the surface, none of which would then receive an energy that exceeds the photoelectron's binding energy. Instead, the results in Figure $2$ make sense only if we assume that light consists of discrete particles with energies that are a function of frequency or wavelength $E_\text{ER} = h \nu = \frac{hc}{\lambda} \label{qm}$ where $h$ is Planck's constant. This leaves us with the following equation relating kinetic energy, the energy of the photon, and the binding energy of the electron. $E_\text{KE} = h \nu - E_\text{BE} \nonumber$ Note that the slope of the lines in Figure $2b$ is Planck's constant.
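As a rough numerical illustration of $E_\text{KE} = h \nu - E_\text{BE}$, the following Python sketch computes the kinetic energy of a photoelectron for two wavelengths of light; the binding energy of 2.3 eV is an illustrative value for sodium and is not taken from this text.

```python
# A minimal sketch of the photoelectric effect, E_KE = h*nu - E_BE; the
# binding energy used for sodium (2.3 eV) is only an illustrative value.
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def kinetic_energy_eV(wavelength_nm, binding_energy_eV):
    """Kinetic energy (eV) of a photoelectron; a negative value means no electron is ejected."""
    photon_energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    return photon_energy_eV - binding_energy_eV

print(kinetic_energy_eV(400.0, 2.3))  # about +0.8 eV; above the threshold frequency
print(kinetic_energy_eV(600.0, 2.3))  # about -0.2 eV; below the threshold frequency
```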
Energy States

Equation \ref{qm} is central to the particle, or quantum mechanical, model of the atom in which we understand that chemical species—atoms, ions, molecules—exist only in discrete states, each with a single, well-defined energy. A wave, on the other hand, can take on any energy. A simple image is the possible energies of a ball as it rolls down a ramp (wave) or a staircase (particle), as in Figure $3$. When an atom, ion, or molecule moves between two of these discrete states, the difference in energy, $\Delta E$, is given by $\Delta E = h \nu = \frac{hc}{\lambda} \nonumber$ In absorption spectroscopy a photon is absorbed by an atom, ion, or molecule, which undergoes a transition from a lower-energy state to a higher-energy, or excited, state (Figure $4a$). The reverse process, in which an atom, ion, or molecule emits a photon as it moves from a higher-energy state to a lower-energy state (Figure $4b$), is called emission. The types of energy states involved in emission and absorption depend on the energy of the electromagnetic radiation. In general, $\gamma$-rays involve transitions between nuclear states, X-rays probe the energies of core-level electrons, ultraviolet-visible radiation probes the energies of valence electrons, infrared radiation provides information on vibrational energy states, microwave radiation probes rotational energy levels and electron spins, and radio waves provide information on nuclear spins. While infrared spectroscopy may provide information on a molecule's vibrational energy states, the energies available in ultraviolet-visible spectroscopy provide information on both the molecule's electronic states and its vibrational states, as shown in Figure $5$.
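Because $\Delta E = h \nu = hc/\lambda$, it is easy to estimate the energy of the photon associated with a transition in any of these spectral regions. The short Python sketch below uses representative wavelengths (chosen here only for illustration) to show how the photon's energy decreases from the X-ray region to the infrared region.

```python
# A minimal sketch of Delta_E = h*c/lambda for representative wavelengths.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_J(wavelength_m):
    """Energy, in joules, of a photon with the given wavelength."""
    return h * c / wavelength_m

for label, wavelength in [("X-ray, 1 nm", 1e-9),
                          ("ultraviolet, 250 nm", 250e-9),
                          ("visible, 500 nm", 500e-9),
                          ("infrared, 10 um", 10e-6)]:
    print(f"{label}: {photon_energy_J(wavelength):.2e} J per photon")
```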
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/06%3A_An_Introduction_to_Spectrophotometric_Methods/6.03%3A_Quantum_Mechanical_Properties_of_Electromagnetic_Radiation.txt
In the last section we considered the source of emission and absorption. In this section we consider the types of emission and absorbance spectra that will form the basis for many of the chapters that follow.

Emission Spectra

When an atom, ion, or molecule moves from a higher-energy state to a lower-energy state it emits photons with energies equal to the difference in energy between the two states. The result is an emission spectrum that shows the intensity of emission as a function of wavelength. The shapes of these emission spectra fall into two broad types: line spectra and band spectra.

Line Spectra

When the energy states are well separated from each other, and when there is just one type of transition between the energy states, the result is a line spectrum that consists of a small number of narrow bands. Figure \(1\), for example, shows the emission spectrum from gas phase Cu atoms, which consists of seven lines, two of which are too close to each other to resolve. The individual emission lines are very narrow, as we might expect, because the atom's energy levels have precise values.

Band Spectra

The emission spectrum for a gas phase atom is relatively simple because the number of possible transitions is small and because their individual energies are well-separated from each other. When a molecule in a solvent emits light, the number of possible changes in energy levels can be quite large if the molecule undergoes transitions between electronic, vibrational, and rotational energy levels. The resulting spectrum has so many individual emission lines that we see a single broad peak, or band, that we call a band spectrum. Figure \(2\) shows the emission spectrum for the dye coumarin 343, which is incorporated in a reverse micelle and suspended in cyclohexanol.

Note

When considering sources of electromagnetic radiation for spectroscopic instruments, we usually describe them as line sources and continuous sources depending on whether they emit discrete lines, as is the case for the hollow cathode lamp in Figure \(1\), or exhibit emission over a broad range of wavelengths without any gaps, as is the case for a green light-emitting diode (LED), whose spectrum is shown in Figure \(3\).

Absorbance Spectra

When an atom, ion, or molecule moves from a lower-energy state to a higher-energy state it absorbs photons with energies equal to the difference in energy between the two states. The result is an absorbance spectrum that shows the extent of absorption as a function of wavelength. As is the case for emission spectra, absorbance spectra range from narrow lines to broad bands. The atomic absorption spectrum for Na is shown in Figure \(4\), and is typical of that found for most atoms. The most obvious feature of this spectrum is that it consists of a small number of discrete absorption lines that correspond to transitions between the ground state (the 3s atomic orbital) and the 3p and the 4p atomic orbitals. Another feature of the atomic absorption spectrum in Figure \(4\) is the narrow width of the absorption lines, which is a consequence of the fixed difference in energy between the ground state and the excited state, and the lack of vibrational and rotational energy levels. Natural line widths for atomic absorption, which are governed by the uncertainty principle, are approximately $10^{-5}$ nm. Other contributions to broadening increase this line width to approximately $10^{-3}$ nm.
The absorbance spectra for molecules consist of broad bands for the same reasons discussed above for emission spectra. The UV/Vis spectrum for cranberry juice in Figure \(5\) shows a single broad band for the anthocyanin dyes that are responsible for its red color. The IR spectrum for ethanol in Figure \(6\) shows multiple absorption bands, some broader and some narrower. The narrow bands, however, are still much broader than the lines in the atomic absorption spectrum for Na.

6.05: Quantitative Considerations

An important part of the chapters that follow is a consideration of how we can use the emission or absorbance of photons to determine the concentration of an analyte in a sample. Here we provide a brief summary of quantitative spectroscopic methods of analysis in Table $1$, leaving more specific details for later chapters.

Table $1$. Quantitative spectroscopic methods of analysis.
• Emitted photons: we measure the power of the emitted light, $P_e$, which is related to concentration by $P_e = kC$; examples include flame atomic emission and molecular fluorescence and phosphorescence.
• Scattered photons: we measure the power of the scattered light, $P_{sc}$, which is related to concentration by $P_{sc} = kC$; examples include nephelometry, turbidity, and Raman spectroscopy.
• Absorbed photons: we measure the power of the transmitted light, $P_t$, relative to the power of the light source, $P_0$, and relate $- \log \left( \frac{P_t}{P_0} \right)$ to concentration; examples include flame atomic absorbance and molecular absorbance.
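A brief sketch of the last entry in Table $1$: for absorption we measure the power of the transmitted light relative to that of the source and report $-\log(P_t/P_0)$. The powers used in this Python example are arbitrary values chosen only for illustration.

```python
# A minimal sketch of the absorbance calculation, -log10(P_t/P_0); the powers
# are arbitrary illustrative values.
import math

def absorbance(P_t, P_0):
    """Absorbance from the transmitted power, P_t, and the source power, P_0."""
    return -math.log10(P_t / P_0)

P_0 = 100.0   # power of the light source (arbitrary units)
P_t = 50.0    # power of the transmitted light
print(absorbance(P_t, P_0))   # 0.301 when 50% of the light is transmitted
```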
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/06%3A_An_Introduction_to_Spectrophotometric_Methods/6.04%3A_Spectra.txt
An early example of a colorimetric analysis is Nessler’s method for ammonia, which was introduced in 1856. Nessler found that adding an alkaline solution of HgI2 and KI to a dilute solution of ammonia produced a yellow-to-reddish brown colloid, in which the colloid’s color depended on the concentration of ammonia. By visually comparing the color of a sample to the colors of a series of standards, Nessler was able to determine the concentration of ammonia. Colorimetry, in which a sample absorbs visible light, is one example of a spectroscopic method of analysis. At the end of the nineteenth century, spectroscopy was limited to the absorption, emission, and scattering of visible, ultraviolet, and infrared electromagnetic radiation. Since then, spectroscopy has expanded to include other forms of electromagnetic radiation—such as X-rays, microwaves, and radio waves—and other energetic particles—such as electrons and ions.

• 7.1: General Design of Optical Instruments The spectroscopic techniques in the chapters that follow use instruments that share several common basic components: a source of energy, a means for holding the sample of interest to us, a device that can isolate a narrow range of wavelengths, a detector for measuring the signal, and a signal processor that displays the signal in a form convenient for the analyst.
• 7.2: Sources of Radiation All forms of spectroscopy require a source of energy to place the analyte in an excited state. In absorption and scattering spectroscopy this energy is supplied by photons. Emission and photoluminescence spectroscopy use thermal, radiant (photon), or chemical energy to promote the analyte to a suitable excited state. In this section we consider the sources of radiant energy.
• 7.3: Wavelength Selectors In optical spectroscopy we measure absorbance or transmittance as a function of wavelength. Unfortunately, we can not isolate a single wavelength of radiation from a continuum source, although we can narrow the range of wavelengths that reach the sample. A wavelength selector passes a narrow band of radiation characterized by a nominal wavelength, an effective bandwidth, and a maximum throughput of radiation. Several types of wavelength selectors are considered in this section.
• 7.4: Sample Containers The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed with UV/Vis transparent materials, such as quartz, glass, and plastic.
• 7.5: Radiation Transducers Transducer is a general term that refers to any device that converts a chemical or a physical property into an easily measured electrical signal. The retina in your eye, for example, is a transducer that converts photons into an electrical nerve impulse; your eardrum is a transducer that converts sound waves into a different electrical nerve impulse. A photon transducer takes a photon and converts it into an electrical signal, such as a current, a change in resistance, or a voltage.
• 7.6: Fiber Optics If we need to monitor an analyte’s concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient’s blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ.
• 7.7: Fourier Transform Optical Spectroscopy Thus far, the optical benches described in this chapter either use a single detector and a monochromator to pass a single wavelength of light to the detector, or use a multichannel array of detectors and a diffraction grating to disperse the light across the detectors, both of which have limitations. We can overcome these limitations by using an interferometer.

07: Components of Optical Instruments

The spectroscopic techniques in the chapters that follow use instruments that share several common basic components: a source of energy; a means for holding the sample of interest to us; a device that can isolate a narrow range of wavelengths; a detector for measuring the signal; and a signal processor that displays the signal in a form convenient for the analyst. Figure \(1\) shows four common ways of stringing together these units. The remaining sections of this chapter provide general information on each of these units. More specific details appear in the chapters on individual methods.
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.01%3A_General_Design_of_Optical_Instruments.txt
All forms of spectroscopy require a source of energy to place the analyte in an excited state. In absorption and scattering spectroscopy this energy is supplied by photons. Emission and photoluminescence spectroscopy use thermal energy, radiant (photon) energy, or chemical energy to promote the analyte to a suitable excited state. In this section we consider the sources of radiant energy.

Sources of Electromagnetic Radiation

A source of electromagnetic radiation must provide an output that is both intense and stable in the region of interest. Sources of electromagnetic radiation are classified as either continuum or line sources. Table \(1\) provides a list of some common sources of electromagnetic radiation.

Table \(1\). Common sources of electromagnetic radiation.
• H2 and D2 lamp: continuum source from 160–380 nm; useful for molecular absorption
• tungsten lamp: continuum source from 320–2400 nm; useful for molecular absorption
• Xe arc lamp: continuum source from 200–1000 nm; useful for molecular fluorescence
• Nernst glower: continuum source from 0.4–20 µm; useful for molecular absorption
• globar: continuum source from 1–40 µm; useful for molecular absorption
• nichrome wire: continuum source from 0.75–20 µm; useful for molecular absorption
• hollow cathode lamp: line source in UV/Vis; useful for atomic absorption
• Hg vapor lamp: line source in UV/Vis; useful for molecular fluorescence
• laser: line source in UV/Vis/IR; useful for atomic and molecular absorption, fluorescence, and scattering

Continuum sources emit radiation over a broad range of wavelengths, with a relatively smooth variation in intensity (Figure \(1\)), and are used for molecular absorbance using UV/Vis and IR radiation. Further details on these sources are in Chapters 13 and 16, respectively. A line source, on the other hand, emits radiation at discrete wavelengths, with broad regions showing no emission lines (Figure \(2\)), and is used for atomic absorption, atomic and molecular fluorescence, and Raman spectroscopy. Further details on hollow cathode lamps are included in Chapter 9.

Laser Sources

An important line source of radiation is a laser, which is an acronym for light amplification by stimulated emission of radiation. Laser emission is monochromatic with a narrow bandwidth of just a few micrometers. As suggested by the term amplification, a laser provides a source of high intensity emission. The source of this intensity is embedded in the term stimulated emission, to which we now turn our attention.

How a Laser Works

To understand how a laser works, we need to consider four key ideas: pumping, population inversion, stimulated emission, and light amplification.

Pumping

Emission cannot occur unless we first populate higher energy levels with electrons, which we can accomplish by, for example, the absorption of photons, as shown in Figure \(3\). Emission occurs when an electron in a higher energy state relaxes back to a lower energy state by emitting a photon with an energy equal to the difference in the energy between these two states. The process of populating the excited states with electrons is called pumping and is accomplished by using an electrical discharge, by passing an electrical current through the lasing medium, or by absorption of high energy photons. The goal of pumping is to create a large population of excited states.

Population Inversion

Normally the majority of the species we are studying are in their ground electronic state with only a small number of species in an excited electronic state.
For a laser to achieve a high intensity of emission, it is necessary to create a situation in which there are more species in the excited state than in the ground state, as shown in Figure \(4\) where the non-inverted population has four species in the ground state and two species in the excited state, and where the inverted population has four species in the excited state and two species in the ground state.

Stimulated Emission and Light Amplification

Figure \(3\) shows emission of a photon following absorption of a photon of equal energy. No more than one photon is emitted for each photon that is absorbed, with some species in an excited state relaxing to the ground state through non-radiative pathways. This spontaneous emission is a random process, which means that the timing of emission and the direction in which emission occurs are random. Emission in a laser, as depicted in Figure \(5\), is stimulated by a photon with an energy equal to that of the difference in energy between the excited state and the ground state. The interaction of the incoming photon with the excited state results in the excited state's immediate relaxation to the ground state by the emission of a photon. The original photon and the emitted photon are coherent, with identical energies, identical directions, and identical phases. Because two coherent photons are emitted, the amplitude of the emitted radiation is doubled, as we see in Figure \(5\); this is what we mean by light amplification.

Laser Systems

As the previous sections suggest, creating a population inversion is the limiting factor in generating radiation from a laser. The two-level system in Figure \(5\), which involves a single excited state and a single ground state, cannot create a population inversion because when the ground state and excited state are equal in population, the rate at which excited states are produced through pumping equals the rate at which excited states are lost through emission. To achieve a population inversion, laser systems use three-level or four-level systems, as outlined in Figure \(6\). In a three-level system, pumping is used to populate the excited states in level two. From level two, an efficient pathway for non-radiative relaxation populates the excited state in level three, which is sufficiently stable to allow for a population inversion. In a four-level system, the population inversion is achieved between level three and level four.

Types of Lasers

Lasers are categorized by the nature of the lasing medium: solid-state crystals, gases, dyes, and semiconductors. Solid-state lasers use a crystalline material, such as aluminum oxide, that contains trace amounts of an element, such as chromium or neodymium, which serves as the actual lasing medium. Gas lasers use gas phase atoms, ions, or molecules as a lasing medium. The lasing medium in a dye laser is a solution of an organic dye molecule. A dye laser typically is capable of emitting light over a broad range of wavelengths, but is tunable to a specific wavelength within that range. Finally, a semiconductor laser uses modified light-emitting diodes as a lasing medium.

Table \(2\). Examples of Lasers.
• solid state: ruby (0.05% Cr(III) in Al2O3), 694.3 nm; Nd:YAG (neodymium ion in yttrium aluminum garnet), 1064 nm and 532 nm
• gas: He/Ne, 632.8 nm; Ar+, 514.5 nm and 488 nm; N2, 337.1 nm; CO2, 10.6 µm
• dye: rhodamine, 540–680 nm; fluorescein, 530–560 nm; coumarin, 490–620 nm
• semiconductor: indium gallium nitride, 405 nm; aluminum gallium indium phosphide, 635 nm
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.02%3A_Sources_of_Radiation.txt
In Nessler’s original colorimetric method for ammonia, which was described at the beginning of the chapter, the sample and several standard solutions of ammonia are placed in separate tall, flat-bottomed tubes. As shown in Figure $1$, after adding the reagents and allowing the color to develop, the analyst evaluates the color by passing ambient light through the bottom of the tubes and looking down through the solutions. By matching the sample’s color to that of a standard, the analyst is able to determine the concentration of ammonia in the sample. In Figure $1$ every wavelength of light from the source passes through the sample. This is not a problem if there is only one absorbing species in the sample. If the sample contains two components, then a quantitative analysis using Nessler’s original method is impossible unless the standards contain the second component at the same concentration as in the sample. To overcome this problem, we want to select a wavelength that only the analyte absorbs.

Unfortunately, we can not isolate a single wavelength of radiation from a continuum source, although we can narrow the range of wavelengths that reach the sample. As seen in Figure $2$, a wavelength selector always passes a narrow band of radiation characterized by a nominal wavelength, an effective bandwidth, and a maximum throughput of radiation. The effective bandwidth is defined as the width of the radiation at half of its maximum throughput. The ideal wavelength selector has a high throughput of radiation and a narrow effective bandwidth. A high throughput is desirable because the more photons that pass through the wavelength selector, the stronger the signal and the smaller the background noise. A narrow effective bandwidth provides a higher resolution, with spectral features separated by more than twice the effective bandwidth being resolved. As shown in Figure $3$, these two features of a wavelength selector often are in opposition. A larger effective bandwidth favors a higher throughput of radiation, but provides less resolution. Decreasing the effective bandwidth improves resolution, but at the cost of a noisier signal [Jiang, S.; Parker, G. A. Am. Lab. 1981, October, 38–43]. For a qualitative analysis, resolution usually is more important than noise and a smaller effective bandwidth is desirable; however, in a quantitative analysis less noise usually is desirable.

Filters

The simplest method for isolating a narrow band of radiation is to use an absorption or interference filter.

Absorption Filters

As their name suggests, absorption filters work by selectively absorbing radiation from a narrow region of the electromagnetic spectrum. A simple example of an absorption filter is a piece of colored glass or polymer film. A purple filter, for example, removes the complementary color green from 500–560 nm. Commercially available absorption filters provide effective bandwidths of 30–250 nm, although the throughput at the low end of this range often is only 10% of the source’s emission intensity. Interference filters are more expensive than absorption filters, but have narrower effective bandwidths, typically 10–20 nm, with maximum throughputs of at least 40%. An important limitation of an absorption filter, therefore, is that its low throughput may significantly reduce the amount of light from the source that reaches the sample and the detector. Figure $4$ shows an example of a filter holder with filters that pass bands of light centered at 440 nm, 490 nm, or 550 nm.
Interference Filters

An interference filter consists of a transparent dielectric material, such as CaF2, which is sandwiched between two glass plates, each coated with a thin, semitransparent metal film (Figure $5a$). When a continuous source of light passes through the interference filter it undergoes constructive and destructive interference that isolates and passes a narrow band of light centered at a wavelength that satisfies Equation \ref{lambda} $\lambda = \frac{2nb}{m} \label{lambda}$ where $n$ is the refractive index of the dielectric material, $b$ is the thickness of the dielectric material, and $m$ is the order of the interference (typically first-order). Figure $5b$ shows the result of passing the emission from a green LED—a continuous source that emits light from approximately 500 nm to 650 nm—through an interference filter that produces an effective bandwidth of a few nanometers. In this case, a 210 nm thick film with a refractive index of 1.35 passes light centered at a wavelength of $\lambda = \frac{2 \times 1.35 \times 210 \text{ nm}}{1} = 567 \text{ nm} \nonumber$

Monochromators

A filter has one significant limitation—because a filter has a fixed nominal wavelength, if we need to make measurements at two wavelengths, then we must use two filters. A monochromator is an alternative method for selecting a narrow band of radiation that also allows us to continuously adjust the band’s nominal wavelength. Monochromators are classified as either fixed-wavelength or scanning. In a fixed-wavelength monochromator we select the wavelength by manually rotating the grating. Normally a fixed-wavelength monochromator is used for a quantitative analysis where measurements are made at one or two wavelengths. A scanning monochromator includes a drive mechanism that continuously rotates the grating, which allows successive wavelengths of light to exit from the monochromator. A scanning monochromator is used to acquire a spectrum, and, when operated in a fixed-wavelength mode, for a quantitative analysis.

The construction of a typical monochromator is shown in Figure $6$. Radiation from the source enters the monochromator through an entrance slit. The radiation is collected by a collimating mirror or lens, which focuses a parallel beam of radiation onto a diffraction grating (left) or a prism (right) that disperses the radiation in space. A second mirror or lens focuses the radiation onto a planar surface that contains an exit slit. Radiation exits the monochromator and passes to the detector. As shown in Figure $6$, a monochromator converts a polychromatic source of radiation at the entrance slit to a monochromatic source of finite effective bandwidth at the exit slit. The choice of which wavelength exits the monochromator is determined by rotating the diffraction grating or prism. A narrower exit slit provides a smaller effective bandwidth and better resolution than does a wider exit slit, but at the cost of a smaller throughput of radiation.

Polychromatic means many colored. Polychromatic radiation contains many different wavelengths of light. Monochromatic means one color, or one wavelength. Although the light exiting a monochromator is not strictly of a single wavelength, its narrow effective bandwidth allows us to think of it as monochromatic.

Monochromators Based on Prisms

Although prism monochromators were once in common use, they have mostly been replaced by diffraction gratings. There are several reasons for this.
One reason is that diffraction gratings are much less expensive to manufacture. A second reason is that a diffraction grating provides a linear dispersion of wavelengths along the focal plane of the exit slit, which means the resolution between adjacent wavelengths is the same throughout the source's optical range. A prism, on the other hand, provides a greater resolution at shorter wavelengths than it does at longer wavelengths.

Monochromators Based on Diffraction Gratings

The inset in the diffraction grating monochromator in Figure $6$ shows the general saw-toothed pattern of a diffraction grating, which consists of a series of grooves with broad surfaces exposed to light from the source. As shown in Figure $7$, parallel beams of source radiation (shown in blue) from the monochromator's collimating mirror strike the surface of the diffraction grating and are reflected back (shown in green) toward the monochromator's focusing mirror and the detector. The parallel beams from the source strike the diffraction grating at an incident angle $i$ relative to the grating normal, which is a line perpendicular to the diffraction grating's base. The parallel beams that bounce back toward the detector do so at a reflected angle $r$ to the grating normal. Constructive interference between the reflected beams occurs if their path lengths differ by an integer multiple of the incident beam's wavelength ($n \lambda$), where $n$ is the diffraction order. A close examination of Figure $7$ shows that the difference in the distance traveled by two parallel beams of light, identified as 1 and 2, that strike adjacent grooves on the diffraction grating is equal to the sum of the line segments $\overline{CB}$ and $\overline{BD}$, both shown in red; thus $n \lambda = \overline{CB} + \overline{BD}$ The incident angle, $i$, is equal to the angle CAB and the reflected angle, $r$, is equal to the angle DAB, which means we can write the following two equations $\overline{CB} = d \sin{i}$ $\overline{BD} = d \sin{r}$ where $d$ is the distance between the diffraction grating's grooves. Substituting back gives $n \lambda = d(\sin{i} + \sin{r}) \label{nlambda}$ which allows us to calculate the angle at which we can detect a wavelength of interest, $r$, given the angle of incidence from the source, $i$, and the number of grooves per mm (or the distance between grooves).

Example $1$

At what angle can we detect light of 650 nm using a diffraction grating with 1500 grooves per mm if the incident radiation is at an angle of $50^{\circ}$ to the grating normal? Assume that this is a first-order diffraction.

Solution

The distance between the grooves is $d = \frac{1 \text{ mm}}{1500 \text{ grooves}} \times \frac{10^6 \text{ nm}}{\text{mm}} = 666.7 \text{ nm} \nonumber$ To find the angle, we begin with $n \lambda = 1 \times 650 \text{ nm} = d(\sin{i} + \sin{r}) = 666.7 \text{ nm} \times (\sin{(50)} + \sin{r}) \nonumber$ $0.9750 = 0.7660 + \sin{r} \nonumber$ $0.2090 = \sin{r} \nonumber$ $r = 12.1^{\circ} \nonumber$

Performance Characteristics of a Monochromator

The quality of a monochromator depends on several key factors: the purity of the light that emerges from the exit slit, the power of the light that emerges from the exit slit, and the resolution between adjacent wavelengths.

Spectral Purity

The radiation that emerges from a monochromator is pure if it (a) arises from the source and if it (b) follows the optical path from the entrance slit to the exit slit.
Stray radiation that enters the monochromator from openings other than the entrance slit—perhaps through small imperfections in the joints—or that reaches the exit slit after scattering from imperfections in the optical components or dust, serves as a contaminant in that the power measured at the detector has a component at the monochromator's analytical wavelength and a component from the stray radiation that includes radiation at other wavelengths.

Power

The amount of radiant energy that exits the monochromator and reaches the detector in a unit time is power. The greater the power, the better the resulting signal-to-noise ratio. The more radiation that enters the monochromator and is gathered by the collimating mirror, the greater the amount of radiation that exits the monochromator and the greater the power at the detector. The ability of a monochromator to collect radiation is defined by its $f/number$. As shown in Figure $8$, the smaller the $f/number$, the greater the area and the greater the power. The light-gathering power increases as the inverse square of the $f/number$; thus, a monochromator rated as $f/2$ gathers $4 \times$ as much radiation as a monochromator rated as $f/4$.

Resolution

To separate two wavelengths of light and detect them separately, it is necessary to disperse them over a sufficient distance. The angular dispersion of a monochromator is defined as the change in the angle of reflection (see the angle $r$ in Figure $7$) for a change in wavelength, or $dr/d\lambda$. Taking the derivative of Equation \ref{nlambda} for a fixed angle of incidence (see the angle $i$ in Figure $7$) gives the angular dispersion as $\frac{dr}{d \lambda} = \frac{n}{d \cos{r}} \label{angdisp}$ where $n$ is the diffraction order. The linear dispersion of radiation, $D$, gives the change in wavelength as a function of $y$, the distance along the focal plane of the monochromator's exit slit; this is related to the angular dispersion by $D = \frac{dy}{d \lambda} = \frac{F dr}{d \lambda} \label{lineardisp}$ where $F$ is the focal length. Because we are interested in wavelength, it is convenient to take the inverse of Equation \ref{lineardisp} $D^{-1} = \frac{d \lambda}{dy} = \frac{1}{F} \times \frac{d \lambda}{dr} \label{invlineardisp}$ where $D^{-1}$ is the reciprocal linear dispersion. Substituting Equation \ref{angdisp} into Equation \ref{invlineardisp} gives $D^{-1} = \frac{d \lambda}{dy} = \frac{d \cos{r}}{nF}$ which simplifies to $D^{-1} = \frac{d}{nF}$ for angles $r < 20^{\circ}$ where $\cos{r} \approx 1$. Because the linear dispersion of radiation along the monochromator's exit slit is independent of wavelength, the ability to resolve two wavelengths is the same across the spectrum of wavelengths. Another way to report a monochromator's ability to distinguish between two closely spaced wavelengths is its resolving power, $R$, which is defined as $R = \frac{\lambda}{\Delta \lambda} = n N \nonumber$ where $\lambda$ is the average of the two wavelengths, $\Delta \lambda$ is the difference in their values, and $N$ is the number of grooves on the diffraction grating that are exposed to the radiation from the collimating mirror. The greater the number of grooves, the greater the resolving power.

Monochromator Slits

A monochromator has two sets of slits: an entrance slit that brings radiation from the source into the monochromator and an exit slit that passes the radiation from the monochromator to the detector.
Each slit consists of two metal plates with sharp, beveled edges separated by a narrow gap that forms a rectangular window and which is aligned with the focal plane of the collimating mirror. Figure $9$ shows a set of four slits from a monochromator taken from an atomic absorption spectrophotometer. From bottom-to-top, the slits have widths, $w$, of 2.0 mm, 1.0 mm, 0.5 mm, and 0.2 mm.

Effect of Slits on Monochromatic Radiation

Suppose we have a source of monochromatic radiation with a wavelength of 400.0 nm and that we pass this beam of radiation through a monochromator that has entrance and exit slits with a width, $w$, of 1.0 mm and a reciprocal linear dispersion of 1.2 nm/mm. The product of these two variables is called the monochromator's effective bandwidth, $\Delta \lambda_\text{eff}$, and is given as $\Delta \lambda_\text{eff} = w D^{-1} = 1.0 \text{ mm} \times 1.2 \text{ nm/mm} = 1.2 \text{ nm}$ The width of the beam in units of wavelength, therefore, is 1.2 nm. In this case, as shown in Figure $10$, if we scan the monochromator, our beam of monochromatic radiation will first enter the exit slit at a wavelength setting of 398.8 nm and will fully exit the slit at a wavelength setting of 401.2 nm. In between these limits a portion of the beam is blocked and only a portion of the beam passes through the exit slit and reaches the detector. For example, when the monochromator is set to 399.4 nm or 400.6 nm, half of the beam reaches the detector with a power of $0.5\times P$. If we monitor the power at the detector as a function of wavelength, we obtain the profile shown at the bottom of Figure $10$. The monochromator's bandwidth encompasses the range of wavelengths over which some portion of the beam of radiation passes through the exit slit.

Effect of Slit Width on Resolution

Suppose we have a source of radiation that consists of precisely three wavelengths—399.4 nm, 400.0 nm, and 400.6 nm—and we pass them through a monochromator with an effective bandwidth of 1.2 nm. Using the analysis from the previous section, the radiation with a wavelength of 399.4 nm passes through the monochromator's exit slit for any wavelength setting between 398.8 and 400.0 nm, which means it overlaps with the radiation with a wavelength of 400.0 nm. The same is true for the radiation with a wavelength of 400.6 nm, which also overlaps with the radiation with a wavelength of 400.0 nm. As shown in Figure $11a$, we cannot resolve the three monochromatic sources of radiation, which appear as a single broad band of radiation. Decreasing the effective bandwidth to one-half of the difference in the wavelengths of the adjacent sources of radiation produces, as shown in Figure $11b$, baseline resolution of the individual sources of radiation. To resolve the sources of radiation with wavelengths of 399.4 nm and 400.0 nm using a monochromator with a reciprocal linear dispersion of 1.2 nm/mm requires an effective bandwidth of $\Delta \lambda_\text{eff} = 0.5 \times (400.0 \text{ nm} - 399.4 \text{ nm}) = 0.3 \text{ nm} \nonumber$ and a slit width of $w = \frac{\Delta \lambda_\text{eff}}{D^{-1}} = \frac{0.3 \text{ nm}}{1.2 \text{ nm/mm}} = 0.25 \text{ mm} \nonumber$

Choosing a Slit Width

The choice of slit width always involves a trade-off between increasing the radiant power that reaches the detector by using a wide slit width, which improves the signal-to-noise ratio, and improving the resolution between closely spaced peaks, which requires a narrow slit width. Figure $3$ illustrates this trade-off.
Ultimately, the needs of the analyst will dictate the choice of slit width.
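The monochromator relationships developed in this section are easy to collect into a short calculation. The Python sketch below reproduces the worked examples above: the grating equation $n \lambda = d(\sin i + \sin r)$ for Example $1$, and the effective bandwidth $\Delta \lambda_\text{eff} = w D^{-1}$ for the slit-width calculations.

```python
# A minimal sketch of the grating equation and the effective bandwidth, using
# the values from the worked examples in this section.
import math

def reflection_angle_deg(wavelength_nm, grooves_per_mm, incident_deg, order=1):
    """Angle r at which a wavelength leaves the grating, from n*lambda = d*(sin i + sin r)."""
    d = 1e6 / grooves_per_mm                       # groove spacing in nm
    sin_r = order * wavelength_nm / d - math.sin(math.radians(incident_deg))
    return math.degrees(math.asin(sin_r))

def effective_bandwidth_nm(slit_width_mm, reciprocal_dispersion_nm_per_mm):
    """Effective bandwidth as the product of slit width and reciprocal linear dispersion."""
    return slit_width_mm * reciprocal_dispersion_nm_per_mm

print(reflection_angle_deg(650, 1500, 50))   # about 12.1 degrees (Example 1)
print(effective_bandwidth_nm(1.0, 1.2))      # 1.2 nm for a 1.0 mm slit
print(effective_bandwidth_nm(0.25, 1.2))     # 0.3 nm, enough to resolve lines 0.6 nm apart
```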
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.03%3A_Wavelength_Selectors.txt
The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed with UV/Vis transparent materials, such as quartz, glass, and plastic (Figure \(1\)). A quartz or fused-silica cell is required when working at a wavelength <300 nm where other materials show a significant absorption. The most common pathlength is 1 cm (10 mm), although cells with shorter (as little as 0.1 cm) and longer pathlengths (up to 10 cm) are available. Longer pathlength cells are useful when analyzing a very dilute solution or for gas samples. The highest quality cells allow the radiation to strike a flat surface at a 90° angle, minimizing the loss of radiation to reflection. A test tube often is used as a sample cell with simple, single-beam instruments, although differences in the cell’s pathlength and optical properties add an additional source of error to the analysis.

Infrared spectroscopy routinely is used to analyze gas, liquid, and solid samples. Sample cells are made from materials, such as NaCl and KBr, that are transparent to infrared radiation. Gases are analyzed using a cell with a pathlength of approximately 10 cm. Longer pathlengths are obtained by using mirrors to pass the beam of radiation through the sample several times. A liquid sample may be analyzed using a variety of different sample cells (Figure \(2\)). For non-volatile liquids a suitable sample is prepared by placing a drop of the liquid between two NaCl plates, forming a thin film that typically is less than 0.01 mm thick. Volatile liquids are placed in a sealed cell to prevent their evaporation.

7.05: Radiation Transducers

Introduction

In Nessler’s original method for determining ammonia (see Section 7.4) the analyst’s eye serves as the detector, matching the sample’s color to that of a standard. The human eye, of course, has a poor range—it responds only to visible light—and it is not particularly sensitive or accurate. Modern detectors use a sensitive transducer to convert a signal consisting of photons into an easily measured electrical signal. Ideally the detector’s signal, S, is a linear function of the electromagnetic radiation’s power, P, \[S = kP + D\] where k is the detector’s sensitivity, and D is the detector’s dark current, or the background current when we prevent the source’s radiation from reaching the detector. There are two broad classes of spectroscopic transducers: photon transducers and thermal transducers, although we will subdivide the photon transducers given their rich variety. Table \(1\) provides several representative examples of each class of transducers.

Transducer is a general term that refers to any device that converts a chemical or a physical property into an easily measured electrical signal. The retina in your eye, for example, is a transducer that converts photons into an electrical nerve impulse; your eardrum is a transducer that converts sound waves into a different electrical nerve impulse.

Table \(1\).
Examples of Transducers for Spectroscopy
• photovoltaic cell: photon transducer; 350–750 nm; output is a current
• phototube: photon transducer; 200–1000 nm; output is a current
• photomultiplier: photon transducer; 110–1000 nm; output is a current
• Si photodiode: photon transducer; 250–1100 nm; output is a current
• photoconductor: photon transducer; 750–6000 nm; output is a change in resistance
• photovoltaic cell: photon transducer; 400–5000 nm; output is a current or voltage
• thermocouple: thermal transducer; 0.8–40 µm; output is a voltage
• thermistor: thermal transducer; 0.8–40 µm; output is a change in resistance
• pneumatic: thermal transducer; 0.8–1000 µm; output is a membrane displacement
• pyroelectric: thermal transducer; 0.3–1000 µm; output is a current

Photon Transducers

A photon transducer takes a photon and converts it into an electrical signal, such as a current, a change in resistance, or a voltage. Many such detectors use a semiconductor as the photosensitive surface. When the semiconductor absorbs photons, valence electrons move to the semiconductor’s conduction band, producing a measurable current.

Photovoltaic Cells

A photovoltaic cell (Figure \(1\)) consists of a thin film of a semiconducting material, such as selenium, sandwiched between two electrodes: a base electrode of iron or copper and a thin semi-transparent layer of silver or gold that serves as the collector electrode. When a photon of visible light falls on the photovoltaic cell it generates an electron and a hole with a positive charge within the semiconductor. Movement of the electrons from the collector electrode to the base electrode generates a current that is proportional to the power of the incoming radiation and that serves as the signal.

Phototubes and Photomultipliers

Phototubes and photomultipliers use a photosensitive surface that absorbs radiation in the ultraviolet, visible, or near IR to produce an electrical current that is proportional to the number of photons that reach the transducer (see Figure \(2\)). The current results from applying a negative potential to the photoemissive surface and a positive potential to a wire that serves as the anode. In a photomultiplier tube, a series of positively charged dynodes serves to amplify the current, producing $10^6$–$10^7$ electrons per photon.

Silicon Photodiodes

Applying a reverse biased voltage to the pn junction of a silicon semiconductor creates a depletion zone in which conductance is close to zero (see Chapter 2 for an earlier discussion of semiconductors). When a photon of light of sufficient energy impinges on the depletion zone, an electron-hole pair is formed. Movement of the electron through the n–region and of the hole through the p–region generates a current that is proportional to the number of photons reaching the detector. A silicon photodiode has a wide spectral range from approximately 190 nm to 1100 nm, which makes it versatile; however, a photodiode is less sensitive than a photomultiplier.

Multichannel Photon Transducers

The photon transducers discussed above detect light at a single wavelength passed by the monochromator to the detector. If we wish to record a complete spectrum then we must continually adjust the monochromator either manually or by using a servo motor. In a multichannel instrument we create a one-dimensional or two-dimensional array of detectors that allow us to monitor simultaneously radiation spanning a broad range of wavelengths.

Photodiode Arrays

An individual silicon photodiode is quite small, typically with a width of approximately 0.025 mm. As a result, a linear (one-dimensional) array that consists of 1024 individual photodiodes has a width of just 25.6 mm. Figure \(4\), for example, shows the UV detector from an HPLC.
Light from the deuterium lamp passes through a flow cell, is dispersed by a diffraction grating, and then focused onto a linear array of photodiodes. The close-up on the right shows the active portion of the photodiode array covered by an optical window. The active width of this photodiode array is approximately 6 mm and includes more than 200 individual photodiodes, sufficient to provide 1 nm resolution from 180 nm to 400 nm.

Charge-Transfer Devices

One way to increase the sensitivity of a detector is to collect and store charges before counting them. This is the approach taken with two types of charge-transfer devices: charge-coupled detectors and charge-injection detectors. Individual detectors, or pixels, consist of a layer of silicon dioxide coated on top of a semiconductor. When a photon impinges on the detector it creates an electron-hole pair. An electrode on top of the silicon dioxide layer collects and stores either the negatively charged electrons or the positively charged holes. After a sufficient time, during which 10,000–100,000 charges are collected, the total accumulated charge is measured. Because individual pixels are small, typically 10 µm, they can be arranged in either a linear, one-dimensional array or a two-dimensional array. A charge-transfer device with 1024 x 1024 pixels will be approximately 10 mm x 10 mm in size.

Note

There are two important charge-transfer devices used as detectors: a charge-coupled device (CCD), which is discussed below, and a charge-injection device (CID), which is discussed in Chapter 10. Both types of devices use a two-dimensional array of individual detectors that store charge. The two devices differ primarily in how the accumulated charges are read.

Figure \(5\) shows a cross-section of a single detector (pixel) in a charge-coupled device (CCD) where individual pixels are arranged in a two-dimensional array. Electron-hole pairs are created in a layer of p-doped silicon. The holes migrate to the n-doped silicon layer and the electrons are drawn to the area below a positively charged electrode. When it is time to record the accumulated charges, the charge is read in the upper-right corner of the array, with charges in the same row measured by shifting them from left-to-right. When the first row is read, the charges in the remaining rows are shifted up and recorded. In a charge-injection device, the roles of the electrons and holes are reversed and the accumulated positive charges are recorded. Figure \(6\) shows an example of a spectrophotometer equipped with a linear CCD detector that includes 2048 individual elements with a wavelength range from 200 nm to 1100 nm. The spectrometer is housed in a compact space of 90 mm x 60 mm.

Thermal Transducers

Infrared photons do not have enough energy to produce a measurable current with a photon transducer. A thermal transducer, therefore, is used for infrared spectroscopy. The absorption of infrared photons increases a thermal transducer’s temperature, changing one or more of its characteristic properties. A pneumatic transducer, for example, is a small tube of xenon gas with an IR transparent window at one end and a flexible membrane at the other end. Photons enter the tube and are absorbed by a blackened surface, increasing the temperature of the gas. As the temperature inside the tube fluctuates, the gas expands and contracts and the flexible membrane moves in and out. Monitoring the membrane’s displacement produces an electrical signal.
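Returning to the ideal detector response $S = kP + D$ introduced at the start of this section, the following Python sketch shows how a measured dark current is used to recover the radiant power from a detector's signal; the sensitivity and dark current used here are arbitrary values chosen only for illustration.

```python
# A minimal sketch of the detector response S = k*P + D and the dark-current
# correction; k and D are arbitrary illustrative values.
k = 2.5e3   # detector sensitivity (signal units per unit of radiant power)
D = 0.8     # dark current (signal units)

def signal(P):
    """Detector signal for a radiant power P."""
    return k * P + D

def power_from_signal(S):
    """Correct the measured signal for the dark current and convert it to power."""
    return (S - D) / k

S_measured = signal(4.0e-3)           # signal produced by a radiant power of 4.0e-3
print(power_from_signal(S_measured))  # recovers 4.0e-3
```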
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.04%3A_Sample_Containers.txt
If we need to monitor an analyte’s concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient’s blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ. An example of a remote sensing fiber-optic probe is shown in Figure \(1\). The probe consists of two bundles of fiber-optic cable. One bundle transmits radiation from the source to the probe’s tip, which is designed to allow the sample to flow through the sample cell. Radiation from the source passes through the solution and is reflected back by a mirror. The second bundle of fiber-optic cable transmits the nonabsorbed radiation to the wavelength selector. Another design replaces the flow cell shown in Figure \(1\) with a membrane that contains a reagent that reacts with the analyte. When the analyte diffuses into the membrane it reacts with the reagent, producing a product that absorbs UV or visible radiation. The nonabsorbed radiation from the source is reflected or scattered back to the detector. Fiber optic probes that show chemical selectivity are called optrodes.

7.07: Types of Optical Instruments

Thus far, the optical benches described in this chapter either use a single detector and a monochromator to pass a single wavelength of light to the detector, or use a multichannel array of detectors and a diffraction grating to disperse the light across the detectors. Both of these approaches have advantages and limitations. For the first of these designs, we can improve resolution by using a smaller slit width, although this comes with a decrease in the throughput of light that reaches the detector, which increases noise. Recording a complete spectrum requires scanning the monochromator; a slow scan rate can improve resolution by reducing the range of wavelengths reaching the detector per unit time, but at the expense of a longer analysis time, which is a problem if the composition of our samples changes with time. For the second of these designs, resolution is limited by the size of the array; for example, a photodiode array with 512 individual elements that covers a spectral range of 190 nm to 800 nm has a digital resolution of $\frac{800 - 190}{512} = 1.2 \text{ nm/diode} \nonumber$ although the optical resolution—defined by the actual number of individual diodes over which a wavelength of light is dispersed—is greater and may vary with wavelength. Because a photodiode array allows for the simultaneous detection of radiation by each diode in the array, data acquisition is fast and a complete spectrum is acquired in approximately one second.

Interferometers

We can overcome the limitations described above if we can find a way to avoid dispersing the source radiation in time by scanning the monochromator, or dispersing the source radiation in space across an array of sensors. An interferometer, Figure $1$, provides one way to accomplish this. Radiation from the source is collected by a collimating mirror and passed to a beam splitter where half of the radiation is directed toward a mirror set at a fixed distance from the beam splitter, and the other half of the radiation is passed through to a mirror that moves back and forth. The radiation from the two mirrors is recombined at the beam splitter and half of it is passed along to the detector.
Time Domain and Frequency Domain

When the radiation recombines at the beam splitter, constructive and destructive interference determines, for each wavelength, the intensity of light that reaches the detector. As the moving mirror changes position, the wavelength of light that experiences maximum constructive interference and maximum destructive interference also changes. The signal at the detector shows intensity as a function of the moving mirror’s position, expressed in units of distance or time. The result is called an interferogram or a time domain spectrum. The time domain spectrum is converted mathematically, by a process called a Fourier transform, to a spectrum (a frequency domain) that shows intensity as a function of the radiation’s frequency.

Figure $2$ shows the relationship between the time domain spectrum and the frequency domain spectrum. The spectra in the first row show the relationship between (a) the time domain spectrum and (b) the corresponding frequency domain spectrum for a monochromatic source of radiation with a frequency, $\nu_1$, of 1 and an amplitude, $A_1$, of 1.0. In the time domain we see a simple cosine function with the general form $S = A_1 \times \cos{(2 \pi \nu_1 t)} \label{signal1}$ where $S$ is the signal and $t$ is the time. The spectra in the second row show the same information for a second monochromatic source of radiation with a frequency, $\nu_2$, of 1.2 and an amplitude, $A_2$, of 1.5, which is given by the equation $S = A_2 \times \cos{(2 \pi \nu_2 t)} \label{signal2}$ If we have a source that emits just these two frequencies of light, then the corresponding time domain and frequency domain spectra are shown in the last row, where $S = A_1 \times \cos{(2 \pi \nu_1 t)} + A_2 \times \cos{(2 \pi \nu_2 t)} \label{signal3}$ Although the time domain spectrum in panel (e) is more complex than those in panels (a) and (c), there is a clear repeating pattern, one cycle of which is shown by the arrow. Note that for each of these three examples, the time domain spectrum and the frequency domain spectrum encode the same information about the source radiation.

The two monochromatic signals in Figure $2$ are line spectra with line widths that are essentially zero. But what if our signal has a measurable linewidth? We might consider such a signal to be the sum of a series of cosine functions, each with an amplitude and a frequency. Figure $3a$ shows a frequency domain that contains a single peak with a finite width and Figure $3b$ shows the corresponding time domain spectrum, which consists of an oscillating signal with an amplitude that decays over time. In general, Figure $2$ and Figure $3$ show that
• the further a peak in the frequency domain is from the origin, the greater its corresponding oscillation frequency in the time domain
• the broader a peak's width in the frequency domain, the faster its decay rate in the time domain
• the greater the area under a peak in the frequency domain, the higher its initial intensity in the time domain

The mathematical process of converting between the time domain and the frequency domain is called a Fourier transform. The details of the mathematics are sufficiently complex that calculations by hand are impractical.

Advantages of Fourier Transform Spectrometry

In comparison to a monochromator, an interferometer has several significant advantages. The first advantage, which is termed Jacquinot’s advantage, is the greater throughput of source radiation.
Because an interferometer does not use slits and has fewer optical components from which radiation is scattered and lost, the throughput of radiation reaching the detector is $80-200 \times$ greater than that for a monochromator. The result is less noise. A second advantage, which is called Fellgett’s advantage, is a savings in the time needed to obtain a spectrum. Because the detector monitors all frequencies simultaneously, a spectrum takes approximately one second to record, as compared to 10–15 minutes when using a scanning monochromator. A third advantage is that increased resolution is achieved by increasing the distance traveled by the moving mirror, which we can accomplish without decreasing a scanning monochromator's slit width or increasing the size of an array detector.
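To make the relationship between the time domain and the frequency domain concrete, the short sketch below (not part of the original text) sums the two cosine components defined by Equations \ref{signal1} and \ref{signal2} and recovers their frequencies and amplitudes with a discrete Fourier transform. The use of NumPy and the choice of sampling parameters are illustrative assumptions, not a description of how an FT instrument processes its interferogram.

```python
# A minimal numerical sketch of the time-domain / frequency-domain relationship
# illustrated in Figure 2, assuming two cosine components (nu = 1.0 and 1.2).
import numpy as np

t = np.linspace(0, 50, 4096, endpoint=False)   # time axis (arbitrary units)
A1, nu1 = 1.0, 1.0                              # amplitude and frequency of source 1
A2, nu2 = 1.5, 1.2                              # amplitude and frequency of source 2

# time domain signal: the sum of the two cosines (Equation \ref{signal3})
S = A1 * np.cos(2 * np.pi * nu1 * t) + A2 * np.cos(2 * np.pi * nu2 * t)

# Fourier transform to the frequency domain
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
amplitude = np.abs(np.fft.rfft(S)) * 2 / t.size

# the two largest peaks recover the original frequencies and amplitudes
for i in sorted(np.argsort(amplitude)[-2:]):
    print(f"peak at frequency {freq[i]:.2f} with amplitude {amplitude[i]:.2f}")
```

Running the sketch prints peaks at frequencies of 1.0 and 1.2 with amplitudes of 1.0 and 1.5, the same information displayed in the frequency domain spectrum of the last row of Figure $2$.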
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.06%3A_Fiber_Optics.txt
• 8.1: Optical Atomic Spectra The energy of ultraviolet and visible electromagnetic radiation is sufficient to cause a change in an atom’s valence electron configuration, resulting in atomic absorption, atomic emission, and atomic fluorescence spectra. Although these spectra have narrow lines, there are a number of factors that affect line widths.
• 8.2: Atomization Methods Atomic methods require that the sample consist of individual gas phase atoms or gas phase atomic ions. With rare exceptions, this is not the form in which we obtain samples. Examples of atomization methods include the use of flames, resistive heating, plasmas, and electric arcs and sparks.
• 8.3: Sample Introduction Methods In addition to a method of atomization, atomic methods require a means of placing the sample within the device used for atomization. The analysis of seawater for sodium ions requires a means for working with a sample that is in solution. Examples of different methods of sample introduction include aspirating a solution directly into a flame, injecting a small aliquot of solution onto a resistive heating mechanism, or exposing a solid sample to a laser or electric spark. More details on specific methods for introducing samples appear in the chapters that follow.
08: An Introduction to Optical Atomic Spectroscopy Energy Level Diagrams The energy of ultraviolet and visible electromagnetic radiation is sufficient to cause a change in an atom’s valence electron configuration. Sodium, for example, has a single valence electron in its 3s atomic orbital. As shown in Figure $1$, unoccupied, higher energy atomic orbitals also exist. The valence shell energy level diagram in Figure $1$ might strike you as odd because it shows the 3p orbitals split into two groups of slightly different energy (the two lines differ by just 0.6 nm). This splitting is a consequence of the interaction between an electron's orbital angular momentum and its spin. When the two are aligned in opposite directions, the energy is slightly lower than when they are aligned in the same direction. The effect is largest for p orbitals and sufficiently smaller for d and f orbitals that we do not bother to show the difference in their energies in this diagram. Absorption of a photon is accompanied by the excitation of an electron from a lower-energy atomic orbital to an atomic orbital of higher energy. Not all possible transitions between atomic orbitals are allowed. For sodium the only allowed transitions are those in which there is a change of ±1 in the orbital quantum number (l); thus transitions from $s \rightarrow p$ orbitals are allowed, but transitions from $s \rightarrow s$ and from $s \rightarrow d$ orbitals are forbidden. Atomic Absorption Spectra The atomic absorption spectrum for Na is shown in Figure $2$, and is typical of that found for most atoms. The most obvious feature of this spectrum is that it consists of a small number of discrete absorption lines that correspond to transitions between the ground state (the 3s atomic orbital) and the 3p and the 4p atomic orbitals. Absorption from excited states, such as the $3p \rightarrow 4s$ and the $3p \rightarrow 3d$ transitions included in Figure $1$, is too weak to detect. Because an excited state’s lifetime is short—an excited state atom typically returns to a lower energy state in $10^{-7}$ to $10^{-8}$ seconds—an atom in the excited state is likely to return to the ground state before it has an opportunity to absorb a photon.
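The selection rule described above, which allows a transition only when the orbital quantum number changes by plus or minus one, lends itself to a brief sketch. The snippet below is purely illustrative (the function name and the list of transitions are our own choices) and simply reproduces the allowed and forbidden assignments given in the text.

```python
# A small sketch of the Delta(l) = +/- 1 selection rule for electric-dipole
# transitions in an alkali metal atom such as sodium.
l_value = {"s": 0, "p": 1, "d": 2, "f": 3}

def allowed(lower: str, upper: str) -> bool:
    """Return True if the orbital quantum number changes by +/- 1."""
    return abs(l_value[lower[-1]] - l_value[upper[-1]]) == 1

for transition in [("3s", "3p"), ("3s", "4p"), ("3s", "4s"),
                   ("3s", "3d"), ("3p", "4s"), ("3p", "3d")]:
    print(transition, "allowed" if allowed(*transition) else "forbidden")
```

As in the text, the 3s to 3p and 3s to 4p transitions are allowed, the 3s to 4s and 3s to 3d transitions are forbidden, and the 3p to 4s and 3p to 3d transitions are allowed.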
Atomic Emission Spectra Atomic emission occurs when electrons in higher energy orbitals return to a lower energy state, releasing the excess energy as a photon. The ground state electron configuration for Na of $1s^2 2s^2 2p^6 3s^1$ places a single electron in the $3s$ valence shell. Introducing a solution of NaCl to a flame results in the formation of Na atoms (more on this in Chapter 9) and provides sufficient energy to promote the valence electron in the $3s$ orbital to higher energy excited states, such as the $3p$ orbitals identified in the energy level diagram for sodium in Figure $1$. When an electron returns to its ground state, the excess energy is released as a photon. As seen in Figure $3$, the emission spectrum for Na is dominated by the pair of lines with wavelengths of 589.0 and 589.6 nm. Atomic Fluorescence Spectra When an atom in an excited state emits a photon as a means of returning to a lower energy state, how we describe the process depends on the source of energy creating the excited state. When excitation is the result of thermal energy, as is the case for the spectrum in Figure $3$, we call the process atomic emission spectroscopy. When excitation is the result of the absorption of a photon, we call the process atomic fluorescence spectroscopy. The absorption spectrum for Na in Figure $2$ and its emission spectrum in Figure $3$ show that Na has both strong absorption and emission lines at 589.0 and 589.6 nm. If we use a source of light at 589.6 nm to move the 3s valence electron to a 3p excited state, we can then measure the emission of light at the same wavelength, making the measurement at 90° to avoid an interference from the original light source. Fluorescence also may occur when an electron in an excited state first loses energy by a process other than the emission of a photon—we call this a radiationless transition—reaching a lower energy excited state from which it then emits a photon. For example, a ground state Na atom may first absorb a photon with a wavelength of 330.2 nm (a $3s \rightarrow 4p$ transition); the atom then loses energy through a radiationless transition to the 3p orbital, from which it emits a photon to reach the 3s orbital. Atomic Line Widths Another feature of the atomic absorption spectrum in Figure $2$ and the atomic emission spectrum in Figure $3$ is the narrow width of the absorption and emission lines, which is a consequence of the fixed difference in energy between the ground state and the excited state, and the lack of vibrational and rotational energy levels. The width of an atomic absorption or emission line arises from several factors that we consider here. Broadening Due to the Uncertainty Principle From the uncertainty principle, the product of the uncertainty in the frequency of light and the uncertainty in time must be greater than 1. $\Delta \nu \times \Delta t > 1 \nonumber$ To determine the frequency with infinite precision, $\Delta \nu = 0$, requires that the lifetime of an electron in a particular orbital be infinitely large. While this may be essentially true for an electron in the ground state, it is not true for an electron in an excited state, where the average lifetime—how long it takes before the electron returns to the ground state—may be on the order of $10^{-7} \text{ to }10^{-8} \text{ s}$.
For example, if $\Delta t = 5 \times 10^{-8} \text{ s}$ for the emission of a photon with a wavelength of 500.0 nm, then $\Delta \nu = 2 \times 10^7 \text{ s}^{-1} \nonumber$ To convert this to an uncertainty in wavelength, $\Delta \lambda$, we begin with the relationship $\nu = \frac{c}{\lambda} \nonumber$ and take the derivative of $\nu$ with respect to wavelength $d \nu = - \frac{c}{\lambda^2} d \lambda \nonumber$ Rearranging to solve for the uncertainty in wavelength, and letting $\Delta \nu$ and $\Delta \lambda$ serve as estimates for $d \nu$ and $d \lambda$, leaves us with $\left| \Delta \lambda \right| = \frac{\Delta \nu \times \lambda^2}{c} = \frac{(2 \times 10^7 \text{ s}^{-1}) \times (500.0 \times 10^{-9} \text{ m})^2}{2.998 \times 10^8 \text{ m/s}} = 1.7 \times 10^{-14} \text{ m} \nonumber$ or $1.7 \times 10^{-5} \text{ nm}$. Natural line widths for atomic spectra are approximately $10^{-5}$ nm. Doppler Broadening and Pressure Broadening When an atom emits a photon, the frequency (and, thus, the wavelength) of the photon depends on whether the emitting atom is moving toward the detector or moving away from the detector. When the atom is moving toward the detector, as in Figure $4a$, its emitted light reaches the detector at a greater frequency—a shorter wavelength—than when the light source is stationary, as in Figure $4b$. An atom moving away from the detector, as in Figure $4c$, emits light that reaches the detector with a smaller frequency and a longer wavelength. Because the atoms in a sample move in all directions with a distribution of speeds, the emitted or absorbed wavelengths are spread over a range of values; this effect is called Doppler broadening. Atoms are in constant motion, which means that they also experience constant collisions, each of which results in a small change in the energy of an electron in the ground state or in an excited state, and a corresponding change in the wavelength emitted or absorbed. This effect is called pressure (or collisional) broadening. As is the case for Doppler broadening, pressure broadening increases with temperature. Together, Doppler broadening and pressure broadening result in an approximately 100-fold increase in line width, with line widths on the order of approximately $10^{-3}$ nm. Effect of Temperature on Atomic Spectra As noted in the previous section, temperature contributes to the broadening of atomic absorption and atomic emission lines. Temperature also has an effect on the intensity of emission lines as it determines the relative population of an atom's various excited states. The Boltzmann distribution $\frac{N_i}{N_0} = \frac{P_i}{P_0} e^{-E_i/kT} \nonumber$ gives the number of atoms in a specific excited state, $N_i$, relative to the number of atoms in the ground state, $N_0$, as a function of the difference in their energies, $E_i$, Boltzmann's constant, $k$, and the temperature in Kelvin, $T$, where $P_i$ and $P_0$ are statistical factors that account for the number of equivalent energy states for the excited state and the ground state. Figure $5$ shows how temperature affects the atomic emission spectrum for sodium's two intense emission lines at 589.0 and 589.6 nm for temperatures from 2500 K to 7500 K. Note that the emission at 2500 K is too small to appear using a y-axis scale of absolute intensities. A change in temperature from 5500 K to 4500 K reduces the emission intensity by 62%. As you might guess from this, a small change in temperature—perhaps as little as 10 K—can result in a measurable decrease in emission intensity of a few percent. An increase in temperature may also change the relative emission intensity of different lines. Figure $6$, for example, shows the atomic emission spectra for copper at 5000 K and 7000 K. At the higher temperature, the most intense emission line changes from 510.55 nm to 521.82 nm, and several additional peaks between 400 nm and 500 nm become more intense.
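The two numerical claims above, a natural line width of roughly $10^{-5}$ nm and a 62% drop in sodium emission between 5500 K and 4500 K, are easy to check with a few lines of Python. The sketch below is illustrative only; the constants are standard values, and the statistical factors in the Boltzmann distribution cancel when the populations at two temperatures are compared.

```python
# A minimal sketch that reproduces two numbers quoted above: the natural line
# width from the uncertainty principle and the temperature dependence of the
# excited-state population from the Boltzmann distribution.
import math

h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann's constant, J/K

# uncertainty-principle (natural) line width for a 5e-8 s lifetime at 500.0 nm
dt = 5e-8                           # lifetime of the excited state, s
wl = 500.0e-9                       # wavelength, m
dnu = 1 / dt                        # uncertainty in frequency, s^-1
dwl = dnu * wl**2 / c               # uncertainty in wavelength, m
print(f"natural line width: {dwl * 1e9:.2e} nm")   # about 1.7e-5 nm

# relative population of the Na 3p excited state (589.0 nm emission); the
# statistical factors cancel when we take a ratio at two temperatures
E = h * c / 589.0e-9                # energy of the excited state, J
ratio = lambda T: math.exp(-E / (k * T))
print(f"I(4500 K)/I(5500 K) = {ratio(4500) / ratio(5500):.2f}")   # about 0.37, a decrease of roughly 62%
print(f"I(2510 K)/I(2500 K) = {ratio(2510) / ratio(2500):.3f}")   # a few percent change per 10 K at flame temperatures
```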
Band and Continuum Spectra The atomic emission spectrum for sodium in Figure $3$ consists of discrete, narrow lines because they arise from transitions between the discrete, well-defined energy levels seen in Figure $1$. Atomic emission from a flame also includes contributions from two additional sources: emission from molecular species that form in the flame and emission from the flame itself. A sample of water, for example, is likely to contain a variety of ions, such as Ca2+, that form molecular species, such as CaOH, in the flame and that emit photons over a much broader range of wavelengths than do atoms. The flame, itself, emits photons throughout the range of wavelengths used in UV/Vis atomic emission. 8.02: Atomization Methods Atomic methods require that the sample consist of individual gas phase atoms or gas phase atomic ions. With rare exceptions, this is not the form in which we obtain samples. If we are interested in analyzing seawater for the concentration of sodium, we need to find a way to convert the solution of aqueous sodium ions, Na+(aq), into gas phase sodium atoms, Na(g), or gas phase sodium ions, Na+(g). The process by which this happens is called atomization and requires a source of thermal energy. Examples of atomization methods include the use of flames, resistive heating, plasmas, and electric arcs and sparks. More details on specific atomization methods appear in the chapters that follow. 8.03: Sample Introduction Methods In addition to a method of atomization, atomic spectroscopic methods require a means of placing the sample within the device used for atomization. The analysis of seawater for sodium ions requires a means for working with a sample that is in solution. The analysis of a salt-substitute for sodium, on the other hand, requires a means for working with solid samples, which could mean first bringing the solid into solution or working with it directly. How a sample is introduced also depends on the method of atomization. Examples of different methods of sample introduction include aspirating a solution directly into a flame, injecting a small aliquot of solution onto a resistive heating mechanism, or exposing a solid sample to a laser or electric spark. More details on specific methods for introducing samples appear in the chapters that follow.
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/08%3A_An_Introduction_to_Optical_Atomic_Spectroscopy/8.01%3A_Optical_Atomic_Spectra.txt
• 9.1: Sample Atomization Techniques The process of converting an analyte to a free gaseous atom is called atomization. Converting an aqueous analyte into a free atom requires that we strip away the solvent, volatilize the analyte, and, if necessary, dissociate the analyte into free atoms. There are two common atomization methods: flame atomization and electrothermal atomization, although a few elements are atomized using other methods. • 9.2: Atomic Absorption Instrumentation Atomic absorption spectrophotometers use the same single-beam or double-beam optics described earlier in Chapter 7, including a source of radiation, a method for introducing the sample (covered in the previous section), a means for isolating the wavelengths of interest, and a way to measure the amount of light absorbed or emitted. • 9.3: Interferences in Absorption Spectroscopy In describing the optical benches for atomic absorption spectroscopy, we noted the need to modulate the radiation from the source in order to discriminate against emission of radiation from the flame. In this section we consider additional sources of interference and discuss ways to compensate for them. • 9.4: Atomic Absorption Techniques This section provides some details about how samples are prepared for an atomic absorption analysis, details about calibration strategies, and a summary of the method's strengths and limitations. 09: Atomic Absorption and Atomic Fluorescence Spectrometry The process of converting an analyte to a free gaseous atom is called atomization. Converting an aqueous analyte into a free atom requires that we strip away the solvent, volatilize the analyte, and, if necessary, dissociate the analyte into free atoms. Desolvating an aqueous solution of CuCl2, for example, leaves us with solid particulates of CuCl2. Converting the particulate CuCl2 to gas phases atoms of Cu and Cl requires thermal energy. $\mathrm{CuCl}_{2}(a q) \rightarrow \mathrm{CuCl}_{2}(s) \rightarrow \mathrm{Cu}(g)+2 \mathrm{Cl}(g) \nonumber$ There are two common atomization methods: flame atomization and electrothermal atomization, although a few elements are atomized using other methods. Flame Atomization Figure $1$ shows a typical flame atomization assembly with close-up views of several key components. In the unit shown here, the aqueous sample is drawn into the assembly by passing a high-pressure stream of compressed air past the end of a capillary tube immersed in the sample. When the sample exits the nebulizer it strikes a glass impact bead, which converts it into a fine aerosol mist within the spray chamber. The aerosol mist is swept through the spray chamber by the combustion gases—compressed air and acetylene in this case—to the burner head where the flame’s thermal energy desolvates the aerosol mist to a dry aerosol of small, solid particulates. The flame’s thermal energy then volatilizes the particles, producing a vapor that consists of molecular species, ionic species, and free atoms. Burner. The slot burner in Figure $1a$ provides a long optical pathlength and a stable flame. Because absorbance is directly proportional to pathlength, a long pathlength provides greater sensitivity. A stable flame minimizes uncertainty due to fluctuations in the flame. The burner is mounted on an adjustable stage that allows the entire assembly to move horizontally and vertically. Horizontal adjustments ensure the flame is aligned with the instrument’s optical path. Vertical adjustments change the height within the flame from which absorbance is monitored. 
This is important because two competing processes affect the concentration of free atoms in the flame. The more time an analyte spends in the flame the greater the atomization efficiency; thus, the production of free atoms increases with height. On the other hand, a longer residence time allows more opportunity for the free atoms to combine with oxygen to form a molecular oxide. As seen in Figure $2$, for a metal that is easy to oxidize, such as Cr, the concentration of free atoms is greatest just above the burner head. For a metal, such as Ag, which is difficult to oxidize, the concentration of free atoms increases steadily with height. Flame. The flame’s temperature, which affects the efficiency of atomization, depends on the fuel–oxidant mixture, several examples of which are listed in Table $1$. Of these, the air–acetylene and the nitrous oxide–acetylene flames are the most popular. Normally the fuel and oxidant are mixed in an approximately stoichiometric ratio; however, a fuel-rich mixture may be necessary for easily oxidized analytes.
Table $1$. Fuels and Oxidants Used for Flame Combustion
fuel | oxidant | temperature range (°C)
natural gas | air | 1700–1900
hydrogen | air | 2000–2100
acetylene | air | 2100–2400
acetylene | nitrous oxide | 2600–2800
acetylene | oxygen | 3050–3150
Figure $3$ shows a cross-section through the flame, looking down the source radiation’s optical path. The primary combustion zone usually is rich in gas combustion products that emit radiation, limiting its usefulness for atomic absorption. The interzonal region generally is rich in free atoms and provides the best location for measuring atomic absorption. The hottest part of the flame typically is 2–3 cm above the primary combustion zone. As atoms approach the flame’s secondary combustion zone, the decrease in temperature allows for formation of stable molecular species. Sample Introduction. The most common means for introducing a sample into a flame atomizer is a continuous aspiration in which the sample flows through the burner while we monitor absorbance. Continuous aspiration is sample intensive, typically requiring from 2–5 mL of sample. Flame microsampling allows us to introduce a discrete sample of fixed volume, and is useful if we have a limited amount of sample or when the sample’s matrix is incompatible with the flame atomizer. For example, continuously aspirating a sample that has a high concentration of dissolved solids—sea water comes to mind—may build up a solid deposit on the burner head that obstructs the flame and that lowers the absorbance. Flame microsampling is accomplished using a micropipet to place 50–250 μL of sample in a Teflon funnel connected to the nebulizer, or by dipping the nebulizer tubing into the sample for a short time. Dip sampling usually is accomplished with an automatic sampler. The signal for flame microsampling is a transitory peak whose height or area is proportional to the amount of analyte that is injected. Advantages and Disadvantages of Flame Atomization. The principal advantage of flame atomization is the reproducibility with which the sample is introduced into the spectrophotometer; a significant disadvantage is that the efficiency of atomization is quite poor. There are two reasons for poor atomization efficiency. First, the majority of the aerosol droplets produced during nebulization are too large to be carried to the flame by the combustion gases.
Consequently, as much as 95% of the sample never reaches the flame, which is the reason for the waste line shown at the bottom of the spray chamber in Figure $1$. A second reason for poor atomization efficiency is that the large volume of combustion gases significantly dilutes the sample. Together, these contributions to the efficiency of atomization reduce sensitivity because the analyte’s concentration in the flame may be a factor of $2.5 \times 10^{-6}$ less than that in solution [Ingle, J. D.; Crouch, S. R. Spectrochemical Analysis, Prentice-Hall: Englewood Cliffs, NJ, 1988; p. 275]. Electrothermal Atomization A significant improvement in sensitivity is achieved by using the resistive heating of a graphite tube in place of a flame. A typical electrothermal atomizer, also known as a graphite furnace, consists of a cylindrical graphite tube approximately 1–3 cm in length and 3–8 mm in diameter. As shown in Figure $4$, the graphite tube is housed in a sealed assembly that has an optically transparent window at each end. A continuous stream of inert gas is passed through the furnace, which protects the graphite tube from oxidation and removes the gaseous products produced during atomization. A power supply is used to pass a current through the graphite tube, resulting in resistive heating. Samples of between 5–50 μL are injected into the graphite tube through a small hole at the top of the tube. Atomization is achieved in three stages. In the first stage the sample is dried to a solid residue using a current that raises the temperature of the graphite tube to about 110°C. In the second stage, which is called ashing, the temperature is increased to between 350–1200°C. At these temperatures organic material in the sample is converted to CO2 and H2O, and volatile inorganic materials are vaporized. These gases are removed by the inert gas flow. In the final stage the sample is atomized by rapidly increasing the temperature to between 2000–3000°C. The result is a transient absorbance peak whose height or area is proportional to the absolute amount of analyte injected into the graphite tube. Together, the three stages take approximately 45–90 s, with most of this time used for drying and ashing the sample. Electrothermal atomization provides a significant improvement in sensitivity by trapping the gaseous analyte in the small volume within the graphite tube. The analyte’s concentration in the resulting vapor phase is as much as $1000 \times$ greater than in flame atomization [Parsons, M. L.; Major, S.; Forster, A. R. Appl. Spectrosc. 1983, 37, 411–418]. This improvement in sensitivity—and the resulting improvement in detection limits—is offset by a significant decrease in precision. Atomization efficiency is influenced strongly by the sample’s contact with the graphite tube, which is difficult to control reproducibly. Specialized Atomization Techniques A few elements are atomized by using a chemical reaction to produce a volatile product. Elements such as As, Se, Sb, Bi, Ge, Sn, Te, and Pb, for example, form volatile hydrides when they react with NaBH4 in the presence of acid. An inert gas carries the volatile hydride to either a flame or to a heated quartz observation tube situated in the optical path. Mercury is determined by the cold-vapor method in which it is reduced to elemental mercury with SnCl2. The volatile Hg is carried by an inert gas to an unheated observation tube situated in the instrument’s optical path. Flame or Electrothermal Atomization?
The most important factor in choosing a method of atomization is the analyte’s concentration. Because of its greater sensitivity, it takes less analyte to achieve a given absorbance when using electrothermal atomization. Table $2$, which compares the amount of analyte needed to achieve an absorbance of 0.20 when using flame atomization and electrothermal atomization, is useful when selecting an atomization method. For example, flame atomization is the method of choice if our samples contain 1–10 mg Zn2+/L, but electrothermal atomization is the best choice for samples that contain 1–10 μg Zn2+/L.
Table $2$. Concentration of Analyte (in mg/L) That Yields an Absorbance of 0.20
element | flame atomization | electrothermal atomization
Ag | 1.5 | 0.0035
Al | 40 | 0.015
As | 40 | 0.050
Ca | 0.8 | 0.003
Cd | 0.6 | 0.001
Co | 2.5 | 0.021
Cr | 2.5 | 0.0075
Cu | 1.5 | 0.012
Fe | 2.5 | 0.006
Hg | 70 | 0.52
Mg | 0.15 | 0.00075
Mn | 1 | 0.003
Na | 0.3 | 0.00023
Ni | 2 | 0.024
Pb | 5 | 0.080
Pt | 70 | 0.29
Sn | 50 | 0.023
Zn | 0.3 | 0.00071
Source: Varian Cookbook, SpectraAA Software Version 4.00 Pro. As: 10 mg/L by hydride vaporization; Hg: 11.5 mg/L by cold-vapor; and Sn: 18 mg/L by hydride vaporization
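Because the choice between flame and electrothermal atomization hinges on comparing the expected analyte concentration with the characteristic concentrations in Table $2$, that decision is easy to express as a small helper. The sketch below is hypothetical (the function name, the threshold logic, and the subset of elements are our own choices) and simply encodes the Zn2+ example from the preceding paragraph.

```python
# A hypothetical helper that uses the characteristic concentrations in Table 2
# (mg/L giving A = 0.20) to suggest an atomization method.
characteristic = {
    "Zn": {"flame": 0.3, "electrothermal": 0.00071},
    "Ag": {"flame": 1.5, "electrothermal": 0.0035},
    "Pb": {"flame": 5.0, "electrothermal": 0.080},
}

def suggest_method(element: str, expected_mg_per_L: float) -> str:
    """Suggest the flame when the expected concentration is at least the flame's
    characteristic concentration; otherwise suggest the graphite furnace."""
    return "flame" if expected_mg_per_L >= characteristic[element]["flame"] else "electrothermal"

print(suggest_method("Zn", 5.0))      # 1-10 mg Zn/L  -> flame
print(suggest_method("Zn", 0.005))    # 1-10 ug Zn/L  -> electrothermal
```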
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.01%3A_Sample_Atomization_Techniques.txt
Atomic absorption spectrophotometers use optical benches similar to those described earlier in Chapter 7, including a source of radiation, a method for introducing the sample (covered in the previous section), a means for isolating the wavelengths of interest, and a way to measure the amount of light absorbed or emitted. Radiation Sources Because atomic absorption lines are narrow, we need to use a line source instead of a continuum source to record atomic absorption spectra. Figure \(1\) will help us understand why this is necessary. As discussed in Chapter 7, a typical continuum source has an effective bandwidth on the order of 1 nm after passing through a monochromator. An atomic absorption line, as we learned in Chapter 8, has an effective line width on the order of 0.002 nm due to the Doppler broadening and pressure broadening that take place in a flame. If we pass the radiation from a continuum source through the flame, the incident power from the source, \(P_0\), and the power that reaches the detector, \(P_t\), are essentially identical, leading to an absorbance of zero. A line source, which operates at a temperature that is lower than that of a flame, has a line width on the order of 0.001 nm. Passing this source radiation through the flame results in a measurable decrease in \(P_t\) and a measurable absorbance. The source for atomic absorption is a hollow cathode lamp that consists of a cathode and anode enclosed within a glass tube filled with a low pressure of an inert gas, such as Ne or Ar (Figure \(2\)). Applying a potential across the electrodes ionizes the filler gas. The positively charged gas ions collide with the negatively charged cathode, sputtering atoms from the cathode’s surface. Some of the sputtered atoms are in the excited state and emit radiation characteristic of the metal(s) from which the cathode is manufactured. By fashioning the cathode from the metallic analyte, a hollow cathode lamp provides emission lines that correspond to the analyte’s absorption spectrum. Each element in a hollow cathode lamp provides several atomic emission lines that we can use for atomic absorption. Usually the wavelength that provides the best sensitivity is the one we choose to use, although a less sensitive wavelength may be more appropriate for a sample that has a higher concentration of analyte. For the Cr hollow cathode lamp in Table \(1\), the best sensitivity is obtained using a wavelength of 357.9 nm as this line requires the smallest concentration of analyte to achieve an absorbance of 0.20.
Table \(1\). Atomic Emission Lines for a Cr Hollow Cathode Lamp
wavelength (nm) | slit width (nm) | mg Cr/L giving A = 0.20 | P0 (relative)
357.9 | 0.2 | 2.5 | 40
425.4 | 0.2 | 12 | 85
429.0 | 0.5 | 20 | 100
520.5 | 0.2 | 1500 | 15
520.8 | 0.2 | 500 | 20
Another consideration is the emission line's intensity, \(P_0\). If several emission lines meet our requirements for sensitivity, we may wish to use the emission line with the largest relative P0 because there is less uncertainty in measuring P0 and Pt. When analyzing a sample that is ≈10 mg Cr/L, for example, the first three wavelengths in Table \(1\) provide good sensitivity; the wavelengths of 425.4 nm and 429.0 nm, however, have a greater P0 and will provide less uncertainty in the measured absorbance. The emission spectrum for a hollow cathode lamp includes, in addition to the analyte's emission lines, additional emission lines from impurities present in the metallic cathode and from the filler gas.
These additional lines are a potential source of stray radiation that could result in an instrumental deviation from Beer’s law. The monochromator’s slit width is set as wide as possible to improve the throughput of radiation and narrow enough to eliminate these sources of stray radiation. Optical Benches Atomic absorption spectrometers are available with either a single-beam or a double-beam optical bench. Figure \(3\) shows a typical single-beam spectrometer, which consists of a hollow cathode lamp as a source, a flame, a grating monochromator, a detector (usually a photomultiplier tube), and a signal processor. Also included in this design is a chopper that periodically blocks light from the hollow cathode lamp from passing through the flame and reaching the detector. The purpose of the chopper is to provide a means for discriminating against the emission of light from the flame, which will otherwise contribute to the total amount of light that reaches the detector. As shown in Figure \(3\), when the chopper is closed, the only light reaching the detector is from the flame; emission from the flame and light from the lamp after it passes through the flame reach the detector when the chopper is open. The difference between the two signals gives the amount of light that reaches the detector after being absorbed by the sample. An alternative method that accomplishes the same thing is to modulate the amount of radiation emitted by the hollow cathode lamp. Figure \(4\) shows the typical arrangement of a double-beam instrument for atomic absorption spectroscopy. In this design, the chopper alternates between two optical paths: one in which light from the hollow cathode lamp bypasses the flame and that measures the total emission of radiation from the flame and the lamp, and one that passes the light from the hollow cathode lamp through the flame and that measures the emission of light from the flame and the amount of light from the hollow cathode lamp that is not absorbed by the sample. The difference between the two signals gives the amount of light that reaches the detector after being absorbed by the sample.
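A short sketch helps show how the chopped signals described above are combined. The function below is schematic, not a vendor's signal-processing routine; the signal values are invented, and the only point is that the flame-only reading is subtracted from the open-chopper reading before the absorbance is calculated.

```python
# A schematic sketch of single-beam signal processing with a chopper: subtract
# the flame-only (chopper closed) reading from the open-chopper reading, then
# compute the absorbance relative to a blank treated the same way.
import math

def absorbance(P_open: float, P_flame: float, P0: float) -> float:
    """P_open: signal with the chopper open (lamp plus flame emission);
    P_flame: signal with the chopper closed (flame emission only);
    P0: corrected lamp signal obtained while aspirating a blank."""
    P_t = P_open - P_flame            # lamp light that survives the flame
    return -math.log10(P_t / P0)

# with a blank the corrected lamp signal is 100 (arbitrary units); with the
# sample it drops to 45, giving an absorbance of about 0.35
print(round(absorbance(P_open=51.0, P_flame=6.0, P0=100.0), 2))
```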
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.02%3A_Atomic_Absorption_Instrumentation.txt
In describing the optical benches for atomic absorption spectroscopy, we noted the need to modulate the radiation from the source in order to discriminate against emission of radiation from the flame. In this section we consider additional sources of interference and discuss ways to compensate for them. Spectral Interferences A spectral interference occurs when an analyte’s absorption line overlaps with an interferent’s absorption line or band. Because atomic absorption lines are so narrow, the overlap of two such lines seldom is a problem. On the other hand, a molecule’s broad absorption band or the scattering of source radiation is a potentially serious spectral interference. An important consideration when using a flame as an atomization source is its effect on the measured absorbance. Among the products of combustion are molecular species that exhibit broad absorption bands and particulates that scatter radiation from the source. If we fail to compensate for these spectral interferences, then the intensity of transmitted radiation is smaller than expected. The result is an apparent increase in the sample’s absorbance. Fortunately, absorption and scattering of radiation by the flame are corrected by analyzing a blank that does not contain the sample. Spectral interferences also occur when components of the sample’s matrix other than the analyte react to form molecular species, such as oxides and hydroxides. The resulting absorption and scattering constitute the sample’s background and may present a significant problem, particularly at wavelengths below 300 nm where the scattering of radiation becomes more important. If we know the composition of the sample’s matrix, then we can prepare our standards using an identical matrix. In this case the background absorption is the same for both the samples and the standards. Alternatively, if the background is due to a known matrix component, then we can add that component in excess to all samples and standards so that the contribution of the naturally occurring interferent is insignificant. Finally, many interferences due to the sample’s matrix are eliminated by increasing the atomization temperature. For example, switching to a higher temperature flame helps prevent the formation of interfering oxides and hydroxides. If the identity of the matrix interference is unknown, or if it is not possible to adjust the flame or furnace conditions to eliminate the interference, then we must find another method to compensate for the background interference. Several methods have been developed to compensate for matrix interferences, and most atomic absorption spectrophotometers include one or more of these methods. One of the most common methods for background correction is to use a continuum source, such as a D2 lamp. Because a D2 lamp is a continuum source, absorbance of its radiation by the analyte’s narrow absorption line is negligible. Only the background, therefore, absorbs radiation from the D2 lamp. Both the analyte and the background, on the other hand, absorb the hollow cathode’s radiation. Subtracting the absorbance for the D2 lamp from that for the hollow cathode lamp gives a corrected absorbance that compensates for the background interference. Although this method of background correction is effective, it does assume that the background absorbance is constant over the range of wavelengths passed by the monochromator. If this is not true, then subtracting the two absorbances underestimates or overestimates the background. A typical optical arrangement is shown in Figure $1$.
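In practice the continuum-source correction amounts to a single subtraction, as the brief sketch below illustrates; the absorbance values are invented for the example.

```python
# A minimal sketch of continuum-source (D2 lamp) background correction: the D2
# lamp measures only the background, the hollow cathode lamp measures analyte
# plus background, and the difference is the corrected absorbance.
A_hollow_cathode = 0.482   # absorbance measured with the hollow cathode lamp
A_deuterium = 0.137        # absorbance measured with the D2 continuum lamp

A_corrected = A_hollow_cathode - A_deuterium
print(f"background-corrected absorbance: {A_corrected:.3f}")   # 0.345
```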
Another approach to removing the background is to take advantage of the Zeeman effect. The basis of the technique is outlined in Figure $2$ and described below in more detail. In the absence of an applied magnetic field—B = 0, where B is the strength of the magnetic field—a $p \rightarrow d$ absorbance by the analyte takes place between two well-defined energy levels and yields a single well-defined absorption line, as seen on the left side of panel (a). When a magnetic field is applied, B > 0, the three equal energy p-orbitals split into three closely spaced energy levels and the five equal energy d-orbitals split into five closely spaced energy levels. The allowed transitions between these energy levels of $\Delta M_l = 0, \pm 1$ yield three well-defined absorption lines, as seen on the right side of panel (a), the central one of which ($\Delta M_l = 0$) is at the same wavelength as the absorption line in the absence of the applied magnetic field. This central band is the only wavelength at which the analyte absorbs. As we see in Figure $2b,c$, we apply a magnetic field to the instrument's electrothermal atomizer and place a rotating polarizer between it and the hollow cathode lamp. When the rotating polarizer is in one position, radiation from the hollow cathode lamp is absorbed only by the central absorption line, giving a measure of absorption by both the background and the analyte. When the rotating polarizer is in the other position, radiation from the hollow cathode lamp is absorbed only by the two outside lines, providing a measure of absorption by the background only. The difference in these two absorption values is a function of the analyte's concentration. A third method for compensating for background absorption is to take advantage of what happens to the emission intensity of a hollow cathode lamp when it is operated at a high current. As seen in Figure $3$, when using a high current the emission band becomes significantly broader than when using a normal (low) current and, at the analytical wavelength, the emission intensity from the lamp decreases due to self-absorption, a process in which the ground state atoms in the hollow cathode lamp absorb photons emitted by the excited state atoms in the hollow cathode lamp. When using a low current we measure absorption from both the analyte and the background; when using a high current, absorption is due almost exclusively to the background. This approach is called Smith–Hieftje background correction. Chemical Interferences The quantitative analysis of some elements is complicated by chemical interferences that occur during atomization. The most common chemical interferences are the formation of nonvolatile compounds that contain the analyte and ionization of the analyte. One example of the formation of a nonvolatile compound is the effect of $\text{PO}_4^{3-}$ or Al3+ on the flame atomic absorption analysis of Ca2+. In one study, for example, adding 100 ppm Al3+ to a solution of 5 ppm Ca2+ decreased calcium ion’s absorbance from 0.50 to 0.14, while adding 500 ppm $\text{PO}_4^{3-}$ to a similar solution of Ca2+ decreased the absorbance from 0.50 to 0.38. These interferences are attributed to the formation of nonvolatile particles of Ca3(PO4)2 and an Al–Ca–O oxide [Hosking, J. W.; Snell, N. B.; Sturman, B. T. J. Chem. Educ. 1977, 54, 128–130].
When using flame atomization, we can minimize the formation of nonvolatile compounds by increasing the flame’s temperature, either by changing the fuel-to-oxidant ratio or by switching to a different combination of fuel and oxidant. Another approach is to add a releasing agent or a protecting agent to the sample. A releasing agent is a species that reacts preferentially with the interferent, releasing the analyte during atomization. For example, Sr2+ and La3+ serve as releasing agents for the analysis of Ca2+ in the presence of $\text{PO}_4^{3-}$ or Al3+. Adding 2000 ppm SrCl2 to the Ca2+/$\text{PO}_4^{3-}$ and to the Ca2+/Al3+ mixtures described in the previous paragraph increased the absorbance to 0.48. A protecting agent reacts with the analyte to form a stable volatile complex. Adding 1% w/w EDTA to the Ca2+/$\text{PO}_4^{3-}$ solution described in the previous paragraph increased the absorbance to 0.52. An ionization interference occurs when thermal energy from the flame or the electrothermal atomizer is sufficient to ionize the analyte $\mathrm{M}(g)\rightleftharpoons \ \mathrm{M}^{+}(g)+e^{-} \label{10.1}$ where M is the analyte. Because the absorption spectra for M and M+ are different, the position of the equilibrium in reaction \ref{10.1} affects the absorbance at wavelengths where M absorbs. To limit ionization we add a high concentration of an ionization suppressor, which is a species that ionizes more easily than the analyte. If the ionization suppressor's concentration is sufficient, then the increased concentration of electrons in the flame pushes reaction \ref{10.1} to the left, preventing the analyte’s ionization. Potassium and cesium frequently are used as ionization suppressors because of their low ionization energies.
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.03%3A_Interferences_in_Absorption_Spectroscopy.txt
Preparing the Sample Flame and electrothermal atomization require that the analyte is in solution. Solid samples are brought into solution by dissolving in an appropriate solvent. If the sample is not soluble it is digested, either on a hot-plate or by microwave, using HNO3, H2SO4, or HClO4. Alternatively, we can extract the analyte using a Soxhlet extractor. Liquid samples are analyzed directly or the analytes are extracted if the matrix is incompatible with the method of atomization. A serum sample, for instance, is difficult to aspirate when using flame atomization and may produce an unacceptably high background absorbance when using electrothermal atomization. A liquid–liquid extraction using an organic solvent and a chelating agent frequently is used to concentrate analytes. Dilute solutions of Cd2+, Co2+, Cu2+, Fe3+, Pb2+, Ni2+, and Zn2+, for example, are concentrated by extracting with a solution of ammonium pyrrolidine dithiocarbamate in methyl isobutyl ketone. Standardizing the Method Because Beer’s law also applies to atomic absorption, we might expect atomic absorption calibration curves to be linear. In practice, however, most atomic absorption calibration curves are nonlinear or are linear over only a limited range of concentrations. Nonlinearity in atomic absorption is a consequence of instrumental limitations, including stray radiation from the hollow cathode lamp and the variation in molar absorptivity across the absorption line. Accurate quantitative work, therefore, requires a suitable means for computing the calibration curve from a set of standards. When possible, a quantitative analysis is best conducted using external standards. Unfortunately, matrix interferences are a frequent problem, particularly when using electrothermal atomization. For this reason the method of standard additions often is used. One limitation to this method of standardization, however, is the requirement of a linear relationship between absorbance and concentration. Most instruments include several different algorithms for computing the calibration curve. The instrument in my lab, for example, includes five algorithms. Three of the algorithms fit absorbance data using linear, quadratic, or cubic polynomial functions of the analyte’s concentration. It also includes two algorithms that fit the concentrations of the standards to quadratic functions of the absorbance. Evaluation of Atomic Absorption Spectroscopy Scale of Operation Atomic absorption spectroscopy is ideally suited for the analysis of trace and ultratrace analytes, particularly when using electrothermal atomization. For minor and major analytes, samples are diluted before the analysis. Most analyses use a macro or a meso sample. The small volume requirement for electrothermal atomization or for flame microsampling, however, makes practical the analysis of micro and ultramicro samples. Accuracy If spectral and chemical interferences are minimized, an accuracy of 0.5–5% is routinely attainable. When the calibration curve is nonlinear, accuracy is improved by using a pair of standards whose absorbances closely bracket the sample’s absorbance and assuming that the change in absorbance is linear over this limited concentration range. Determinate errors for electrothermal atomization often are greater than those obtained with flame atomization due to more serious matrix interferences.
Precision For an absorbance greater than 0.1–0.2, the relative standard deviation for atomic absorption is 0.3–1% for flame atomization and 1–5% for electrothermal atomization. The principal limitation is the uncertainty in the concentration of free analyte atoms that results from variations in the rate of aspiration, nebulization, and atomization for a flame atomizer, and the consistency of injecting samples for electrothermal atomization. Sensitivity The sensitivity of a flame atomic absorption analysis is influenced by the flame’s composition and by the position in the flame from which we monitor the absorbance. Normally the sensitivity of an analysis is optimized by aspirating a standard solution of analyte and adjusting the fuel-to-oxidant ratio, the nebulizer flow rate, and the height of the burner to give the greatest absorbance. With electrothermal atomization, sensitivity is influenced by the drying and ashing stages that precede atomization. The temperature and time at each stage are optimized for each type of sample. Sensitivity also is influenced by the sample’s matrix. We already noted, for example, that sensitivity is decreased by a chemical interference. An increase in sensitivity may be realized by adding a low molecular weight alcohol, ester, or ketone to the solution, or by using an organic solvent. Selectivity Due to the narrow width of absorption lines, atomic absorption provides excellent selectivity. Atomic absorption is used for the analysis of over 60 elements at concentrations at or below the level of μg/L. Time, Cost, and Equipment The analysis time when using flame atomization is short, with sample throughputs of 250–350 determinations per hour when using a fully automated system. Electrothermal atomization requires substantially more time per analysis, with maximum sample throughputs of 20–30 determinations per hour. The cost of a new instrument ranges from \$10,000–\$50,000 for flame atomization to \$18,000–\$70,000 for electrothermal atomization. The more expensive instruments in each price range include double-beam optics, automatic samplers, and can be programmed for multielemental analysis by allowing the wavelength and hollow cathode lamp to be changed automatically.
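The polynomial calibration algorithms mentioned under Standardizing the Method are straightforward to mimic. The sketch below is illustrative only: the standard concentrations and absorbances are invented, and a quadratic fit with NumPy stands in for whatever algorithm a particular instrument uses. Reporting a concentration then reduces to finding the physically meaningful root of the fitted polynomial.

```python
# A brief sketch of a quadratic calibration fit and its inversion; the data are
# invented values that curve slightly, as real atomic absorption data often do.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])                 # mg/L
absb = np.array([0.002, 0.100, 0.194, 0.370, 0.530, 0.674])     # absorbance

c2, c1, c0 = np.polyfit(conc, absb, deg=2)                       # A = c2*C**2 + c1*C + c0

def concentration(A_sample: float) -> float:
    """Solve c2*C**2 + c1*C + (c0 - A) = 0 and keep the smallest nonnegative root."""
    roots = np.roots([c2, c1, c0 - A_sample])
    real = roots[np.isreal(roots)].real
    return float(min(r for r in real if r >= 0))

print(round(concentration(0.452), 2))                            # 5.0 mg/L
```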
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.04%3A_Atomic_Absorption_Techniques.txt
What is Emission? An analyte in an excited state possesses an energy, E2, that is greater than its energy when it is in a lower energy state, E1. When the analyte returns to its lower energy state—a process we call relaxation—the excess energy, $\Delta E$, is $\Delta E=E_{2}-E_{1} \nonumber$ There are several ways in which an atom may end up in an excited state, including thermal energy, which is the focus of this chapter. The amount of time an atom, A, spends in its excited state—what we call the excited state's lifetime—is short, typically $10^{-5}$ to $10^{-9}$ s for an electronic excited state. Relaxation of the atom's excited state, A*, occurs through several mechanisms, including collisions with other species in the sample and the emission of photons. In the first process, which we call nonradiative relaxation, the excess energy is released as heat. $A^{*} \longrightarrow A+\text { heat } \nonumber$ In the second mechanism, the excess energy is released as a photon of electromagnetic radiation. $A^{*} \longrightarrow A+h \nu \nonumber$ The release of a photon following thermal excitation is called emission. The focus of this chapter is on the emission of ultraviolet and visible radiation following the thermal excitation of atoms. Atomic emission spectroscopy has a long history. Qualitative applications based on the color of flames were used in the smelting of ores as early as 1550 and were more fully developed around 1830 with the observation of atomic spectra generated by flame emission and spark emission [Dawson, J. B. J. Anal. At. Spectrosc. 1991, 6, 93–98]. Quantitative applications based on the atomic emission from electric sparks were developed by Lockyer in the early 1870s, and quantitative applications based on flame emission were pioneered by Lundegardh in 1930. Atomic emission based on emission from a plasma was introduced in 1964. Atomic Emission Spectra Atomic emission occurs when a valence electron in a higher energy atomic orbital returns to a lower energy atomic orbital. Figure $1$ shows a portion of the energy level diagram for sodium; the resulting emission spectrum consists of a series of discrete lines at wavelengths that correspond to the difference in energy between two atomic orbitals. The intensity of an atomic emission line, Ie, is proportional to the number of atoms, $N^*$, that populate the excited state $I_{e}=k N^* \label{10.1}$ where k is a constant that accounts for the efficiency of the transition. If a system of atoms is in thermal equilibrium, the population of excited state i is related to the total concentration of atoms, N, by the Boltzmann distribution. For many elements at temperatures of less than 5000 K the Boltzmann distribution is approximated as $N^* = N\left(\frac{g_{i}}{g_{0}}\right) e^{-E_i / k T} \label{10.2}$ where gi and g0 are statistical factors that account for the number of equivalent energy levels for the excited state and the ground state, Ei is the energy of the excited state relative to the ground state, E0, k is Boltzmann’s constant ($1.3807 \times 10^{-23}$ J/K), and T is the temperature in Kelvin. From Equation \ref{10.2} we expect that excited states with lower energies have larger populations and more intense emission lines. We also expect emission intensity to increase with temperature. The emission spectrum for sodium is shown in Figure $2$. An atomic emission spectrometer is similar in design to the instrumentation for atomic absorption.
In fact, it is easy to adapt most flame atomic absorption spectrometers for atomic emission by turning off the hollow cathode lamp and monitoring the difference between the emission intensity when aspirating the sample and when aspirating a blank. Many atomic emission spectrometers, however, are dedicated instruments designed to take advantage of features unique to atomic emission, including the use of plasmas, arcs, sparks, and lasers as atomization and excitation sources, and an enhanced capability for multielemental analysis. Flames as a Source Atomization and excitation in flame atomic emission is accomplished with the same nebulization and spray chamber assembly used in atomic absorption (see Chapter 9). The burner head consists of a single or multiple slots, or a Meker-style burner. Older atomic emission instruments often used a total consumption burner in which the sample is drawn through a capillary tube and injected directly into the flame. A Meker burner is similar to the more common Bunsen burner found in most laboratories; it is designed to allow for higher temperatures and for a larger diameter flame. The Inductively Coupled Plasma Source A plasma is a hot, partially ionized gas that contains an abundant concentration of cations and electrons. The plasma used in atomic emission is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. A plasma’s high temperature results from resistive heating as the electrons and argon ions move through the gas. Because a plasma operates at a much higher temperature than a flame, it provides for a better atomization efficiency and a higher population of excited states. A schematic diagram of the inductively coupled plasma source (ICP) is shown in Figure $3$. The ICP torch consists of three concentric quartz tubes, surrounded at the top by a radio-frequency induction coil. The sample is mixed with a stream of Ar using a nebulizer, and is carried to the plasma through the torch’s central capillary tube. Plasma formation is initiated by a spark from a Tesla coil. An alternating radio-frequency current in the induction coil creates a fluctuating magnetic field that induces the argon ions and the electrons to move in a circular path. The resulting collisions with the abundant unionized gas give rise to resistive heating, providing temperatures as high as 10000 K at the base of the plasma, and between 6000 and 8000 K at a height of 15–20 mm above the coil, where emission usually is measured. At these high temperatures the outer quartz tube must be thermally isolated from the plasma. This is accomplished by the tangential flow of argon shown in the schematic diagram. Samples are brought into the ICP using the same basic types of nebulization described in Chapter 8 for flame atomic absorption spectroscopy. The Direct Current Plasma Source An alternative to the inductively coupled plasma source is the direct current (dc) plasma jet, one example of which is illustrated in Figure $4$. The argon plasma (shown here in blue) forms between two graphite anodes and a tungsten cathode. The sample is aspirated into the plasma's excitation region where it undergoes atomization, excitation, and emission at temperatures of 5000 K. Flame and Plasma Spectrometers One advantage of atomic emission over atomic absorption is the ease of analyzing samples for multiple analytes. This additional capability arises because atomic emission, unlike atomic absorption, does not need an analyte-specific source of radiation. 
The two most common types of spectrometers are sequential and multichannel. In a sequential spectrometer the instrument has a single detector and uses the monochromator to move from one emission line to the next. A multichannel spectrometer uses the monochromator to disperse the emission across a field of detectors, each of which measures the emission intensity at a different wavelength. Sequential Instruments A sequential instrument uses a programmable scanning monochromator, such as those described in Chapter 7, to rapidly move the monochromator's grating over wavelength regions that are not of interest, and then pauses and scans slowly over the emission lines of the analytes. Sampling rates of 300 determinations per hour are possible with this configuration. Another option, which is less common, is to move the exit slit and the detector across the monochromator's focal plane, pausing and recording the emission at the desired wavelengths. Multichannel Instruments Another approach to a multielemental analysis is to use a multichannel instrument that allows us to monitor many analytes simultaneously. A simple design for a multichannel spectrometer, shown in Figure $5$, couples a monochromator with multiple detectors that are positioned in a semicircular array around the monochromator at positions that correspond to the wavelengths for the analytes. A sample throughput of 3000 determinations per hour is possible using a multichannel ICP. Another option for a multichannel instrument takes advantage of the charge-injection device, or CID, as a detector (see Chapter 7 for a discussion of the charge-coupled device, another type of charge-transfer device used as a detector). Light from the plasma source is dispersed across the CID in two dimensions. The surface of the CID has in excess of 90000 detecting elements, or pixels, which allows for a resolution between detecting elements on the order of 0.04 nm. Light from the atomic emission source is distributed across the detector's surface by a diffraction grating such that each element of interest is detected using its own set of pixels, called a read window. Figure $6$ shows that individual read windows consist of a set of detecting elements, nine of which collect photons from the spectral line and 30 of which provide a measurement of the source's background. Application of Flame and Plasma Sources Atomic emission is used widely for the analysis of trace metals in a variety of sample matrices. The development of a quantitative atomic emission method requires several considerations, including choosing a source for atomization and excitation, selecting a wavelength and slit width, preparing the sample for analysis, minimizing spectral and chemical interferences, and selecting a method of standardization. Choice of Atomization and Excitation Source Except for the alkali metals, detection limits when using an ICP are significantly better than those obtained with flame emission (Table $1$). Plasmas also are subject to fewer spectral and chemical interferences. For these reasons a plasma emission source is usually the better choice.
Table $1$. Detection Limits for Atomic Emission
element | detection limit (µg/mL): flame emission | detection limit (µg/mL): ICP
Ag | 2 | 0.2
Al | 3 | 0.2
As | 2000 | 2
Ca | 0.1 | 0.0001
Cd | 300 | 0.07
Co | 5 | 0.1
Cr | 1 | 0.08
Fe | 10 | 0.09
Hg | 150 | 1
K | 0.01 | 30
Li | 0.001 | 0.02
Mg | 1 | 0.02
Mn | 1 | 0.01
Na | 0.01 | 0.1
Ni | 10 | 0.2
Pb | 0.2 | 1
Pt | 2000 | 0.9
Sn | 100 | 3
Zn | 1000 | 0.1
Source: Parsons, M. L.; Major, S.; Forster, A. R.; Appl. Spectrosc. 1983, 37, 411–418.
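The read-window scheme described above for the CID detector, with nine pixels assigned to the emission line and 30 pixels assigned to the background, reduces to a simple background-subtracted sum. The sketch below is schematic and the pixel counts are invented; a real instrument applies additional corrections.

```python
# A schematic sketch of read-window processing for a charge-injection device:
# sum the pixels on the emission line and subtract the per-pixel background
# estimated from the flanking pixels.
import numpy as np

line_pixels = np.array([210, 230, 260, 305, 340, 310, 262, 228, 212])   # 9 pixels on the line
background_pixels = np.array([152, 148, 151, 149, 150] * 6)             # 30 flanking pixels

background_per_pixel = background_pixels.mean()
net_signal = line_pixels.sum() - background_per_pixel * line_pixels.size
print(f"net emission signal: {net_signal:.0f} counts")                  # 1007 counts for these values
```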
Selecting the Wavelength and Slit Width The choice of wavelength is dictated by the need for sensitivity and the need to avoid interferences from the emission lines of other constituents in the sample. Because an analyte’s atomic emission spectrum has an abundance of emission lines—particularly when using a high temperature plasma source—it is inevitable that there will be some overlap between emission lines. For example, an analysis for Ni using the atomic emission line at 349.30 nm is complicated by the atomic emission line for Fe at 349.06 nm. A narrower slit width provides better resolution, but at the cost of less radiation reaching the detector. The easiest approach to selecting a wavelength is to record the sample’s emission spectrum and look for an emission line that provides an intense signal and is resolved from other emission lines. Preparing the Sample Flame and plasma sources are best suited for samples in solution and in liquid form. Although a solid sample can be analyzed by directly inserting it into the flame or plasma, they usually are first brought into solution by digestion or extraction. Minimizing Spectral Interferences The most important spectral interference is broad, background emission from the flame or plasma and emission bands from molecular species. This background emission is particularly severe for flames because the temperature is insufficient to break down refractory compounds, such as oxides and hydroxides. Background corrections for flame emission are made by scanning over the emission line and drawing a baseline (Figure $7$). Because a plasma’s temperature is much higher, a background interference due to molecular emission is less of a problem. Although emission from the plasma’s core is strong, it is insignificant at a height of 10–30 mm above the core where measurements normally are made. Minimizing Chemical Interferences Flame emission is subject to the same types of chemical interferences as atomic absorption; they are minimized using the same methods: by adjusting the flame’s composition and by adding protecting agents, releasing agents, or ionization suppressors. An additional chemical interference results from self-absorption. Because the flame’s temperature is greatest at its center, the concentration of analyte atoms in an excited state is greater at the flame’s center than at its outer edges. If an excited state atom in the flame’s center emits a photon, then a ground state atom in the cooler, outer regions of the flame may absorb the photon, which decreases the emission intensity. For higher concentrations of analyte self-absorption may invert the center of the emission band (Figure $8$). Chemical interferences when using a plasma source generally are not significant because the plasma’s higher temperature limits the formation of nonvolatile species. For example, $\text{PO}_4^{3-}$ is a significant interferent when analyzing samples for Ca2+ by flame emission, but has a negligible effect when using a plasma source. In addition, the high concentration of electrons from the ionization of argon minimizes ionization interferences. Standardizing the Method From Equation \ref{10.1} we know that emission intensity is proportional to the population of the analyte’s excited state, $N^*$. If the flame or plasma is in thermal equilibrium, then the excited state population is proportional to the analyte’s total population, N, through the Boltzmann distribution (Equation \ref{10.2}). 
A calibration curve for flame emission usually is linear over two to three orders of magnitude, with ionization limiting linearity when the analyte’s concentration is small and self-absorption limiting linearity at higher concentrations of analyte. When using a plasma, which suffers from fewer chemical interferences, the calibration curve often is linear over four to five orders of magnitude and is not affected significantly by changes in the matrix of the standards.

Emission intensity is affected significantly by many parameters, including the temperature of the excitation source and the efficiency of atomization. An increase in temperature of 10 K, for example, produces a 4% increase in the fraction of Na atoms in the 3p excited state, an uncertainty in the signal that may limit the use of external standards. The method of internal standards is used when the variations in source parameters are difficult to control. To compensate for changes in the temperature of the excitation source, the internal standard is selected so that its emission line is close to the analyte’s emission line. In addition, the internal standard should be subject to the same chemical interferences to compensate for changes in atomization efficiency. To accurately correct for these errors the analyte and internal standard emission lines are monitored simultaneously.

10.02: Emission Spectroscopy Based on Arc and Spark Sources

Arc Source
An arc source consists of two electrodes separated by a gap of up to 20 mm (see Figure \(1\) for one configuration). A potential of 50 V (or more) is applied and a continuous current in the range of 2–30 A is maintained throughout the analysis. If the sample is a metal, then it can be fashioned into the electrodes. For nonmetallic samples, the electrodes typically are fashioned from graphite and a cup-like depression is drilled into one of the electrodes. The sample is ground into a powder and packed into the sample cup. The plasma generated by an arc source typically has a temperature of 4000 K to 5000 K and has an abundance of emission lines for the analyte with a relatively small background emission.

Spark Source
Unlike an arc source, which generates a continuous emission of electromagnetic radiation, a spark source generates a series of short emissions, each lasting on the order of a few µs. The sample serves as one of the two electrodes, with the other electrode fashioned from tungsten (see Figure \(2\)). The two electrodes are separated by a gap of 3–6 mm. The applied potential is as small as 300–500 V or as large as 10–20 kV. The frequency of the spark is in the range of 100–500 per second. The temperature within the plasma can be quite high, which gives rise both to emission lines from the atoms and to emission lines from ions formed in the plasma.

Instrumentation
For both the arc source and the spark source, emission from the plasma is collected and analyzed using the same types of optical benches discussed in the previous section on atomic emission from flames and plasma sources. Figure \(3\) shows an emission spectrum for a sample of the alkaline earth metals, which shows a single intense emission line for Ca at 422.673 nm and a single intense emission line for Sr at 460.7331 nm. Mg exhibits three closely spaced emission lines at 516.7322 nm, 517.2684 nm, and 518.3604 nm. Finally, Ba has a single strong emission line at 553.5481 nm, but also many less intense emission lines above 600 nm.
The presence of faint but measurable emission lines can create complications when trying to identify the elements present in a sample.
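In practice, identifying the elements in a spectrum like this amounts to matching each measured wavelength against a reference list within a small tolerance. The Python sketch below is a toy illustration: the reference wavelengths are the four intense lines quoted above, and the 0.05 nm tolerance is an arbitrary assumption, not a recommended value.

```python
# Toy qualitative identification: match measured emission lines (nm) against a short
# reference list using a tolerance. The reference wavelengths are the intense lines
# quoted in the text; the tolerance is an assumed value for illustration only.
reference_lines = {
    "Ca": [422.673],
    "Sr": [460.733],
    "Mg": [516.732, 517.268, 518.360],
    "Ba": [553.548],
}

def identify(measured, tolerance=0.05):
    matches = {}
    for wavelength in measured:
        for element, lines in reference_lines.items():
            if any(abs(wavelength - line) <= tolerance for line in lines):
                matches.setdefault(element, []).append(wavelength)
    return matches

print(identify([422.68, 517.27, 553.55]))   # {'Ca': [422.68], 'Mg': [517.27], 'Ba': [553.55]}
```

A real line library contains thousands of entries per element, which is why the faint lines mentioned above matter when assigning peaks.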
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/10%3A_Atomic_Emission_Spectrometry/10.01%3A_Emission_Spectroscopy_Based_on_Plasma_Sources.txt
• 11.1: General Features of Atomic Mass Spectrometry In mass spectrometry we convert the analyte into ions and then separate these ions based on the ratio of their masses to their charges. In this section we give careful attention to what we mean by mass, by charge, and by mass-to-charge ratio. We also give brief consideration to how we generate and measure ions, topics covered in greater detail in subsequent sections.
• 11.2: Mass Spectrometers A mass spectrometer has three essential needs: a means for producing ions, in this case (mostly) singly charged atoms; a means for separating these ions in space or in time by their mass-to-charge ratios; and a means for counting the number of ions for each mass-to-charge ratio.
• 11.3: Inductively Coupled Plasma Mass Spectrometer An inductively coupled plasma (ICP) is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. The sample is introduced into the plasma where the high operating temperature of 6000–8000 K is sufficient to atomize and ionize the sample. In ICP-MS we use the plasma as a source of ions that we can send to a mass spectrometer for analysis.
• 11.4: Other Forms of Atomic Mass Spectrometry Although ICP-MS is the most widely used method of atomic mass spectrometry, there are other forms of atomic mass spectrometry, three of which we highlight here.

11: Atomic Mass Spectrometry

In mass spectrometry—whether of atoms, which is covered in this chapter, or of molecules, which is covered in Chapter 20—we convert the analyte into ions and then separate these ions based on the ratio of their masses to their charges. In this section we give careful attention to what we mean by mass, by charge, and by mass-to-charge ratio. We also give brief consideration to how we generate and measure ions, topics covered in greater detail in subsequent sections.

Atomic Weights in Mass Spectrometry
We trace the modern era of chemistry to John Dalton’s development of atomic theory, which made three hypotheses:
1. Elements, which are the smallest division of matter with distinct chemical properties, are composed of atoms. All atoms of a given element are identical (this is not strictly true, as we will see shortly, but we won’t hold that against Dalton!) and different from the atoms of other elements. The element carbon is made of carbon atoms, which are different from the atoms of oxygen that make up elemental oxygen.
2. Compounds are composed of atoms from two or more elements. Because atoms cannot be subdivided, the elements that make up a compound are always present in ratios of whole numbers. A compound containing carbon and oxygen, for example, can have 1 carbon atom and 1 oxygen atom (CO) or 1 carbon atom and 2 oxygen atoms (CO2), but it cannot have 1.5 carbon atoms.
3. In a chemical reaction, the elements that make up the reactants rearrange to make new compounds as products. The atoms that make up these compounds, however, are not destroyed, nor are new atoms created.
Dalton’s first hypothesis simply recognized the atom as the basic building block of chemistry. Water, for example, is made from atoms of hydrogen and oxygen. The second hypothesis recognizes that for every compound there is a fixed combination of atoms. Regardless of its source (rain, tears, or a bottle of Evian) a molecule of water always consists of two hydrogen atoms for every atom of oxygen. Dalton’s third hypothesis is a statement that atoms are conserved in a reaction; this is more commonly known as the conservation of mass.
The Structure of the Atom
Although Dalton believed that atoms were indivisible, we know now that they are made from three smaller subatomic particles: the electron, the proton, and the neutron. The atom, however, remains the smallest division of matter with distinct chemical properties.

Electrons, Protons, and Neutrons. The characteristic properties of electrons, protons, and neutrons are shown in Table $1$.

Table $1$. Mass and Charge of Subatomic Particles
particle: mass (g), unit charge, charge (in Coulombs, C)
electron: $9.10939 \times 10^{-28}$, $-1$, $-1.6022 \times 10^{-19}$
proton: $1.67262 \times 10^{-24}$, $+1$, $+1.6022 \times 10^{-19}$
neutron: $1.67493 \times 10^{-24}$, 0, 0

The proton and the neutron make up the atom’s nucleus, which is located at the center of the atom and has a radius of approximately $5 \times 10^{-3} \text{ pm}$. The remainder of the atom, which has a radius of approximately 100 pm, is mostly empty space in which the electrons are free to move. Of the three subatomic particles, only the electron and the proton carry a charge, which we can express as a relative unit charge, such as $+1$ or $-2$, or as an absolute charge in Coulombs. Because elements have no net charge (that is, they are neutral), the number of electrons and protons in an element must be the same.

Atomic Numbers. Why is an atom of carbon different from an atom of hydrogen or helium? One possible explanation is that carbon and hydrogen and helium have different numbers of electrons, protons, or neutrons; Table $2$ provides the relevant numbers.

Table $2$. Comparison of the Elements Hydrogen, Helium, and Carbon
element: number of protons, number of neutrons (see note 1), number of electrons
hydrogen: 1, (0, 1, or 2), 1
helium: 2, 2, 2
carbon: 6, (6, 7, or 8), 6
1 Only the number of neutrons for the most important naturally occurring forms of these elements are shown here.

Note that although Table $2$ shows that a helium atom has two neutrons, an atom of hydrogen or carbon has three possibilities for the numbers of neutrons. It is even possible for a hydrogen atom to exist without a neutron. Clearly the number of neutrons is not crucial to determining if an atom is carbon, hydrogen, or helium. Although hydrogen, helium, and carbon have different numbers of electrons, the number is not critical to an element's identity. For example, it is possible to strip an electron away from helium to form a helium ion with a charge of $+1$ that has the same number of electrons as hydrogen; nevertheless, it is still helium. What makes an atom carbon is the presence of six protons, whereas every atom of hydrogen has one proton and every atom of helium has two protons. The number of protons in an atom is called its atomic number, which we represent as Z.

Atomic Mass and Isotopes. Protons and neutrons are of similar mass and much heavier than electrons (see Table $1$); thus, most of an atom’s mass is in its nucleus. Because not all of an element’s atoms necessarily have the same number of neutrons, it is possible for two atoms of an element to differ in mass. For this reason, the sum of an atom’s protons and neutrons is known as its mass number (A). Carbon, for example, can have a mass number of 12, 13, or 14 (six protons and six, seven, or eight neutrons), and hydrogen can have a mass number of 1, 2, or 3 (one proton and zero, one, or two neutrons). Atoms of the same element (same Z), but with a different number of neutrons (different A) are called isotopes. Hydrogen, for example has three isotopes (see Table $2$).
The isotope with 0 neutrons is the most abundant, accounting for 99.985% of all stable hydrogen atoms, and is known, somewhat self-referentially, as hydrogen. Deuterium, which accounts for 0.015% of all stable hydrogen atoms, has 1 neutron. The isotope of hydrogen with two neutrons is called tritium. Because tritium is radioactive, it is unstable and disappears with time. The usual way to represent isotopes is with the symbol $^A _Z X$ where X is the atomic symbol for the element. The three isotopes of hydrogen, which has an elemental symbol of H, are $^1 _1 \text{H}$, $^2 _1 \text{H}$, and $^3 _1 \text{H}$. Because the elemental symbol (X) and the atomic number (Z) provide redundant information, we often omit the atomic number; thus, deuterium becomes $^2 \text{H}$. Unlike hydrogen, the isotopes of other elements do not have specific names. Instead they are named by taking the element’s name and appending the mass number. For example, the isotopes of carbon are called carbon-12, carbon-13, and carbon-14.

Atomic Mass
Individual atoms weigh very little, typically about $10^{-24} \text{ g}$ to $10^{-22} \text{ g}$. This amount is so small that there is no easy way to measure the mass of a single atom. To assign masses to atoms it is necessary to assign a mass to one atom and to report the masses of all other atoms relative to that absolute standard. By agreement, atomic mass is stated in terms of atomic mass units (amu) or Daltons (Da), where 1 amu and 1 Da are defined as 1/12 of the mass of an atom of carbon-12. The atomic mass of carbon-12, therefore, is exactly 12 amu. The atomic mass of carbon-13 is 13.00335 amu because the mass of an atom of carbon-13 is $1.0836125 \times$ greater than the mass of an atom of carbon-12.

Note
If you calculate the masses of carbon-12 and carbon-13 by adding together the masses of each isotope’s electrons, neutrons, and protons from Table $1$ you will obtain a mass ratio of 1.08336, not 1.0836125. The reason for this is that the masses in Table $1$ are for “free” electrons, protons, and neutrons; that is, for electrons, protons, and neutrons that are not in an atom. When an atom forms, some of the mass is lost. “Where does it go?,” you ask. Remember Einstein and $E = mc^2$? Mass can be converted to energy and the lost mass is the nuclear binding energy that holds the nucleus together.

Average Atomic Mass. Because carbon exists in several isotopes, the atomic mass of an “average” carbon atom is not exactly 12 amu. Instead it is usually reported on periodic tables as 12.01 or 12.011, values that are closer to 12.0 because 98.90% of all carbon atoms are carbon-12. The IUPAC's Commission on Isotopic Abundances and Atomic Weights currently reports its mass as [12.0096, 12.0116] amu where the values in the brackets are the lower and the upper estimates for the average mass in a variety of naturally occurring materials. As shown in the following example, if you know the percent abundance and atomic masses of an element’s isotopes, then you can calculate its average atomic mass.

Example $1$
The element magnesium, Mg, has three stable isotopes with the following atomic masses and percent abundances:
isotope: mass (amu), percent abundance
$^{24} \text{Mg}$: 23.9924, 78.70
$^{25} \text{Mg}$: 24.9938, 10.13
$^{26} \text{Mg}$: 25.9898, 11.17
Calculate the average atomic mass for magnesium.
Solution
To find the average atomic mass we multiply each isotope’s atomic mass by its fractional abundance (the decimal equivalent of its percent abundance) and add together the results; thus
avg. amu = (0.7870)(23.9924 amu) + (0.1013)(24.9938 amu) + (0.1117)(25.9898 amu)
avg. amu = 24.32 amu
As the next example shows, we also can work such problems in reverse, using an element’s average atomic mass and the atomic masses of its isotopes to find each isotope’s percent abundance.

Example $2$
The element gallium, Ga, has two naturally occurring isotopes. The isotope $^{69} \text{Ga}$ has an atomic mass of 68.926 amu and the isotope $^{71} \text{Ga}$ has an atomic mass of 70.926 amu. The average atomic mass for gallium is 69.723 amu. Find the percent abundances for gallium’s two isotopes.

Solution
If we let x be the fractional abundance of $^{69} \text{Ga}$, then the fractional abundance of $^{71} \text{Ga}$ is 1 – x (that is, the total amounts of $^{69} \text{Ga}$ and $^{71} \text{Ga}$ must add up to one). Using the same general approach as Example $1$, we find that
69.723 amu = (x)(68.926 amu) + (1 – x)(70.926 amu)
69.723 amu = 68.926x amu + 70.926 amu – 70.926x amu
2.000x amu = 1.203 amu
x = 0.6015
1 – x = 1 – 0.6015 = 0.3985
Thus, 60.15% of naturally occurring gallium is $^{69} \text{Ga}$ and 39.85% is $^{71} \text{Ga}$.

Note
Although many periodic tables report atomic masses to two decimal places—the periodic table I consult most frequently, for example, gives the average atomic mass of carbon as 12.01 amu—the high resolving power of some mass spectrometers allows us to report masses to three or four decimal places.

Mass-to-Charge Ratio
As we will learn later, a mass spectrometer separates ions on the basis of their mass-to-charge ratio (m/z), and not on their mass only or their charge only. As most ions that form during mass spectrometry are singly charged, spectra are often reported using masses (m) instead of mass-to-charge ratios; be sure to remain alert for this when looking at mass spectra.
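The two worked examples above translate directly into a few lines of code. The following Python sketch simply repeats the magnesium calculation from Example $1$ and the gallium calculation from Example $2$ using the same values.

```python
# Example 1: average atomic mass of Mg from isotope masses (amu) and fractional abundances
mg_isotopes = [(23.9924, 0.7870), (24.9938, 0.1013), (25.9898, 0.1117)]
avg_mass = sum(mass * abundance for mass, abundance in mg_isotopes)
print(f"average atomic mass of Mg: {avg_mass:.2f} amu")        # 24.32 amu

# Example 2: percent abundances of 69Ga and 71Ga from the average atomic mass
m69, m71, avg = 68.926, 70.926, 69.723
x = (m71 - avg) / (m71 - m69)       # fractional abundance of 69Ga
print(f"69Ga: {100 * x:.2f}%   71Ga: {100 * (1 - x):.2f}%")    # 60.15% and 39.85%
```

The second calculation is just the algebra of Example $2$ solved symbolically for x before substituting the numbers.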
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.01%3A_General_Features_of_Atomic_Mass_Spectrometry.txt
A mass spectrometer has three essential needs: a means for producing ions, in this case (mostly) singly charged atoms; a means for separating these ions in space or in time by their mass-to-charge ratios; and a means for counting the number of ions for each mass-to-charge ratio. Figure $1$ provides a general view of a mass spectrometer in the same way that we first introduced optical instruments in Chapter 7. The ionization of the sample is analogous to the source of photons in optical spectroscopy as it generates the particles (ions, instead of photons) that ultimately make up the measured signal. The separation of the resulting ions by their mass-to-charge ratios, which is accomplished using a mass analyzer, is analogous to the role of a monochromator in optical spectroscopy. The means for counting ions serves the same role as, for example, a photomultiplier tube in optical spectroscopy. Note that the mass spectrometer is held under vacuum as this allows the ions to travel great distances without undergoing collisions that might alter their charge or energy.

Sources of Ionization
The most common means for generating ions are plasmas of various sorts, lasers, electrical sparks, and other ions. We will give greater attention to these in the next several sections as we consider specific examples of atomic mass spectrometry.

Transducers for Counting Ions
The transducer for mass spectrometry must be able to report the number of ions that emerge from the mass analyzer. Here we consider two common types of transducers.

Electron Multipliers
In Chapter 7 we introduced the photomultiplier tube as a way to convert photons into electrons, amplifying the signal so that a single photon produces $10^6$ to $10^7$ electrons, which generates a measurable current. An electron multiplier serves the same role in mass spectrometry. Figure $2$ shows two versions of this transducer. The electron multiplier in Figure $2a$ uses a set of individual dynodes. When an ion strikes the first dynode, it generates several electrons, each of which is passed along to the next dynode before arriving at a collecting plate where the current is measured. The result is an amplification, or gain, in the signal of approximately $10^7 \times$. The electron multiplier in Figure $2b$ uses a horn-shaped cylinder—typically made from glass coated with a thin layer of a semiconducting material—whose surface acts as a single, continuous dynode. When an ion strikes the continuous dynode it generates several electrons that are reflected toward the collector plate where the current is measured. The result is an amplification of $10^5 \text{ to } 10^8 \times$.

Faraday Cup
A Faraday cup, as its name suggests, is a simple device shaped like a cup. Ions enter the cup where they strike a collector electrode. A current sufficient to neutralize the charge of the ions flows to the collector electrode, and the magnitude of this current is proportional to the number of ions. A Faraday cup has the advantage of simplicity, but is less sensitive than an electron multiplier because it lacks the amplification provided by the dynodes.

Separating Ions
Before we can detect the ions, we need to separate them so that we can generate a spectrum that shows the intensity of ions as a function of their mass-to-charge ratio. In this section we consider the three most common mass analyzers for atomic mass spectrometry.
Quadrupole Mass Analyzers
The quadrupole mass analyzer is the most important of the mass analyzers included in this chapter: it is compact in size, low in cost, easy to use, and easy to maintain. As shown in Figure $3$, a quadrupole mass analyzer consists of four cylindrical rods, two of which are connected to the positive terminal of a variable direct current (dc) power supply and two of which are connected to the power supply's negative terminal; the two positive rods are positioned opposite of each other and the two negative rods are positioned opposite of each other. Each pair of rods is also connected to a variable alternating current (ac) source operated such that the alternating currents are 180° out-of-phase with each other. An ion beam from the source is drawn into the channel between the quadrupoles and, depending on the applied dc and ac voltages, ions with only one mass-to-charge ratio successfully travel the length of the mass analyzer and reach the transducer; all other ions collide with one of the four rods and are destroyed.

To understand how a quadrupole mass analyzer achieves this separation of ions, it helps to consider the movement of an ion relative to just two of the four rods, as shown in Figure $4$ for the poles that carry a positive dc voltage. When the ion beam enters the channel between the rods, the ac voltage causes the ion to begin to oscillate. If, as in the top diagram, the ion is able to maintain a stable oscillation, it will pass through the mass analyzer and reach the transducer. If, as in the middle diagram, the ion is unable to maintain a stable oscillation, then the ion eventually collides with one of the rods and is destroyed. When the rods have a positive dc voltage, as they do here, ions with larger mass-to-charge ratios will be slow to respond to the alternating ac voltage and will pass through to the transducer. The result is shown in the figure at the bottom (and repeated in Figure $5a$) where we see that ions with sufficiently large mass-to-charge ratios successfully pass through to the transducer; ions with smaller mass-to-charge ratios do not. In this case, the quadrupole mass analyzer acts as a high-pass filter.

We can extend this to the behavior of the ions when they interact with rods that carry a negative dc voltage. In this case, the ions are attracted to the rods, but those ions that have a sufficiently small mass-to-charge ratio are able to respond to the alternating current's voltage and remain in the channel between the rods. The ions with larger mass-to-charge ratios move more sluggishly and eventually collide with one of the rods. As shown in Figure $5b$, in this case, the quadrupole mass analyzer acts as a low-pass filter. Together, as we see in Figure $5c$, a quadrupole mass analyzer operates as both a high-pass and a low-pass filter, allowing a narrow band of mass-to-charge ratios to pass through to the transducer. By varying the applied dc voltage and the applied ac voltage, we can obtain a full mass spectrum.

Quadrupole mass analyzers provide a modest mass-to-charge resolution of about 1 amu and extend to $m/z$ ratios of approximately 2000. Quadrupole mass analyzers are particularly useful for sources based on plasmas.

Time-of-Flight (TOF) Mass Analyzers
In a time-of-flight mass analyzer, ions are created in small clusters by applying a periodic pulse of energy to the sample using a laser beam or a beam of energetic particles.
The small cluster of ions is then drawn into a tube by applying an electric field and then allowed to drift through the tube in the absence of any additional applied field; the tube, for obvious reasons, is called a drift tube. All of the ions in the cluster enter the drift tube with the same kinetic energy, KE, which means, given $\text{KE} = \frac{1}{2} m v^2 \label{kineticenergy}$ that the square of an ion's velocity is inversely proportional to the ion's mass. As a result, lighter ions move more quickly than heavier ions. Flight times are typically less than 30 µs. A time-of-flight mass analyzer provides better resolution than a quadrupole mass analyzer, but is limited to sources that can be pulsed.

Double-Focusing Mass Analyzers
In a double-focusing mass analyzer, two mechanisms are used to focus a beam of ions onto the transducer. One of the mechanisms is an electrostatic analyzer that serves to confine the kinetic energy of the ions to a narrow range of energies. The second mechanism is a magnetic sector analyzer that uses an applied magnetic field to separate the ions by their mass-to-charge ratio. The combination of the two analyzers allows for significantly better resolution. More details on this type of mass analyzer are included in Chapter 20.
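Because all of the ions share the same kinetic energy, Equation \ref{kineticenergy} is all we need to estimate time-of-flight separations. The Python sketch below is a minimal illustration; the 3 kV accelerating potential, the 1 m drift tube, and the two ion masses are assumed values chosen only to show the scale of the numbers, not parameters taken from the text.

```python
import math

e = 1.602e-19      # elementary charge, C
amu = 1.661e-27    # atomic mass unit, kg

def flight_time(mass_amu, charge=1, V=3000, L=1.0):
    """Flight time (s) down a field-free drift tube of length L (m) for an ion
    accelerated through a potential V (volts): KE = z*e*V = (1/2)*m*v**2."""
    m = mass_amu * amu
    v = math.sqrt(2 * charge * e * V / m)
    return L / v

# two singly charged ions, e.g. Li+ (7 amu) and Ag+ (107 amu)
for m in (7, 107):
    print(f"m = {m:3d} amu: t = {flight_time(m) * 1e6:.1f} µs")
```

Both flight times fall below the 30 µs quoted above, and the lighter ion reaches the transducer first.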
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.02%3A_Mass_Spectrometers.txt
In Chapter 10 we introduced the inductively coupled plasma (ICP) as a source for atomic emission. The plasma in ICP is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. The sample is introduced into the plasma where the high operating temperature of 6000–8000 K is sufficient to atomize and ionize the sample. In optical ICP we measure the emission of photons from the atoms and ions that are in excited states. In ICP-MS we use the plasma as a source of ions that we can send to a mass spectrometer for analysis.

Instrumentation for ICP-MS
An ICP torch operates at room pressure and at an elevated temperature, and a mass spectrometer, as noted in Section 11.2, operates under a vacuum and at room temperature. This difference in pressure and temperature means that coupling these two instruments together requires an interface that can bring the pressure and temperature in line with the demands of the mass spectrometer. Figure $1$ provides a schematic diagram of a typical ICP-MS instrument with the ICP torch on the right and the mass spectrometer's quadrupole mass analyzer and a continuous electron multiplier on the left. In between the two is a two-stage interface. Note that none of the components in Figure $1$ are drawn to scale.

The first stage of the interface consists of two cone-shaped openings: a sampler cone and a skimmer cone. The hot plasma from the ICP torch enters the first stage of the interface through the sampler cone, which is a pin-hole with a diameter of approximately 1 mm. Samples in solution form are drawn directly into the ICP torch using a nebulizer. Solid samples are vaporized using a laser (a process called laser ablation) and the vapor drawn directly into the ICP torch. A pump is used to drop the pressure in the first stage to approximately 1 torr. The expansion of the plasma as it enters the first stage results in some cooling of the plasma. The skimmer cone allows a small portion of the plasma in the first stage to pass into the second stage, which is held at the mass spectrometer's operating pressure of approximately $10^{-5}$ torr. A series of ion lenses are used to narrow the conical dispersion of the plasma, to isolate positive ions from electrons, neutral species, and photons—all of which will generate a signal if they reach the transducer—and to focus the ion beam onto the quadrupole's entrance.

Atomic Mass Spectra and Interferences
Figure $2$ shows an example of an ICP-MS spectrum for the analysis of a metal coating using laser ablation to volatilize the sample. The quadrupole mass analyzer operates over a mass-to-charge range of approximately 3 to 300 and can resolve lines that differ by $\pm1 \text{ m/z}$. Data are collected either by scanning the quadrupole to provide a survey spectrum of all ions generated in the plasma, as is the case in Figure $2$, or by peak hopping in which we gather data for just a few discrete mass-to-charge ratios, adjusting the quadrupole so that it passes only a single mass-to-charge ratio and counting the ions for a set period of time before moving to the next mass-to-charge ratio.

Spectroscopic Interferences
An ICP-MS spectrum is much simpler than the corresponding ICP atomic emission spectrum because each element in the latter has many emission lines and because the plasma itself has many emission lines. Still, an ICP-MS is not free from interferences, the two most important of which are isobaric ions and polyatomic ions.

Isobaric Ions.
Iso- means same and -baric means weight; thus, isobaric means same weight and refers to two (or more) species that have—within the resolution of the mass spectrometer—identical weights and that both contribute to the same peak in the mass spectrum. The source of this interference is the existence of isotopes. For example, the most abundant ions for argon and for calcium are 40Ar and 40Ca, and, given the resolving power of a quadrupole mass analyzer, the two ions appear as a single peak at $m/z = 40$ even though the mass of 40Ar is 39.962383 amu and the mass of 40Ca is 39.962591 amu. We can correct for this interference because the second most abundant isotope of calcium, 44Ca, does not share a $m/z$ with argon (or with another element). Figure $3$ shows the ICP-MS spectrum for a sample that contains calcium and argon, and Example $1$ shows how we can use this spectrum to determine the contribution of each element.

Example $1$
For the spectrum in Figure $3$, the intensity at $m/z = 40$ is 972.07 cps and the intensity at $m/z = 44$ is 18.77 cps. Given that the isotopic abundance of 40Ca is 96.941% and the isotopic abundance of 44Ca is 2.086%, what are the counts-per-second at $m/z = 40$ for Ca and for Ar?

Solution
Given that only 44Ca contributes to the peak at $m/z = 44$ we can use the relative abundances of 40Ca and 44Ca to determine the expected contribution of 40Ca to the total intensity at $m/z = 40$. $18.77 \text{ cps} \times \frac{96.941}{2.086} = 872.28 \text{ cps} \nonumber$ Subtracting this result from the total intensity gives the intensity at $m/z = 40$ for argon as $972.07 \text{ cps} - 872.28 \text{ cps} = 99.79 \text{ cps} \nonumber$

Polyatomic Ions. Compensating for isobaric ions is relatively straightforward because we can rely on the known isotopic abundances of the elements. A more difficult problem is an interference between the isotope of an elemental analyte and a polyatomic ion that has the same mass. Such polyatomic ions may arise from the sample's matrix or from the plasma. For example, the ion 40Ar16O+ has a mass-to-charge ratio of 56, which overlaps with the peak for 56Fe, the most abundant isotope of iron. Although we could choose to monitor iron at a different mass-to-charge ratio, we will lose sensitivity as we are using a less abundant isotope. Corrections can be made using the method outlined in Example $1$, although this may require using multiple peaks, which increases the uncertainty of the final result.

Matrix Effects
A matrix effect occurs when the sample's matrix affects the relationship between the signal and the concentration of the analyte. Matrix effects are common in ICP-MS and may lead to either a suppression or an enhancement in the signal. Although not always well understood, matrix effects likely result from how one easily ionized element affects the ionization of other elements. Matrix matching, using the method of standard additions, or using an internal standard can help minimize matrix effects for quantitative work.

Applications of ICP-MS
ICP-MS finds application for analytes in a wide variety of matrices, including both solutions and solids. Solution samples with high concentrations of dissolved ions may present problems due to the deposition of the salts onto the sampler and skimmer cones, which reduces the size of the pinhole that provides entry into the interface between the ICP torch and the mass spectrometer.
The use of laser ablation makes it possible to analyze surfaces—such as glasses, metals, and ceramics—without additional sample preparation.

Qualitative and Semiquantitative Applications. One of the strengths of ICP-MS is its ability to provide a survey scan, such as that in Figure $2$, that allows for the identification of the elements present in a sample. Analysis of a single standard that contains known concentrations of these elements is sufficient to provide a rough estimate of their concentrations in the sample.

Quantitative Analysis. For a more accurate and precise quantitative analysis, one can prepare multiple external standards and prepare a calibration curve. The calibration curve is linear across approximately six orders of magnitude, with detection limits of less than 1 ppb. Including an internal standard in the external standards can help reduce matrix effects. The ideal internal standard will not produce isobaric ions and its primary ionization potential should be similar to that for the analyte; when working with several analytes, it may be necessary to choose a different internal standard for each analyte.

Isotope Ratios. An important advantage of ICP-MS over other analytical methods is its ability to monitor multiple isotopes for a single element.

11.04: Spark Source Mass Spectrometry

Although ICP-MS is the most widely used method of atomic mass spectrometry, there are other forms of atomic mass spectrometry, three of which we highlight here.

Spark Source Mass Spectrometry (SSMS)
In SSMS, a solid sample is vaporized using a spark source, as described in Chapter 10.2 for atomic emission. Because the spark is generated in an evacuated housing, the interface between the spark source and the mass spectrometer is simpler. Because the spark generates ions with a large distribution of kinetic energies, a quadrupole mass analyzer is not practicable; instead, the mass spectrum is recorded using a double-focusing mass analyzer (see Chapter 20 for more details about this type of mass spectrometer). One advantage of the double-focusing mass analyzer is that it is capable of resolving small differences in masses. For example, in ICP-MS the peaks for 56Fe+ and the polyatomic ion 40Ar16O+ overlap, appearing as a single peak. A double-focusing mass analyzer can separate these two ions, which have, respectively, masses of 55.934942 amu and 55.957298 amu.

Glow Discharge Mass Spectrometry (GDMS)
A glow discharge source generates ions in a manner similar to that used to generate the emission of photons in a hollow cathode lamp (see Chapter 9.2 for a discussion of the hollow cathode lamp). The sample serves as the cathode in a cell that contains a very low pressure of argon gas. The application of a high voltage pulse between the cathode and an anode that also is in the cell converts some of the Ar to Ar+ ions, which then collide with the cathode, sputtering some of the solid sample into a mixture of gas-phase atoms and ions, the latter of which are drawn into the mass spectrometer for analysis.

Elemental Surface Analysis by Mass Spectrometry
When analyzing a solid sample, we often are interested in how its composition varies either across the surface or as a function of depth. We can gather information across a surface if we can focus the ion source to a small spot and then raster that spot across the surface, and we can gather information as a function of depth if we sputter away a portion of the surface.
See Chapter 21 for a discussion of two such techniques: secondary ion mass spectrometry (SIMS) and laser microprobe mass spectrometry.
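The separation of 56Fe+ from 40Ar16O+ discussed above for the double-focusing analyzer is often expressed as a required resolving power, m/Δm. The short Python check below uses only the two masses quoted in the text.

```python
m_fe = 55.934942     # mass of 56Fe+, amu (from the text)
m_aro = 55.957298    # mass of 40Ar16O+, amu (from the text)

resolving_power = m_fe / (m_aro - m_fe)
print(f"required resolving power m/dm = {resolving_power:.0f}")   # roughly 2500
```

A quadrupole mass analyzer, with its resolution of about 1 amu, corresponds to a resolving power of only about 56 at this mass, which is why these two ions appear as a single peak in ICP-MS.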
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.03%3A_Inductively_Coupled_Plasma_Mass_Spectrometer.txt
• 12.1: Fundamental Principles Before considering instrumental methods that rely on X-rays, we first review some fundamental principles of X-rays.
• 12.2: Instrument Components Atomic X-ray spectrometry has the same needs as other forms of optical spectroscopy: a source of X-rays, a means for isolating a desired range of wavelengths of X-rays, a means for detecting the X-rays, and a means for converting the signal at the transducer into a meaningful number. In this section we explore each of these needs.
• 12.3: Atomic X-Ray Fluorescence Methods In X-ray fluorescence a source of X-rays is used to excite the atoms of an analyte in a sample. These excited-state atoms return to their ground state by emitting X-rays, the process we know as fluorescence. The wavelengths of these emission lines are characteristic of the elements that make up the sample; thus, atomic X-ray fluorescence is a useful method for both a qualitative analysis and a quantitative analysis.
• 12.4: Other X-Ray Methods The application of X-rays to the analysis of materials can take forms other than X-ray fluorescence. In this section we briefly consider X-ray absorption and X-ray diffraction.

12: Atomic X-Ray Spectrometry

In Chapter 6 we introduced the electromagnetic spectrum and the characteristic properties of photons, such as the wavelengths, the frequencies, and the energies of ultraviolet, visible, and infrared light. The wavelength range for photons of X-ray radiation extends from approximately 0.01 nm to 10 nm. Although we are used to reporting a photon's wavelength in nanometers, for historical reasons the wavelength of an X-ray photon usually is reported in angstroms (for which the symbol is Å) where 1 Å = 0.1 nm; thus the wavelength range of 0.01 nm to 10 nm for X-ray radiation also is expressed as 0.1 Å to 100 Å. This range of wavelengths corresponds to a range of frequencies from approximately $3 \times 10^{19} \text{ s}^{-1}$ to $3 \times 10^{16} \text{ s}^{-1}$, and a range of energies from approximately $2 \times 10^{-14} \text{ J}$ to $2 \times 10^{-17} \text{ J}$.

Sources of X-Rays
There are three routine ways to generate X-rays, each of which is covered in this section: we can bombard a suitable metal with a beam of high-energy electrons, we can use one X-ray to stimulate the emission of additional X-rays through fluorescence, and we can use a radioactive isotope that emits X-rays as it decays.

Obtaining X-Rays From Electron Beam Sources
An electron beam is created by heating a tungsten wire filament to a temperature at which it releases electrons. These electrons are pulled toward a metal target by applying an accelerating voltage between the metal target and the tungsten wire. The result is the broad continuum of X-ray emission in Figure $1$. The source of this continuous emission spectrum is the reduction in the kinetic energy of the electrons as they collide with the metal target. The loss of kinetic energy results in the production of photons over a broad range of wavelengths and is known as Bremsstrahlung, or braking radiation.

Note
In earlier chapters we divided the sources of photons into two broad groups: continuous sources, such as a tungsten lamp, that produce photons at all wavelengths between a lower limit and an upper limit, and line sources, such as a hollow cathode lamp, that produce photons for one or more discrete wavelengths. The sources used to generate X-rays also generate continuum and/or line spectra.
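The relationships between wavelength, frequency, and energy quoted above are straightforward to verify. The Python sketch below is a minimal check that uses $\nu = c / \lambda$ and $E = h \nu$ for the two wavelength limits of the X-ray region.

```python
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s

def photon_properties(wavelength_nm):
    """Return the frequency (s^-1) and energy (J) of a photon with the given wavelength."""
    wavelength = wavelength_nm * 1e-9            # convert nm to m
    frequency = c / wavelength
    energy = h * frequency
    return frequency, energy

for wl in (0.01, 10):                            # the limits of the X-ray region, in nm
    frequency, energy = photon_properties(wl)
    print(f"{wl:5.2f} nm: frequency = {frequency:.1e} s^-1, energy = {energy:.1e} J")
```

The output reproduces the frequency and energy ranges given in the opening paragraph of this section.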
The lower wavelength limit for X-ray emission, identified here as $\lambda_0$, corresponds to the maximum possible loss of kinetic energy, KE, and is given by $KE = \frac{hc}{\lambda_0} = Ve \label{lmin}$ where h is Planck's constant, c is the speed of light, V is the accelerating voltage, and e is the charge on the electron. The product of the accelerating voltage and the charge on the electron is the kinetic energy of the electrons. Solving Equation \ref{lmin} for $\lambda_0$ gives $\lambda_0 = \frac{hc}{Ve} = \frac{12.398 \text{ kV Å}}{V} \label{lambdamin2}$ where $\lambda_0$ is in angstroms and V is in kilovolts. Note that Equation \ref{lmin} and Equation \ref{lambdamin2} do not include any terms that depend on the target metal, which means that for any accelerating voltage, $\lambda_0$ is the same for all metal targets. Table $1$ gives values of $\lambda_0$ that span the range of accelerating voltages in Figure $1$.

Table $1$. Shortest wavelength of continuous X-ray emission when using an electron beam as a function of its accelerating voltage.
accelerating voltage (kV): $\lambda_0$ (Å)
20: 0.62
25: 0.50
30: 0.41
35: 0.35
40: 0.31
45: 0.28
50: 0.25

If we apply a sufficiently large accelerating voltage, then the emission spectrum will consist of both a continuum spectrum and a line spectrum, as we see in Figure $2$ with molybdenum as the target metal. The spectrum consists of both a continuum similar to that in Figure $1$, and two lines, one at a wavelength of 0.63 Å and one at a wavelength of 0.71 Å. The source of these lines is the emission of X-rays from excited state ions that form when a sufficiently high-energy electron from the electron beam removes an electron from an atomic orbital close to the nucleus. As electrons in atomic orbitals at a greater distance from the nucleus drop into the atomic orbital with a vacancy, they release their extra energy as a photon. Although the background emission from the continuum is the same for all metal targets, the lines have energies that are characteristic of different metals because the energy to remove an electron varies from element-to-element, increasing with atomic number. For example, an accelerating voltage of at least $V = \frac{12.398 \text{ kV Å}}{0.61 Å} = 20 \text{ kV} \nonumber$ is needed to generate the line spectrum for molybdenum in Figure $2$.

The characteristic emission lines for molybdenum in Figure $2$ are identified as $K_{\alpha}$ and $K_{\beta}$, a notation with which you may not be familiar. The simplified energy level diagram in Figure $3$ will help us understand this notation. Each arrow in this energy-level diagram shows a transition in which an electron moves from an orbital at greater distance from the nucleus to an orbital closer to the nucleus. The letters K, L, and M correspond to the principal quantum number n, which has values of 1, 2, 3..., and indicate the shell in which the initial vacancy is created by the collision of the electron beam with the target metal. The Greek symbols $\alpha$, $\beta$, and $\gamma$ indicate the source of the electron that fills this vacancy in terms of its change in the principal quantum number, $\Delta n$. An electron moving from n = 2 to n = 1 and an electron moving from n = 4 to n = 3 have the same designation of $\alpha$. The emission line in Figure $2$ identified as $K_{\beta}$, therefore, is the result of an electron in the n = 3 shell moving into a vacancy in the n = 1 shell (the K shell).

Note
Why is Figure $3$ a simplified energy-level diagram? For each n > 1 there is more than one atomic orbital.
When n = 2 there are three energy levels: one that corresponds to l = 0, one that corresponds to l = 1 and ml = 0, and one that corresponds to l = 1 and ml = ±1. The allowed transitions to the n = 1 energy level require a change in the value for l; thus, we expect to find two emission lines from n = 2 to n = 1 instead of the one shown in Figure $3$. These two lines, which we can identify as $\text{K}_{\alpha 1}$ and $\text{K}_{\alpha 2}$, generally are sufficiently close in value that they are not resolved in the X-ray emission spectrum. For example, $\text{K}_{\alpha 1} = 0.709 \text{ Å}$ and $\text{K}_{\alpha 2} = 0.714 \text{ Å}$ for molybdenum.

Obtaining X-Rays From Fluorescent Sources
When an atom in an excited state emits a photon as a means of returning to a lower energy state, how we describe the process depends on the source of energy that created the excited state. When excitation is the result of thermal energy, we call the process atomic emission. When excitation is the result of the absorption of a photon, we call the process atomic fluorescence. In X-ray fluorescence, excitation is brought about using photons from a source of continuous X-ray radiation. More details on X-ray fluorescence are provided later in this chapter.

Obtaining X-Rays From Radioactive Sources
Atoms that have the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation ${}_Z^A E$, where E is the element’s atomic symbol, Z is the element’s atomic number, and A is the element’s atomic mass number. Although an element’s different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive decay particles as they transform into a more stable form.

An element’s atomic number, Z, is equal to the number of protons and its atomic mass number, A, is equal to the sum of the number of protons and neutrons. We represent an isotope of carbon-13 as $_{6}^{13} \text{C}$ because carbon has six protons and seven neutrons. Sometimes we omit Z from this notation—identifying the element and the atomic number is repetitive because all isotopes of carbon have six protons and any atom that has six protons is an isotope of carbon. Thus, 13C and C–13 are alternative notations for this isotope of carbon.

Radioactive particles can decay in several ways, one of which results in the emission of X-rays. For example, 55Fe can capture an electron and undergo a process in which a proton becomes a neutron, becoming 55Mn and releasing the excess energy as a $\text{K}_{\alpha}$ X-ray. We will not give further consideration to radioactive sources of atomic X-ray emission; see Chapter 32, however, for a further discussion of radioactive methods of analysis.

X-Ray Absorption
Figure $4$ shows a portion of molybdenum's X-ray absorption spectrum over the same range of wavelengths as shown in Figure $2$ for its emission spectrum. Both spectra are relatively simple: the emission spectrum consists of two lines superimposed on a continuum background, and the absorbance spectrum consists of a single line, identified here as the K edge.
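Equation \ref{lambdamin2} and Table $1$ from earlier in this section are easy to reproduce with a few lines of Python. The sketch below also repeats the estimate of the minimum accelerating voltage needed to excite molybdenum's K lines, using the 0.61 Å value quoted above; it is an illustrative calculation only.

```python
def lambda_0(V_kV):
    """Short-wavelength limit (angstroms) of the continuum emission for an
    accelerating voltage expressed in kilovolts (Equation lambdamin2)."""
    return 12.398 / V_kV

# reproduce a few entries from Table 1
for V in (20, 35, 50):
    print(f"{V} kV: lambda_0 = {lambda_0(V):.2f} angstroms")

# minimum accelerating voltage needed to create a vacancy in molybdenum's K shell
print(f"minimum V for the Mo K lines: {12.398 / 0.61:.0f} kV")
```

Because Equation \ref{lambdamin2} contains no terms for the target metal, the same function applies to any anode material; only the threshold for the characteristic lines changes from element to element.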
The Absorption Process
If an X-ray photon is of sufficient energy, then its absorbance by an atom results in the ejection of an electron from one of the atom's innermost atomic orbitals, which you may recognize as the production of a photoelectron. For molybdenum, a wavelength of 0.62 Å (an energy of 20.0 kV) is needed to eject a photoelectron from the K shell (n = 1). At this wavelength the probability of absorption is at its greatest. At shorter wavelengths (greater energies) there is sufficient energy to eject the electron; the probability of absorption, however, decreases and the relative absorbance falls off slowly. The abrupt decrease in absorbance for wavelengths larger than 0.62 Å—this abrupt decrease is the source of the term edge—happens because the photons no longer have sufficient energy to eject an electron from the K shell. The slowly increasing absorbance at wavelengths greater than the K edge is the result of ejecting electrons from the L shell, which has edges at 4.3 Å, 4.7 Å, and 4.9 Å.

Note
The simplified energy level diagram in Figure $3$ shows only one energy level for n = 2 (the L shell). As we noted earlier, there are three energy levels when n = 2: one that corresponds to l = 0, one that corresponds to l = 1 and ml = 0, and one that corresponds to l = 1 and ml = ±1. The three edges corresponding to these energy levels are identified as LI, LII, and LIII.

Beer's Law and X-Ray Absorption
When a source of X-rays passes through a sample with a thickness of x, the following equation holds $A = -\ln \frac{P}{P_0} = \mu_{\text{M}} \rho x \label{beerxray}$ where A is the absorbance, $P_0$ is the power of the X-ray source incident on the sample, $P$ is the power of the X-ray source after it passes through the sample, $\mu_{\text{M}}$ is the sample's mass absorption coefficient and $\rho$ is the sample's density. You may have noticed the similarity between this equation and the equation for Beer's law that we first encountered in Chapter 6 $A = -\ln \frac{P}{P_0} = \epsilon b C \label{beer}$ where $\epsilon$ is the molar absorptivity, $b$ is the pathlength, and $C$ is molar concentration. Note that both density (g/mL) and molarity (mol/L) are measures of concentration that express the amount of the absorbing material present in the sample.

X-Ray Fluorescence
When an electron is ejected from a shell near the nucleus by the absorption of an X-ray, the vacancy created is eventually filled when an electron at a greater distance from the nucleus moves down. Because it takes more energy to eject an electron and create a vacancy than is returned by the movement of other electrons into the vacancy, the resulting fluorescent emission of X-rays is always at wavelengths that are longer (lower energy) than the wavelength that was absorbed. We see this in Figure $4$ and Figure $2$ for molybdenum, where it absorbs an X-ray with a wavelength of 0.62 Å and emits X-rays with wavelengths of 0.63 Å and 0.71 Å.

X-Ray Diffraction
When an X-ray beam is focused onto a sample that has a regular (crystalline) pattern of atoms in three dimensions, some of the radiation scatters from the surface and some of the radiation passes through to the next layer of atoms where the combination of scattering and passing through continues. As a result of this process, the radiation undergoes diffraction in which X-rays of some wavelengths appear to reflect off the surface while X-rays of other wavelengths do not. The conditions that result in diffraction are easy to understand using the diagram in Figure $5$.
The red and green arrows are two parallel beams of X-rays that are focused on an ordered crystalline solid that consists of a layered, repeating pattern of atoms shown by the blue circles. The two beams of X-rays encounter the solid at an angle of $\theta$. The X-ray shown in red scatters off of the first layer, exiting at the same angle of $\theta$. The X-ray shown in green penetrates to the second layer where it undergoes scattering, exiting at the same angle of $\theta$. We know from the superposition of waves (see Chapter 6) that the two beams of X-rays will remain in phase, and thus experience constructive interference, only if the additional distance traveled by the green wave—the sum of the line segments $\overline{bc}$ and $\overline{cd}$—is an integer multiple of the wavelength; thus $\overline{bc} + \overline{cd} = n \lambda \label{bragg1}$ We also know that the lengths of the line segments $\overline{bc}$ and $\overline{cd}$ are given by $\overline{bc}= \overline{cd} = d \sin \theta \label{bragg2}$ where $d$ is the distance between the crystal's layers. Combining Equation \ref{bragg1} and Equation \ref{bragg2} gives $n \lambda = 2 d \sin \theta \label{bragg3}$ Rearranging Equation \ref{bragg3} shows that we will observe diffraction only at angles that satisfy the equation $\sin \theta = \frac{n \lambda}{2d} \label{bragg4}$
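Equation \ref{bragg3} makes it easy to predict the angles at which diffraction occurs. The Python sketch below is a minimal illustration; the wavelength of 1.54 Å (the Cu Kα line) and the layer spacing of 2.01 Å (LiF) are assumed example values, not values taken from the figure.

```python
import math

def bragg_angle(wavelength, d, n=1):
    """Diffraction angle theta (degrees) from n*lambda = 2*d*sin(theta).
    Returns None if no angle satisfies the equation for this order."""
    s = n * wavelength / (2 * d)
    if s > 1:
        return None
    return math.degrees(math.asin(s))

# first- and second-order diffraction angles for an assumed 1.54 angstrom beam and d = 2.01 angstroms
for n in (1, 2):
    theta = bragg_angle(1.54, 2.01, n)
    if theta is None:
        print(f"n = {n}: no diffraction possible")
    else:
        print(f"n = {n}: theta = {theta:.1f} degrees")
```

Higher orders require larger angles, and once $n \lambda / 2d$ exceeds one there is no angle at which that order can diffract.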
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.01%3A_Fundamental_Principles.txt
Atomic X-ray spectrometry has the same needs as other forms of optical spectroscopy: a source of X-rays, a means for isolating a desired range of wavelengths of X-rays, a means for detecting the X-rays, and a means for converting the signal at the transducer into a meaningful number.

X-Ray Sources
The most important source of X-rays is the X-ray tube, a basic diagram of which is shown in Figure $1$. A beam of electrons (shown in red) from a heated tungsten filament (shown in orange) serves as a cathode with a negative potential. The electrons are drawn toward an anode that has a positive potential. The tip of the anode is made from a metal target (shown in blue) that will produce X-rays (shown in green) with the desired wavelengths when struck by the electron beam. Typical metal targets include tungsten, molybdenum, silver, copper, iron, and cobalt. The filament and the target metal are housed inside an evacuated tube. The emitted X-rays exit the tube through an optical window.

Any material that is naturally radioactive emits characteristic X-rays that potentially can serve as a source of X-rays that another species can absorb. For example, in the absorption spectrum for molybdenum (see Figure 12.1.4) the K edge has a wavelength of 0.62 Å, which corresponds to an energy of 20.0 kV. A radioactive source with an emission line that has a wavelength slightly shorter than 0.62 Å (between, for example, 0.5 Å and 0.6 Å) is sufficient. One possibility is 109Cd, which emits X-rays with a wavelength of 0.56 Å, or an energy of 22 kV.

X-Ray Filters and Monochromators
A filter and a monochromator are designed to take a broad range of emission from a source and narrow the range of wavelengths that reach the sample. Figure $2$ shows how to accomplish this using an absorption filter. The blue line shows the emission spectrum for a source that includes two lines—the $\text{K}_{\alpha}$ line and the $\text{K}_{\beta}$ line—superimposed on a broad continuum. The green line shows the absorption spectrum for a different element whose K edge falls in between the source's $\text{K}_{\alpha}$ and $\text{K}_{\beta}$ lines. In this case the K edge filter removes most of the continuum and the $\text{K}_{\beta}$ line, allowing just the $\text{K}_{\alpha}$ line and a small amount of the continuum to reach the sample.

Figure $3$ shows the basic design for an X-ray monochromator, which can operate in either an absorption mode, in which X-rays from the source pass through the sample before entering the monochromator, or in an emission mode, in which X-rays from the source excite the sample and fluorescent emission is sampled at 90°. In either mode, the X-rays pass through a collimator that focuses them onto a crystal where the X-rays undergo diffraction. X-rays are collected by a second collimator before arriving at the transducer. To scan the spectrum, the crystal rotates through an angle of $\theta$; the transducer must rotate twice as fast, traversing an angle of $2 \theta$ to maintain an identical angle between the source and the transducer.

An X-ray monochromator's effective range is determined by the properties of the crystal used for diffraction. We know from Chapter 12.1 that $n \lambda = 2 d \sin \theta \label{diffract1}$ where $n$ is the diffraction order, $\lambda$ is the wavelength, $\theta$ is the X-ray's angle of incidence, and $d$ is the spacing between the crystal's layers.
The practical limit for the angle depends on the monochromator's design, but typically $\theta$ is 7.5° to 75° (or $2 \theta$ angles of 15° to 150°). A common crystal is LiF, which has a spacing of 2.01 Å; thus, it provides a wavelength range from a lower limit of $\lambda = 2 d \sin \theta = 2 \times 2.01 \text{ Å} \times \sin(7.5^{\circ}) = 0.52 \text{ Å} \nonumber$ to an upper limit of $\lambda = 2 d \sin \theta = 2 \times 2.01 \text{ Å} \times \sin(75^{\circ}) = 3.9 \text{ Å} \nonumber$ when $n = 1$. This range of wavelengths is sufficient to study the elements K to Cd using their $\text{K}_{\alpha}$ lines.

X-Ray Transducers
The most common transducers for atomic X-ray spectrometry are the flow proportional counter, the scintillation counter, and the Si(Li) semiconductor. All three transducers act as photon counters.

Photon Counting
The most common transducer for measuring atomic absorbance and atomic emission of ultraviolet and visible light is a photomultiplier tube. As we learned in Chapter 7, a photon strikes a photosensitive surface and generates several electrons. These electrons collide with a series of dynodes, each collision of which generates additional electrons. This amplification of one photon into $10^6$–$10^7$ electrons results in a steady-state current that we can measure. When the intensity of radiation from the source is smaller, as it is with X-rays, then it is possible to store the electrons in a capacitor that, when discharged, provides a pulsed signal that carries information about the photons.

Flow Proportional Counters
Figure $4$ shows the basic structure of a flow proportional counter. The transducer's cell has an inlet and an outlet for creating the flow of argon gas. The cell has windows made from an X-ray transparent material, such as beryllium. X-rays enter the cell and, as shown by the reaction in the upper left, ionize the argon, generating a photoelectron. This photoelectron is sufficiently energetic that it further ionizes the argon, as shown by the reaction in the lower right. The result is an amplification of a single photon into as many as 10,000 electrons. These electrons are drawn to a tungsten wire that is held at a positive potential, and then flow into a capacitor. Discharging the capacitor gives a pulsed signal whose height is proportional to the initial number of electrons and, therefore, to the energy, frequency, and wavelength of the photons.

Scintillation Counters
A flow proportional counter is not an efficient transducer for shorter wavelength (higher energy) X-rays that are likely to pass through the cell without being absorbed by the argon gas, leading to a reduction in the signal. In this case we can use a scintillation counter. Figure $5$ shows how this works. X-ray photons are focused onto a single crystal of NaI that is doped with a small amount, approximately 0.2%, of Tl+ as an iodide salt. Absorption of the X-rays results in the fluorescent emission of multiple photons of visible light with a wavelength of 410 nm. Each of these photons falls on the photocathode of a photomultiplier, eventually producing a voltage pulse. Each pulse corresponds to a single X-ray photon and has a height that is proportional to the photon's energy.

Semiconductor Transducers
In Chapter 7.5 we introduced the use of the pn junction of a silicon semiconductor as a transducer for optical spectroscopy. Absorption of a photon of sufficient energy results in the formation of an electron-hole pair.
Movement of the electron through the n-layer and movement of the hole through the p-region generate a current that is proportional to the number of photons reaching the detector. Figure $6$ shows the structure of the semiconductor used in monitoring X-rays, which consists of a p-type layer and an n-type layer on either side of a single crystal of silicon or germanium that is doped with lithium. The Si(Li) layer has the same role here as Ar has in the flow proportional counter. An X-ray photon that enters the Si(Li) layer generates electron-hole pairs, leading to a measurable current that is proportional to the energy of the X-ray. X-Ray Signal Processors The flow proportional counter, scintillation counter, and semiconductor transducers pass a stream of pulses to the signal processor where a pulse-height selector is used to isolate only those pulses of interest and a pulse-height analyzer is used to summarize the distribution of pulses. Pulse-Height Selectors Not all pulses measured by the transducer are of interest. For example, pulses with small heights are likely to be noise and pulses with large heights may be a higher-order ($n > 1$) diffraction of shorter, more energetic wavelengths. Figure $7$ shows the basic details of how a pulse-height selector works. The pulse-height selector is set to pass only those pulse heights that are between a lower limit and an upper limit. The figure shows three pulses, one that is too small (in blue), one that is too large (in red), and one that we wish to keep (in green). The pulses run through two channels, one that removes only the blue signal and one that retains only the red signal. The latter signal is inverted and combined with the signal from the other channel. Because the red signal has a different sign in the two channels, it, too, is removed, leaving only the one pulse height that meets the criteria for selection. Having removed pulses with heights that are too small or too large, the remaining pulses are analyzed by counting the number of pulses that share a range of pulse heights. Each unique range of pulse heights is called a channel and corresponds to a specific energy of the photons. A spectrum is a plot showing the count of pulses as a function of the energy of the photons.
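The relationship in Equation \ref{diffract1} between a crystal's d-spacing, the goniometer's angular limits, and the accessible wavelengths is easy to check numerically. The short sketch below is a minimal illustration, not part of any instrument's software, that reproduces the LiF calculation for first-order (n = 1) diffraction using the d-spacing and angular limits given above.

```python
import math

def bragg_wavelength(d_angstrom, theta_deg, n=1):
    """Wavelength (in Å) diffracted at angle theta, from Bragg's law n*lambda = 2*d*sin(theta)."""
    return 2 * d_angstrom * math.sin(math.radians(theta_deg)) / n

# LiF crystal: d = 2.01 Å; practical goniometer limits of theta = 7.5 degrees to 75 degrees
d_LiF = 2.01
lambda_min = bragg_wavelength(d_LiF, 7.5)   # shortest accessible wavelength
lambda_max = bragg_wavelength(d_LiF, 75.0)  # longest accessible wavelength
print(f"LiF monochromator range (n = 1): {lambda_min:.2f} Å to {lambda_max:.2f} Å")
```

Running the sketch returns 0.52 Å and 3.88 Å, consistent with the limits calculated above; substituting the d-spacing of a different crystal shows how the accessible range of wavelengths, and thus the range of elements, shifts.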
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.02%3A_Instrument_Components.txt
In X-ray fluorescence a source of X-rays—emission from an X-ray tube or emission from a radioactive element—is used to excite the atoms of an analyte in a sample. These excited-state atoms return to their ground state by emitting X-rays, the process we know as fluorescence. The wavelengths of these emission lines are characteristic of the elements that make up the sample; thus, atomic X-ray fluorescence is a useful method for both a qualitative analysis and a quantitative analysis. Instruments In the previous section we covered the basic components that make up an atomic X-ray spectrometer: a source of X-rays, a means for isolating those wavelengths of interest, a transducer to measure the intensity of fluorescence, and a signal processor to convert the transducer's signal into a useful measurement. How we string these units together is the subject of this section, in which we consider two ways to acquire a sample's spectrum: wavelength dispersive instruments and energy dispersive instruments. Wavelength Dispersive Instruments A wavelength dispersive instrument relies on diffraction using a monochromator, such as that in Figure 12.2.3, to select the analytical wavelength. A sequential wavelength dispersive instrument uses a single monochromator. The monochromator's crystal and transducer are set to the desired angles—$\theta$ for the diffracting crystal and $2 \theta$ for the transducer—for the analyte of interest and the fluorescence intensity is measured for 1–100 s. The monochromator is adjusted for the next analyte and the process repeated until the analysis for all analytes is complete. Analyzing a sample for 20 analytes may take 30 min or more. A simultaneous, or multichannel, wavelength dispersive instrument contains as many as 30 crystals and transducers, each at a fixed angle that is preset for an analyte of interest. Each individual channel has a dedicated transducer and pulse-height selector and analyzer. Analysis of a complex sample with many analytes requires less than a minute. This is similar to the multichannel ICP used in atomic emission (see Figure 10.1.5). Energy Dispersive Instruments An energy dispersive instrument eschews a scanning monochromator and, instead, uses a semiconductor transducer to analyze the fluorescent emission by determining the energies of the emitted photons. Each photon that reaches the transducer generates a pulse of electrons; the pulse's height is measured and converted into the photon's energy. The result is a spectrum showing a count of photons with the same energy as a function of the energy. The collection of data is very fast: if it takes 25 µs to complete the collection and processing of a single photon, then the instrument can count 40,000 photons each second (40 kcps, or kilo counts per second). One limitation to an energy dispersive instrument is its limited resolution with respect to energy. An instrument that operates with 2048 channels—that is, an instrument that divides the energies into 2048 bins—and that processes photons with energies up to 20 keV, has a resolution of approximately 10 eV per channel. Because it does not rely on a monochromator, an energy dispersive instrument occupies a smaller footprint, and portable, hand-held versions are available. Qualitative Analysis Figure $1$ shows the X-ray fluorescence spectrum for the yellow pigment known as Naples yellow, the major elements of which are zinc, lead, and antimony. 
It is easy to identify the major elements in the sample by matching the energies of the individual lines to the published emission lines of the elements, which are available in many on-line sources. For example, the first line highlighted in this spectrum is at an energy of 8.66 keV, which is close to the $\text{K}_{\alpha}$ line for Zn at 8.64 keV, and the last highlighted line is at an energy of 29.97 keV, which is close to the $\text{K}_{\beta}$ line for Sb at 29.7 keV. Quantitative Analysis A semi-quantitative analysis is possible if we assume that there is a linear relationship between the intensity of an element's emission line and its %w/w concentration in the sample. The intensity of emission from a pure sample of the element, $I_\text{pure}$, is measured along with the intensity of emission for the element in a sample, $I_\text{sample}$, and the %w/w is calculated as $\% \text{w/w} = \frac {I_\text{sample}} {I_\text{pure}} \times 100 \label{semiquant}$ Equation \ref{semiquant} is essentially a one-point standardization that makes the significant assumption that the intensity of fluorescent emission is independent of the matrix in which the analyte sits. When this is not true, then errors of $2 \text{-} 3 \times$ are likely. Matrix Effects For fluorescent emission to occur, the analyte must first absorb a photon that can eject a photoelectron. For Equation \ref{semiquant} to hold, the photons that initiate the fluorescent emission must come from the source only. If other elements within the sample's matrix produce fluorescent emission with sufficient energy to eject photoelectrons from the analyte, then the total fluorescence increases and we overestimate the analyte's concentration. If an element in the matrix absorbs the X-rays from the source more strongly than the analyte, then the analyte's total fluorescence becomes smaller and we underestimate the analyte's concentration. There are three common strategies for compensating for matrix effects. External Standards with Matrix Matching. Instead of using a single, pure sample for the calibration, we prepare a series of standards with different concentrations of the analyte. By matching, as best we can, the matrix of the standards to the matrix of the samples, we can improve the accuracy of a quantitative analysis. This assumes, of course, that we have sufficient knowledge of our sample's matrix. Internal Standards. An internal standard is an element that we add to the standards and samples so that its concentration is the same in each. If the analyte and the internal standard experience similar matrix effects, then the ratio of their intensities is proportional to the ratio of their concentrations $\frac{I_\text{analyte, sample}}{I_\text{int std, sample}} = K \times \frac{C_\text{analyte, sample}}{C_\text{int std, sample}} \label{intstd}$ Dilution. A third approach is to dilute the samples and standards by adding a quantity of non-absorbing or poorly absorbing material. Dilution has the effect of minimizing the difference in the matrix of the original samples and standards. 12.04: Other X-Ray Methods The application of X-rays to the analysis of materials can take forms other than X-ray fluorescence. In X-ray absorption spectrometry, the ability of a sample to absorb radiation from an X-ray source is measured. Absorption follows Beer's law (see Section 12.1) and, compared to emission, is relatively free of matrix effects. 
X-ray absorption, however, is a less selective technique than atomic fluorescence because we are not measuring the emission from an analyte's characteristic lines. X-ray absorption finds its greatest utility for the quantitative analysis of samples that contain just one or two major analytes. In powder X-ray diffraction we focus the radiation from an X-ray tube line source on a powdered sample and measure the intensity of diffracted radiation as a function of the transducer's angle ($2 \theta$). A typical powder X-ray diffraction spectrum is shown in Figure $1$ for the mineral calcite (CaCO3). Qualitative identification is obtained by matching the $2 \theta$ peaks to those in published databases. A quantitative analysis for the compound—not the elements that make up the compound—is possible by comparing the intensity of a unique diffraction line in the sample to that for a pure sample. Figure $2$ for a mixture of calcite and magnesite (MgCO3) shows that a simultaneous quantitative analysis for both compounds is possible using the diffraction line at a $2 \theta$ of 29.44° for calcite and of 32.65° for magnesite.
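To make the semi-quantitative and internal-standard relationships from the previous section concrete, the sketch below evaluates Equation \ref{semiquant} and Equation \ref{intstd} for intensities and concentrations that are purely hypothetical and included only for illustration; the value of K, in particular, is something we would have to determine from standards.

```python
def semiquant_wt_percent(I_sample, I_pure):
    """Estimate %w/w from the ratio of the sample's intensity to that of the pure element."""
    return 100 * I_sample / I_pure

def analyte_conc_from_internal_std(I_analyte, I_int_std, C_int_std, K):
    """Solve the internal-standard intensity ratio for the analyte's concentration."""
    return (I_analyte / I_int_std) * C_int_std / K

# hypothetical data: intensities in counts per second, concentrations in %w/w
print(semiquant_wt_percent(I_sample=1250, I_pure=5000))                  # 25.0 %w/w
print(analyte_conc_from_internal_std(I_analyte=840, I_int_std=600,
                                     C_int_std=2.0, K=1.4))              # 2.0 %w/w
```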
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.03%3A_X-Ray_Fluorescence.txt
• 13.1: Transmittance and Absorbance As light passes through a sample, its power decreases as some of it is absorbed. This attenuation of radiation is described quantitatively by two separate, but related terms: transmittance and absorbance. • 13.2: Beer's Law Beer's law connects absorbance to the concentration of the absorbing species. In this section we derive Beer's law and consider some of its limitations. • 13.3: Effect of Noise on Transmittance and Absorbance Measurements In absorption spectroscopy, precision is limited by indeterminate errors—primarily instrumental noise—which are introduced when we measure absorbance. Precision generally is worse for low absorbances and for high absorbances. We might expect, therefore, that precision will vary with transmittance. • 13.4: Instrumentation As covered in Chapter 7, the basic instrumentation for absorbance measurements consists of a source of radiation, a means for selecting the wavelengths to use, a means for detecting the amount of light absorbed by the sample, and a means for processing and displaying the data. In this section we consider two other essential components of an instrument for measuring the absorbance of UV/Vis radiation by molecules: the optical path that connects the source to the detector and a means for placing the sample in this optical path. 13: Introduction to Ultraviolet Visible Absorption Spectrometry As light passes through a sample, its power decreases as some of it is absorbed. This attenuation of radiation is described quantitatively by two separate, but related terms: transmittance and absorbance. As shown in Figure $1a$, transmittance is the ratio of the source radiation’s power as it exits the sample, PT, to that incident on the sample, P0. $T=\frac{P_{\mathrm{T}}}{P_{0}} \label{10.1}$ Multiplying the transmittance by 100 gives the percent transmittance, %T, which varies between 100% (no absorption) and 0% (complete absorption). All methods of detecting photons—including the human eye and modern photoelectric transducers—measure the transmittance of electromagnetic radiation. Equation \ref{10.1} does not distinguish between different mechanisms that prevent a photon emitted by the source from reaching the detector. In addition to absorption by the analyte, several additional phenomena contribute to the attenuation of radiation, including reflection and absorption by the sample’s container, absorption by other components in the sample’s matrix, and the scattering of radiation. To compensate for this loss of the radiation’s power, we use a method blank. As shown in Figure $1b$, we redefine P0 as the power exiting the method blank. An alternative method for expressing the attenuation of electromagnetic radiation is absorbance, A, which we define as $A=-\log T=-\log \frac{P_{\mathrm{T}}}{P_{0}} \label{10.2}$ Absorbance is the more common unit for expressing the attenuation of radiation because—as we will see in the next section—it is a linear function of the analyte’s concentration. Example 13.1.1 A sample has a percent transmittance of 50%. What is its absorbance? Solution A percent transmittance of 50.0% is the same as a transmittance of 0.500. Substituting into Equation \ref{10.2} gives $A=-\log T=-\log (0.500)=0.301 \nonumber$ Exercise 13.1.1 What is the %T for a sample if its absorbance is 1.27? Answer To find the transmittance, $T$, we begin by noting that $A=1.27=-\log T \nonumber$ Solving for T \begin{align*}-1.27 &=\log T \\[4pt] 10^{-1.27} &=T \end{align*} gives a transmittance of 0.054, or a %T of 5.4%.
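Because the conversions between %T and A in Example 13.1.1 and Exercise 13.1.1 come up constantly in practice, a two-line helper is often handy. The sketch below is a minimal illustration that simply reproduces those two calculations.

```python
import math

def absorbance_from_percent_T(percent_T):
    """A = -log10(T), where T is the transmittance expressed as a fraction."""
    return -math.log10(percent_T / 100)

def percent_T_from_absorbance(A):
    """%T = 100 * 10**(-A)."""
    return 100 * 10 ** (-A)

print(round(absorbance_from_percent_T(50.0), 3))   # 0.301, as in Example 13.1.1
print(round(percent_T_from_absorbance(1.27), 1))   # 5.4, as in Exercise 13.1.1
```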
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.01%3A_Transmittance_and_Absorbance.txt
Absorbance and Concentration When monochromatic electromagnetic radiation passes through an infinitesimally thin layer of sample of thickness dx, it experiences a decrease in its power of dP (Figure $1$). This fractional decrease in power is proportional to the sample’s thickness and to the analyte’s concentration, C; thus $-\frac{d P}{P}=\alpha C d x \label{BL1}$ where P is the power incident on the thin layer of sample and $\alpha$ is a proportionality constant. Integrating the left side of Equation \ref{BL1} over the sample’s full thickness $-\int_{P=P_0}^{P=P_t} \frac{d P}{P}=\alpha C \int_{x=0}^{x=b} d x \nonumber$ $\ln \frac{P_{0}}{P_T}=\alpha b C \nonumber$ converting from ln to log, and substituting into the equation relating transmittance to absorbance $A = -\text{log}T = -\text{log}\frac{P_\text{T}}{P_0} \nonumber$ gives $A=a b C \label{BL2}$ where a is the analyte’s absorptivity with units of cm–1 conc–1. If we express the concentration using molarity, then we replace a with the molar absorptivity, $\varepsilon$, which has units of cm–1 M–1. $A=\varepsilon b C \label{BL3}$ The absorptivity and the molar absorptivity are proportional to the probability that the analyte absorbs a photon of a given energy. As a result, values for both a and $\varepsilon$ depend on the wavelength of the absorbed photon. Example 13.2.1 A $5.00 \times 10^{-4}$ M solution of analyte is placed in a sample cell that has a pathlength of 1.00 cm. At a wavelength of 490 nm, the solution’s absorbance is 0.338. What is the analyte’s molar absorptivity at this wavelength? Solution Solving Equation \ref{BL3} for $\epsilon$ and making appropriate substitutions gives $\varepsilon=\frac{A}{b C}=\frac{0.338}{(1.00 \ \mathrm{cm})\left(5.00 \times 10^{-4} \ \mathrm{M}\right)}=676 \ \mathrm{cm}^{-1} \ \mathrm{M}^{-1} \nonumber$ Exercise 13.2.1 A solution of the analyte from Example 13.2.1 has an absorbance of 0.228 in a 1.00-cm sample cell. What is the analyte’s concentration? Answer Making appropriate substitutions into Beer’s law $A=0.228=\varepsilon b C=\left(676 \ \mathrm{M}^{-1} \ \mathrm{cm}^{-1}\right)(1 \ \mathrm{cm}) C \nonumber$ and solving for C gives a concentration of $3.37 \times 10^{-4}$ M. Equation \ref{BL2} and Equation \ref{BL3}, which establish the linear relationship between absorbance and concentration, are known as Beer’s law. Calibration curves based on Beer’s law are common in quantitative analyses. As is often the case, the formulation of a law is more complicated than its name suggests. This is the case, for example, with Beer’s law, which also is known as the Beer-Lambert law or the Beer-Lambert-Bouguer law. Pierre Bouguer, in 1729, and Johann Lambert, in 1760, noted that the transmittance of light decreases exponentially with an increase in the sample’s thickness. $T \propto e^{-b} \nonumber$ Later, in 1852, August Beer noted that the transmittance of light decreases exponentially as the concentration of the absorbing species increases. $T \propto e^{-C} \nonumber$ Together, and when written in terms of absorbance instead of transmittance, these two relationships make up what we know as Beer’s law. Beer's Law and Multicomponent Samples We can extend Beer’s law to a sample that contains several absorbing components. If there are no interactions between the components, then the individual absorbances, Ai, are additive. 
For a two-component mixture of analytes X and Y, the total absorbance, Atot, is $A_{tot}=A_{X}+A_{Y}=\varepsilon_{X} b C_{X}+\varepsilon_{Y} b C_{Y} \nonumber$ Generalizing, the absorbance for a mixture of n components, Amix, is $A_{m i x}=\sum_{i=1}^{n} A_{i}=\sum_{i=1}^{n} \varepsilon_{i} b C_{i} \label{BL4}$ Limitations to Beer's Law Beer’s law suggests that a plot of absorbance vs. concentration—we will call this a Beer’s law plot—is a straight line with a y-intercept of zero and a slope of ab or $\varepsilon b$. In some cases a Beer’s law plot deviates from this ideal behavior (see Figure $2$), and such deviations from linearity are divided into three categories: fundamental, chemical, and instrumental. Fundamental Limitations to Beer's Law Beer’s law is a limiting law that is valid only for low concentrations of analyte. There are two contributions to this fundamental limitation to Beer’s law. At higher concentrations the individual particles of analyte no longer are independent of each other. The resulting interaction between particles of analyte may change the analyte’s absorptivity. A second contribution is that an analyte’s absorptivity depends on the solution’s refractive index. Because a solution’s refractive index varies with the analyte’s concentration, values of a and $\varepsilon$ may change. For sufficiently low concentrations of analyte, the refractive index essentially is constant and a Beer’s law plot is linear. Chemical Limitations to Beer's Law A chemical deviation from Beer’s law may occur if the analyte is involved in an equilibrium reaction. Consider, for example, the weak acid, HA. To construct a Beer’s law plot we prepare a series of standard solutions—each of which contains a known total concentration of HA—and then measure each solution’s absorbance at the same wavelength. Because HA is a weak acid, it is in equilibrium with its conjugate weak base, A$^-$. In the equations that follow, the conjugate weak base A$^-$ sometimes is written as A because it is easy to mistake the symbol for anionic charge as a minus sign; thus, we will write $C_A$ instead of $C_{A^-}$. $\mathrm{HA}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{A}^{-}(a q) \nonumber$ If both HA and A absorb at the selected wavelength, then Beer’s law is $A=\varepsilon_{\mathrm{HA}} b C_{\mathrm{HA}}+\varepsilon_{\mathrm{A}} b C_{\mathrm{A}} \label{BL5}$ Because the weak acid’s total concentration, Ctotal, is $C_{\mathrm{total}}=C_{\mathrm{HA}}+C_{\mathrm{A}} \nonumber$ we can write the concentrations of HA and A as $C_{\mathrm{HA}}=\alpha_{\mathrm{HA}} C_{\mathrm{total}} \label{BL6}$ $C_{\text{A}} = (1 - \alpha_\text{HA})C_\text{total} \label{BL7}$ where $\alpha_\text{HA}$ is the fraction of weak acid present as HA. Substituting Equation \ref{BL6} and Equation \ref{BL7} into Equation \ref{BL5} and rearranging gives $A=\left(\varepsilon_{\mathrm{HA}} \alpha_{\mathrm{HA}}+\varepsilon_{\mathrm{A}}-\varepsilon_{\mathrm{A}} \alpha_{\mathrm{HA}}\right) b C_{\mathrm{total}} \label{BL8}$ To obtain a linear Beer’s law plot, we must satisfy one of two conditions. 
If $\varepsilon_\text{HA}$ and $\varepsilon_{\text{A}}$ have the same value at the selected wavelength, then Equation \ref{BL8} simplifies to $A = \varepsilon_{\text{A}}bC_\text{total} = \varepsilon_\text{HA}bC_\text{total} \nonumber$ Alternatively, if $\alpha_\text{HA}$ has the same value for all standard solutions, then the term within the parentheses of Equation \ref{BL8} is constant—which we replace with k—and a linear calibration curve is obtained at any wavelength. $A=k b C_{\mathrm{total}} \nonumber$ Because HA is a weak acid, the value of $\alpha_\text{HA}$ varies with pH. To hold $\alpha_\text{HA}$ constant we buffer each standard solution to the same pH. Depending on the relative values of $\varepsilon_\text{HA}$ and $\varepsilon_{\text{A}}$, the calibration curve shows a positive or a negative deviation from Beer’s law if we do not buffer the standards to the same pH. Instrumental Limitations to Beer's Law There are two principal instrumental limitations to Beer’s law: stray radiation and polychromatic radiation. Stray radiation is the first contribution to instrumental deviations from Beer’s law. Stray radiation arises from imperfections in the wavelength selector that allow light to enter the instrument and to reach the detector without passing through the sample. Stray radiation adds an additional contribution, Pstray, to the radiant power that reaches the detector; thus $A=-\log \frac{P_{\mathrm{T}}+P_{\text { stray }}}{P_{0}+P_{\text { stray }}} \nonumber$ For a small concentration of analyte, Pstray is significantly smaller than P0 and PT, and the absorbance is unaffected by the stray radiation. For higher concentrations of analyte, less light passes through the sample and PT and Pstray become similar in magnitude. The result is an absorbance that is smaller than expected, and a negative deviation from Beer’s law. The second limitation is that Beer’s law assumes that radiation reaching the sample is of a single wavelength—that is, it assumes a purely monochromatic source of radiation. Even the best wavelength selector, however, passes radiation with a small, but finite effective bandwidth. Let's assume we have a line source that emits light at two wavelengths, $\lambda^{\prime}$ and $\lambda^{\prime \prime}$. When treated separately, the absorbances at these wavelengths, A′ and A′′, are $A^{\prime}=-\log \frac{P_{\mathrm{T}}^{\prime}}{P_{0}^{\prime}}=\varepsilon^{\prime} b C \quad \quad A^{\prime \prime}=-\log \frac{P_{\mathrm{T}}^{\prime \prime}}{P_{0}^{\prime \prime}}=\varepsilon^{\prime \prime} b C \nonumber$ If both wavelengths are measured simultaneously the absorbance is $A=-\log \frac{\left(P_{\mathrm{T}}^{\prime}+P_{\mathrm{T}}^{\prime \prime}\right)}{\left(P_{0}^{\prime}+P_{0}^{\prime \prime}\right)} \nonumber$ Expanding the logarithmic function of the equation's right side gives $A = \log (P_0^{\prime} + P_0^{\prime \prime}) - \log (P_\text{T}^\prime + P_\text{T}^{\prime \prime}) \label{IL1}$ Next, we need to find a relationship between $P_\text{T}$ and $P_0$ for any wavelength. 
To do this, we start with Beer's law $A = - \log \frac{P_\text{T}}{P_0} = \epsilon b C \nonumber$ and then solve for $P_\text{T}$ in terms of $P_0$ $\log \frac{P_\text{T}}{P_0} = - \epsilon b C \nonumber$ $\frac{P_\text{T}}{P_0} = 10^{- \epsilon b C} \nonumber$ $P_\text{T} = P_0 \times 10^{- \epsilon b C} \nonumber$ Substituting this general relationship back into our wavelength-specific equation for absorbance, Equation \ref{IL1}, we obtain $A = \log (P_0^{\prime} + P_0^{\prime \prime}) - \log (P_0^{\prime} \times 10^{- \epsilon^{\prime} b C} + P_0^{\prime \prime} \times 10^{- \epsilon^{\prime \prime} b C}) \label{IL2}$ For monochromatic radiation, we have $\epsilon^{\prime} = \epsilon^{\prime \prime} = \epsilon$ and Equation \ref{IL2} simplifies to Beer's law $A = -\log (10^{- \epsilon b C}) = \epsilon b C \nonumber$ For non-monochromatic radiation, Equation \ref{IL2} predicts that the absorbance is smaller than expected if $\epsilon^{\prime} > \epsilon^{\prime \prime}$. Polychromatic radiation always gives a deviation from Beer’s law, but the effect is smaller if the value of $\varepsilon$ essentially is constant over the wavelength range passed by the wavelength selector. For this reason, as shown in Figure $3$, it is better to make absorbance measurements at the top of a broad absorption peak. In addition, the deviation from Beer’s law is less serious if the source’s effective bandwidth is less than one-tenth of the absorbing species’ natural bandwidth [(a) Strong, F. C., III Anal. Chem. 1984, 56, 16A–34A; (b) Gilbert, D. D. J. Chem. Educ. 1991, 68, A278–A281]. When measurements must be made on a slope, linearity is improved by using a narrower effective bandwidth.
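We can see the effect of polychromatic radiation by evaluating Equation \ref{IL2} directly. The sketch below is a minimal illustration that uses two hypothetical molar absorptivities and equal source power at both wavelengths; as the concentration increases, the apparent absorbance falls increasingly below the Beer's law value predicted by the larger molar absorptivity.

```python
import math

def apparent_absorbance(C, eps1, eps2, b=1.0, P01=1.0, P02=1.0):
    """Apparent absorbance for a two-wavelength source, following Equation IL2."""
    P_T = P01 * 10 ** (-eps1 * b * C) + P02 * 10 ** (-eps2 * b * C)
    return math.log10((P01 + P02) / P_T)

eps1, eps2 = 1000.0, 500.0   # hypothetical molar absorptivities in M^-1 cm^-1
for C in [1e-4, 5e-4, 1e-3, 2e-3]:
    ideal = eps1 * 1.0 * C   # Beer's law at the more strongly absorbed wavelength
    print(f"C = {C:.0e} M   ideal A' = {ideal:.3f}   apparent A = {apparent_absorbance(C, eps1, eps2):.3f}")
```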
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.02%3A_Beer%27s_Law.txt
In absorption spectroscopy, precision is limited by indeterminate errors—primarily instrumental noise—which are introduced when we measure absorbance. Precision generally is worse for low absorbances where $P_0 \approx P_\text{T}$, and for high absorbances where PT approaches 0. We might expect, therefore, that precision will vary with transmittance. We can derive an expression relating precision to transmittance by rewriting Beer's law as $C=-\frac{1}{\varepsilon b} \log T \label{noise1}$ and completing a propagation of uncertainty (see the Appendices for a discussion of propagation of error), which gives $s_{c}=-\frac{0.4343}{\varepsilon b} \times \frac{s_{T}}{T} \label{noise2}$ where sT is the absolute uncertainty in the transmittance. Dividing Equation \ref{noise2} by Equation \ref{noise1} gives the relative uncertainty in concentration, sC/C, as $\frac{s_c}{C}=\frac{0.4343 s_{T}}{T \log T} \nonumber$ If we know the transmittance’s absolute uncertainty, then we can determine the relative uncertainty in concentration for any measured transmittance. Determining the relative uncertainty in concentration is complicated because sT is a function of the transmittance. As shown in Table $1$, three categories of indeterminate instrumental error are observed [Rothman, L. D.; Crouch, S. R.; Ingle, J. D. Jr. Anal. Chem. 1975, 47, 1226–1233]. Table $1$. Effect of Indeterminate Errors on Relative Uncertainty in Concentration category sources of indeterminate error relative uncertainty in concentration $s_T = k_1$ %T readout resolution noise in thermal detectors $\frac{s_{C}}{C}=\frac{0.4343 k_{1}}{T \log T}$ $s_T = k_2 \sqrt{T^2 + T}$ noise in photon detectors $\frac{s_{C}}{C}=\frac{0.4343 k_{2}}{\log T} \sqrt{1+\frac{1}{T}}$ $s_T = k_3 T$ positioning of sample cell fluctuations in source intensity $\frac{s_{C}}{C}=\frac{0.4343 k_{3}}{\log T}$ A constant sT is observed for the uncertainty associated with reading %T on a meter’s analog or digital scale, both common on less-expensive spectrophotometers. Typical values are ±0.2–0.3% (a k1 of ±0.002–0.003) for an analog scale and ±0.001% (a k1 of ±0.00001) for a digital scale. A constant sT also is observed for the thermal transducers used in infrared spectrophotometers. The effect of a constant sT on the relative uncertainty in concentration is shown by curve A in Figure $1$. Note that the relative uncertainty is very large for both high absorbances and low absorbances, reaching a minimum when the absorbance is 0.4343. This source of indeterminate error is important for infrared spectrophotometers and for inexpensive UV/Vis spectrophotometers. To obtain a relative uncertainty in concentration of ±1–2%, the absorbance is kept within the range 0.1–1. Values of sT are a complex function of transmittance when indeterminate errors are dominated by the noise associated with photon detectors. Curve B in Figure $1$ shows that the relative uncertainty in concentration is very large for low absorbances, but is smaller at higher absorbances. Although the relative uncertainty reaches a minimum when the absorbance is 0.963, there is little change in the relative uncertainty for absorbances between 0.5 and 2. This source of indeterminate error generally limits the precision of high quality UV/Vis spectrophotometers for mid-to-high absorbances. Finally, the value of sT is directly proportional to transmittance for indeterminate errors that result from fluctuations in the source’s intensity and from uncertainty in positioning the sample within the spectrometer. 
The latter is particularly important because the optical properties of a sample cell are not uniform. As a result, repositioning the sample cell may lead to a change in the intensity of transmitted radiation. As shown by curve C in Figure $1$, the effect is important only at low absorbances. This source of indeterminate errors usually is the limiting factor for high quality UV/Vis spectrophotometers when the absorbance is relatively small. When the relative uncertainty in concentration is limited by the %T readout resolution, it is possible to improve the precision of the analysis by redefining 100% T and 0% T. Normally 100% T is established using a blank and 0% T is established while preventing the source’s radiation from reaching the detector. If the absorbance is too high, precision is improved by resetting 100% T using a standard solution of analyte whose concentration is less than that of the sample (Figure $2a$). For a sample whose absorbance is too low, precision is improved by redefining 0% T using a standard solution of the analyte whose concentration is greater than that of the sample (Figure $2b$). In this case a calibration curve is required because a linear relationship between absorbance and concentration no longer exists. Precision is further increased by combining these two methods (Figure $2c$). Again, a calibration curve is necessary since the relationship between absorbance and concentration is no longer linear.
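The shapes of the curves in Figure $1$ follow directly from the equations in Table $1$. As a minimal illustration, the sketch below evaluates the relative uncertainty in concentration for the first category, a constant $s_T$ assumed here to be ±0.003, and shows that it passes through a minimum at an absorbance of 0.4343.

```python
import math

def relative_uncertainty_constant_sT(A, k1=0.003):
    """sC/C for a constant uncertainty in transmittance, sT = k1 (first entry in Table 1)."""
    T = 10 ** (-A)
    return abs(0.4343 * k1 / (T * math.log10(T)))

for A in [0.1, 0.2, 0.4343, 1.0, 1.5, 2.0]:
    print(f"A = {A:.4f}   sC/C = {100 * relative_uncertainty_constant_sT(A):.2f}%")
```

The printed values reproduce the behavior of curve A in Figure $1$: the relative uncertainty is smallest near an absorbance of 0.4343 and grows rapidly at both low and high absorbances.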
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.03%3A_Effect_of_Noise_on_Transmittance_and_Absorbance_Measurements.txt
Basic Components As covered in Chapter 7, the basic instrumentation for absorbance measurements consists of a source of radiation, a means for selecting the wavelengths to use, a means for detecting the amount of light absorbed by the sample, and a means for processing and displaying the data. In this section we consider two other essential components of an instrument for measuring the absorbance of UV/Vis radiation by molecules: the optical path that connects the source to the detector and a means for placing the sample in this optical path. Instrument Designs for Molecular UV/Vis Absorption Frequently an analyst must select from among several different optical paths, the one that is best suited for a particular analysis. In this section we examine several different instruments for molecular absorption spectroscopy with an emphasis on their advantages and limitations. Filter Photometer The simplest instrument for molecular UV/Vis absorption is a filter photometer (Figure $1$), which uses an absorption or interference filter to isolate a band of radiation. The filter is placed between the source and the sample to prevent the sample from decomposing when exposed to higher energy radiation. A filter photometer has a single optical path between the source and detector, and is called a single-beam instrument. The instrument is calibrated to 0% T while using a shutter to block the source radiation from the detector. After opening the shutter, the instrument is calibrated to 100% T using an appropriate blank. The blank is then replaced with the sample and its transmittance measured. Because the source’s incident power and the sensitivity of the detector vary with wavelength, the photometer is recalibrated whenever the filter is changed. Photometers have the advantage of being relatively inexpensive, rugged, and easy to maintain. Another advantage of a photometer is its portability, making it easy to take into the field. Disadvantages of a photometer include the inability to record an absorption spectrum and the source’s relatively large effective bandwidth, which limits the calibration curve’s linearity. The percent transmittance varies between 0% and 100%. We use a blank to determine P0, which corresponds to 100%T. Even in the absence of light the detector records a signal. Closing the shutter allows us to assign 0%T to this signal. Together, setting 0% T and 100%T calibrates the instrument. The amount of light that passes through a sample produces a signal that is greater than or equal to 0%T and smaller than or equal to 100%T. Figure $1$. Schematic diagram of a filter photometer. The analyst either inserts a removable filter or the filters are placed in a carousel, an example of which is shown in the photographic inset. The analyst selects a filter by rotating it into place. Single-Beam Spectrophotometer An instrument that uses a monochromator for wavelength selection is called a spectrophotometer. The simplest spectrophotometer is a single-beam instrument equipped with a fixed-wavelength monochromator (Figure $2$). Single-beam spectrophotometers are calibrated and used in the same manner as a photometer. One example of a single-beam spectrophotometer is Thermo Scientific’s Spectronic 20D+, which is shown in the photographic insert to Figure $2$. The Spectronic 20D+ has a wavelength range of 340–625 nm (950 nm when using a red-sensitive detector), and a fixed effective bandwidth of 20 nm. 
Battery-operated, hand-held single-beam spectrophotometers are available, which are easy to transport into the field. Other single-beam spectrophotometers also are available with effective bandwidths of 2–8 nm. Fixed wavelength single-beam spectrophotometers are not practical for recording spectra because manually adjusting the wavelength and recalibrating the spectrophotometer is awkward and time-consuming. The accuracy of a single-beam spectrophotometer is limited by the stability of its source and detector over time. Double-Beam Spectrophotometer The limitations of a fixed-wavelength, single-beam spectrophotometer are minimized by using a double-beam spectrophotometer (Figure $3$). A chopper controls the radiation’s path, alternating it between the sample, the blank, and a shutter. The signal processor uses the chopper’s speed of rotation to resolve the signal that reaches the detector into the transmission of the blank, P0, and the sample, PT. By including an opaque surface as a shutter, it also is possible to continuously adjust 0%T. The effective bandwidth of a double-beam spectrophotometer is controlled by adjusting the monochromator’s entrance and exit slits. Effective bandwidths of 0.2–3.0 nm are common. A scanning monochromator allows for the automated recording of spectra. Double-beam instruments are more versatile than single-beam instruments, being useful for both quantitative and qualitative analyses, but also are more expensive and not particularly portable. Diode Array Spectrometer An instrument with a single detector can monitor only one wavelength at a time. If we replace a single photomultiplier with an array of photodiodes, we can use the resulting detector to record a full spectrum in as little as 0.1 s. In a diode array spectrometer the source radiation passes through the sample and is dispersed by a grating (Figure $4$). The photodiode array detector is situated at the grating’s focal plane, with each diode recording the radiant power over a narrow range of wavelengths. Because we replace a full monochromator with just a grating, a diode array spectrometer is small and compact. One advantage of a diode array spectrometer is the speed of data acquisition, which allows us to collect multiple spectra for a single sample. Individual spectra are added and averaged to obtain the final spectrum. This signal averaging improves a spectrum’s signal-to-noise ratio. If we add together n spectra, the sum of the signal at any point, x, increases as nSx, where Sx is the signal. The noise at any point, Nx, is a random event, which increases as $\sqrt{n} N_x$ when we add together n spectra. The signal-to-noise ratio after n scans, (S/N)n is $\left(\frac{S}{N}\right)_{n}=\frac{n S_{x}}{\sqrt{n} N_{x}}=\sqrt{n} \frac{S_{x}}{N_{x}} \nonumber$ where Sx/Nx is the signal-to-noise ratio for a single scan. The impact of signal averaging is shown in Figure $5$. The first spectrum shows the signal after one scan, which consists of a single, noisy peak. Signal averaging using 4 scans and 16 scans decreases the noise and improves the signal-to-noise ratio. One disadvantage of a photodiode array is that the effective bandwidth per diode is roughly an order of magnitude larger than that for a high quality monochromator. Sample Cells The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed with UV/Vis transparent materials, such as quartz, glass, and plastic (Figure $6$). 
A quartz or fused-silica cell is required when working at a wavelength <300 nm where other materials show a significant absorption. The most common pathlength is 1 cm (10 mm), although cells with shorter (as little as 0.1 cm) and longer pathlengths (up to 10 cm) are available. Longer pathlength cells are useful when analyzing a very dilute solution or for gas samples. The highest quality cells allow the radiation to strike a flat surface at a 90° angle, minimizing the loss of radiation to reflection. A test tube often is used as a sample cell with simple, single-beam instruments, although differences in the cell’s pathlength and optical properties add an additional source of error to the analysis. If we need to monitor an analyte’s concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient’s blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ. An example of a remote sensing fiber-optic probe is shown in Figure $7$. The probe consists of two bundles of fiber-optic cable. One bundle transmits radiation from the source to the probe’s tip, which is designed to allow the sample to flow through the sample cell. Radiation from the source passes through the solution and is reflected back by a mirror. The second bundle of fiber-optic cable transmits the nonabsorbed radiation to the wavelength selector. Another design replaces the flow cell shown in Figure $7$ with a membrane that contains a reagent that reacts with the analyte. When the analyte diffuses into the membrane it reacts with the reagent, producing a product that absorbs UV or visible radiation. The nonabsorbed radiation from the source is reflected or scattered back to the detector. Fiber optic probes that show chemical selectivity are called optrodes [(a) Seitz, W. R. Anal. Chem. 1984, 56, 16A–34A; (b) Angel, S. M. Spectroscopy 1987, 2(2), 38–48].
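The $\sqrt{n}$ improvement in the signal-to-noise ratio from signal averaging, described above for the diode array spectrometer, is easy to demonstrate with simulated data. The sketch below is a minimal illustration: the spectrum is a single synthetic Gaussian band and the noise level is an arbitrary assumption, not data from any real instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 301)                          # nm, for illustration only
true_signal = np.exp(-0.5 * ((wavelengths - 550) / 15) ** 2)      # a synthetic absorption band

def signal_to_noise(n_scans, noise_sd=0.5):
    """Average n noisy scans; estimate S/N as the peak height over the baseline's standard deviation."""
    scans = true_signal + rng.normal(0, noise_sd, size=(n_scans, wavelengths.size))
    mean_spectrum = scans.mean(axis=0)
    baseline = mean_spectrum[wavelengths < 480]                   # a region away from the band
    return mean_spectrum.max() / baseline.std()

for n in [1, 4, 16]:
    print(f"{n:>2} scans: S/N = {signal_to_noise(n):.1f}")
```

Averaging 4 scans roughly doubles the signal-to-noise ratio relative to a single scan, and averaging 16 scans roughly quadruples it, mirroring the spectra in Figure $5$.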
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.4%3A_Instrumentation.txt
• 14.1: What is Molar Absorptivity? Beer's law, as we learned in Chapter 13, gives the relationship between the amount of light absorbed by a sample, the concentration of the species absorbing light, the distance (path length) the light travels through the sample, and the molar absorptivity of the species absorbing light. • 14.2: Absorbing Species There are two general requirements for an analyte’s absorption of electromagnetic radiation. First, there must be a mechanism by which the radiation’s electric field or magnetic field can interact with the analyte. For ultraviolet and visible radiation, absorption of a photon changes the energy of the analyte’s valence electrons. The second requirement is that the photon’s energy must exactly equal the difference in energy between two of the analyte’s quantized energy states. • 14.3: Qualitative and Characterization Applications UV/Vis absorption bands result from the absorption of electromagnetic radiation by specific valence electrons or bonds. The energy at which the absorption occurs, and the intensity of that absorption, is determined by the chemical environment of the absorbing moiety. UV/Vis spectroscopy also provides ways for studying chemical reactivity. • 14.4: Quantitative Applications The determination of an analyte’s concentration based on its absorption of ultraviolet or visible radiation is one of the most common quantitative analytical methods. In addition, if an analyte does not absorb UV/Vis radiation—or if its absorbance is too weak—we often can react it with another species that is strongly absorbing. • 14.5: Photometric Titrations If at least one species in a titration absorbs electromagnetic radiation, then we can identify the end point by monitoring the titrand’s absorbance at a carefully selected wavelength. 14: Applications of Ultraviolet Visible Molecular Absorption Spectrometry Beer's law, as we learned in Chapter 13, gives the relationship between the amount of light absorbed by a sample, $A$, the concentration of the species absorbing light, $C$, the distance (path length) the light travels through the sample, $b$, and the molar absorptivity of the species absorbing light, $\epsilon$ $A = \epsilon b C \nonumber$ The meanings of path length and concentration are self-evident, and their effect on the extent of absorbance also is self-evident: the more absorbing species that are present (concentration) and the more opportunity for any one molecule to absorb light (path length), the greater the absorbance. The meaning of molar absorptivity—what it represents—is less intuitive. It is, of course, a proportionality constant that converts the product of path length and concentration, $b C$, into absorbance, but that is not a particularly satisfying definition. Maximum values for $\epsilon$ are on the order of $10^5$ L/(mol•cm) for simple molecules, and are proportional to the cross-sectional area of the absorbing species and the probability that a photon passing through this cross-sectional area is absorbed. Here we have a self-evident relationship: the greater the cross-sectional area—the more space occupied by the absorbing species—the greater the opportunity for absorbance; and the more favorable the probability of absorption—with probabilities ranging from 0 to 1—the greater the absorbance. 
Although molar absorptivity values are often reported in the literature, their values usually vary significantly from study to study, presumably due to differences in the purity of the reagents, the solvents used to prepare solutions, the precision with which path length is measured, and the instrument used for the measurements. For this reason, molar absorptivity values are usually calculated as needed by making careful measurements of $A$, $b$, and $C$, or by simply reducing Beer's law to $A = k C$ where $k$ is determined from a calibration curve.
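When a calibration curve is used in this way, the constant k (and, if the pathlength is known, ε) typically is extracted by a least-squares fit forced through the origin. The sketch below is a minimal illustration using hypothetical calibration data constructed to be consistent with the molar absorptivity of 676 M–1 cm–1 from Example 13.2.1.

```python
import numpy as np

# hypothetical calibration data: concentration (M) and absorbance in a 1.00-cm cell
C = np.array([1.0e-4, 2.0e-4, 3.0e-4, 4.0e-4, 5.0e-4])
A = np.array([0.068, 0.135, 0.203, 0.270, 0.338])
b = 1.00  # pathlength in cm

# least-squares slope for A = k*C with the intercept fixed at zero
k = np.sum(C * A) / np.sum(C ** 2)
epsilon = k / b
print(f"k = {k:.0f} M^-1   epsilon = {epsilon:.0f} M^-1 cm^-1")
```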
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.01%3A_What_is_Molar_Absorptivity.txt
There are two general requirements for an analyte’s absorption of electromagnetic radiation. First, there must be a mechanism by which the radiation’s electric field or magnetic field can interact with the analyte. For ultraviolet and visible radiation, absorption of a photon changes the energy of the analyte’s valence electrons. The second requirement is that the photon’s energy, $h \nu$, must exactly equal the difference in energy, $\Delta E$, between two of the analyte’s quantized energy states. We can use the energy level diagram in Figure $1$ to explain an absorbance spectrum. The lines labeled E0 and E1 represent the analyte’s ground (lowest) electronic state and its first electronic excited state. Superimposed on each electronic energy level is a series of lines representing vibrational energy levels. UV/Vis Spectra for Molecules and Ions The valence electrons in organic molecules and polyatomic ions, such as $\text{CO}_3^{2-}$, occupy quantized sigma bonding ($\sigma$), pi bonding ($\pi$), and non-bonding (n) molecular orbitals (MOs). Unoccupied sigma antibonding ($\sigma^*$) and pi antibonding ($\pi^*$) molecular orbitals are slightly higher in energy. Because the difference in energy between the highest-energy occupied molecular orbitals (HOMO) and the lowest-energy unoccupied molecular orbitals (LUMO) corresponds to ultraviolet and visible radiation, absorption of a photon is possible. Four types of transitions between quantized energy levels account for most molecular UV/Vis spectra. Table $1$ lists the approximate wavelength ranges for these transitions, as well as a partial list of bonds, functional groups, or molecules responsible for these transitions. Of these transitions, the most important are $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ because they involve important functional groups that are characteristic of many analytes and because the wavelengths are easily accessible. The bonds and functional groups that give rise to the absorption of ultraviolet and visible radiation are called chromophores. Table $1$. Electronic Transitions Involving n, $\sigma$, and $\pi$ Molecular Orbitals transition wavelength range examples $\sigma \rightarrow \sigma^*$ <200 nm C—C, C—H $n \rightarrow \sigma^*$ 160–260 nm H2O, CH3OH, CH3Cl $\pi \rightarrow \pi^*$ 200–500 nm C=C, C=O, C=N, C≡C $n \rightarrow \pi^*$ 250–600 nm C=O, C=N, N=N, N=O Many transition metal ions, such as Cu2+ and Co2+, form colorful solutions because the metal ion absorbs visible light. The transitions that give rise to this absorption involve valence electrons in the metal ion’s d-orbitals, which are shown in Figure $2$. For a free metal ion, the five d-orbitals are of equal energy. In the presence of a complexing ligand or solvent molecule, however, the d-orbitals split into two or more groups that differ in energy. For example, in an octahedral complex of $\text{Cu(H}_2\text{O)}_6^{2+}$ the six water molecules, which are aligned with the metal rods in Figure $2$, perturb the d-orbitals into the two groups shown in Figure $3$. The magnitude of the splitting of the $d$-orbitals is called the octahedral field strength, $\Delta_\text{oct}$. Although the resulting $d \rightarrow d$ transitions for transition metal ions are relatively weak, solutions of the metal-ligand complexes show distinct colors that depend on the metal ion and the ligand, which affect the magnitude of $\Delta_\text{oct}$. Figure $4$ shows the variation in color for a series of seven octahedral complexes of Co3+. 
The spectra for three of these complexes are shown in Figure $5$, which we can use to estimate the relative size of $\Delta_\text{oct}$. Each of the spectra shows two absorption bands, one near 400 nm and one at a somewhat longer wavelength: a shoulder at about 470 nm for phenanthroline, a peak at about 550 nm for glycine, and a peak at about 620 nm for oxalate. Because $\Delta_\text{oct}$ is inversely proportional to wavelength, the relative magnitude of $\Delta_\text{oct}$ decreases from $\text{Co(phen)}_3^{3+}$ to $\text{Co(glycine)}_3^{3+}$ to $\text{Co(oxalate)}_3^{3-}$. In Figure $4$, the octahedral field strengths of the ligands decrease from $\text{Co(NO}_2)_6^{3-}$ to $\text{Co(CO}_3)_3^{3-}$. A more important source of UV/Vis absorption for inorganic metal–ligand complexes is charge transfer, in which absorption of a photon produces an excited state in which there is transfer of an electron from the metal, M, to the ligand, L. $M-L+h \nu \rightarrow\left(M^{+}-L^{-}\right)^{*} \nonumber$ Charge-transfer absorption is important because it produces very large absorbances. One important example of a charge-transfer complex is that of o-phenanthroline with Fe2+, the UV/Vis spectrum for which is shown in Figure $7$. Charge-transfer absorption in which an electron moves from the ligand to the metal also is possible.
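Because the energy of the absorbed photon corresponds to $\Delta_\text{oct}$ for these $d \rightarrow d$ bands, converting the band maxima quoted above into energies makes the ordering explicit. The sketch below is a minimal illustration; the wavelengths are the approximate values read from Figure $5$ in the discussion above.

```python
# Convert approximate d->d band maxima into molar energies to compare octahedral field strengths.
h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol

band_maxima_nm = {"Co(phen)3 3+": 470, "Co(glycine)3 3+": 550, "Co(oxalate)3 3-": 620}
for complex_ion, wavelength in band_maxima_nm.items():
    energy_kJ_mol = h * c * N_A / (wavelength * 1e-9) / 1000
    print(f"{complex_ion}: lambda_max = {wavelength} nm, energy = {energy_kJ_mol:.0f} kJ/mol")
```

The energies fall from roughly 250 kJ/mol to 190 kJ/mol across the series, consistent with the decrease in $\Delta_\text{oct}$ from the phenanthroline complex to the oxalate complex.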
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.02%3A_Absorbing_Species.txt
Qualitative Applications As discussed in Chapter 14.2, ultraviolet, visible, and infrared absorption bands result from the absorption of electromagnetic radiation by specific valence electrons or bonds. The energy at which the absorption occurs, and the intensity of that absorption, is determined by the chemical environment of the absorbing moiety. For example, benzene has several ultraviolet absorption bands due to $\pi \rightarrow \pi^*$ transitions. The position and intensity of two of these bands, 203.5 nm ($\epsilon$ = 7400 M–1 cm–1) and 254 nm ($\epsilon$ = 204 M–1 cm–1), are sensitive to substitution. For benzoic acid, in which a carboxylic acid group replaces one of the aromatic hydrogens, the two bands shift to 230 nm ($\epsilon$ = 11600 M–1 cm–1) and 273 nm ($\epsilon$ = 970 M–1 cm–1). A variety of rules have been developed to aid in correlating UV/Vis absorption bands to chemical structure. With the availability of computerized data acquisition and storage it is possible to build digital libraries of standard reference spectra. The identity of an unknown compound often can be determined by comparing its spectrum against a library of reference spectra, a process known as spectral searching. Characterization Applications Molecular absorption, particularly in the UV/Vis range, has been used for a variety of different characterization studies, including determining the stoichiometry of metal–ligand complexes and determining equilibrium constants. Both of these examples are examined in this section. Stoichiometry of a Metal-Ligand Complex We can determine the stoichiometry of the metal–ligand complexation reaction $\mathrm{M}+y \mathrm{L} \rightleftharpoons \mathrm{ML}_{y} \nonumber$ using one of three methods: the method of continuous variations, the mole-ratio method, and the slope-ratio method. Of these approaches, the method of continuous variations, also called Job’s method, is the most popular. In this method a series of solutions is prepared such that the total moles of metal and of ligand, ntotal, in each solution is the same. If (nM)i and (nL)i are, respectively, the moles of metal and ligand in solution i, then $n_{\text{total}} = \left(n_{\mathrm{M}}\right)_{i} + \left(n_{\mathrm{L}}\right)_{i} \nonumber$ The relative amount of ligand and metal in each solution is expressed as the mole fraction of ligand, (XL)i, and the mole fraction of metal, (XM)i, $\left(X_{\mathrm{L}}\right)_{i}=\frac{\left(n_{\mathrm{L}}\right)_{i}}{n_{\mathrm{total}}} \nonumber$ $\left(X_{M}\right)_{i}=1-\frac{\left(n_\text{L}\right)_{i}}{n_{\text { total }}}=\frac{\left(n_\text{M}\right)_{i}}{n_{\text { total }}} \nonumber$ The concentration of the metal–ligand complex in any solution is determined by the limiting reagent, with the greatest concentration occurring when the metal and the ligand are mixed stoichiometrically. If we monitor the complexation reaction at a wavelength where only the metal–ligand complex absorbs, a graph of absorbance versus the mole fraction of ligand has two linear branches—one when the ligand is the limiting reagent and a second when the metal is the limiting reagent. The intersection of the two branches represents a stoichiometric mixing of the metal and the ligand. We use the mole fraction of ligand at the intersection to determine the value of y for the metal–ligand complex MLy. 
$y=\frac{n_{\mathrm{L}}}{n_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{X_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{1-X_{\mathrm{L}}} \nonumber$ You also can plot the data as absorbance versus the mole fraction of metal. In this case, y is equal to (1 – XM)/XM. Example 14.3.1 To determine the formula for the complex between Fe2+ and o-phenanthroline, a series of solutions is prepared in which the total concentration of metal and ligand is held constant at $3.15 \times 10^{-4}$ M. The absorbance of each solution is measured at a wavelength of 510 nm. Using the following data, determine the formula for the complex. XL absorbance XL absorbance 0.000 0.000 0.600 0.693 0.100 0.116 0.700 0.809 0.200 0.231 0.800 0.693 0.300 0.347 0.900 0.347 0.400 0.462 1.000 0.000 0.500 0.578 Solution A plot of absorbance versus the mole fraction of ligand is shown in Figure $1$. To find the maximum absorbance, we extrapolate the two linear portions of the plot. The two lines intersect at a mole fraction of ligand of 0.75. Solving for y gives $y=\frac{X_{L}}{1-X_{L}}=\frac{0.75}{1-0.75}=3 \nonumber$ The formula for the metal–ligand complex is $\text{Fe(phen)}_3^{2+}$. Exercise 14.3.1 Use the continuous variations data in the following table to determine the formula for the complex between Fe2+ and SCN. The data for this problem is adapted from Meloun, M.; Havel, J.; Högfeldt, E. Computation of Solution Equilibria, Ellis Horwood: Chichester, England, 1988, p. 236. XL absorbance XL absorbance XL absorbance XL absorbance 0.0200 0.068 0.2951 0.670 0.5811 0.790 0.8923 0.324 0.0870 0.262 0.3887 0.767 0.6860 0.701 0.9787 0.071 0.1792 0.471 0.4964 0.807 0.7885 0.540 Answer The figure below shows a continuous variations plot for the data in this exercise. Although the individual data points show substantial curvature—enough curvature that there is little point in trying to draw linear branches for excess metal and excess ligand—the maximum absorbance clearly occurs at XL ≈ 0.5. The complex’s stoichiometry, therefore, is Fe(SCN)2+. Several precautions are necessary when using the method of continuous variations. First, the metal and the ligand must form only one metal–ligand complex. To determine if this condition is true, plots of absorbance versus XL are constructed at several different wavelengths and for several different values of ntotal. If the maximum absorbance does not occur at the same value of XL for each set of conditions, then more than one metal–ligand complex is present. A second precaution is that the metal–ligand complex’s absorbance must obey Beer’s law. Third, if the metal–ligand complex’s formation constant is relatively small, a plot of absorbance versus XL may show significant curvature. In this case it often is difficult to determine the stoichiometry by extrapolation. Finally, because the stability of a metal–ligand complex may be influenced by solution conditions, it is necessary to control carefully the composition of the solutions. When the ligand is a weak base, for example, each solution must be buffered to the same pH. In the mole-ratio method the moles of one reactant, usually the metal, are held constant, while the moles of the other reactant are varied. The absorbance is monitored at a wavelength where the metal–ligand complex absorbs. A plot of absorbance as a function of the ligand-to-metal mole ratio, nL/nM, has two linear branches that intersect at a mole–ratio corresponding to the complex’s formula. 
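Returning to the method of continuous variations, the extrapolation in Example 14.3.1 also can be carried out numerically by fitting each linear branch and solving for the intersection. The sketch below is a minimal illustration that uses the data from Example 14.3.1; the assignment of points to the two branches (ligand-limited up to $X_L$ = 0.7, metal-limited from 0.8 on) is an assumption made by inspecting the data.

```python
import numpy as np

# continuous variations data from Example 14.3.1
X_L = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
A   = np.array([0.000, 0.116, 0.231, 0.347, 0.462, 0.578, 0.693, 0.809, 0.693, 0.347, 0.000])

# fit the ligand-limited branch (first 8 points) and the metal-limited branch (last 3 points)
m1, b1 = np.polyfit(X_L[:8], A[:8], 1)
m2, b2 = np.polyfit(X_L[8:], A[8:], 1)

X_intersect = (b2 - b1) / (m1 - m2)    # mole fraction of ligand where the branches cross
y = X_intersect / (1 - X_intersect)    # stoichiometric ratio n_L / n_M
print(f"intersection at X_L = {X_intersect:.2f}, giving y = {y:.1f}")
```

The intersection falls at $X_L$ = 0.75, giving y = 3 and the formula $\text{Fe(phen)}_3^{2+}$, as found graphically in Example 14.3.1.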
Figure $2a$ shows a mole-ratio plot for the formation of a 1:1 complex in which the absorbance is monitored at a wavelength where only the complex absorbs. Figure $2b$ shows a mole-ratio plot for a 1:2 complex in which all three species—the metal, the ligand, and the complex—absorb at the selected wavelength. Unlike the method of continuous variations, the mole-ratio method can be used for complexation reactions that occur in a stepwise fashion if there is a difference in the molar absorptivities of the metal–ligand complexes, and if the formation constants are sufficiently different. A typical mole-ratio plot for the step-wise formation of ML and ML2 is shown in Figure $2c$. For both the method of continuous variations and the mole-ratio method, we determine the complex’s stoichiometry by extrapolating absorbance data from conditions in which there is a linear relationship between absorbance and the relative amounts of metal and ligand. If a metal–ligand complex is very weak, a plot of absorbance versus XL or nL/nM becomes so curved that it is impossible to determine the stoichiometry by extrapolation. In this case the slope-ratio method is used. In the slope-ratio method two sets of solutions are prepared. The first set of solutions contains a constant amount of metal and a variable amount of ligand, chosen such that the total concentration of metal, CM, is much larger than the total concentration of ligand, CL. Under these conditions we may assume that essentially all the ligand reacts to form the metal–ligand complex. The concentration of the complex, which has the general form MxLy, is $\left[\mathrm{M}_{x} \mathrm{L_y}\right]=\frac{C_{\mathrm{L}}}{y} \nonumber$ If we monitor the absorbance at a wavelength where only MxLy absorbs, then $A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{L}}}{y} \nonumber$ and a plot of absorbance versus CL is linear with a slope, sL, of $s_{\mathrm{L}}=\frac{\varepsilon b}{y} \nonumber$ A second set of solutions is prepared with a fixed concentration of ligand that is much greater than a variable concentration of metal; thus $\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{C_{\mathrm{M}}}{x} \nonumber$ $A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{M}}}{x} \nonumber$ $s_{M}=\frac{\varepsilon b}{x} \nonumber$ A ratio of the slopes provides the relative values of x and y. $\frac{s_{\text{M}}}{s_{\text{L}}}=\frac{\varepsilon b / x}{\varepsilon b / y}=\frac{y}{x} \nonumber$ An important assumption in the slope-ratio method is that the complexation reaction continues to completion in the presence of a sufficiently large excess of metal or ligand. The slope-ratio method also is limited to systems in which only a single complex forms and for which Beer’s law is obeyed. Determination of Equilibrium Constants Another important application of molecular absorption spectroscopy is the determination of equilibrium constants. Let’s consider, as a simple example, an acid–base reaction of the general form $\operatorname{HIn}(a q)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\operatorname{In}^{-}(a q) \nonumber$ where HIn and In are the conjugate weak acid and weak base forms of an acid–base indicator. 
The equilibrium constant for this reaction is $K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{In}^{-}\right]}{[\mathrm{HIn}]} \nonumber$ To determine the equilibrium constant’s value, we prepare a solution in which the reaction is in a state of equilibrium and determine the equilibrium concentration for H3O+, HIn, and In–. The concentration of H3O+ is easy to determine by measuring the solution’s pH. To determine the concentration of HIn and In– we can measure the solution’s absorbance. If both HIn and In– absorb at the selected wavelength, then, from Beer's law, we know that $A=\varepsilon_{\mathrm{HIn}} b[\mathrm{HIn}]+\varepsilon_{\mathrm{In}} b[\mathrm{In}^-] \label{10.5}$ where $\varepsilon_\text{HIn}$ and $\varepsilon_{\text{In}}$ are the molar absorptivities for HIn and In–. The indicator’s total concentration, C, is given by a mass balance equation $C=[\mathrm{HIn}]+ [\text{In}^-] \label{10.6}$ Solving Equation \ref{10.6} for [HIn] and substituting into Equation \ref{10.5} gives $A=\varepsilon_{\mathrm{HIn}} b\left(C-\left[\mathrm{In}^{-}\right]\right)+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber$ which we simplify to $A=\varepsilon_{\mathrm{HIn}} bC- \varepsilon_{\mathrm{HIn}}b\left[\mathrm{In}^{-}\right]+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber$ $A=A_{\mathrm{HIn}}+b\left[\operatorname{In}^{-}\right]\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right) \label{10.7}$ where AHIn, which is equal to $\varepsilon_\text{HIn}bC$, is the absorbance when the pH is acidic enough that essentially all the indicator is present as HIn. Solving Equation \ref{10.7} for the concentration of In– gives $\left[\operatorname{In}^{-}\right]=\frac{A-A_{\mathrm{HIn}}}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.8}$ Proceeding in the same fashion, we derive a similar equation for the concentration of HIn $[\mathrm{HIn}]=\frac{A_{\mathrm{In}}-A}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.9}$ where AIn, which is equal to $\varepsilon_{\text{In}}bC$, is the absorbance when the pH is basic enough that only In– contributes to the absorbance. Substituting Equation \ref{10.8} and Equation \ref{10.9} into the equilibrium constant expression for HIn gives $K_a = \frac {[\text{H}_3\text{O}^+][\text{In}^-]} {[\text{HIn}]} = [\text{H}_3\text{O}^+] \times \frac {A - A_\text{HIn}} {A_{\text{In}} - A} \label{10.10}$

We can use Equation \ref{10.10} to determine Ka in one of two ways. The simplest approach is to prepare three solutions, each of which contains the same amount, C, of indicator. The pH of one solution is made sufficiently acidic such that [HIn] >> [In–]. The absorbance of this solution gives AHIn. The value of AIn is determined by adjusting the pH of the second solution such that [In–] >> [HIn]. Finally, the pH of the third solution is adjusted to an intermediate value, and the pH and absorbance, A, recorded. The value of Ka is calculated using Equation \ref{10.10}.

Example 14.3.2

The acidity constant for an acid–base indicator is determined by preparing three solutions, each of which has a total concentration of indicator equal to $5.00 \times 10^{-5}$ M. The first solution is made strongly acidic with HCl and has an absorbance of 0.250. The second solution is made strongly basic and has an absorbance of 1.40. The pH of the third solution is 2.91 and has an absorbance of 0.662. What is the value of Ka for the indicator?
Solution

The value of Ka is determined by making appropriate substitutions into Equation \ref{10.10}, where [H3O+] is $1.23 \times 10^{-3}$ M; thus $K_{\mathrm{a}}=\left(1.23 \times 10^{-3}\right) \times \frac{0.662-0.250}{1.40-0.662}=6.87 \times 10^{-4} \nonumber$

Exercise 14.3.2

To determine the Ka of a merocyanine dye, the absorbance of a solution of $3.5 \times 10^{-4}$ M dye was measured at a pH of 2.00, a pH of 6.00, and a pH of 12.00, yielding absorbances of 0.000, 0.225, and 0.680, respectively. What is the value of Ka for this dye? The data for this problem is adapted from Lu, H.; Rutan, S. C. Anal. Chem., 1996, 68, 1381–1386.

Answer

The value of Ka is $K_{\mathrm{a}}=\left(1.00 \times 10^{-6}\right) \times \frac{0.225-0.000}{0.680-0.225}=4.95 \times 10^{-7} \nonumber$

A second approach for determining Ka is to prepare a series of solutions, each of which contains the same amount of indicator. Two solutions are used to determine values for AHIn and AIn. Taking the log of both sides of Equation \ref{10.10} and rearranging leaves us with the following equation. $\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=\mathrm{pH}-\mathrm{p} K_{\mathrm{a}} \label{10.11}$ A plot of log[(A – AHIn)/(AIn – A)] versus pH is a straight-line with a slope of +1 and a y-intercept of –pKa.

Exercise 14.3.3

To determine the Ka for the indicator bromothymol blue, the absorbance of each of a series of solutions that contain the same concentration of bromothymol blue is measured at pH levels of 3.35, 3.65, 3.94, 4.30, and 4.64, yielding absorbance values of 0.170, 0.287, 0.411, 0.562, and 0.670, respectively. Acidifying the first solution to a pH of 2 changes its absorbance to 0.006, and adjusting the pH of the last solution to 12 changes its absorbance to 0.818. What is the value of Ka for bromothymol blue? The data for this problem is from Patterson, G. S. J. Chem. Educ., 1999, 76, 395–398.

Answer

To determine Ka we use Equation \ref{10.11}, plotting log[(A – AHIn)/(AIn – A)] versus pH, as shown below. Fitting a straight-line to the data gives a regression model of $\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=-3.80+0.962 \mathrm{pH} \nonumber$ The y-intercept is –pKa; thus, the pKa is 3.80 and the Ka is $1.58 \times 10^{-4}$.

In developing these approaches for determining Ka we considered a relatively simple system in which the absorbance of HIn and In– are easy to measure and for which it is easy to determine the concentration of H3O+. In addition to acid–base reactions, we can adapt these approaches to any reaction of the general form $X(a q)+Y(a q)\rightleftharpoons Z(a q) \nonumber$ including metal–ligand complexation reactions and redox reactions, provided we can determine spectrophotometrically the concentration of the product, Z, and one of the reactants, either X or Y, and that we can determine the concentration of the other reactant by some other method. With appropriate modifications, a more complicated system in which we cannot determine the concentration of one or more of the reactants or products also is possible [Ramette, R. W. Chemical Equilibrium and Analysis, Addison-Wesley: Reading, MA, 1981, Chapter 13].
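The second approach lends itself to a simple calculation. The sketch below reproduces the regression in the answer to Exercise 14.3.3 using Python; the only assumption beyond the data given in the exercise is the use of numpy's polyfit for the straight-line fit.

```python
import numpy as np

# Bromothymol blue data from Exercise 14.3.3
pH = np.array([3.35, 3.65, 3.94, 4.30, 4.64])
A  = np.array([0.170, 0.287, 0.411, 0.562, 0.670])
A_HIn, A_In = 0.006, 0.818   # absorbances of the fully acidic and fully basic forms

# Equation 10.11: log[(A - A_HIn)/(A_In - A)] = pH - pKa
y = np.log10((A - A_HIn) / (A_In - A))
slope, intercept = np.polyfit(pH, y, 1)

pKa = -intercept
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
print(f"pKa = {pKa:.2f}, Ka = {10**-pKa:.2e}")
# slope ≈ 0.962, intercept ≈ -3.80, Ka ≈ 1.6e-4
```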
Scope

The determination of an analyte’s concentration based on its absorption of ultraviolet or visible radiation is one of the most frequently encountered quantitative analytical methods. One reason for its popularity is that many organic and inorganic compounds have strong absorption bands in the UV/Vis region of the electromagnetic spectrum. In addition, if an analyte does not absorb UV/Vis radiation—or if its absorbance is too weak—we often can react it with another species that is strongly absorbing. For example, a dilute solution of Fe2+ does not absorb visible light. Reacting Fe2+ with o-phenanthroline, however, forms an orange–red complex of $\text{Fe(phen)}_3^{2+}$ that has a strong, broad absorbance band near 500 nm. An additional advantage to UV/Vis absorption is that in most cases it is relatively easy to adjust experimental and instrumental conditions so that Beer’s law is obeyed.

Environmental Applications

The analysis of waters and wastewaters often relies on the absorption of ultraviolet and visible radiation. Many of these methods are outlined in Table $1$. Several of these methods are described here in more detail.

Table $1$. Examples of Molecular UV/Vis Analysis of Waters and Wastewaters (analyte: method; $\lambda$ in nm)

trace metals
aluminum: react with Eriochrome cyanine R dye at pH 6; forms red to pink complex (535)
arsenic: reduce to AsH3 using Zn and react with silver diethyldithiocarbamate; forms red complex (535)
cadmium: extract into CHCl3 containing dithizone from a sample made basic with NaOH; forms pink to red complex (518)
chromium: oxidize to Cr(VI) and react with diphenylcarbazide; forms red-violet product (540)
copper: react with neocuproine in neutral to slightly acid solution and extract into CHCl3/CH3OH; forms yellow complex (457)
iron: reduce to Fe2+ and react with o-phenanthroline; forms orange-red complex (510)
lead: extract into CHCl3 containing dithizone from sample made basic with NH3/NH4+ buffer; forms cherry red complex (510)
manganese: oxidize to $\text{MnO}_4^-$ with persulfate; forms purple solution (525)
mercury: extract into CHCl3 containing dithizone from acidic sample; forms orange complex (492)
zinc: react with zincon at pH 9; forms blue complex (620)

inorganic nonmetals
ammonia: react with hypochlorite and phenol using a manganous salt catalyst; forms blue indophenol as product (630)
cyanide: react with chloroamine-T to form CNCl and then with a pyridine-barbituric acid; forms a red-blue dye (578)
fluoride: react with red Zr-SPADNS lake; formation of $\text{ZrF}_6^{2-}$ decreases color of the red lake (570)
chlorine (residual): react with leuco crystal violet; forms blue product (592)
nitrate: react with Cd to form $\text{NO}_2^-$ and then react with sulfanilamide and N-(1-naphthyl)-ethylenediamine; forms red azo dye (543)
phosphate: react with ammonium molybdate and then reduce with SnCl2; forms molybdenum blue (690)

organics
phenol: react with 4-aminoantipyrine and K3Fe(CN)6; forms yellow antipyrine dye (460)
anionic surfactants: react with cationic methylene blue dye and extract into CHCl3; forms blue ion pair (652)

Although the quantitative analysis of metals in waters and wastewaters is accomplished primarily by atomic absorption or atomic emission spectroscopy, many metals also can be analyzed following the formation of a colorful metal–ligand complex. One advantage to these spectroscopic methods is that they easily are adapted to the analysis of samples in the field using a filter photometer. One ligand used for the analysis of several metals is diphenylthiocarbazone, also known as dithizone.
Dithizone is not soluble in water, but when a solution of dithizone in CHCl3 is shaken with an aqueous solution that contains an appropriate metal ion, a colored metal–dithizonate complex forms that is soluble in CHCl3. The selectivity of dithizone is controlled by adjusting the sample’s pH. For example, Cd2+ is extracted from solutions made strongly basic with NaOH, Pb2+ from solutions made basic with an NH3/ NH4+ buffer, and Hg2+ from solutions that are slightly acidic. The structure of dithizone is shown below. When chlorine is added to water the portion available for disinfection is called the chlorine residual. There are two forms of chlorine residual. The free chlorine residual includes Cl2, HOCl, and OCl. The combined chlorine residual, which forms from the reaction of NH3 with HOCl, consists of monochloramine, NH2Cl, dichloramine, NHCl2, and trichloramine, NCl3. Because the free chlorine residual is more efficient as a disinfectant, there is an interest in methods that can distinguish between the total chlorine residual’s different forms. One such method is the leuco crystal violet method. The free residual chlorine is determined by adding leuco crystal violet to the sample, which instantaneously oxidizes to give a blue-colored compound that is monitored at 592 nm. Completing the analysis in less than five minutes prevents a possible interference from the combined chlorine residual. The total chlorine residual (free + combined) is determined by reacting a separate sample with iodide, which reacts with both chlorine residuals to form HOI. When the reaction is complete, leuco crystal violet is added and oxidized by HOI, giving the same blue-colored product. The combined chlorine residual is determined by difference. The concentration of fluoride in drinking water is determined indirectly by its ability to form a complex with zirconium. In the presence of the dye SPADNS, a solution of zirconium forms a red colored compound, called a lake, that absorbs at 570 nm. When fluoride is added, the formation of the stable $\text{ZrF}_6^{2-}$ complex causes a portion of the lake to dissociate, decreasing the absorbance. A plot of absorbance versus the concentration of fluoride, therefore, has a negative slope. SPADNS, the structure of which is shown below, is an abbreviation for the sodium salt of 2-(4-sulfophenylazo)-1,8-dihydroxy-3,6-napthalenedisulfonic acid, which is a mouthful to say. Spectroscopic methods also are used to determine organic constituents in water. For example, the combined concentrations of phenol and ortho- and meta-substituted phenols are determined by using steam distillation to separate the phenols from nonvolatile impurities. The distillate reacts with 4-aminoantipyrine at pH 7.9 ± 0.1 in the presence of K3Fe(CN)6 to a yellow colored antipyrine dye. After extracting the dye into CHCl3, its absorbance is monitored at 460 nm. A calibration curve is prepared using only the unsubstituted phenol, C6H5OH. Because the molar absorptivity of substituted phenols generally are less than that for phenol, the reported concentration represents the minimum concentration of phenolic compounds. 4-aminoantipyrene Molecular absorption also is used for the analysis of environmentally significant airborne pollutants. In many cases the analysis is carried out by collecting the sample in water, converting the analyte to an aqueous form that can be analyzed by methods such as those described in Table $1$. For example, the concentration of NO2 is determined by oxidizing NO2 to $\text{NO}_3^-$. 
The concentration of $\text{NO}_3^-$ is then determined by first reducing it to $\text{NO}_2^-$ with Cd, and then reacting $\text{NO}_2^-$ with sulfanilamide and N-(1-naphthyl)-ethylenediamine to form a red azo dye. Another important application is the analysis for SO2, which is determined by collecting the sample in an aqueous solution of $\text{HgCl}_4^{2-}$ where it reacts to form $\text{Hg(SO}_3)_2^{2-}$. Addition of p-rosaniline and formaldehyde produces a purple complex that is monitored at 569 nm. Infrared absorption is useful for the analysis of organic vapors, including HCN, SO2, nitrobenzene, methyl mercaptan, and vinyl chloride. Frequently, these analyses are accomplished using portable, dedicated infrared photometers. Clinical Applications The analysis of clinical samples often is complicated by the complexity of the sample’s matrix, which may contribute a significant background absorption at the desired wavelength. The determination of serum barbiturates provides one example of how this problem is overcome. The barbiturates are first extracted from a sample of serum with CHCl3 and then extracted from the CHCl3 into 0.45 M NaOH (pH ≈ 13). The absorbance of the aqueous extract is measured at 260 nm, and includes contributions from the barbiturates as well as other components extracted from the serum sample. The pH of the sample is then lowered to approximately 10 by adding NH4Cl and the absorbance remeasured. Because the barbiturates do not absorb at this pH, we can use the absorbance at pH 10, ApH 10, to correct the absor-ance at pH 13, ApH 13 $A_\text{barb} = A_\text{pH 13} - \frac {V_\text{samp} + V_{\text{NH}_4\text{Cl}}} {V_\text{samp}} \times A_\text{pH 10} \nonumber$ where Abarb is the absorbance due to the serum barbiturates and Vsamp and $V_{\text{NH}_4\text{Cl}}$ are the volumes of sample and NH4Cl, respectively. Table $2$ provides a summary of several other methods for analyzing clinical samples. Table $2$. Examples of the Molecular UV/Vis Analysis of Clinical Samples analyte method $\lambda$ (nm) total serum protein react with NaOH and Cu2+; forms blue-violet complex 540 serum cholesterol react with Fe3+ in presence of isopropanol, acetic acid, and H2SO4; forms blue-violet complex 540 uric acid react with phosphotungstic acid; forms tungsten blue 710 serum barbituates extract into CHCl3 to isolate from interferents and then extract into 0.45 M NaOH 260 glucose react with o-toludine at 100oC; forms blue-green complex 630 protein-bound iodine decompose protein to release iodide, which catalyzes redox reaction between Ce3+ and As3+; forms yellow colored Ce4+ 420 Industrial Applications UV/Vis molecular absorption is used for the analysis of a diverse array of industrial samples including pharmaceuticals, food, paint, glass, and metals. In many cases the methods are similar to those described in Table $1$ and in Table $2$. For example, the amount of iron in food is determined by bringing the iron into solution and analyzing using the o-phenanthroline method listed in Table $1$. Many pharmaceutical compounds contain chromophores that make them suitable for analysis by UV/Vis absorption. Products analyzed in this fashion include antibiotics, hormones, vitamins, and analgesics. One example of the use of UV absorption is in determining the purity of aspirin tablets, for which the active ingredient is acetylsalicylic acid. 
Salicylic acid, which is produced by the hydrolysis of acetylsalicylic acid, is an undesirable impurity in aspirin tablets, and should not be present at more than 0.01% w/w. Samples are screened for unacceptable levels of salicylic acid by monitoring the absorbance at a wavelength of 312 nm. Acetylsalicylic acid absorbs at 280 nm, but absorbs poorly at 312 nm. Conditions for preparing the sample are chosen such that an absorbance of greater than 0.02 signifies an unacceptable level of salicylic acid. Forensic Applications UV/Vis molecular absorption routinely is used for the analysis of narcotics and for drug testing. One interesting forensic application is the determination of blood alcohol using the Breathalyzer test. In this test a 52.5-mL breath sample is bubbled through an acidified solution of K2Cr2O7, which oxidizes ethanol to acetic acid. The concentration of ethanol in the breath sample is determined by a decrease in the absorbance at 440 nm where the dichromate ion absorbs. A blood alcohol content of 0.10%, which is above the legal limit, corresponds to 0.025 mg of ethanol in the breath sample. Developing a Quantitative Method for a Single Component To develop a quantitative analytical method, the conditions under which Beer’s law is obeyed must be established. First, the most appropriate wavelength for the analysis is determined from an absorption spectrum. In most cases the best wavelength corresponds to an absorption maximum because it provides greater sensitivity and is less susceptible to instrumental limitations. Second, if the instrument has adjustable slits, then an appropriate slit width is chosen. The absorption spectrum also aids in selecting a slit width by choosing a width that is narrow enough to avoid instrumental limitaions to Beer’s law, but wide enough to increase the throughput of source radiation. Finally, a calibration curve is constructed to determine the range of concentrations for which Beer’s law is valid. Additional considerations that are important in any quantitative method are the effect of potential interferents and establishing an appropriate blank. Quantitative Analysis for a Single Sample To determine the concentration of an analyte we measure its absorbance and apply Beer’s law using any of the standardization methods described in Chapter 5. The most common methods are a normal calibration curve using external standards and the method of standard additions. A single point standardization also is possible, although we must first verify that Beer’s law holds for the concentration of analyte in the samples and the standard. Example 14.4.1 The determination of iron in an industrial waste stream is carried out by the o-phenanthroline described in Representative Method 10.3.1. Using the data in the following table, determine the mg Fe/L in the waste stream. mg Fe/L absorbance 0.00 0.000 1.00 0.183 2.00 0.364 3.00 0.546 4.00 0.727 sample 0.269 Solution Linear regression of absorbance versus the concentration of Fe in the standards gives the calibration curve and calibration equation shown here $A=0.0006+\left(0.1817 \ \mathrm{mg}^{-1} \mathrm{L}\right) \times(\mathrm{mg} \mathrm{Fe} / \mathrm{L}) \nonumber$ Substituting the sample’s absorbance into the calibration equation gives the concentration of Fe in the waste stream as 1.48 mg Fe/L Exercise 14.4.1 The concentration of Cu2+ in a sample is determined by reacting it with the ligand cuprizone and measuring its absorbance at 606 nm in a 1.00-cm cell. 
When a 5.00-mL sample is treated with cuprizone and diluted to 10.00 mL, the resulting solution has an absorbance of 0.118. A second 5.00-mL sample is mixed with 1.00 mL of a 20.00 mg/L standard of Cu2+, treated with cuprizone and diluted to 10.00 mL, giving an absorbance of 0.162. Report the mg Cu2+/L in the sample.

Answer

For this standard addition we write equations that relate absorbance to the concentration of Cu2+ in the sample before the standard addition $0.118=\varepsilon b \left[ C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}\right] \nonumber$ and after the standard addition $0.162=\varepsilon b\left(C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}\right) \nonumber$ in each case accounting for the dilution of the original sample and for the standard. The value of $\varepsilon b$ is the same in both equations. Solving each equation for $\varepsilon b$ and equating $\frac{0.162}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}}=\frac{0.118}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}} \nonumber$ leaves us with an equation in which CCu is the only variable. Solving for CCu gives its value as $\frac{0.162}{0.500 \times C_{\mathrm{Cu}}+2.00 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L}}=\frac{0.118}{0.500 \times C_{\mathrm{Cu}}} \nonumber$ $0.0810 \times C_{\mathrm{Cu}}=0.0590 \times C_{\mathrm{Cu}}+0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$ $0.0220 \times C_{\mathrm{Cu}}=0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$ $C_{\mathrm{Cu}}=10.7 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$

Quantitative Analysis of Mixtures

Suppose we need to determine the concentration of two analytes, X and Y, in a sample. If each analyte has a wavelength where the other analyte does not absorb, then we can proceed using the approach in Example 14.4.1. Unfortunately, UV/Vis absorption bands are so broad that frequently it is not possible to find suitable wavelengths. Because Beer’s law is additive the mixture’s absorbance, Amix, is $\left(A_{mix}\right)_{\lambda_{1}}=\left(\varepsilon_{X}\right)_{\lambda_{1}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{1}} b C_{Y} \label{10.1}$ where $\lambda_1$ is the wavelength at which we measure the absorbance. Because Equation \ref{10.1} includes terms for the concentration of both X and Y, the absorbance at one wavelength does not provide enough information to determine either CX or CY. If we measure the absorbance at a second wavelength $\left(A_{mix}\right)_{\lambda_{2}}=\left(\varepsilon_{X}\right)_{\lambda_{2}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{2}} b C_{Y} \label{10.2}$ then we can determine CX and CY by solving simultaneously Equation \ref{10.1} and Equation \ref{10.2}. Of course, we also must determine the value for $\varepsilon_X$ and $\varepsilon_Y$ at each wavelength. For a mixture of n components, we must measure the absorbance at n different wavelengths.

Example 14.4.2

The concentrations of Fe3+ and Cu2+ in a mixture are determined following their reaction with hexacyanoruthenate (II), $\text{Ru(CN)}_6^{4-}$, which forms a purple-blue complex with Fe3+ ($\lambda_\text{max}$ = 550 nm) and a pale-green complex with Cu2+ ($\lambda_\text{max}$ = 396 nm) [DiTusa, M. R.; Schlit, A. A. J. Chem. Educ. 1985, 62, 541–542].
The molar absorptivities (M–1 cm–1) for the metal complexes at the two wavelengths are summarized in the following table.

analyte   $\varepsilon_{550}$   $\varepsilon_{396}$
Fe3+      9970                  84
Cu2+      34                    856

When a sample that contains Fe3+ and Cu2+ is analyzed in a cell with a pathlength of 1.00 cm, the absorbance at 550 nm is 0.183 and the absorbance at 396 nm is 0.109. What are the molar concentrations of Fe3+ and Cu2+ in the sample?

Solution

Substituting known values into Equation \ref{10.1} and Equation \ref{10.2} gives $\begin{aligned} A_{550} &=0.183=9970 C_{\mathrm{Fe}}+34 C_{\mathrm{Cu}} \\ A_{396} &=0.109=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber$ To determine CFe and CCu we solve the first equation for CCu $C_{\mathrm{Cu}}=\frac{0.183-9970 C_{\mathrm{Fe}}}{34} \nonumber$ and substitute the result into the second equation. $\begin{aligned} 0.109 &=84 C_{\mathrm{Fe}}+856 \times \frac{0.183-9970 C_{\mathrm{Fe}}}{34} \\ &=4.607-\left(2.51 \times 10^{5}\right) C_{\mathrm{Fe}} \end{aligned} \nonumber$ Solving for CFe gives the concentration of Fe3+ as $1.8 \times 10^{-5}$ M. Substituting this concentration back into the equation for the mixture’s absorbance at 396 nm gives the concentration of Cu2+ as $1.3 \times 10^{-4}$ M.

Another approach to solving Example 14.4.2 is to multiply the first equation by 856/34 giving $4.607=251009 C_{\mathrm{Fe}}+856 C_\mathrm{Cu} \nonumber$ Subtracting the second equation from this equation $\begin{aligned} 4.607 &=251009 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \\ -0.109 &=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber$ gives $4.498=250925 C_{\mathrm{Fe}} \nonumber$ and we find that CFe is $1.8 \times 10^{-5}$. Having determined CFe we can substitute back into one of the other equations to solve for CCu, which is $1.3 \times 10^{-4}$.

Exercise 14.4.2

The absorbance spectra for Cr3+ and Co2+ overlap significantly. To determine the concentration of these analytes in a mixture, its absorbance is measured at 400 nm and at 505 nm, yielding values of 0.336 and 0.187, respectively. The individual molar absorptivities (M–1 cm–1) for Cr3+ are 15.2 at 400 nm and 0.533 at 505 nm; the values for Co2+ are 5.60 at 400 nm and 5.07 at 505 nm. What are the molar concentrations of Cr3+ and Co2+ in the mixture?

Answer

Substituting into Equation \ref{10.1} and Equation \ref{10.2} gives $A_{400} = 0.336 = 15.2C_\text{Cr} + 5.60C_\text{Co} \nonumber$ $A_{505} = 0.187 = 0.533C_\text{Cr} + 5.07C_\text{Co} \nonumber$ To determine CCr and CCo we solve the first equation for CCo $C_{\mathrm{Co}}=\frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber$ and substitute the result into the second equation. $0.187=0.533 C_{\mathrm{Cr}}+5.07 \times \frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber$ $0.187=0.3042-13.23 C_{\mathrm{Cr}} \nonumber$ Solving for CCr gives the concentration of Cr3+ as $8.86 \times 10^{-3}$ M. Substituting this concentration back into the equation for the mixture’s absorbance at 400 nm gives the concentration of Co2+ as $3.60 \times 10^{-2}$ M.

To obtain results with good accuracy and precision the two wavelengths should be selected so that $\varepsilon_X > \varepsilon_Y$ at one wavelength and $\varepsilon_X < \varepsilon_Y$ at the other wavelength. It is easy to appreciate why this is true. Because the absorbance at each wavelength is dominated by one analyte, any uncertainty in the concentration of the other analyte has less of an impact. A plot of the absorbance spectra for Cr3+ and Co2+ shows that the choice of wavelengths for Exercise 14.4.2 is reasonable.
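Solving the pair of simultaneous Beer's law equations in Example 14.4.2 is a small linear-algebra problem. Here is a minimal sketch using numpy; the matrix layout, with rows as wavelengths and columns as analytes, is simply a convention chosen for this illustration.

```python
import numpy as np

# Molar absorptivities (M^-1 cm^-1) for the Fe3+ and Cu2+ complexes in Example 14.4.2,
# arranged so each row is one wavelength and each column is one analyte (b = 1.00 cm).
E = np.array([[9970.0,  34.0],    # 550 nm: Fe, Cu
              [  84.0, 856.0]])   # 396 nm: Fe, Cu
A = np.array([0.183, 0.109])      # measured absorbances of the mixture

C_Fe, C_Cu = np.linalg.solve(E, A)   # solves E @ C = A for the two concentrations
print(f"[Fe3+] = {C_Fe:.2e} M, [Cu2+] = {C_Cu:.2e} M")
# [Fe3+] ≈ 1.8e-05 M, [Cu2+] ≈ 1.3e-04 M
```

The same call handles a mixture of n components measured at n wavelengths, which is why the matrix form is convenient even though the two-component case is easy to solve by hand.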
When the choice of wavelengths is not obvious, one method for locating the optimum wavelengths is to plot $\varepsilon_X / \varepsilon_Y$ as a function of wavelength, and determine the wavelengths where $\varepsilon_X / \varepsilon_Y$ reaches maximum and minimum values [Mehra, M. C.; Rioux, J. J. Chem. Educ. 1982, 59, 688–689].

When the analytes’ spectra overlap severely, such that $\varepsilon_X \approx \varepsilon_Y$ at all wavelengths, other computational methods may provide better accuracy and precision. In a multiwavelength linear regression analysis, for example, a mixture’s absorbance is compared to that for a set of standard solutions at several wavelengths [Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180]. If ASX and ASY are the absorbance values for standard solutions of components X and Y at any wavelength, then $A_{SX}=\varepsilon_{X} b C_{SX} \label{10.3}$ $A_{SY}=\varepsilon_{Y} b C_{SY} \label{10.4}$ where CSX and CSY are the known concentrations of X and Y in the standard solutions. Solving Equation \ref{10.3} and Equation \ref{10.4} for $\varepsilon_X$ and for $\varepsilon_Y$, substituting into Equation \ref{10.1}, and rearranging, gives $\frac{A_{\operatorname{mix}}}{A_{S X}}=\frac{C_{X}}{C_{S X}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{S X}} \nonumber$ To determine CX and CY the mixture’s absorbance and the absorbances of the standard solutions are measured at several wavelengths. Graphing Amix/ASX versus ASY/ASX gives a straight line with a slope of CY/CSY and a y-intercept of CX/CSX. This approach is particularly helpful when it is not possible to find wavelengths where $\varepsilon_X > \varepsilon_Y$ and $\varepsilon_X < \varepsilon_Y$.

The approach outlined here for a multiwavelength linear regression uses a single standard solution for each analyte. A more rigorous approach uses multiple standards for each analyte. The math behind the analysis of such data—which we call a multiple linear regression—is beyond the level of this text. For more details about multiple linear regression see Brereton, R. G. Chemometrics: Data Analysis for the Laboratory and Chemical Plant, Wiley: Chichester, England, 2003.

Example 14.4.3

Figure $1$ shows visible absorbance spectra for a standard solution of 0.0250 M Cr3+, a standard solution of 0.0750 M Co2+, and a mixture that contains unknown concentrations of each ion. The data for these spectra are shown here.

$\lambda$ (nm)   ACr    ACo    Amix      $\lambda$ (nm)   ACr    ACo    Amix
375              0.26   0.01   0.53      520              0.19   0.38   0.63
400              0.43   0.03   0.88      530              0.24   0.33   0.70
425              0.39   0.07   0.83      540              0.28   0.26   0.73
440              0.29   0.13   0.67      550              0.32   0.18   0.76
455              0.20   0.21   0.54      570              0.38   0.08   0.81
470              0.14   0.28   0.47      575              0.39   0.06   0.82
480              0.12   0.30   0.44      580              0.38   0.05   0.79
490              0.11   0.34   0.45      600              0.34   0.03   0.70
500              0.13   0.38   0.51      625              0.24   0.02   0.49

Use a multiwavelength regression analysis to determine the composition of the unknown.

Solution

First we need to calculate values for Amix/ASX and for ASY/ASX. Let’s define X as Co2+ and Y as Cr3+. For example, at a wavelength of 375 nm Amix/ASX is 0.53/0.01, or 53, and ASY/ASX is 0.26/0.01, or 26. Completing the calculation for all wavelengths and graphing Amix/ASX versus ASY/ASX gives the calibration curve shown in Figure $2$.
Fitting a straight-line to the data gives a regression model of $\frac{A_{\operatorname{mix}}}{A_{S X}}=0.636+2.01 \times \frac{A_{S Y}}{A_{S X}} \nonumber$ Using the y-intercept, the concentration of Co2+ is $\frac{C_{X}}{C_{S X}}=\frac{\left[\mathrm{Co}^{2+}\right]}{0.0750 \mathrm{M}}=0.636 \nonumber$ or [Co2+] = 0.048 M; using the slope the concentration of Cr3+ is $\frac{C_{Y}}{C_{S Y}}=\frac{\left[\mathrm{Cr}^{3+}\right]}{0.0250 \mathrm{M}}=2.01 \nonumber$ or [Cr3+] = 0.050 M. Exercise 14.4.3 A mixture of $\text{MnO}_4^{-}$ and $\text{Cr}_2\text{O}_7^{2-}$, and standards of 0.10 mM KMnO4 and of 0.10 mM K2Cr2O7 give the results shown in the following table. Determine the composition of the mixture. The data for this problem is from Blanco, M. C.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180. $\lambda$ (nm) AMn ACr Amix 266 0.042 0.410 0.766 288 0.082 0.283 0.571 320 0.168 0.158 0.422 350 0.125 0.318 0.672 360 0.036 0.181 0.366 Answer Letting X represent $\text{MnO}_4^{-}$ and letting Y represent $\text{Cr}_2\text{O}_7^{2-}$, we plot the equation $\frac{A_{\operatorname{mix}}}{A_{SX}}=\frac{C_{X}}{C_{SX}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{SX}} \nonumber$ placing Amix/ASX on the y-axis and ASY/ASX on the x-axis. For example, at a wavelength of 266 nm the value Amix/ASX of is 0.766/0.042, or 18.2, and the value of ASY/ASX is 0.410/0.042, or 9.76. Completing the calculations for all wavelengths and plotting the data gives the result shown here Fitting a straight-line to the data gives a regression model of $\frac{A_{\text { mix }}}{A_{\text { SX }}}=0.8147+1.7839 \times \frac{A_{SY}}{A_{SX}} \nonumber$ Using the y-intercept, the concentration of $\text{MnO}_4^{-}$ is $\frac{C_{X}}{C_{S X}}=0.8147=\frac{\left[\mathrm{MnO}_{4}^{-}\right]}{1.0 \times 10^{-4} \ \mathrm{M} \ \mathrm{MnO}_{4}^{-}} \nonumber$ or $8.15 \times 10^{-5}$ M $\text{MnO}_4^{-}$, and using the slope, the concentration of $\text{Cr}_2\text{O}_7^{2-}$ is $\frac{C_{Y}}{C_{S Y}}=1.7839=\frac{\left[\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\right]}{1.00 \times 10^{-4} \ \mathrm{M} \ \text{Cr}_{2} \mathrm{O}_{7}^{2-}} \nonumber$ or $1.78 \times 10^{-4}$ M $\text{Cr}_2\text{O}_7^{2-}$. Derivative Spectroscopy Sometimes our signal is superimposed on a background signal, which complicates our analysis because the measure absorbance has contributions from both our analyte and from the background. For example, the following figure shows a Gaussian signal with a maximum value of 50 centered at $x = 125$ that is superimposed on an exponential background. The dotted line is the Gaussian signal, which has a maximum value of 50 at $x = 125$, and the solid line is the signal as measured, which has a maximum value of 57 at $x = 125$. If the background signal is consistent across all samples, then we can analyze the data without first removing its contribution. For example, the following figure shows a set of calibration standards and their resulting calibration curve, for which the y-intercept of 7 gives the offset introduced by the background. But background signals often are not consistent across samples, particularly when the source of the background is a property of the samples we collect (natural water samples, for example, may have variations in color due to differences in the concentration of dissolved organic matter) or a property of the instrument we are using (such as a variation in source intensity over time). 
When this is the case, our data may look more like what we see in the following figure, which leads to a calibration curve with a greater uncertainty. Because the background changes gradually with the values for x while the analyte's signal changes quickly, we can use a derivative to distinguish between the two. One approach is to calculate and plot the derivative, $\frac{\Delta y}{\Delta x}$, as a function of $x$, as shown in Figure $6$. The calibration signal in this case is the difference between the maximum signal and the minimum signal, which are shown by the dotted red lines in the top part of the figure. The fit of the calibration curve to the data and the calibration curve's y-intercept of zero shows that we have successfully compensated for the background signals.

14.05: Photometric Titrations

If at least one species in a titration absorbs electromagnetic radiation, then we can identify the end point by monitoring the titrand’s absorbance at a carefully selected wavelength. For example, we can identify the end point for a titration of Cu2+ with EDTA in the presence of NH3 by monitoring the titrand’s absorbance at a wavelength of 745 nm, where the $\text{Cu(NH}_3)_4^{2+}$ complex absorbs strongly. At the beginning of the titration the absorbance is at a maximum. As we add EDTA, however, the reaction $\text{Cu(NH}_3)_4^{2+}(aq) + \text{Y}^{4-} \rightleftharpoons \text{CuY}^{2-}(aq) + 4\text{NH}_3(aq) \nonumber$ decreases the concentration of $\text{Cu(NH}_3)_4^{2+}$ and decreases the absorbance until we reach the equivalence point. After the equivalence point the absorbance essentially remains unchanged. The resulting spectrophotometric titration curve is shown in Figure $1a$. Note that the titration curve’s y-axis is not the measured absorbance, Ameas, but a corrected absorbance, Acorr $A_\text{corr} = A_\text{meas} \times \frac {V_\text{EDTA} + V_\text{Cu}} {V_\text{Cu}} \nonumber$ where VEDTA and VCu are, respectively, the volumes of EDTA and Cu. Correcting the absorbance for the titrand’s dilution ensures that the spectrophotometric titration curve consists of linear segments that we can extrapolate to find the end point. Other common spectrophotometric titration curves are shown in Figures $1b-f$.
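The dilution correction and end-point extrapolation described above are easy to automate. The sketch below uses hypothetical titration data (the volumes and absorbances are invented for illustration, not taken from a real titration) and assumes, by inspection of the curve, which points belong to each linear segment.

```python
import numpy as np

# Hypothetical data: absorbance of Cu(NH3)4^2+ at 745 nm while titrating
# 50.00 mL of titrand with EDTA (titrant volumes in mL)
V_Cu = 50.00
V_EDTA = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 12.0, 14.0, 16.0])
A_meas = np.array([0.800, 0.619, 0.452, 0.296, 0.152, 0.016, 0.016, 0.015])

# correct the measured absorbance for dilution by the titrant
A_corr = A_meas * (V_EDTA + V_Cu) / V_Cu

# fit the two linear segments (before and after the end point) and intersect them;
# the first five points are taken as the pre-equivalence branch, the last three as
# the post-equivalence branch
m1, b1 = np.polyfit(V_EDTA[:5], A_corr[:5], 1)
m2, b2 = np.polyfit(V_EDTA[5:], A_corr[5:], 1)

V_endpoint = (b2 - b1) / (m1 - m2)
print(f"end point at {V_endpoint:.2f} mL of EDTA")
# expected: end point near 10.0 mL for this made-up data set
```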
• 15.1: Theory of Fluorescence and Phosphorescence The use of molecular fluorescence for qualitative analysis and for semi-quantitative analysis dates to the early to mid 1800s, with more accurate quantitative methods appearing in the 1920s. Although the discovery of phosphorescence preceded that of fluorescence by almost 200 years, qualitative and quantitative applications of molecular phosphorescence did not receive much attention until after the development of fluorescence instrumentation.

• 15.2: Instruments for Measuring Fluorescence and Phosphorescence The basic instrumentation for monitoring fluorescence and phosphorescence—a source of radiation, a means of selecting a narrow band of radiation, and a detector—are the same as those for absorption spectroscopy. The unique demands of fluorescence and phosphorescence, however, require some modifications to the instrument designs discussed in Chapters 7, 13, and 14, such as the filter photometer, the single-beam spectrophotometer, the double-beam spectrophotometer, and the diode array spectrometer.

• 15.3: Applications and Photoluminescence Methods Molecular fluorescence and, to a lesser extent, phosphorescence are used for the direct or indirect quantitative analysis of analytes in a variety of matrices. A direct quantitative analysis is possible when the analyte’s fluorescent or phosphorescent quantum yield is favorable. If the analyte is not fluorescent or phosphorescent, or if the quantum yield is unfavorable, then an indirect analysis may be feasible. The application of fluorescence and phosphorescence to inorganic and organic analytes are considered in this section.

• 15.4: Chemiluminescence The focus of this chapter has been on molecular luminescence methods in which emission from the analyte's excited state is achieved following its absorption of a photon. An exothermic reaction may also serve as a source of energy. In chemiluminescence the analyte is raised to a higher-energy state by means of a chemical reaction, emitting characteristic radiation when it returns to a lower-energy state.

• 15.5: Evaluation of Molecular Luminescence A summary of the strengths and limitations of molecular luminescence.

15: Molecular Luminescence

The use of molecular fluorescence for qualitative analysis and for semi-quantitative analysis dates to the early to mid 1800s, with more accurate quantitative methods appearing in the 1920s. Instrumentation for fluorescence spectroscopy using a filter or a monochromator for wavelength selection appeared in, respectively, the 1930s and 1950s. Although the discovery of phosphorescence preceded that of fluorescence by almost 200 years, qualitative and quantitative applications of molecular phosphorescence did not receive much attention until after the development of fluorescence instrumentation.

Source of Fluorescence and Phosphorescence

Photoluminescence is divided into two categories: fluorescence and phosphorescence. A pair of electrons that occupy the same electronic ground state have opposite spins and are in a singlet spin state (Figure $1a$). When an analyte absorbs an ultraviolet or a visible photon, one of its valence electrons moves from the ground state to an excited state with a conservation of the electron’s spin (Figure $1b$). Emission of a photon from a singlet excited state to the singlet ground state—or between any two energy levels with the same spin—is called fluorescence. The probability of fluorescence is very high and the average lifetime of an electron in the excited state is only 10–5–10–8 s.
Fluorescence, therefore, rapidly decays once the source of excitation is removed. In some cases an electron in a singlet excited state is transformed to a triplet excited state (Figure $1c$) in which its spin no is longer paired with the ground state. Emission between a triplet excited state and a singlet ground state—or between any two energy levels that differ in their respective spin states–is called phosphorescence. Because the average lifetime for phosphorescence can be quite long—it ranges from 10–4–104 seconds—phosphorescence may continue for some time after we remove the excitation source. To appreciate the origin of fluorescence and phosphorescence we must consider what happens to a molecule following the absorption of a photon. Let’s assume the molecule initially occupies the lowest vibrational energy level of its electronic ground state, which is the singlet state labeled S0 in Figure $2$. Absorption of a photon excites the molecule to one of several vibrational energy levels in the first excited electronic state, S1, or the second electronic excited state, S2, both of which are singlet states. Relaxation to the ground state occurs by a number of mechanisms, some of which result in the emission of a photon and others that occur without the emission of a photon. These relaxation mechanisms are shown in Figure $2$. The most likely relaxation pathway from any excited state is the one with the shortest lifetime. Deactivation Processes A molecule in an excited state can return to its ground state in a variety of ways that we collectively call deactivation processes. Radiationless Deactivation When a molecule relaxes without emitting a photon we call the process radiationless deactivation. One example of radiationless deactivation is vibrational relaxation, in which a molecule in an excited vibrational energy level loses energy by moving to a lower vibrational energy level in the same electronic state. Vibrational relaxation is very rapid, with an average lifetime of <10–12 s. Because vibrational relaxation is so efficient, a molecule in one of its excited state’s higher vibrational energy levels quickly returns to the excited state’s lowest vibrational energy level. Another form of radiationless deactivation is an internal conversion in which a molecule in the ground vibrational level of an excited state passes directly into a higher vibrational energy level of a lower energy electronic state of the same spin state. By a combination of internal conversions and vibrational relaxations, a molecule in an excited electronic state may return to the ground electronic state without emitting a photon. A related form of radiationless deactivation is an external conversion in which excess energy is transferred to the solvent or to another component of the sample’s matrix. Let’s use Figure $2$ to illustrate how a molecule can relax back to its ground state without emitting a photon. Suppose our molecule is in the highest vibrational energy level of the second electronic excited state. After a series of vibrational relaxations brings the molecule to the lowest vibrational energy level of S2, it undergoes an internal conversion into a higher vibrational energy level of the first excited electronic state. Vibrational relaxations bring the molecule to the lowest vibrational energy level of S1. 
Following an internal conversion into a higher vibrational energy level of the ground state, the molecule continues to undergo vibrational relaxation until it reaches the lowest vibrational energy level of S0. A final form of radiationless deactivation is an intersystem crossing in which a molecule in the ground vibrational energy level of an excited electronic state passes into one of the higher vibrational energy levels of a lower energy electronic state with a different spin state. For example, an intersystem crossing is shown in Figure $2$ between the singlet excited state S1 and the triplet excited state T1. Variables that Affect Fluorescence Fluorescence occurs when a molecule in an excited state’s lowest vibrational energy level returns to a lower energy electronic state by emitting a photon. Because molecules return to their ground state by the fastest mechanism, fluorescence is observed only if it is a more efficient means of relaxation than a combination of internal conversions and vibrational relaxations. A quantitative expression of fluorescence efficiency is the fluorescent quantum yield, $\Phi_f$, which is the fraction of excited state molecules that return to the ground state by fluorescence. The fluorescent quantum yields range from 1 when every molecule in an excited state undergoes fluorescence, to 0 when fluorescence does not occur. The intensity of fluorescence, If, is proportional to the amount of radiation absorbed by the sample, P0PT, and the fluorescence quantum yield $I_{f}=k \Phi_{f}\left(P_{0}-P_{\mathrm{T}}\right) \label{10.1}$ where k is a constant that accounts for the efficiency of collecting and detecting the fluorescent emission. From Beer’s law we know that $\frac{P_{\mathrm{T}}}{P_{0}}=10^{-\varepsilon b C} \label{10.2}$ where C is the concentration of the fluorescing species. Solving Equation \ref{10.2} for PT and substituting into Equation \ref{10.1} gives, after simplifying $I_{f}=k \Phi_{f} P_{0}\left(1-10^{-\varepsilon b C}\right) \label{10.3}$ When $\varepsilon bC$ < 0.01, which often is the case when the analyte's concentration is small, Equation \ref{10.3} simplifies to $I_{f}=2.303 k \Phi_{f} \varepsilon b C P_{0}=k^{\prime} P_{0} \label{10.4}$ where k′ is a collection of constants. The intensity of fluorescence, therefore, increases with an increase in the quantum efficiency, the source’s incident power, and the molar absorptivity and the concentration of the fluorescing species. Fluorescence generally is observed when the molecule’s lowest energy absorption is a $\pi \rightarrow \pi^*$ transition, although some $n \rightarrow \pi^*$ transitions show weak fluorescence. Many unsubstituted, nonheterocyclic aromatic compounds have a favorable fluorescence quantum yield, although substitutions on the aromatic ring can effect $\Phi_f$ significantly. For example, the presence of an electron-withdrawing group, such as –NO2, decreases $\Phi_f$, while adding an electron-donating group, such as –OH, increases $\Phi_f$. Fluorrescence also increases for aromatic ring systems and for aromatic molecules with rigid planar structures. Figure $3$ shows the fluorescence of quinine under a UV lamp. A molecule’s fluorescent quantum yield also is influenced by external variables, such as temperature and solvent. Increasing the temperature generally decreases $\Phi_f$ because more frequent collisions between the molecule and the solvent increases external conversion. A decrease in the solvent’s viscosity decreases $\Phi_f$ for similar reasons. 
For an analyte with acidic or basic functional groups, a change in pH may change the analyte’s structure and its fluorescent properties. As shown in Figure $3$, fluorescence may return the molecule to any of several vibrational energy levels in the ground electronic state. Fluorescence, therefore, occurs over a range of wavelengths. Because the change in energy for fluorescent emission generally is less than that for absorption, a molecule’s fluorescence spectrum is shifted to higher wavelengths than its absorption spectrum. Variables that Affect Phosphorescence A molecule in a triplet electronic excited state’s lowest vibrational energy level normally relaxes to the ground state by an intersystem crossing to a singlet state or by an external conversion. Phosphorescence occurs when the molecule relaxes by emitting a photon. As shown in Figure $2$, phosphorescence occurs over a range of wavelengths, all of which are at lower energies than the molecule’s absorption band. The intensity of phosphorescence, $I_p$, is given by an equation similar to Equation \ref{10.4} for fluorescence \begin{align} I_{P} &= 2.303 k \Phi_{P} \varepsilon b C P_{0} \nonumber \[4pt] &= k^{\prime} P_{0} \label{10.5} \end{align} where $\Phi_p$ is the phosphorescence quantum yield. Phosphorescence is most favorable for molecules with $n \rightarrow \pi^*$ transitions, which have a higher probability for an intersystem crossing than $\pi \rightarrow \pi^*$ transitions. For example, phosphorescence is observed with aromatic molecules that contain carbonyl groups or heteroatoms. Aromatic compounds that contain halide atoms also have a higher efficiency for phosphorescence. In general, an increase in phosphorescence corresponds to a decrease in fluorescence. Because the average lifetime for phosphorescence can be quite long, ranging from 10–4–104 s, the phosphorescent quantum yield usually is quite small. An improvement in $\Phi_p$ is realized by decreasing the efficiency of external conversion. This is accomplished in several ways, including lowering the temperature, using a more viscous solvent, depositing the sample on a solid substrate, or trapping the molecule in solution. Figure $4$ shows an example of phosphorescence. Emission and Excitation Spectra Photoluminescence spectra are recorded by measuring the intensity of emitted radiation as a function of either the excitation wavelength or the emission wavelength. An excitation spectrum is obtained by monitoring emission at a fixed wavelength while varying the excitation wavelength. When corrected for variations in the source’s intensity and the detector’s response, a sample’s excitation spectrum is nearly identical to its absorbance spectrum. The excitation spectrum provides a convenient means for selecting the best excitation wavelength for a quantitative or qualitative analysis. In an emission spectrum a fixed wavelength is used to excite the sample and the intensity of emitted radiation is monitored as function of wavelength. Although a molecule has a single excitation spectrum, it has two emission spectra, one for fluorescence and one for phosphorescence. Figure $5$ shows the UV absorption spectrum and the UV fluorescence emission spectrum for quinine.
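The practical consequence of Equation \ref{10.4} is that emission is proportional to concentration only when the sample absorbs weakly. The short sketch below compares the full expression in Equation \ref{10.3} to the linear approximation in Equation \ref{10.4}; the values chosen for k, $\Phi_f$, P0, $\varepsilon$, and b are illustrative assumptions, not data from this chapter.

```python
import numpy as np

# Compare the exact fluorescence intensity (Equation 10.3) to the linear
# approximation (Equation 10.4) for a hypothetical fluorophore.
k, Phi_f, P0 = 1.0, 0.55, 1.0        # illustrative values only
eps, b = 1.0e4, 1.0                  # M^-1 cm^-1 and cm, also illustrative

C = np.array([1e-7, 1e-6, 5e-6, 1e-5, 5e-5])   # molar concentrations
ebC = eps * b * C

I_exact  = k * Phi_f * P0 * (1 - 10**(-ebC))   # Equation 10.3
I_linear = 2.303 * k * Phi_f * P0 * ebC        # Equation 10.4

for c, x, ie, il in zip(C, ebC, I_exact, I_linear):
    print(f"C = {c:.0e} M, ebC = {x:.3f}, exact/linear = {ie/il:.3f}")
# for ebC <= 0.01 the two expressions agree to within about 1%; by ebC = 0.5
# the linear form overestimates the emission by nearly 70%
```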
Instrumentation

The basic instrumentation for monitoring fluorescence and phosphorescence—a source of radiation, a means of selecting a narrow band of radiation, and a detector—are the same as those for absorption spectroscopy. The unique demands of fluorescence and phosphorescence, however, require some modifications to the instrument designs discussed in earlier chapters: the filter photometer, the single-beam spectrophotometer, the double-beam spectrophotometer, and the diode array spectrometer. The most important difference is that the detector cannot be placed directly across from the source. Figure $1$ shows why this is the case. If we place the detector along the source’s axis it receives both the transmitted source radiation, PT, and the fluorescent, If, or phosphorescent, Ip, radiation. Instead, we rotate the detector and place it at 90° to the source.

Instruments for Measuring Fluorescence

Figure $2$ shows the basic design of an instrument for measuring fluorescence, which includes two wavelength selectors, one for selecting the source's excitation wavelength and one for selecting the analyte's emission wavelength. In a fluorometer the excitation and emission wavelengths are selected using absorption or interference filters. The excitation source for a fluorometer usually is a low-pressure Hg vapor lamp that provides intense emission lines distributed throughout the ultraviolet and visible region. When a monochromator is used to select the excitation and the emission wavelengths, the instrument is called a spectrofluorometer. With a monochromator the excitation source usually is a high-pressure Xe arc lamp, which has a continuous emission spectrum. Either instrumental design is appropriate for quantitative work, although only a spectrofluorometer can record an excitation or emission spectrum. A Hg vapor lamp has emission lines at 254, 312, 365, 405, 436, 546, 577, 691, and 773 nm. The sample cells for molecular fluorescence are similar to those for molecular absorption. Remote sensing using a fiber optic probe is possible with either a fluorometer or a spectrofluorometer. An analyte that is fluorescent is monitored directly. For an analyte that is not fluorescent, a suitable fluorescent probe molecule is incorporated into the tip of the fiber optic probe. The analyte’s reaction with the probe molecule leads to an increase or decrease in fluorescence.

Instruments for Measuring Phosphorescence

An instrument for molecular phosphorescence must discriminate between phosphorescence and fluorescence. Because the lifetime for fluorescence is shorter than that for phosphorescence, discrimination is achieved by incorporating a delay between exciting the sample and measuring the phosphorescent emission. Figure $3$ shows how two out-of-phase choppers allow us to block fluorescent emission from reaching the detector when the sample is being excited and to prevent the source radiation from causing fluorescence when we are measuring the phosphorescent emission. Because phosphorescence is such a slow process, we must prevent the excited state from relaxing by external conversion. One way this is accomplished is by dissolving the sample in a suitable organic solvent, usually a mixture of ethanol, isopentane, and diethylether. The resulting solution is frozen at liquid-N2 temperatures to form an optically clear solid. The solid matrix minimizes external conversion due to collisions between the analyte and the solvent.
External conversion also is minimized by immobilizing the sample on a solid substrate, making possible room temperature measurements. One approach is to place a drop of a solution that contains the analyte on a small disc of filter paper. After drying the sample under a heat lamp, the sample is placed in the spectrofluorometer for analysis. Other solid substrates include silica gel, alumina, sodium acetate, and sucrose. This approach is particularly useful for the analysis of thin layer chromatography plates.
Quantitative Applications Molecular fluorescence and, to a lesser extent, phosphorescence are used for the direct or indirect quantitative analysis of analytes in a variety of matrices. A direct quantitative analysis is possible when the analyte’s fluorescent or phosphorescent quantum yield is favorable. If the analyte is not fluorescent or phosphorescent, or if the quantum yield is unfavorable, then an indirect analysis may be feasible. One approach is to react the analyte with a reagent to form a product that is fluorescent or phosphorescent. Another approach is to measure a decrease in fluorescence or phosphorescence when the analyte is added to a solution that contains a fluorescent or phosphorescent probe molecule. A decrease in emission is observed when the reaction between the analyte and the probe molecule enhances radiationless deactivation or results in a nonemitting product. The application of fluorescence and phosphorescence to inorganic and organic analytes are considered in this section. Inorganic Analytes Except for a few metal ions, most notably $\text{UO}_2^+$, most inorganic ions are not sufficiently fluorescent for a direct analysis. Many metal ions are determined indirectly by reacting with an organic ligand to form a fluorescent or, less commonly, a phosphorescent metal–ligand complex. One example is the reaction of Al3+ with the sodium salt of 2, 4, 3′-trihydroxyazobenzene-5′-sulfonic acid—also known as alizarin garnet R—which forms a fluorescent metal–ligand complex (Figure $1$). The analysis is carried out using an excitation wavelength of 470 nm, with fluorescence monitored at 500 nm. Table $1$ provides additional examples of chelating reagents that form fluorescent metal–ligand complexes with metal ions. A few inorganic nonmetals are determined by their ability to decrease, or quench, the fluorescence of another species. One example is the analysis for F based on its ability to quench the fluorescence of the Al3+–alizarin garnet R complex. Table $1$. Chelating Agents for the Fluorescent Analysis of Metal Ions chelating agent metal ions 8-hydroxyquinoline Al3+, Be2+, Zn2+, Li+, Mg2+ (and others) flavonal Zr2+, Sn4+ benzoin $\text{B}_4\text{O}_6^{2-}$, Zn2+ $2^{\prime},3^{\prime},4^{\prime},5,7-\text{pentahydroxylflavone}$ Be2+ 2-(o-hydroxyphenyl) benzoxazole Cd2+ Organic Analytes As noted earlier, organic compounds that contain aromatic rings generally are fluorescent and aromatic heterocycles often are phosphorescent. Table $2$ provides examples of several important biochemical, pharmaceutical, and environmental compounds that are analyzed quantitatively by fluorimetry or phosphorimetry. If an organic analyte is not naturally fluorescent or phosphorescent, it may be possible to incorporate it into a chemical reaction that produces a fluorescent or phosphorescent product. For example, the enzyme creatine phosphokinase is determined by using it to catalyze the formation of creatine from phosphocreatine. Reacting the creatine with ninhydrin produces a fluorescent product of unknown structure. Table $2$. 
Table $2$. Examples of Naturally Photoluminescent Organic Analytes

class | compounds (F = fluorescence, P = phosphorescence)
aromatic amino acids | phenylalanine (F), tyrosine (F), tryptophan (F, P)
vitamins | vitamin A (F), vitamin B2 (F), vitamin B6 (F), vitamin B12 (F), vitamin E (F), folic acid (F)
catecholamines | dopamine (F), norepinephrine (F)
pharmaceuticals and drugs | quinine (F), salicylic acid (F, P), morphine (F), barbiturates (F), LSD (F), codeine (P), caffeine (P), sulfanilamide (P)
environmental pollutants | pyrene (F), benzo[a]pyrene (F), organothiophosphorous pesticides (F), carbamate insecticides (F), DDT (P)

Standardizing the Method

In Section 15.1 we showed that the intensity of fluorescence or phosphorescence is a linear function of the analyte's concentration provided that the sample's absorbance of source radiation ($A = \varepsilon bC$) is less than approximately 0.01. Calibration curves often are linear over four to six orders of magnitude for fluorescence and over two to four orders of magnitude for phosphorescence. For higher concentrations of analyte the calibration curve becomes nonlinear because the assumption that absorbance is negligible no longer applies. Nonlinearity may be observed for smaller concentrations of analyte if fluorescent or phosphorescent contaminants are present. As discussed earlier, quantum efficiency is sensitive to temperature and sample matrix, both of which must be controlled when using external standards. In addition, emission intensity depends on the molar absorptivity of the photoluminescent species, which is sensitive to the sample matrix.

Representative Method: Determination of Quinine in Urine

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of quinine in urine provides an instructive example of a typical procedure. The description here is based on Mule, S. J.; Hushin, P. L. Anal. Chem. 1971, 43, 708–711, and O'Reilly, J. E. J. Chem. Educ. 1975, 52, 610–612.

Description of the Method

Quinine is an alkaloid used to treat malaria. It is a strongly fluorescent compound in dilute solutions of H2SO4 ($\Phi_f = 0.55$). Quinine's excitation spectrum has absorption bands at 250 nm and 350 nm and its emission spectrum has a single emission band at 450 nm. Quinine is excreted rapidly from the body in urine and is determined by measuring its fluorescence following its extraction from the urine sample.

Procedure

Transfer a 2.00-mL sample of urine to a 15-mL test tube and use 3.7 M NaOH to adjust its pH to between 9 and 10. Add 4 mL of a 3:1 (v/v) mixture of chloroform and isopropanol and shake the contents of the test tube for one minute. Allow the organic and the aqueous (urine) layers to separate and transfer the organic phase to a clean test tube. Add 2.00 mL of 0.05 M H2SO4 to the organic phase and shake the contents for one minute. Allow the organic and the aqueous layers to separate and transfer the aqueous phase to the sample cell. Measure the fluorescent emission at 450 nm using an excitation wavelength of 350 nm. Determine the concentration of quinine in the urine sample using a set of external standards in 0.05 M H2SO4, prepared from a 100.0 ppm solution of quinine in 0.05 M H2SO4. Use distilled water as a blank.

Questions

1. Chloride ion quenches the intensity of quinine's fluorescent emission.
For example, in the presence of 100 ppm NaCl (61 ppm Cl–) quinine's emission intensity is only 83% of its emission intensity in the absence of chloride. The presence of 1000 ppm NaCl (610 ppm Cl–) further reduces quinine's fluorescent emission to less than 30% of its emission intensity in the absence of chloride. The concentration of chloride in urine typically ranges from 4600–6700 ppm Cl–. Explain how this procedure prevents an interference from chloride.

The procedure uses two extractions. In the first of these extractions, quinine is separated from urine by extracting it into a mixture of chloroform and isopropanol, leaving the chloride ion behind in the original sample.

2. Samples of urine may contain small amounts of other fluorescent compounds, which will interfere with the analysis if they are carried through the two extractions. Explain how you can modify the procedure to take this into account.

One approach is to prepare a blank that uses a sample of urine known to be free of quinine. Subtracting the blank's fluorescent signal from the measured fluorescence from urine samples corrects for the interfering compounds.

3. The fluorescent emission for quinine at 450 nm can be induced using an excitation wavelength of either 250 nm or 350 nm. The fluorescent quantum efficiency is the same for either excitation wavelength. Quinine's absorption spectrum shows that $\varepsilon_{250}$ is greater than $\varepsilon_{350}$. Given that quinine has a stronger absorbance at 250 nm, explain why its fluorescent emission intensity is greater when using 350 nm as the excitation wavelength.

We know that $I_f$ is a function of the following terms: k, $\Phi_f$, P0, $\varepsilon$, b, and C. We know that $\Phi_f$, b, and C are the same for both excitation wavelengths and that $\varepsilon$ is larger for a wavelength of 250 nm; we can, therefore, ignore these terms. The greater emission intensity when using an excitation wavelength of 350 nm must be due to a larger value for P0 or k. In fact, P0 at 350 nm for a high-pressure Xe arc lamp is about 170% of that at 250 nm. In addition, the sensitivity of a typical photomultiplier detector (which contributes to the value of k) at 350 nm is about 140% of that at 250 nm.

Example $1$

To evaluate the method described above, a series of external standards are prepared and analyzed, providing the results shown in the following table. All fluorescent intensities are corrected using a blank prepared from a quinine-free sample of urine. The fluorescent intensities are normalized by setting $I_f$ for the highest concentration standard to 100.

[quinine] (µg/mL) | $I_f$
1.00 | 10.11
3.00 | 30.20
5.00 | 49.84
7.00 | 69.89
10.00 | 100.0

After ingesting 10.0 mg of quinine, a volunteer provides a urine sample 24 h later. Analysis of the urine sample gives a relative emission intensity of 28.16. Report the concentration of quinine in the sample in mg/L and the percent recovery for the ingested quinine.

Solution

Linear regression of the relative emission intensity versus the concentration of quinine in the standards gives the calibration curve shown below and the following calibration equation. $I_{f}=0.122+9.978 \times \frac{\mu \mathrm{g} \text { quinine }}{\mathrm{mL}} \nonumber$ Substituting the sample's relative emission intensity into the calibration equation gives the concentration of quinine as 2.81 μg/mL. Because the volume of urine taken, 2.00 mL, is the same as the volume of 0.05 M H2SO4 used to extract the quinine, the concentration of quinine in the urine also is 2.81 μg/mL.
The recovery of the ingested quinine is $\frac{\frac{2.81 \ \mu \mathrm{g} \text { quinine }}{\mathrm{mL} \text { urine }} \times 2.00 \ \mathrm{mL} \text { urine } \times \frac{1 \mathrm{mg}}{1000 \ \mu \mathrm{g}}} {10.0 \ \mathrm{mg} \text { quinine ingested }} \times 100=0.0562 \% \nonumber$ It can take 10–11 days for the body to completely excrete quinine so it is not surprising that such a small amount of quinine is recovered from this sample of urine.
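The arithmetic in this example is easy to verify. The short sketch below (a minimal illustration in Python, not part of the original procedure) fits the external standards by unweighted linear regression and then back-calculates the sample's concentration and the percent recovery.

```python
import numpy as np

# External standards from the table above (µg quinine/mL vs. relative emission intensity).
conc = np.array([1.00, 3.00, 5.00, 7.00, 10.00])
I_f  = np.array([10.11, 30.20, 49.84, 69.89, 100.0])

# Unweighted linear regression: I_f = intercept + slope * C
slope, intercept = np.polyfit(conc, I_f, 1)
print(f"I_f = {intercept:.3f} + {slope:.3f} * C")      # ≈ 0.122 + 9.978 * C

# Back-calculate the sample's concentration from its relative emission intensity.
I_sample = 28.16
C_sample = (I_sample - intercept) / slope               # ≈ 2.81 µg/mL in the extract and in the urine

# Percent recovery of the 10.0-mg dose from a 2.00-mL urine sample.
mass_recovered_mg = C_sample * 2.00 / 1000              # µg/mL × mL → µg → mg
recovery = 100 * mass_recovered_mg / 10.0
print(f"C = {C_sample:.2f} µg/mL; recovery = {recovery:.4f}%")   # ≈ 0.0562%
```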
The focus of this chapter has been on molecular luminescence methods in which emission from the analyte's excited state is achieved following its absorption of a photon. In Chapter 10 we considered atomic emission following excitation of the analyte by thermal energy. An exothermic reaction may also serve as a source of energy. In chemiluminescence the analyte is raised to a higher-energy state by means of a chemical reaction, emitting characteristic radiation when it returns to a lower-energy state. When the chemical reaction results from a biological or enzymatic reaction, the emission of radiation is called bioluminescence. Commercially available “light sticks” and the flash of light from a firefly are examples of chemiluminescence and bioluminescence.

The intensity of emitted light, $I$, is proportional to the quantum yield for chemiluminescent emission, $\Phi_{CL}$, which is itself the product of the quantum yield for creating excited states, $\Phi_{EX}$, and the quantum yield for emission of a photon, $\Phi_{EM}$. The intensity also depends on the rate of the chemical reaction(s) responsible for creating the excited state; thus $I = \Phi_{CL} \times \frac{dC}{dt} \nonumber$ where $dC/dt$ is the rate of the chemical reaction.

Chemiluminescent measurements require less equipment than do other forms of molecular emission because there is no need for a source of photons and no need for a monochromator, as the only source of photons is the chemiluminescent reaction itself. A sample cell to hold the reaction mixture and a photomultiplier tube may be sufficient for the optical bench. Because chemiluminescent emission depends on the reaction's rate, and because the rate decreases with time, the intensity of emission is time-dependent. As a result, the analytical signal is often the integrated emission intensity over a fixed interval of time.

15.05: Evaluation of Molecular Luminescence

Scale of Operation

Photoluminescence spectroscopy is used for the routine analysis of trace and ultratrace analytes in macro and meso samples. Detection limits for fluorescence spectroscopy are influenced by the analyte's quantum yield. For an analyte with $\Phi_f > 0.5$, a picomolar detection limit is possible when using a high quality spectrofluorometer. For example, the detection limit for quinine sulfate, for which $\Phi_f$ is 0.55, generally is between 1 part per billion and 1 part per trillion. Detection limits for phosphorescence are somewhat higher, with typical values in the nanomolar range for low-temperature phosphorimetry and in the micromolar range for room-temperature phosphorimetry using a solid substrate.

Accuracy

The accuracy of a fluorescence method generally is between 1–5% when spectral and chemical interferences are insignificant. Accuracy is limited by the same types of problems that affect other optical spectroscopic methods. In addition, accuracy is affected by interferences that affect the fluorescent quantum yield. The accuracy of phosphorescence is somewhat greater than that for fluorescence.

Precision

The relative standard deviation for fluorescence usually is between 0.5–2% when the analyte's concentration is well above its detection limit. Precision usually is limited by the stability of the excitation source. The precision for phosphorescence often is limited by reproducibility in preparing samples for analysis, with relative standard deviations of 5–10% being common.
Sensitivity

The sensitivity of a fluorescent or a phosphorescent method is affected by a number of parameters. We already have considered the importance of quantum yield and the effect of temperature and solution composition on $\Phi_f$ and $\Phi_p$. Besides quantum yield, sensitivity is improved by using an excitation source that has a greater emission intensity, P0, at the desired wavelength, and by selecting an excitation wavelength for which the analyte has a greater molar absorptivity, $\varepsilon$. Another approach for improving sensitivity is to increase the volume from which emission is monitored. Figure $1$ shows how rotating a monochromator's slits from their usual vertical orientation to a horizontal orientation increases the sampling volume. The result can increase the emission from the sample by $5-30 \times$.

Selectivity

The selectivity of fluorescence and phosphorescence is superior to that of absorption spectrophotometry for two reasons: first, not every compound that absorbs radiation is fluorescent or phosphorescent; and, second, selectivity between an analyte and an interferent is possible if there is a difference in either their excitation or their emission spectra. The total emission intensity is a linear sum of that from each fluorescent or phosphorescent species. The analysis of a sample that contains n analytes, therefore, is accomplished by measuring the total emission intensity at n wavelengths, as illustrated by the short sketch at the end of this section.

Time, Cost, and Equipment

As with other optical spectroscopic methods, fluorescent and phosphorescent methods provide a rapid means for analyzing samples and are capable of automation. Fluorometers are relatively inexpensive, ranging from several hundred to several thousand dollars, and often are satisfactory for quantitative work. Spectrofluorometers are more expensive, with models often exceeding \$50,000.
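To make the last point about selectivity concrete, here is a minimal sketch of how two analytes can be quantified from the total emission measured at two wavelengths; the sensitivity coefficients and intensities are hypothetical values chosen only to illustrate the calculation.

```python
import numpy as np

# Hypothetical emission sensitivities (signal per unit concentration) for two
# analytes, A and B, at two emission wavelengths. These numbers are invented
# for illustration; they are not values from this chapter.
K = np.array([[12.0,  2.0],    # wavelength 1: response to A, response to B
              [ 3.0, 15.0]])   # wavelength 2: response to A, response to B

I_total = np.array([26.0, 21.0])   # measured total emission at the two wavelengths

# Because the total emission is a linear sum of each species' contribution,
# the two concentrations follow from solving two linear equations.
C_A, C_B = np.linalg.solve(K, I_total)
print(C_A, C_B)   # 2.0 and 1.0, in whatever concentration units K assumes
```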
• 16.1: Theory of Infrared Absorption Spectrometry
To absorb an infrared photon, the absorbing species must experience a change in its dipole moment, which allows the oscillation in the photon's electrical field to interact with an oscillation in charge within the absorbing species. If the two oscillations have the same frequency, then absorption is possible. In this section we consider classical and quantum mechanical models for vibrational spectroscopy.
• 16.2: Infrared Sources and Transducers
Instrumentation for IR spectroscopy requires a source of infrared radiation and a transducer for detecting the radiation after it passes through the sample. Common sources and transducers are reviewed here.
• 16.3: Infrared Instruments
Instrumentation for infrared spectroscopy uses one of three common optical benches: non-dispersive instruments, dispersive instruments, and Fourier transform instruments. As we have already examined non-dispersive and dispersive instruments in Chapter 13, and because they are no longer as common as they once were, we give them only a brief consideration here. Fourier transform instruments, which dominate the current marketplace, receive a more detailed treatment.

16: An Introduction to Infrared Spectrometry

Understanding the IR Spectrum

Figure $1$ shows the infrared spectrum for ethanol. Unlike a UV/Vis absorbance spectrum, the y-axis is displayed as percent transmittance (%T) instead of absorbance, reflecting the fact that IR is used more for qualitative purposes than for quantitative purposes; when quantitation is the goal, Beer's law ($A = \epsilon b C$), which is a linear function of concentration, makes absorbance the more useful measurement. The x-axis for an IR spectrum usually is given in wavenumbers, $\overline{\nu} = \lambda^{-1}$, with units of cm–1. The peaks in an IR spectrum are inverted relative to an absorbance spectrum; that is, they descend from a baseline of 100%T instead of rising from a baseline of zero absorbance.

Dipole Changes

The energy of a photon of infrared radiation (see Figure $2$) is not sufficient to effect a change in the electronic energy levels of electrons, as in the UV/Vis atomic or molecular absorption or emission spectroscopies covered in Chapters 9, 10, and 12–15. Instead, infrared radiation is confined to changes in the vibrational energy states of molecules and molecular ions. To absorb an IR photon, the absorbing species must experience a change in its dipole moment, which allows the oscillation in the photon's electrical field to interact with an oscillation in charge within the absorbing species. If the two oscillations have the same frequency, then absorption is possible.

Note

Each vibrational energy state in Figure $2$ also has a set of rotational energy states, which means that the peak for a particular change in vibrational energy levels may consist of a series of closely spaced lines, one for each of several changes in rotational energy. Because rotation is difficult for analytes that are in liquid or solid form, we usually see just a single, broad absorption line; for this reason, we will consider only vibrational transitions in this chapter.

Types of Molecular Vibrations

Although we tend to think of the atoms in a molecule as being rigidly fixed in space relative to each other, the individual atoms are in a constant state of motion: bond lengths increase and decrease by stretching and compressing, and bond angles change as the result of the bending of the bonds relative to each other.
Figure $3$ shows two different types of stretching (symmetric and asymmetric) and four different types of bending (in-plane rocking, in-plane scissoring, out-of-plane wagging, and out-of-plane twisting). Even a simple molecule can have many vibrational modes that give rise to a peak in the IR spectrum, as is the case for ethanol (Figure $1$). The number of possible normal vibrational modes for a linear molecule is $3N - 5$, where N is the number of atoms, and $3N - 6$ for a non-linear molecule. Ethanol, for example, has $3 \times 9 - 6 = 21$ possible vibrational modes. As we will see later in this section, some of these modes may not lead to a change in dipole moment, decreasing the number of peaks in the IR spectrum.

Note

Why does a non-linear molecule have $3N - 6$ vibrational modes? Consider a molecule of methane, CH4. Each of methane's five atoms can move in one of three directions (x, y, and z) for a total of $3 \times 5 = 15$ different ways in which the molecule's atoms can move. A molecule can move in three ways: it can move from one place to another, which we call translational motion; it can rotate around an axis, which we call rotational motion; and its bonds can stretch and bend, which we call vibrational motion. Because the entire molecule can move in the x, y, and z directions, three of methane's 15 different ways of moving are translational. In addition, the molecule can rotate about its x, y, and z axes, accounting for three additional forms of motion. This leaves $15 - 3 - 3 = 9$ vibrational modes. A linear molecule, such as CO2, has $3N - 5$ vibrational modes because it can rotate around only two axes.

Mechanical Model of Stretching in a Diatomic Molecule

The simplest model system for the stretching and compressing of a bond is a weight with a mass, m, attached to an ideal spring that hangs from the ceiling as shown in Figure $4a$. If we pull on the mass and then release it, we initiate a simple oscillating harmonic motion that we can model using Hooke's law. If we displace the weight by a distance, y, then the force, F, that acts on the weight is $F = - k y \label{hookeslaw}$ where $k$ is the spring's force constant—a measure of the spring's springiness. The negative sign in Equation \ref{hookeslaw} indicates that this is the force needed to restore the spring to its original position; that is, the force is in the direction opposite to our action of pulling down on the weight.

Potential Energy of a Harmonic Oscillator

Let's take the potential energy, E, of the spring and weight as 0 when they are at rest (y = 0). If we pull down on the weight by a distance of $dy$, then the change in the system's potential energy, $dE$, must increase by the product of force and distance $dE = - F \times dy = ky \times dy \label{PEchange}$ Integrating Equation \ref{PEchange} from $E = 0$ to $E = E$ and from $y = 0$ to $y = y$ $\int_0^E dE = k \int_0^y ydy \label{PEintegrals}$ gives the energy as $E = \frac{1}{2} k y^2 \label{PE}$ Figure $4b$ shows the resulting potential energy curve, for which the maximum potential energy is $\frac{1}{2}kA^2$ when the weight is at its maximum displacement, A. Note that the potential energy curve is a parabola.

Vibrational Frequency

The simple harmonic oscillator described above and shown in Figure $4$ vibrates with a frequency, $\nu_0$, given by the equation $\nu_0 = \frac{1}{2 \pi} \sqrt{\frac{k}{m}} \label{natfreq}$ where $k$ is the spring's force constant and $m$ is the weight's mass.
We can extend this to a spring that connects two weights to each other by substituting for the mass, $m$, the system's reduced mass, $\mu$ $\mu = \frac{m_1 \times m_2}{m_1 + m_2} \label{redmass}$ where $m_1$ and $m_2$ are the masses of the two weights. Substituting Equation \ref{redmass} into Equation \ref{natfreq} gives $\nu_0 = \frac{1}{2 \pi} \sqrt{\frac{k}{\mu}} = \frac{1}{2 \pi} \sqrt{\frac{k(m_1 + m_2)}{m_1 \times m_2}} \label{natfreq2}$ If we make the assumption that Equation \ref{natfreq2} applies to simple diatomic molecules, then we can estimate the bond's force constant, $k$, by measuring its vibrational frequency (a brief numerical illustration appears at the end of this section).

Quantum Treatment of Vibrations

Equations \ref{PE} and \ref{natfreq2} are based on a classical mechanics treatment of the simple harmonic oscillator in which any displacement, and, thus, any energy is possible. Molecular vibrations, however, are quantized; thus $E = \left( v + \frac{1}{2} \right) \times h \times \frac{1}{2 \pi} \sqrt{\frac{k}{\mu}} = \left( v + \frac{1}{2} \right) h \nu_0 \label{quantizedE}$ where $v$ is the vibrational quantum number, which has allowed values of $0, 1, 2, \dots$. The difference in energy, $\Delta E$, between any two consecutive vibrational energy levels is $h \nu_0$. As allowed transitions in quantum mechanics are limited to $\Delta v = \pm 1$ and as the difference in energy is limited to $\Delta E = h \nu_0$, any particular mode of vibration should give rise to a single peak.

Anharmonic Behavior

The ideal behavior described in the last section, in which each vibrational motion that produces a change in dipole moment results in a single peak, does not hold for a variety of reasons, including the coulombic interactions between the atoms as they move toward and away from each other. One result of these non-ideal behaviors is that the value $\Delta E$ does not remain constant for all values of the vibrational quantum number $v$. For larger values of $v$, the value of $\Delta E$ becomes smaller and transitions where $\Delta v = \pm 2$ or $\Delta v = \pm 3$ become possible, giving rise to what are called overtone lines at frequencies that are $2 \times$ or $3 \times$ that for $\nu_0$.

Why Do We See More or Fewer Vibrational Peaks Than Expected?

Figure $5$ shows the IR spectrum for carbon dioxide, CO2, which consists of three clusters of peaks located at approximately 670 cm–1, 2350 cm–1, and 3700 cm–1. As carbon dioxide is a linear molecule that consists of two carbon-oxygen double bonds (O=C=O), it has $3 \times 3 - 5 = 9 - 5 = 4$ vibrational modes. So why do we see just three clusters of peaks? One of the requirements for the absorption of infrared radiation is that the vibrational motion must result in a change in dipole moment. Figure $6$ shows the four vibrational modes for CO2. Of these four vibrational modes, the symmetric stretch does not result in a change in dipole moment. Although this appears to explain why we see just three clusters of peaks, a close examination of the two bending motions in Figure $6$ should convince you that they are identical and, therefore, will appear as a single peak. So what is the source of the cluster of peaks around 3700 cm–1? Sometimes the absorption of a single photon excites two or more vibrational modes. In this case, the wavenumber for this absorption band is equivalent to the sum of the wavenumbers for the asymmetric stretch and the two degenerate bending modes (2349 + 667 = 3016 cm–1, and 2349 + 667 + 667 = 3683 cm–1). These are called combination bands.
Another source of additional peaks is overtone bands in which $\Delta v = \pm 2$ or $\Delta v = \pm 3$. Figure $7$ shows the IR spectrum for carbonyl sulfide, OCS, which is analogous to CO2 in which one of the oxygens is replaced with sulfur. The peak at 520 cm–1 is for its two degenerate bending motions and is labeled $\nu_2$. The asymmetric stretch at 2062 cm–1 $(\nu_3)$ and the symmetric stretch at 859 cm–1 $(\nu_1)$ are the other two fundamental absorption bands. The remaining peaks are overtones, such as the peak labeled $2 \nu_2$ at 1040 cm–1, or combination bands, such as the peak labeled $\nu_3 + \nu_1$ at 2921 cm–1. Many of the peaks appear as two peaks; this is the result of changes in rotational energy as well.
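As a brief numerical close to this section, the sketch below uses Equation \ref{natfreq2} to estimate a bond's force constant from its vibrational wavenumber and then checks the overtone and combination band assignments for OCS. The H–35Cl wavenumber (about 2886 cm–1) and the atomic masses are well-known illustrative values assumed here; they are not data taken from this chapter.

```python
import numpy as np

c   = 2.998e10      # speed of light, cm/s
amu = 1.6605e-27    # kg per atomic mass unit

# (1) Estimate a bond's force constant from nu0 = (1/(2*pi)) * sqrt(k/mu).
# The H-35Cl values below are illustrative assumptions, not data from this chapter.
def force_constant(wavenumber_cm, m1_amu, m2_amu):
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * amu   # reduced mass, kg
    nu0 = c * wavenumber_cm                            # vibrational frequency, s^-1
    return (2 * np.pi * nu0) ** 2 * mu                 # force constant, N/m

print(f"k(H-Cl) ≈ {force_constant(2886, 1.008, 34.97):.0f} N/m")   # ≈ 481 N/m

# (2) Check the OCS overtone and combination band assignments from the text (cm^-1).
nu1, nu2, nu3 = 859, 520, 2062
print("2*nu2     =", 2 * nu2)      # 1040 cm^-1, the first overtone of the bend
print("nu1 + nu3 =", nu1 + nu3)    # 2921 cm^-1, a combination band
```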
Instrumentation for IR spectroscopy requires a source of infrared radiation and a transducer for detecting the radiation after it passes through the sample.

Infrared Sources

Most IR sources consist of a solid material that emits radiation when heated by passing a current through the device. The intensity of emitted light typically is greatest at 5900–5000 cm–1 and then decreases steadily to 500 cm–1. Common examples of IR sources include the Nernst glower (a ceramic rod heated to 1200–2200 K), a globar (a silicon carbide rod heated to 1300–1500 K), and an incandescent wire (a nichrome wire heated to approximately 1100 K).

Infrared Transducers

Because IR radiation has a much lower energy than visible and ultraviolet light, the types of detectors used in UV/Vis spectroscopy are not suitable for recording IR spectra. Most IR detectors measure heat either directly or by a temperature-dependent change in one of their properties.

When two different metals, M1 and M2, are connected to each other in a closed loop, forming two M1–M2 junctions, a potential difference exists between the two junctions. The magnitude of this difference in potential depends on the difference in the temperatures of the two junctions. If the temperature of one junction is held constant or, if the source radiation is chopped—see Chapter 9.2 for a discussion of chopping—then the change in temperature of the other junction can be measured. The active junction is usually coated with a dark material to enhance the absorbance of thermal energy, and is small in size. A high-quality thermocouple is sensitive to temperature differences as small as 10–6 K.

A bolometer is fashioned from materials for which the resistance is temperature dependent. As is true for a thermocouple, the active part of the detector is coated with a dark material and kept small in size.

Triglycine sulfate, (NH2CH2COOH)3 • H2SO4, TGS, is a crystalline pyroelectric material. It usually is partially deuterated (DTGS) and, perhaps, doped with L-alanine (DLaTGS). The pyroelectric material is placed between two electrodes, one of which is optically transparent to infrared radiation. The absorption of infrared radiation results in a change in temperature and a resulting change in the detector's capacitance and, therefore, the current that flows.

16.03: Infrared Instruments

Instrumentation for infrared spectroscopy uses one of three common optical benches: non-dispersive instruments, dispersive instruments, and Fourier transform instruments. As we have already examined non-dispersive and dispersive instruments in Chapter 13, and because they are no longer as common as they once were, we give them only a brief consideration here. Fourier transform instruments, which dominate the current marketplace, receive a more detailed treatment.

Non-Dispersive Instruments

The simplest instrument for IR absorption spectroscopy is a filter photometer similar to that shown earlier in Figure 13.4.1 for UV/Vis absorption. These instruments have the advantage of portability, which makes them useful in the field, and typically are used as dedicated analyzers for gases such as HCN and CO.

Dispersive Instruments

Infrared instruments using a monochromator for wavelength selection use double-beam optics similar to that shown earlier in Figure 13.4.3. Double-beam optics are preferred over single-beam optics because the sources and detectors for infrared radiation are less stable than those for UV/Vis radiation.
In addition, it is easier to correct for the absorption of infrared radiation by atmospheric CO2 and H2O vapor when using double-beam optics. Resolutions of 1–3 cm–1 are typical for most instruments.

Fourier Transform Instruments

We covered the basic concepts of the Fourier transform in Chapter 7, which you may wish to review. In this section we take a more detailed look at the application of Fourier transforms to infrared instrumentation.

Components of a FT-IR

In a Fourier transform infrared spectrometer, or FT–IR, the monochromator is replaced with an interferometer (Figure \(1\)). There are four key components that make up the interferometer: the drive mechanism that moves the moving mirror, the beam splitter, the light source, and the detector.

Drive Mechanism

As we learned in Chapter 7, the Fourier transform encodes information about the wavelength or frequency of source radiation absorbed by the sample by observing how the signal reaching the detector varies with time. As the moving mirror is displaced in space, some frequencies of light experience complete constructive interference, some frequencies of light experience complete destructive interference, and other frequencies fall somewhere in between, giving rise to a time domain spectrum. As the signal is monitored as a function of time and the moving mirror is traversing a variable distance, the drive mechanism must allow for a precise and accurate relationship between the two. The mechanism of the moving mirror must be capable of moving the mirror through a distance of up to 20 cm at a scan rate as fast as 10 cm/s; it must also accomplish this while maintaining the mirror's orientation relative to the axis of its movement. To maintain accuracy, a HeNe laser, which emits visible light with a wavelength of 632.8 nm, is aligned with the light source so that they follow the same optical path.

Beam Splitter

The beam splitter is designed to reflect 50% of the source radiation to the fixed mirror and to pass the remaining 50% of the source radiation to the moving mirror. The materials used to construct the beam splitter depend on the range of wavelengths being used. The most common range of wavelengths, which is called mid-IR, runs from approximately 670 cm–1 to 4000 cm–1. Instruments for mid-IR use a beam splitter that consists of silicon or germanium coated onto a substrate of KBr or NaCl.

Sources and Transducers

The most common sources for FT-IR are those discussed in the previous section, such as a Nernst glower. The most common transducer for FT-IR is pyroelectric triglycine sulfate.
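To make the encoding idea concrete, the following rough simulation (an illustration only, not the instrument's actual signal chain) builds an interferogram for two arbitrary bands and recovers their positions with a Fourier transform; the band positions and sampling parameters are assumptions chosen for convenience.

```python
import numpy as np

# Two hypothetical bands (cm^-1) and their relative intensities.
wavenumbers = np.array([1000.0, 2350.0])
amplitudes  = np.array([1.0, 0.6])

n_pts  = 8192
ddelta = 1.0 / n_pts                     # step in optical path difference, cm
delta  = np.arange(n_pts) * ddelta       # retardation from 0 to ~1 cm

# Interferogram: each wavenumber contributes a cosine in retardation.
interferogram = (amplitudes[:, None] *
                 np.cos(2 * np.pi * wavenumbers[:, None] * delta)).sum(axis=0)

# The Fourier transform converts the retardation (time) domain back to wavenumber.
spectrum = np.abs(np.fft.rfft(interferogram)) * 2 / n_pts
nu_axis  = np.fft.rfftfreq(n_pts, d=ddelta)   # axis is in cycles per cm, i.e., cm^-1

for nu, amp in zip(wavenumbers, amplitudes):
    i = np.argmin(np.abs(nu_axis - nu))
    print(f"{nu_axis[i]:7.1f} cm^-1  recovered amplitude {spectrum[i]:.2f} (true {amp})")
```

A maximum retardation of about 1 cm corresponds to a resolution of roughly 1 cm–1, which is why the two assumed bands are recovered cleanly in this sketch.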
Infrared spectroscopy finds wide use for both qualitative and quantitative analysis. Our organization of IR applications follows that traditionally used by others by dividing the broad range of infrared radiation into three distinct units: the near-IR (4000 cm–1 to 14,000 cm–1, or 2,500 nm to 700 nm), the mid-IR (670 cm–1 to 4000 cm–1, or 15 µm to 2.5 µm), and the far-IR (670 cm–1 to 10 cm–1, or 1000 µm to 15 µm). Note that the near in near-IR means that it is nearest to the visible range of light. Of these, the most important in terms of the breadth of applications is the mid-IR.

• 17.1: Mid-Infrared Absorption Spectrometry
Mid-infrared spectrometry is used for the routine qualitative analysis and, to a lesser extent, the quantitative analysis of organic molecules. In this section we consider absorption spectrometry in which we measure the absorbance of IR light as it passes through a gas, solution, liquid, or solid sample.
• 17.2: Mid-Infrared Reflection Spectrometry
The first section of this chapter considered mid-IR absorption spectrometry in which we measure the amount of light that is transmitted by the sample, which we can convert, if we wish, into absorbance values. In the process, we examined both transmittance and absorbance spectra. In this section, we consider experiments in which we measure the reflection of infrared radiation by a sample.
• 17.3: Near-Infrared and Far-Infrared Spectroscopy
At the beginning of this chapter we divided infrared radiation into three areas: the near-IR, the mid-IR, and the far-IR. The mid-IR, which runs from 4000 cm–1 to 670 cm–1 (2.5 µm to 15 µm) is the most analytically useful region and was the subject of the previous two sections. Here we briefly turn our attention to applications using the near-IR and the far-IR.

17: Applications of Infrared Spectrometry

Mid-infrared spectrometry is used for the routine qualitative analysis and, to a lesser extent, the quantitative analysis of organic molecules. In this section we consider absorption spectrometry in which we measure the absorbance of IR light as it passes through a gas, solution, liquid, or solid sample. In Section 17.2 we consider reflectance spectrometry in which we measure the absorbance of IR light as it reflects off the surface of a solid sample or a thin film of a liquid sample.

Sample Handling

Infrared spectroscopy is routinely used to analyze gas, liquid, and solid samples. We know from Beer's law, $A = \epsilon b C$, that absorbance is a linear function of the analyte's concentration, $C$, and the distance, $b$, the light travels through the sample. The challenge with obtaining an IR spectrum is rarely the analyte's concentration or path length; instead it is finding materials and solvents that are transparent to IR radiation. The optical windows in IR cells are made from materials, such as NaCl and KBr, that are transparent to infrared radiation.

Gas Phase Samples

The cell for analyzing a sample in the gas phase generally is a 5–10 cm glass cylinder fitted with optically transparent windows. For an analyte with a particularly small concentration, the sample cell is designed with reflective surfaces that allow the infrared radiation to make several passes through the cell before it exits the sample cell, increasing the pathlength and, therefore, the absorbance.

Solution Samples

The analysis of a sample in solution is limited by the solvent's IR absorbing properties, with carbon tetrachloride, CCl4, carbon disulfide, CS2, and chloroform, CHCl3, being common solvents.
A typical solution cell is shown in Figure $1$. It is fashioned with two NaCl windows separated by a spacer. By changing the spacer, pathlengths from 0.015–1.0 mm are obtained. The sample is introduced into the cell using a syringe and the sample inlet port.

Liquid Phase Samples

A sample that is a volatile liquid may be analyzed using the solution cell in Figure $1$. For a non-volatile liquid sample, however, a suitable sample for qualitative work can be prepared by placing a drop of the liquid between the two NaCl plates shown in Figure $2a$, forming a thin film that typically is less than 0.01 mm thick. An alternative approach is to place a drop of the sample on a disposable card equipped with a polyethylene "window" that is IR transparent with the exception of strong absorption bands at 2918 cm–1 and 2849 cm–1 (Figure $2b$).

Solid Phase Samples

Transparent solid samples are analyzed by placing them directly in the IR beam. Most solid samples, however, are opaque, and are first dispersed in a more transparent medium before recording the IR spectrum. If a suitable solvent is available, then the solid is analyzed by preparing a solution and analyzing as described above. When a suitable solvent is not available, solid samples are analyzed by preparing a mull of the finely powdered sample with a suitable oil and then smearing it on a NaCl salt plate or a disposable IR card (Figure $2$). Alternatively, the powdered sample is mixed with KBr and pressed, under high pressure, into a thin, optically transparent pellet, as shown in Figure $3$.

Qualitative Analysis

The most important application of mid-infrared spectroscopy is in the qualitative identification of organic molecules. Figure $4$ shows mid-IR solution spectra for four simple alcohols: methanol, CH3OH, ethanol, CH3CH2OH, propanol, CH3CH2CH2OH, and isopropanol, (CH3)2CHOH. Clearly there are similarities and differences in these four spectra: similarities that might lead us to expect that each molecule contains the same functional groups and differences that appear as features unique to a particular molecule. The similarities in these four spectra appear at the higher wavenumber end of the x-axis scale; we call the peaks we find there group frequencies. The differences in these four spectra occur below approximately 1500 cm–1 in what we call the fingerprint region.

Note

The fingerprint region is defined here as beginning at 1500 cm–1, extending to the lowest wavenumber shown on the x-axis. If you do some searching on the fingerprint region you will see that there is no broad agreement on where it begins. In my searching, I found sources that place the beginning of the fingerprint region as 1500 cm–1, 1450 cm–1, 1300 cm–1, 1200 cm–1, and 1000 cm–1.

Group Frequencies

All four of the spectra in Figure $4$ share a low intensity, sharp peak at approximately 3650 cm–1, a strong intensity, broad peak at approximately 3350 cm–1, and two medium intensity, sharp peaks at 2950 cm–1 and 2850 cm–1. By comparing spectra for these and other compounds, we know that the presence of a broad peak between approximately 3200 cm–1 and 3600 cm–1 is good evidence that the compound contains a hydrogen-bonded –OH functional group. The sharp peak at approximately 3650 cm–1 also is evidence of an –OH functional group, but one that is not hydrogen-bonded. The two sharp peaks at 2950 cm–1 and 2850 cm–1 are consistent with C–H bonds. All four of these peaks are for stretching vibrations. Tables of group frequencies are routinely available.
The "Fingerprint" Region Figure $5$ shows a close-up of the fingerprint region for the alcohol samples in Figure $4$. Of particular interest with this set of samples is the increasing complexity of the spectra as we move from the simplest of these alcohols (methanol), to the most complex of these alcohols (propanol and isopropanol). Also of interest is that each spectrum is unique in a way that allows us to confirm a sample by matching it against a library of recorded spectra. There are a number of accessible collections of spectra that are available for this purpose. One such collection of spectra is the NIST Webbook—NIST is the National Institute of Standards and Technology—which is the source of the data used to display the spectra included in this section's figures and which includes spectra for over 16,000 compounds. Computer Search Systems With the availability of computerized data acquisition and storage it is possible to build digital libraries of standard reference spectra. The identity of an a unknown compound often can be determined by comparing its spectrum against a library of reference spectra, a process known as spectral searching. Comparisons are made using an algorithm that calculates the cumulative difference between the sample’s spectrum and a reference spectrum. For example, one simple algorithm uses the following equation $D = \sum_{i = 1}^n | (A_{sample})_i - (A_{reference})_i | \label{spec_sub}$ where D is the cumulative difference, Asample is the sample’s absorbance at wavelength or wavenumber i, Areference is the absorbance of the reference compound at the same wavelength or wavenumber, and n is the number of digitized points in the spectra. Note that the spectra are defined here by absobrance instead of transmittance as absorbance is directly proportional to concentration. The cumulative absolute difference is calculated for each reference spectrum. The reference compound with the smallest value of D is the closest match to the unknown compound. The accuracy of spectral searching is limited by the number and type of compounds included in the library, and by the effect of the sample’s matrix on the spectrum. Another advantage of computerized data acquisition is the ability to subtract one spectrum from another. When coupled with spectral searching it is possible to determine the identity of several components in a sample without the need of a prior separation step by repeatedly searching and subtracting reference spectra. An example is shown in Figure $6$ in which the composition of a two-component mixture is determined by successive searching and subtraction. Figure $6a$ shows the spectrum of the mixture. A search of the spectral library selects cocaine•HCl (Figure $6b$) as a likely component of the mixture. Subtracting the reference spectrum for cocaine•HCl from the mixture’s spectrum leaves a result (Figure $6c$) that closely matches mannitol’s reference spectrum (Figure $6d$). Subtracting the reference spectrum for mannitol leaves a small residual signal (Figure $6e$). Quantitative Applications A quantitative analysis based on the absorption of infrared radiation, although important, is encountered less frequently than with UV/Vis absorption, primarily due to the three issues raised here. Deviations from Beer's Law One challenge for quantitative IR is the greater tendency for instrumental deviations from Beer’s law when using infrared radiation. 
Because an infrared absorption band is relatively narrow, any deviation due to the lack of monochromatic radiation is more pronounced. In addition, infrared sources are less intense than UV/Vis sources, which makes stray radiation more of a problem. Differences between the path lengths for samples and for standards when using thin liquid films or KBr pellets are a problem, although an internal standard can correct for any difference in pathlength; alternatively, we can use the cell shown in Figure $1$ to maintain a constant path length.

Background Correction

The water and carbon dioxide in air have strong absorbances in the mid-IR. A double-beam dispersive instrument corrects for the contributions of CO2 and H2O vapor because they are present in both pathways through the instrument. An FT-IR, however, includes only a single optical path, so it is necessary to collect a separate spectrum to compensate for the absorbance of atmospheric CO2 and H2O vapor. This is done by collecting a background spectrum without the sample and storing the result in the instrument's computer memory. The background spectrum is removed from the sample's spectrum by taking the ratio of the two signals. Another approach is to flush the sample compartment with nitrogen.

Measuring Absorbance

Another challenge for quantitative IR is that establishing a 100% T (A = 0) baseline often is difficult because the optical properties of NaCl sample cells may change significantly with wavelength due to contamination and degradation. We can minimize this problem by measuring absorbance relative to a baseline established for the absorption band. Figure $7$ shows how this is accomplished.

Typical Applications

A recent review paper [Fahelelbom, K. M.; Saleh, A.; Al-Tabakha, M. A.; Ashames, A. A. Rev. Anal. Chem. 2022, 41, 21–33] summarizes the rich literature in quantitative mid-infrared spectrometry. Among the areas covered are the analysis of pharmaceuticals, including antibiotics, antihypertensives, antivirals, and counterfeit drugs. Mid-infrared spectrometry also finds use for the analysis of environmentally significant gases, such as methane, CH4, hydrogen chloride, HCl, sulfur dioxide, SO2, and nitric oxide, NO.
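Returning to the spectral-search metric of Equation \ref{spec_sub}, the sketch below shows one minimal way to rank a library of reference spectra; the spectra here are made-up absorbance values at a few shared wavenumbers, used only to illustrate the calculation.

```python
import numpy as np

def search_library(sample, library):
    """Rank reference spectra by the cumulative absolute difference D of Equation spec_sub."""
    scores = {name: np.sum(np.abs(sample - ref)) for name, ref in library.items()}
    return sorted(scores.items(), key=lambda item: item[1])

# Made-up absorbance values at a handful of shared wavenumbers (illustration only).
library = {
    "compound A": np.array([0.10, 0.80, 0.20, 0.05, 0.40]),
    "compound B": np.array([0.50, 0.10, 0.60, 0.30, 0.10]),
}
sample = np.array([0.12, 0.78, 0.22, 0.07, 0.38])

for name, D in search_library(sample, library):
    print(f"{name}: D = {D:.2f}")   # the smallest D is the best match (compound A here)
```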
The first section of this chapter considered mid-IR absorption spectrometry in which we measure the amount of light that is transmitted by the sample, which we can convert, if we wish, into absorbance values. In the process, we examined both transmittance (Figure 17.1.4 and Figure 17.1.5) and absorbance (Figure 17.1.6) spectra. In this section, we consider experiments in which we measure the reflection of infrared radiation by a sample.

Types of Reflections

There are two broad classes of reflection: internal and external. As shown in Figure $1$, internal reflection occurs when light encounters an interface between two media—here identified as the sample and the support—that have different refractive indices, n. When the refractive index of the support is greater than the refractive index of the sample, then some of the light reflects off the interface. Attenuated total reflectance spectrometry is one example of an instrumental method that relies on internal reflection. External reflectance occurs when light reflects off of the sample's surface. As shown in Figure $2$, the way in which light reflects depends on the nature of the sample's surface. In specular reflectance, the angle of reflection is the same at all locations because the sample's surface is smooth; in diffuse reflectance, the angle of reflection varies between locations due to the roughness of the sample's surface. Diffuse reflectance spectrometry is one example of an instrumental method that relies on external reflection.

Attenuated Total Reflectance Spectrometry

The analysis of an aqueous sample is complicated by the solubility of the NaCl cell window in water. One approach to obtaining an infrared spectrum of an aqueous solution is to use attenuated total reflectance instead of transmission. Figure $3$ shows a diagram of a typical attenuated total reflectance (ATR) FT–IR instrument. The ATR cell consists of a high refractive index material, such as ZnSe or diamond, sandwiched between a low refractive index substrate and a lower refractive index sample. Radiation from the source enters the ATR crystal where it undergoes a series of internal reflections before exiting the crystal. During each reflection the radiation penetrates a short distance into the sample. This depth of penetration, $d_p$, depends on the wavelength of the light, $\lambda$, the refractive index of the ATR crystal, $n_1$, the refractive index of the sample, $n_2$, and the angle of the incident radiation, $\theta$. $d_p = \frac {\lambda} {2 \pi \sqrt{n_1^2 \sin^2 \theta - n_2^2}} \label{depth}$ For example, when using ZnSe as the ATR crystal ($n_1 = 2.4$) and an angle of incidence of $45^{\circ}$, light of 1000 cm–1 penetrates to a depth of 2.0 µm in a sample with a refractive index similar to that for KBr ($n_2 = 1.5$).

Solid samples also can be analyzed using an ATR sample cell. After placing the solid in the sample slot, a compression tip ensures that it is in contact with the ATR crystal. Examples of solids analyzed by ATR include polymers, fibers, fabrics, powders, and biological tissue samples. ATR spectra are similar, but not identical, to those obtained by measuring transmission. An important contribution to this is the wavelength-dependent depth of penetration of the infrared radiation where a decrease in wavenumber (longer wavelength) results in a greater depth of penetration, which changes the intensity and width of absorption bands.
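Equation \ref{depth} is easy to evaluate directly; the following sketch simply reproduces the worked example above, and should be read as an illustration rather than as part of the original text.

```python
import numpy as np

def penetration_depth(wavenumber_cm, n_crystal, n_sample, angle_deg):
    """Depth of penetration (µm) from Equation depth; the wavelength is 1/wavenumber."""
    lam_um = 1e4 / wavenumber_cm                     # wavelength in µm
    theta = np.radians(angle_deg)
    return lam_um / (2 * np.pi * np.sqrt(n_crystal**2 * np.sin(theta)**2 - n_sample**2))

# The worked example from the text: a ZnSe crystal (n1 = 2.4), 45° incidence,
# 1000 cm^-1 radiation, and a sample with n2 = 1.5.
print(f"{penetration_depth(1000, 2.4, 1.5, 45):.1f} µm")   # ≈ 2.0 µm
```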
Diffuse Reflectance Spectrometry

Another reflectance method is diffuse reflectance, in which radiation is reflected from a rough surface, such as a powder. Powdered samples are mixed with a non-absorbing material, such as powdered KBr, and the reflected light is collected and analyzed. As with ATR, the resulting spectrum is similar to that obtained by conventional transmission methods. Figure $4$ shows the IR spectrum for urea obtained using transmission and diffuse reflectance (both collected using an FT-IR). Both spectra show similar features between 1000 cm–1 and 2000 cm–1, although there are differences in relative peak heights and background absorption.

17.03: Near Far IR

At the beginning of this chapter we divided infrared radiation into three areas: the near-IR, the mid-IR, and the far-IR. The mid-IR, which runs from 4000 cm–1 to 670 cm–1 (2.5 µm to 15 µm) is the most analytically useful region and was the subject of the previous two sections. Here we briefly turn our attention to applications using the near-IR and the far-IR.

Near-Infrared (NIR) Spectroscopy

The near-IR extends from approximately 13,000 cm–1 (a wavelength of 770 nm or 0.77 µm, the upper wavelength limit of visible light) to 4000 cm–1 (a wavelength of 2,500 nm or 2.5 µm). Earlier we noted that absorption bands in the region that extends from 1500 cm–1 to 4000 cm–1 are called group frequencies. The absorption bands in the near-infrared often are overtones and combination bands of these group frequencies. Of particular importance are functional groups that include hydrogen: OH, CH, and NH are examples. Absorption bands generally are less intense and less broad. Compared to mid-IR, the NIR is more useful for a quantitative analysis of aqueous samples because the OH absorption bands are much weaker. The instrumentation for NIR spectroscopy, both in transmission mode and in reflectance mode, is similar to that for UV/visible spectrometers and for mid-IR spectrometry.

Far-Infrared (FIR) Spectroscopy

The far-IR extends from approximately 670 cm–1 (a wavelength of 15 µm) to 10 cm–1 (a wavelength of 1000 µm or 1 mm). FIR spectroscopy finds applications in the analysis of metal-containing materials, including metal oxides, metal sulfides, and metal–ligand complexes. FIR spectroscopy has also been applied to the analysis of polyamides, peptides, and proteins. Because the FIR merges into the microwave region, it also finds use in the analysis of the rotational energies of gases.
• 18.1: Theory of Raman Spectroscopy
There are two general classes of scattering: elastic, or Rayleigh, scattering and inelastic, or Raman, scattering. In elastic scattering, a photon is first absorbed by a particle and then emitted without a change in its energy. With inelastic scattering, a photon is first absorbed by a particle and then emitted with a change in its energy. A plot that shows the intensity of scattered radiation as a function of the change in energy is called a Raman spectrum.
• 18.2: Instrumentation
The basic instrumentation for Raman spectroscopy is similar to that for other spectroscopic techniques: a source of radiation, an optical bench for bringing the source to the sample, and a suitable detector.
• 18.3: Applications of Raman Spectroscopy
Raman spectroscopy is useful for both qualitative and quantitative analyses, examples of which are provided in this section.
• 18.4: Other Types of Raman Spectroscopy
Traditional Raman spectroscopy has several limitations, perhaps the most important of which is that the probability of Raman scattering is much less than that for Rayleigh scattering, which leads to low sensitivity with detection limits often as large as 0.1 M. Here we briefly describe two forms of Raman spectroscopy that allow for significant improvements in detection limits.

18: Raman Spectroscopy

The blue color of the sky during the day and the red color of the sun at sunset are the result of the scattering of light by small particles of dust, by molecules of water vapor, and by other gases in the atmosphere. The efficiency of a photon's scattering depends on its wavelength. We see the sky as blue during the day because violet and blue light scatter to a greater extent than other, longer wavelengths of light. For the same reason, the sun appears red at sunset because red light scatters less efficiently and is more likely to pass through the atmosphere than other wavelengths of light. If we send a focused, monochromatic beam of radiation with a wavelength of $\lambda$ through a medium of particles—whether solid particulates or individual molecules—that have dimensions $<1.5 \lambda$, then the radiation scatters in all directions. For example, infrared light in the near-IR with a wavelength of 700 nm will scatter from any particle whose longest dimension is less than 1,050 nm. Even in an otherwise transparent sample, scattering from molecules occurs.

Raman Spectra

There are two general classes of scattering: elastic scattering and inelastic scattering. In elastic scattering, a photon is first absorbed by a particle and then emitted without a change in its energy ($\Delta E = 0$); this is called Rayleigh scattering. With inelastic scattering, a photon is first absorbed by a particle and then emitted with a change in its energy ($\Delta E \ne 0$); this is called Raman scattering. A plot that shows the intensity of scattered radiation as a function of the scattered photon's energy, expressed as a change in the wavenumber, $\Delta \overline{\nu}$, is called a Raman spectrum and values of $\Delta \overline{\nu}$ are called Raman shifts. Figure $1$ shows a portion of the Raman spectrum for carbon tetrachloride and illustrates several important features. First, Rayleigh scattering produces an intense peak at $\Delta \overline{\nu} = 0$. Although the peak is intense, it carries no useful information as the absolute energy is just that for the source.
Second, Raman scattering has two components—Stokes lines and the anti-Stokes lines—that have identical absolute shifts relative to the line for Rayleigh scattering, but that have different signs. The Stokes lines have positive values for $\Delta\overline{\nu}$ and the anti-Stokes lines have negative values for $\Delta \overline{\nu}$. Third, each of the Stokes lines is more intense than the corresponding anti-Stokes line. Fourth, because we measure the shift in a peak's wavenumber relative to the source radiation, the spectrum is independent of the source radiation.

Note

The energy—and, thus, the wavenumber—of a photon that experiences Stokes scattering is less than the energy—and, thus, the wavenumber—of the source radiation, which raises the question of why a Stokes shift is reported as a positive value instead of a negative value. Although you will find most Raman spectra with positive values for the Stokes shift, you also will find examples where Stokes shifts are reported with negative values. Because the Stokes lines are more intense than the anti-Stokes lines, and, therefore, more useful, and because their respective shifts result from the same changes in vibrational energy states that we find in IR spectroscopy, it is convenient to report the Stokes lines as positive values so that we can align a species's Raman and IR spectra. See the next two sections for additional details.

Mechanism of Raman and Rayleigh Scattering

In Chapter 6 we examined the mechanism by which absorption and emission occur. In subsequent chapters we explored atomic absorption and atomic emission spectrometry, ultraviolet and visible molecular absorption spectrometry, molecular luminescence spectrometry, and infrared molecular absorption spectrometry. In each case we began by considering an energy level diagram that explains the origin of absorption and emission. Figure $2$ provides an energy diagram that we can use to explain the origin of the lines that make up a Raman spectrum, such as the spectrum for carbon tetrachloride in Figure $1$.

The first thing to note about the energy level diagram in Figure $2$ is that, in addition to showing the ground electronic state and the first excited electronic state—each with three vibrational energy levels—it also shows a virtual electronic state, something we did not encounter with other methods (see, for example, the energy diagram for UV and IR molecular absorption spectrometry in Figure 6.4.2). The ground and the first excited electronic states are quantized, which means that absorption cannot happen if the source's energy does not match exactly the change in energy between the two electronic states. The energy of an emitted photon also is fixed by the difference in the energy of the two electronic states. A virtual electronic state, however, is not quantized and is determined by the energy of the source radiation. The source of radiation, therefore, does not need to match a particular change in energy. Absorption of a photon of source radiation moves the analyte from the ground electronic state to a virtual electronic state without a change in vibrational energy state, as seen by the two arrows at the far left of the diagram. Because the ground vibrational energy state, $\nu_0$, is more populated than the vibrational energy state, $\nu_1$, more of the analyte ends up in a virtual electronic state's lowest vibrational energy level than in a higher vibrational energy state, which is shown here by the relative thickness of the two arrows.
Once in a virtual electronic state, the analyte can return to the ground electronic state in one of three ways. It can do so without a change in the vibrational energy level. In this case, the energy of absorption and the energy of emission are the same and $\Delta E = 0$ and $\Delta \overline{\nu} = 0$. This is Rayleigh scattering and, as suggested by the combined thickness of the two arrows in Figure $2$, it is the most important mechanism of relaxation. When relaxation includes a change in the vibrational energy level, the result is an absolute change in energy equivalent to the difference in energy, $\Delta E$, between adjacent vibrational energy levels. For Stokes scattering, relaxation is to a higher vibrational energy level, such as $\nu_0 \rightarrow \nu_1$ and, for anti-Stokes scattering, relaxation is to a lower vibrational energy level, such as $\nu_1 \rightarrow \nu_0$. As suggested by the thickness of the lines for Stokes and anti-Stokes scattering in Figure $2$, the Stokes lines are more intense than the anti-Stokes lines because they begin in the more heavily populated lower vibrational level of the virtual electronic state.

Relationship Between IR and Raman Spectra

One important feature of Figure $2$ is that the transition that gives rise to a particular Stokes line or anti-Stokes line is the same transition that will give rise to a corresponding IR band. If the selection rules for these transitions are the same for a particular species, then we expect that its IR spectrum and its Raman spectrum will have peaks at the same (or similar) values of $\overline{\nu}$ and $\Delta \overline{\nu}$ for its fundamental vibrations; however, as we see in Table $1$ for carbon tetrachloride, CCl4, there are five fundamental vibrations in its Raman spectrum, but just three in its IR spectrum.

Table $1$: Fundamental Vibrational Energies for CCl4 (values from NIST).

Infrared ($\overline{\nu}$, cm–1) | Raman ($\Delta \overline{\nu}$, cm–1)
— | 217.0 (l)
309.9 (l) weak | 313.5 (l)
— | 458.7 (l)
768 (g); very strong | 761.7 (l)
789 (g); very strong | 790.4 (l)

The designations (g) and (l) indicate the sample's phase, where (g) is gas phase and (l) is liquid phase. The designations of weak and very strong for the IR peaks indicate the relative extent of absorption: very strong means a relatively small %T (and a strong absorbance) and weak means a relatively large %T (and a weak absorbance). See Figure $1$ for the relative amount of scattering in the Raman spectrum for CCl4.

In Chapter 16 we learned that in IR spectroscopy a compound's fundamental vibrational mode is active—that is, we see a peak in its IR spectrum—only if the corresponding stretch or bend results in a change in the compound's dipole moment. For Raman spectroscopy, a compound's fundamental vibrational mode is active only if the corresponding stretch or bend results in a change in the polarizability of its electrons. Polarizability essentially is a measure of how easy it is to distort a compound's electron cloud by applying an external electric field, such as when a photon from the source is absorbed; in general, polarizability increases when a stretching or bending motion increases the compound's volume as the electrons are then spread over a greater amount of space. Figure $3$ shows the four stretching and bending modes for CCl4. The stretching motion in (a), in which all four C–Cl bond lengths increase and decrease together, means the molecule's volume increases and decreases; thus, this vibrational mode is Raman active.
The symmetry of the stretching motion, however, means there is no change in the molecule's dipole moment and the vibrational mode is IR inactive. The asymmetric stretch in (b), on the other hand, is both IR and Raman active. The bending motion in (c) results in the molecule becoming more or less compact in size, and is Raman active; the symmetry of the scissoring motions, however, means that the vibrational mode is IR inactive. The bending motions in (d) are both IR and Raman active. In general, symmetric stretching and bending modes result in relatively strong Raman scattering peaks, but no absorption in the IR, while asymmetric stretching and bending modes result in both IR and Raman peaks. As a result, IR and Raman are complementary techniques.

Raman Depolarization Ratios

If the source of electromagnetic radiation is plane-polarized, then it is possible to collect a Raman spectrum using light scattered in a plane that is parallel to the source and, separately, in a plane that is perpendicular to the source. The ratio of a line's intensity of scattering in the perpendicular spectrum, $I_{\perp}$, to the intensity of scattering in the parallel spectrum, $I_{||}$, is called the depolarization ratio, $p$.

$p = \frac{I_{\perp}}{I_{||} } \label{depolarization}$

A Raman line that originates from a vibrational mode that does not change the molecule's shape will result in a depolarization ratio close to zero and an absence of the line in the perpendicular spectrum. Figure $4$ shows the Raman spectrum when collecting data parallel (top) and perpendicular (bottom) to the light source. The absence of the peak at 458.7 cm–1 in the perpendicular spectrum confirms that this is the symmetric stretch illustrated in Figure $3a$.
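The calculation behind Equation \ref{depolarization} is simple enough to express in a few lines of code. The following sketch, written in Python with intensity values that are purely hypothetical (they are not read from Figure $4$), shows how a depolarization ratio close to zero identifies a line, such as the 458.7 cm–1 line for CCl4, that arises from a symmetric vibrational mode.

```python
# A minimal sketch of the depolarization ratio defined above. The intensities
# are hypothetical values read from the parallel and perpendicular spectra at
# the same Raman shift; they are not taken from Figure 4.
def depolarization_ratio(i_perpendicular, i_parallel):
    """Return p = I_perp / I_par for one Raman line."""
    return i_perpendicular / i_parallel

# hypothetical intensities for the 458.7 cm-1 line of CCl4
p = depolarization_ratio(i_perpendicular=0.3, i_parallel=58.0)
print(f"p = {p:.3f}")  # a value close to zero is consistent with a symmetric stretch
```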
The basic instrumentation for Raman spectroscopy is similar to that for other spectroscopic techniques: a source of radiation, an optical bench for bringing the source to the sample, and a suitable detector.

Sources

One of the notable features of the Raman spectrum for CCl4 (see Figure 18.1.1) is the low intensity of the Stokes lines and the anti-Stokes lines relative to the line for Rayleigh scattering. The low intensity of these lines requires that we use a high intensity source so that there are a sufficient number of scattered photons to collect. For this reason, a laser is the most common source for Raman spectroscopy, providing a high intensity, monochromatic source. Table $1$ summarizes some of the more common lasers.

Table $1$. Examples of Laser Sources for Raman Spectroscopy

type of laser                        wavelengths (nm)
Ar ion                               488.0 or 514.5
Kr ion                               530.9 or 647.1
He/Ne                                632.8
Near Infrared (NIR) Diode Laser      785 or 830
Nd/YAG                               532 or 1064

The intensity of Raman scattering is proportional to $\frac{1}{\lambda^4}$, where $\lambda$ is the wavelength of the source radiation; thus, the smaller the wavelength, the greater the intensity of the scattered light. For example, the intensity of scattering using an Ar ion laser at 488.0 nm is almost $23 \times$ greater than the intensity of scattering using a Nd/YAG laser at 1064 nm

$\frac{(1/488.0)^4}{(1/1064)^4} = 22.6 \nonumber$

The increased scattering when using a smaller wavelength laser comes at a cost, however, of an interference from fluorescence from species that are promoted into excited electronic states by the source. The NIR diode laser and the Nd/YAG laser, the latter operated at 1064 nm, discriminate against fluorescence and are useful, therefore, for samples where fluorescence is a problem.

Samples

Raman spectroscopy has several advantages over infrared spectroscopy. Because water does not exhibit much Raman scattering it is possible to analyze aqueous samples; this is a serious limitation for IR spectroscopy where water absorbs strongly. The ability to focus a laser onto a small area makes it possible to analyze very small samples. A liquid sample, for example, can be held in the tip of a 1-mm inner diameter capillary tube, such as that used for measuring melting points. Solid samples and gaseous samples can be sampled using the same types of cells used in IR and FT-IR (see Chapter 17). Fiber optic probes make it possible to collect spectra remotely. Figure $1$ shows the basic set-up. A small bundle of fibers (shown in blue) brings light from the source to the sample, where a second bundle of fibers (shown in green) brings the scattered light to the slit that passes light onto the detector.

Optical Bench

Raman spectrometers use optical benches similar to those for UV/Vis or IR spectroscopy, which were covered in Chapter 7. Dispersive instruments place the laser source and the detector at 90° to each other so that any unscattered high intensity emission from the laser source is not collected by the detector. A filter is used to remove the Rayleigh scattering. To record a spectrum one either uses a scanning monochromator or a multichannel detector. Fourier transform instruments are similar to those used in FT-IR and include a filter to isolate the Stokes lines.
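The $1/\lambda^4$ dependence described under Sources is easy to explore numerically. The following sketch, a simple Python illustration that is not part of the original text, uses the wavelengths from Table $1$ to compare the relative intensity of scattering for several lasers, taking the Nd/YAG laser at 1064 nm as the reference.

```python
# A short sketch of the 1/lambda^4 dependence discussed under Sources, using
# laser wavelengths from Table 1. The relative intensities are scaled to the
# Nd/YAG laser at 1064 nm; the Ar ion result mirrors the worked example above.
lasers_nm = {"Ar ion (488.0 nm)": 488.0, "He/Ne (632.8 nm)": 632.8,
             "NIR diode (785 nm)": 785.0, "Nd/YAG (1064 nm)": 1064.0}

reference = 1064.0
for name, wavelength in lasers_nm.items():
    relative = (reference / wavelength) ** 4   # (1/lambda)^4 relative to 1064 nm
    print(f"{name}: {relative:.1f}x the scattering intensity at 1064 nm")
```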
18.03: Applications of Raman Spectroscopy

Raman spectroscopy is useful for both qualitative and quantitative analyses, examples of which are provided in this section.

Qualitative Applications

There are numerous databases that provide reference spectra for inorganic compounds, for minerals, for synthetic organic pigments, for natural and synthetic inorganic and organic pigments, and for carbohydrates. Such databases often are searchable not only by name and formula, but also by the prominent Raman scattering lines. Examples of spectra are included here using data from the databases linked to above.

Quantitative Applications

The intensity of Raman scattering, $I(\nu)_R$, is directly proportional to the intensity of the source radiation, $I_l$, and the concentration of the scattering species, $C$. The direct proportionality between $I(\nu)_R$ and $I_l$ is important given that each photon experiencing Raman scattering requires approximately $10^8$ excitation photons. Using a laser as a source of radiation and increasing its power leads to an improvement in sensitivity. The direct proportionality between $I(\nu)_R$ and the concentration of the scattering species means that a calibration curve of band intensity (or band area) versus concentration is linear, which allows for a quantitative analysis.

18.04: Other Types of Raman Spectroscopy

Traditional Raman spectroscopy has several limitations, perhaps the most important of which is that the probability of Raman scattering is much less than that for Rayleigh scattering, which leads to low sensitivity with detection limits often as large as 0.1 M. Here we briefly describe two forms of Raman spectroscopy that allow for significant improvements in detection limits.

Resonance Raman Spectroscopy (RRS)

If the wavelength of the source is similar to the wavelength needed to move the species from its ground electronic state to its first electronic excited state (not the virtual excited state shown in Figure 18.1.2), then the lines associated with the symmetric fundamental vibrations increase in intensity by a factor of $10^2$ to $10^6$. The improvement in sensitivity results in a substantial reduction in detection limits, which may be as low as $10^{-8} \text{ M}$. The use of a tunable laser makes it possible to adjust the wavelength of light emitted by the source to maximize the intensity of scattering.

Surface-Enhanced Raman Spectroscopy (SERS)

For reasons that are not fully understood, the intensity of Raman scattering lines is enhanced when the scattering species is adsorbed to the surface of colloidal particles of metals such as Ag, Au, or Cu, or to the surface of etched metals. The phenomenon is not limited to just a few lines—as is the case for RRS—and results in a $10^3$ to $10^6$ improvement in the intensity of scattering. If a tunable laser is used for the source, allowing for both RRS and SERS, detection limits of $10^{-9} \text{ M}$ to $10^{-12} \text{ M}$ are possible.
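Returning briefly to the quantitative applications described above: because the Raman band intensity is directly proportional to concentration, a quantitative analysis amounts to fitting and then inverting a linear calibration curve. The following sketch uses hypothetical data (not taken from this chapter) and Python's NumPy library to illustrate the workflow; it is an illustration of the idea, not a procedure from the original text.

```python
# A minimal sketch of a quantitative Raman analysis: the band intensity is a
# linear function of concentration, so a calibration line is fit to a set of
# standards and then inverted for a sample. All values here are hypothetical.
import numpy as np

conc = np.array([0.10, 0.20, 0.40, 0.60, 0.80])        # standard concentrations (M)
intensity = np.array([125., 252., 511., 748., 1002.])  # band intensities (arbitrary units)

slope, intercept = np.polyfit(conc, intensity, 1)      # fit intensity = slope*C + intercept
sample_intensity = 630.
sample_conc = (sample_intensity - intercept) / slope
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}")
print(f"sample concentration = {sample_conc:.3f} M")
```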
• 19.1: Theory of Nuclear Magnetic Resonance. As is the case with other forms of optical spectroscopy, the signal in nuclear magnetic resonance (NMR) spectroscopy arises from a difference in the energy levels occupied by the nuclei in the analyte. In this section we develop a general theory of nuclear magnetic resonance spectroscopy that draws on quantum mechanics and on classical mechanics to explain these energy levels.
• 19.2: Environmental Effects on NMR Spectra. In this section we will consider why the location of a nucleus within a molecule—what we call its environment—might affect the frequency at which it absorbs and why a particular absorption line might appear as a cluster of individual peaks instead of as a single peak.
• 19.3: NMR Spectrometers. Earlier we noted that there are two basic experimental designs for recording an NMR spectrum. One is a continuous-wave instrument in which we scan through the range of frequencies over which the nucleus of interest absorbs, exciting the nuclei sequentially. Most instruments use pulses of RF radiation to excite all nuclei at the same time and then use a Fourier transform to recover the signals from the individual nuclei. Our attention in this chapter is limited to instruments for FT-NMR.
• 19.4: Applications of Proton NMR. Proton NMR finds use for both qualitative analyses and quantitative analyses; in this section we briefly consider each of these areas.
• 19.5: Carbon-13 NMR. In this section we consider 13C NMR, which was slower to develop than proton NMR because it is less sensitive.
• 19.6: Two-Dimensional Fourier Transform NMR. The NMR spectra considered up to this point are shown in one dimension (1D), which is the frequency absorbed by the analyte's nuclei expressed in ppm. In addition to 1D experiments, there are a host of 2D experiments in which we apply a sequence of two or more pulses, recording the resulting FID after applying the last pulse. In this section we will consider one example of a 2D NMR experiment in some detail: 1H – 1H correlation spectroscopy, or 1H – 1H COSY.

19: Nuclear Magnetic Resonance Spectroscopy

As is the case with other forms of optical spectroscopy, the signal in nuclear magnetic resonance (NMR) spectroscopy arises from a difference in the energy levels occupied by the nuclei in the analyte. In this section we develop a general theory of nuclear magnetic resonance spectroscopy that draws on quantum mechanics and on classical mechanics to explain these energy levels.

Quantum Mechanical Description of NMR

The quantum mechanical description of an electron is given by four quantum numbers: the principal quantum number, $n$, the angular momentum quantum number, $l$, the magnetic quantum number, $m_l$, and the spin quantum number, $m_s$. The first three of these quantum numbers tell us something about where the electron is relative to the nucleus and something about the electron's energy. The last of these four quantum numbers, the spin quantum number, tells us something about the ability of an electron to interact with an applied magnetic field. An electron has possible spins of +1/2 or of –1/2, which we often refer to as spin up, using an upwards arrow, $\uparrow$, to represent it, or as spin down, using a downwards arrow, $\downarrow$, to represent it. A nucleus, like an electron, carries a charge and has a spin quantum number. The overall spin, $I$, of a nucleus is a function of the number of protons and neutrons that make up the nucleus.
Here are three simple rules for nuclear spin states:

• If the number of neutrons and the number of protons are both even numbers, then the nucleus does not have a spin; thus, 12C, with six protons and six neutrons, has no overall spin and $I = 0$.
• If the number of neutrons plus the number of protons is an odd number, then the nucleus has a half-integer spin, such as 1/2 or 3/2; thus, 13C, with six protons and seven neutrons, has an overall spin of $I = 1/2$; this also is true for 1H.
• If the number of neutrons and the number of protons are both odd numbers, then the nucleus has an integer spin, such as 1 or 2; thus, 2H, with one proton and one neutron, has an overall spin of $I = 1$.

Note

Predicting that 13C has a spin of $I = 1/2$, but that 127I has a spin of $I = 3/2$ and that 17O has a spin of $I = 5/2$ is not trivial. A periodic table that provides spin states for elements is available here.

The total number of spin states—that is, the total number of possible orientations of the spin—is equal to $(2 \times I) + 1$. To be NMR active, a nucleus must have at least two spin states so that a change in spin states, and, therefore, a change in energy, is possible; thus, 12C, for which there are $(2 \times 0) + 1 = 1$ spin states, is NMR inactive, but 13C, for which there are $(2 \times 1/2) + 1 = 2$ spin states with values of $m = +1/2$ and of $m = -1/2$, is NMR active, as is 2H for which there are $(2 \times 1) + 1 = 3$ spin states with values of $m = +1$, $m = 0$, and $m = -1$. As our interest in this chapter is in the NMR spectra for 1H and for 13C, we will limit ourselves to considering $I = 1/2$ and spin states of $m = +1/2$ and of $m = -1/2$.

Energy Levels in an Applied Magnetic Field

Suppose we have a large population of 1H atoms. In the absence of an applied magnetic field the atoms are divided equally between their possible spin states: 50% of the atoms have a spin of +1/2 and 50% of the atoms have a spin of –1/2. Both spin states have the same energy, as is the case on the left side of Figure $1$, and neither absorption nor emission occurs. In the presence of an applied magnetic field, as on the right side of Figure $1$, the nuclei are either aligned with the magnetic field with spins of $m = +1/2$, or aligned against the magnetic field with spins of $m = -1/2$. The energies in these two spin states, $E_\text{lower}$ and $E_\text{upper}$, are given by the equations

$E_\text{lower} = - \frac{\gamma h}{4 \pi}B_0 \label{nmr1}$

$E_\text{upper} = + \frac{\gamma h}{4 \pi}B_0 \label{nmr2}$

where $\gamma$ is the magnetogyric ratio for the nucleus, $h$ is Planck's constant, and $B_0$ is the strength of the applied magnetic field. The difference in energy, $\Delta E$, between the two states is

$\Delta E = E_\text{upper} - E_\text{lower} = + \frac{\gamma h}{4 \pi}B_0 - \left( - \frac{\gamma h}{4 \pi}B_0 \right) = \frac{\gamma h}{2 \pi}B_0 \label{nmr3}$

Substituting Equation \ref{nmr3} into the more familiar equation $\Delta E = h \nu$ gives the frequency, $\nu$, of electromagnetic radiation needed to effect a change in spin state as

$\nu = \frac{\gamma B_0}{2 \pi} \label{nmr4}$

This is called the Larmor frequency for the nucleus.
For example, if the magnet has a field strength of 11.74 Tesla, then the frequency needed to effect a change in spin state for 1H, for which $\gamma$ is $2.68 \times 10^8 \text{ rad T}^{-1} \text{ s}^{-1}$, is

$\nu = \frac{(2.68 \times 10^8 \text{ rad T}^{-1} \text{ s}^{-1})(11.74 \text{ T})}{2 \pi} = 5.01 \times 10^8 \text{ s}^{-1} \nonumber$

or 500 MHz, which is in the radio frequency (RF) range of the electromagnetic spectrum. This is the Larmor frequency for 1H.

Population of Spin States

The relative population of the upper spin state, $N_\text{upper}$, and of the lower spin state, $N_\text{lower}$, is given by the Boltzmann equation

$\frac{N_\text{upper}}{N_\text{lower}} = e^{- \Delta E/k T} \label{nmr5}$

where $k$ is Boltzmann's constant ($1.38066 \times 10^{-23} \text{ J/K}$) and $T$ is the temperature in Kelvin. Substituting in Equation \ref{nmr3} for $\Delta E$ gives this ratio as

$\frac{N_\text{upper}}{N_\text{lower}} = e^{-\gamma h B_0/2 \pi k T} \label{nmr6}$

If we place a population of 1H atoms in a magnetic field with a strength of 11.74 Tesla, the ratio $\frac{N_\text{upper}}{N_\text{lower}}$ at 298 K is

$\frac{N_\text{upper}}{N_\text{lower}} = e^{-\frac{(2.68 \times 10^{8} \text{ rad T}^{-1} \text{ s}^{-1})(6.626 \times 10^{-34} \text{ Js})(11.74 \text{ T})}{(2 \pi)(1.38 \times 10^{-23} \text{ JK}^{-1})(298 \text{ K})}} = 0.99992 \nonumber$

If this ratio were exactly 1:1, then the probabilities of absorption and emission would be equal and there would be no net signal. In this case, the difference in the populations is on the order of 8 per 100,000, or 80 per 1,000,000, or 80 ppm. The small difference in the two populations means that NMR is less sensitive than many other spectroscopic methods.
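The two calculations above are easy to reproduce numerically. The following sketch, a simple Python illustration that uses only the constants given in the text, returns the Larmor frequency for 1H at 11.74 T and the Boltzmann ratio of the spin-state populations at 298 K.

```python
# A short sketch that reproduces the two calculations above: the Larmor
# frequency for 1H in an 11.74 T field and the Boltzmann ratio of the two
# spin-state populations at 298 K. Only constants given in the text are used.
import math

gamma_1H = 2.68e8       # magnetogyric ratio for 1H (rad T^-1 s^-1)
B0 = 11.74              # applied field strength (T)
h = 6.626e-34           # Planck's constant (J s)
k = 1.38066e-23         # Boltzmann's constant (J/K)
T = 298.0               # temperature (K)

larmor_hz = gamma_1H * B0 / (2 * math.pi)      # Equation (nmr4)
delta_E = gamma_1H * h * B0 / (2 * math.pi)    # Equation (nmr3)
ratio = math.exp(-delta_E / (k * T))           # Equation (nmr5)

print(f"Larmor frequency: {larmor_hz:.3g} Hz")   # 5.01e+08 Hz, or about 500 MHz
print(f"N_upper/N_lower at 298 K: {ratio:.5f}")  # about 0.99992
```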
Classical Description of NMR

To understand the classical description of an NMR experiment we draw upon Figure $2$. For simplicity, let's assume that in the population of nuclei available to us, there is an excess of just one nucleus with a spin state of +1/2. In Figure $2a$, we see that the spin of this nucleus is not perfectly aligned with the applied magnetic field, $B_0$, which is aligned with the z-axis; instead the nucleus precesses around the z-axis at an angle of theta, $\Theta$. As a result, the net magnetic moment along the z-axis, $\mu_z$, is less than the magnetic moment, µ, of the nucleus. The precession occurs with an angular velocity, $\omega_0$, of $\gamma B_0$. If we apply a source of radio frequency (RF) electromagnetic radiation along the x-axis such that its magnetic field component, $B_1$, is perpendicular to $B_0$, then it will generate its own angular velocity in the xy-plane. When the angular velocity of the precessing nucleus matches the angular velocity of $B_1$, absorption takes place and the spin flips, as seen in Figure $2b$.

Relaxation

When the magnetic field $B_1$ is removed, the nucleus returns to its original state, as seen in Figure $2a$, a process called relaxation. In the absence of relaxation, the system is saturated with equal populations of the two spin states and absorption approaches zero. This process of relaxation has two separate mechanisms: spin-lattice relaxation and spin-spin relaxation. In spin-lattice relaxation the nucleus in its higher energy spin state, Figure $2b$, returns to its lower energy spin state, Figure $2a$, by transferring energy to other species present in the sample (the lattice in spin-lattice). Spin-lattice relaxation is characterized by first-order exponential decay with a characteristic relaxation time of $T_1$ that is a measure of the average time the nucleus remains in its higher energy spin state. Smaller values for $T_1$ result in more efficient relaxation.

If two nuclei of the same type, but in different spin states, are in close proximity to each other, they can trade places, with the nucleus in the higher energy spin state giving up its energy to the nucleus in the lower energy spin state. The result is a decrease in the average lifetime of an excited state. This is called spin-spin relaxation and it is characterized by a relaxation time of $T_2$.

Continuous Wave NMR vs. Fourier Transform NMR

In Chapter 16 we learned that we can record an infrared spectrum by using a scanning monochromator to pass, sequentially, different wavelengths of IR radiation through a sample, obtaining a spectrum of absorbance as a function of wavelength. We also learned that we can obtain the same spectrum by passing all wavelengths of IR radiation through the sample at the same time using an interferometer, and then use a Fourier transform to convert the resulting interferogram into a spectrum of absorbance as a function of wavelength. Here we consider their equivalents for NMR spectroscopy.

Continuous Wave NMR

If we scan $B_1$ while holding $B_0$ constant—or scan $B_0$ while holding $B_1$ constant—then we can identify the Larmor frequencies where a particular nucleus absorbs. The result is an NMR spectrum that shows the intensity of absorption as a function of the frequency at which that absorption takes place. Because we record the spectrum by scanning through a continuum of frequencies, the method is known as continuous wave NMR. Figure $2$ provides a useful visualization for this experiment.

Fourier Transform NMR

In Fourier transform NMR, the magnetic field $B_1$ is applied as a brief pulse of radio frequency (RF) electromagnetic radiation centered at a frequency appropriate for the nucleus of interest and for the strength of the primary magnetic field, $B_0$. The pulse typically is 1-10 µs in length and applied in the xy-plane. From the Heisenberg uncertainty principle, a short pulse of $\Delta t$ results in a broad range of frequencies as $\Delta f = 1/\Delta t$; this ensures that the pulse spans a sufficient range of frequencies such that the nucleus of interest to us will absorb energy and enter into an excited state. Before we apply the pulse, the nuclei are aligned with or against the applied magnetic field, $B_0$, some with a spin of +1/2 and others with a spin of –1/2. As we learned above, there is a slight excess of nuclei with spins of +1/2, which we can represent as a single vector that shows their combined magnetic moments along the z-axis, $\mu_z$, as shown in Figure $3a$. When we apply a pulse of RF electromagnetic radiation with a magnetic field strength of $B_1$, the spin states of the nuclei tip away from the z-axis by an angle that depends on the nucleus's magnetogyric ratio, $\gamma$, the value of $B_1$, and the length of the pulse. If, for example, a pulse of 5 µs tips the magnetic vector by 45° (Figure $3b$), then a pulse of 10 µs will tip the magnetic vector by 90° (Figure $3c$), so that it now lies completely within the xy-plane. At the end of the pulse, the nuclei begin to relax back to their original state.
Figure $4$ shows that this relaxation occurs both in the xy-plane (spin-spin relaxation) and along the z-axis (spin-lattice relaxation). If we were to trace the path of the magnetic vector with time, we would see that it follows a spiral-like motion as its contribution in the xy-plane decreases and its contribution along the z-axis increases. We measure this signal—called the free induction decay, or FID—during this period of relaxation. The FID for a system that consists of only one type of nucleus is the simple exponentially damped oscillating signal in Figure $5a$. The Fourier transform of this simple FID gives the spectrum in Figure $5b$ that has a single peak. A sample with more than one type of nucleus yields a more complex FID pattern, such as that in Figure $5c$, and a more complex spectrum, such as the two peaks in Figure $5d$. Note that, as we learned in an earlier treatment of the Fourier transform in Chapter 7, a broader peak in the frequency domain results in a faster decay in the time domain. Figure $6$ shows a typical pulse sequence highlighting the total cycle time and its component parts: the pulse width, the acquisition time during which the FID is recorded, and a recycle delay before applying the next pulse and beginning the next cycle.
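The connection between the FID and the spectrum in Figure $5$ is easy to demonstrate numerically. The sketch below, a Python illustration using NumPy with arbitrary values for the frequency, the decay time, and the sampling parameters, builds a synthetic FID for a single type of nucleus, takes its Fourier transform, and shows that a faster decay in the time domain gives a broader peak in the frequency domain.

```python
# A minimal sketch of the relationship shown in Figure 5: an exponentially
# damped oscillation (a synthetic FID for a single type of nucleus) transforms
# into a single peak, and a faster decay gives a broader peak. The frequency,
# decay times, and sampling parameters are arbitrary choices for illustration.
import numpy as np

dt = 1e-4                      # sampling interval (s)
t = np.arange(0, 1.0, dt)      # 1 s acquisition time

def fid(frequency_hz, decay_s):
    """Synthetic free induction decay: an exponentially damped cosine."""
    return np.cos(2 * np.pi * frequency_hz * t) * np.exp(-t / decay_s)

freqs = np.fft.rfftfreq(t.size, d=dt)
for decay in (0.20, 0.02):                           # slower vs. faster decay
    spectrum = np.abs(np.fft.rfft(fid(100.0, decay)))
    peak = freqs[np.argmax(spectrum)]
    # rough peak width: number of frequency points above half the maximum
    width = np.sum(spectrum > spectrum.max() / 2) * (freqs[1] - freqs[0])
    print(f"decay = {decay:4.2f} s -> peak at {peak:.0f} Hz, width ~ {width:.0f} Hz")
```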
In the previous section we showed that there is a relationship between the Larmor frequency for a nucleus, $\nu$, its magnetogyric ratio, $\gamma$, and the primary applied magnetic field strength, $B_0$

$\nu = \frac{\gamma B_0}{2 \pi} \label{env1}$

and we used this equation to show that the Larmor frequency for the 1H nucleus in a magnetic field of $B_0 = 11.74 \text{ T}$ is 500 MHz. If this is the only thing that determines the frequency where absorption takes place, then all compounds that contain hydrogens will yield a 1H NMR spectrum with a single peak at the same frequency. If all spectra are identical, then NMR provides little in the way of useful information. The NMR spectrum for propane (CH3–CH2–CH3) in Figure $1$ shows two clusters of peaks that give us confidence in the utility of NMR. In this case, it seems likely that the cluster of peaks between 250 Hz and 300 Hz, which has a greater total intensity, is for the six hydrogens in the two methyl groups (–CH3) and that the cluster of peaks around 400 Hz is due to the methylene group (–CH2–). In this section we will consider why the location of a nucleus within a molecule—what we call its environment—might affect the frequency at which it absorbs and why a particular absorption line might appear as a cluster of individual peaks instead of as a single peak.

The NMR Spectrum's Scale

Before we consider how a nucleus's environment affects the frequencies at which it absorbs, let's take a moment to become familiar with the scale used to plot an NMR spectrum. The label on the x-axis of the NMR spectrum for propane in Figure $1$ raises several questions that we will answer here.

Why is the Scale Relative?

From Equation \ref{env1} we see that the frequency at which a nucleus absorbs is a function of the magnet's field strength, $B_0$. This means the frequency of a peak in an NMR spectrum depends on the value of $B_0$. One complication is that instruments with identical nominal values for $B_0$ likely will have slightly different actual values, which leads to small variations in the frequency at which a particular hydrogen absorbs on different instruments. We can overcome this problem by referencing a hydrogen's measured frequency to a reference compound that is set to a frequency of 0. The difference between the two frequencies should be the same on different instruments. For example, the most intense peak in the NMR spectrum for propane, Figure $1$, has a frequency of 269.57 Hz when measured on an NMR with a nominal field strength of 300 MHz, which means that its frequency is 269.57 Hz greater than the reference, which is identified as TMS.

What is TMS?

The reference compound is tetramethylsilane, TMS, which has the chemical formula of (CH3)4Si in which four methyl groups are in a tetrahedral arrangement about the central silicon. TMS has the advantage of having all of its hydrogens in the same environment, which yields a single peak. Its hydrogen atoms also absorb at a low frequency that is well removed from the frequency at which most other hydrogen atoms absorb, which makes it easy to identify its peak in the NMR spectrum.

How Can We Create a Universal Scale?

An additional complication with the spectrum in Figure $1$ is that the frequency at which a particular hydrogen absorbs is different when using a 60 MHz NMR than it is when using a 300 MHz NMR, a consequence of Equation \ref{env1}.
To create a single scale that is independent of $B_0$ we divide the peak's frequency, relative to TMS, by the spectrometer's operating frequency (which is proportional to $B_0$), expressing both in Hz, and then report the result on a parts-per-million scale by multiplying by $10^6$. For example, the most intense peak in the NMR spectrum for propane, Figure $1$, has a frequency of 269.57 Hz; the NMR on which the spectrum was recorded had a field strength of 300 MHz. On a parts-per-million scale, which we identify as delta, $\delta$, the peak appears at

$\delta = \frac{269.57 \text{ Hz}}{300 \times 10^6 \text{ Hz}} \times 10^6 = 0.899 \text{ ppm} \nonumber$

If we record the spectrum of propane on a 60 MHz instrument, then we expect this peak to appear at 0.899 ppm, or a frequency of

$\nu = \frac{0.899 \text{ ppm} \times (60 \times 10^6 \text{ Hz})}{10^6} = 53.9 \text{ Hz} \nonumber$

relative to TMS. Most hydrogens have values of $\delta$ between 1 and 13. Figure $2$ shows the 1H NMR for propane using a ppm scale. The right side of the ppm scale is described as being upfield, with absorption occurring at a lower frequency, and with a smaller difference in energy, $\Delta E$, between the ground state and the excited state. The left side of the ppm scale is described as being downfield, with absorption occurring at a higher frequency, and with a greater difference in energy, $\Delta E$, between the ground state and the excited state.

Types of Environmental Effects

The NMR spectrum for propane in Figure $2$ shows two important features: the peaks for the two types of hydrogen in propane are shifted downfield relative to the reference and the methylene hydrogens are shifted further downfield than the methyl hydrogens. Both groups appear as clusters of peaks instead of as single peaks. In this section we consider the source of these two phenomena.

Chemical Shifts

In the presence of a magnetic field, the electrons in a molecule circulate, generating a secondary magnetic field, $B_e$, that usually, but not always, opposes the primary applied magnetic field, $B_\text{appl}$. The result is that the nucleus is partially shielded by the electrons such that the field it experiences, $B_0$, usually is smaller than the applied field and

$B_0 = B_\text{appl} - B_e \label{env2}$

The greater the shielding, the smaller the value of $B_0$ and the further to the right the peak appears in the NMR spectrum. For example, in the NMR spectrum for propane in Figure $2$ the cluster of peaks for the –CH3 hydrogens centered at 0.899 ppm shows greater shielding than the cluster of peaks for the –CH2– hydrogens that is centered at 1.337 ppm. Chemical shifts are useful for determining structural information for molecules. A few examples are listed in the following table and more extensive tables are available here. Note that the ranges of chemical shifts for the methyl and the methylene groups encompass the values for propane in Figure $2$.

Table $1$.
$^{1}\text{H}$ Shifts in ppm

type of hydrogen                     example                    range of chemical shifts (ppm)
primary alkyl                        $\ce{R-CH3}$               0.7 – 1.3
secondary alkyl                      $\ce{R-CH2–R}$             1.2 – 1.6
tertiary alkyl                       $\ce{R3CH}$                1.4 – 1.8
methyl ketone                        $\ce{R–C(=O)–CH3}$         2.0 – 2.4
aromatic methyl                      $\ce{C6H5–CH3}$            2.4 – 2.7
alkynyl                              $\ce{R–C#C–H}$             2.5 – 3.0
alkyl halide (X = F, Cl, Br, I)      $\ce{R2X–CH}$              2.5 – 4.0
alcohol                              $\ce{R3–C–OH}$             2.5 – 5.0
vinylic                              $\ce{R2–C=C(–R)–H}$        4.5 – 6.5
aryl                                 $\ce{C6H5–H}$              6.5 – 8.0
aldehyde                             $\ce{R–C(=O)–H}$           9.7 – 10.0
carboxylic acid                      $\ce{R–C(=O)–OH}$          11.0 – 12.0

Spin-Spin Coupling

Chemical shifts are the result of shielding from the magnetic field associated with a molecule's circulating electrons. The splitting of a peak into a multiplet of peaks is the result of the shielding of one nucleus by the nuclei on adjacent atoms, and is called spin-spin coupling. Consider the NMR for propane in Figure $2$, which consists of two clusters of peaks. The six hydrogens in the two methyl groups are sufficiently close to the two hydrogens in the methylene group that the spins of the methylene hydrogens can affect the frequency at which the methyl hydrogens absorb. Figure $3a$ shows how this works. Each of the two methylene hydrogens has a spin; those spins can both be aligned with the magnetic field, $B_0$, both be aligned against $B_0$, or adopt one of two configurations in which one is aligned with $B_0$ and one is aligned against $B_0$, as seen by the arrows. When the two spins are aligned with $B_0$, the frequency at which the methyl hydrogens absorb is shifted downfield, and when the two spins are aligned against $B_0$, the frequency at which the methyl hydrogens absorb is shifted upfield; in the remaining two cases, there is no change in the frequency at which the methyl hydrogens absorb. The result, as seen in Figure $3a$, is a triplet of peaks in a 1:2:1 intensity ratio. The analysis for the effect of the six methyl hydrogens on the two methylene hydrogens is a bit more complex, but works in the same way. Figure $3b$, for example, shows that there are 15 ways to arrange the spins of the six methyl hydrogens such that two are spin down and four are spin up. Figure $3c$ shows the resulting NMR spectrum, which is a set of seven peaks in a 1:6:15:20:15:6:1 intensity ratio. Figure $4$ provides the splitting pattern observed for nuclei with $I = 1/2$, such as 1H. The pattern is defined by the coefficients of a binomial distribution—asking how many different ways you can get X outcomes in Y attempts is at the heart of a binomial distribution—and is easy to represent using Pascal's triangle, which shows us that for six equivalent nuclei we expect to find seven peaks with relative peak areas (or other measure of the signal) of 1:6:15:20:15:6:1. Note that the first and the last entry in any row is 1 and that all other entries in a row, as illustrated for the third entry in the seventh row, are the sum of the two entries in the row immediately above. The pattern also is known as the $N+1$ rule as the $N$ equivalent hydrogens will split the peak for an adjacent hydrogen into $N + 1$ peaks. Figure $5$ compares the experimental NMR for propane with its simulated spectrum based on spin-spin splitting and the 2:6 ratio of methylene hydrogens relative to methyl hydrogens. The overall agreement between the two spectra is pretty good. The splitting of the individual peaks is designated by the coupling constant, J, which is shown in Figure $5$ for both the experimental and the calculated spectra.
Note that the coupling constant is the same whether we are considering the effect of the methyl hydrogens on the methylene hydrogens, or the effect of the methylene hydrogens on the methyl hydrogens. Values of the coupling constant become smaller the greater the distance between the nuclei. The treatment of spin-spin coupling above works well if the difference in the chemical shifts for the two nuclei is significantly greater than the magnitude of their coupling constant. When this is not true, the splitting patterns can become much more complex and often are difficult to interpret. There are a variety of ways to simplify spectra, one of which, decoupling, is outlined in Figure $6$. The original spectrum (top) shows two doublets, suggesting that we have two individual nuclei that are coupled to each other. If we irradiate the nucleus on the right at its frequency, we can saturate its ground and excited states such that it ceases to absorb. As a result, the nucleus on the left no longer shows evidence of spin-spin coupling to the nucleus on the right (middle) and appears as a singlet. When we turn off the decoupler (bottom) the spin-spin coupling between the two nuclei returns more quickly than relaxation returns the signal for the nucleus on the right.
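The N+1 rule and the binomial coefficients discussed above lend themselves to a short sketch. The Python function below, an illustration rather than part of the original text, returns the relative intensities for a nucleus coupled to N equivalent spin-1/2 neighbors, reproducing the 1:2:1 and 1:6:15:20:15:6:1 patterns for propane.

```python
# A short sketch of the N+1 rule described above: for N equivalent I = 1/2
# nuclei on adjacent atoms, the multiplet has N+1 lines with relative
# intensities given by the binomial coefficients (one row of Pascal's triangle).
from math import comb

def multiplet(n_equivalent_neighbors):
    """Relative intensities for coupling to n equivalent spin-1/2 nuclei."""
    n = n_equivalent_neighbors
    return [comb(n, k) for k in range(n + 1)]

print(multiplet(2))   # [1, 2, 1]                -> the methyl triplet in propane
print(multiplet(6))   # [1, 6, 15, 20, 15, 6, 1] -> the methylene septet
```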
Earlier in this chapter, we noted that there are two basic experimental designs for recording an NMR spectrum. One is a continuous-wave instrument in which the range of frequencies over which the nucleus of interest absorbs is scanned linearly, exciting the different nuclei sequentially. Most instruments, however, use pulses of RF radiation to excite all nuclei at the same time and then use a Fourier transform to recover the signals from the individual nuclei. Our attention in this chapter is limited to instruments for FT-NMR.

Components of Fourier Transform Spectrometers

Figure $1$ includes a photograph of a 400 MHz NMR and a cut-away illustration of the instrument; together, these show the key components of an FT-NMR: a magnet that provides the applied magnetic field, $B_0$, a nucleus-dependent probe that provides the radio-frequency signal that yields the magnetic field, $B_1$, and a way to insert the sample into the instrument. The NMR in the photograph also is equipped with a sample changer that allows the user to load 30 or more samples that are analyzed sequentially.

Magnets

The NMR in Figure $1$ is described as having a frequency, $\nu$, of 400 MHz. The relationship between frequency and the magnet's field strength, $B_0$, is given by the equation

$\nu = \frac{\gamma B_0}{2 \pi} \label{compent1}$

where $\gamma$ is the magnetogyric ratio for the nucleus. An NMR's frequency is defined in terms of the 1H nucleus; thus, a 400 MHz NMR has a magnet with a field strength of

$B_0 = \frac{(2 \pi) \times \nu}{\gamma} = \frac{(2 \pi) \times (400 \times 10^6 \text{ s}^{-1})}{2.68 \times 10^{8} \text{ rad T}^{-1} \text{ s}^{-1}} = 9.4 \text{ T} \nonumber$

Early instruments used a permanent magnet and were limited to field strengths of 0.7, 1.4, and 2.1 T, or 30, 60, and 90 MHz. As higher frequencies provide for greater sensitivity and resolution, modern instruments use a tightly wrapped coil of wire—typically a niobium/tin alloy or a niobium/titanium wire—that becomes superconducting when cooled to the temperature of liquid He (4.2 K). The result is a magnetic field strength of as much as 21 T or 900 MHz for 1H NMR. The magnetic coil is held within a reservoir of liquid He, which, itself, is held within a reservoir of liquid N2. To be useful, the magnetic field must remain stable—that is, it must not drift—and it must be homogeneous throughout the sample. These are accomplished by using a reference to lock the magnetic field in place and by shimming.

Locking the Magnetic Field

Samples for NMR are prepared using a solvent in which the protons are replaced with deuterium. For example, instead of using chloroform, CHCl3, as a solvent, we use deuterated chloroform, CDCl3, where D is equivalent to 2H. This has the benefit of providing a solvent that will not contribute to the signals in the NMR spectrum. It also has the benefit that 2H has a spin of $I = 1$ and, therefore, its own Larmor frequency. By monitoring the frequency at which 2H absorbs, the instrument can use a feedback loop to maintain its value by adjusting the magnet's field strength.

Shimming

A magnetic field that is not homogeneous is like a table with four legs, one of which is just a bit shorter than the others. To balance the table, we place a small wedge, or shim, under the shorter leg. When a magnetic field is not homogeneous, small, localized adjustments are made to the magnetic field using a set of shimming coils arranged around the sample.
Shimming can be accomplished by the operator by monitoring the quality of the signal for a particular nucleus; however, most instruments use an algorithm that allows the instrument to shim itself.

The Sample Inlet and the Sample Probe

The center of the instrument, which runs from the sample input at the top to the sample probe at the bottom, is open to the laboratory environment and is at room temperature. The sample is placed in a cylindrical tube (Figure $2a$) that is made from thin-walled borosilicate glass and is 180 mm long and 5 mm in diameter. The tube is then inserted into a Teflon sleeve—called a spinner—as shown in Figure $2b$, which is designed both to situate the sample at the proper depth within the sample probe and to spin the sample about its long axis. This spinning ensures that the sample averages out any inhomogeneities in the magnetic field not resolved by shimming. The sample probe contains the coils needed to excite the sample and to detect the NMR signal as the excited states undergo relaxation. Figure $3$ shows two configurations for this; in both configurations, the same coil is used for both excitation and detection. In the design on the left, which uses a permanent magnet, the applied magnetic field, $B_0$, is oriented horizontally across the sample's diameter and the radio frequency electromagnetic radiation and its field, $B_1$, is oriented vertically using a spiral coil. In the design on the right, which is used with a superconducting magnet, the applied magnetic field, $B_0$, is oriented vertically and the pulse of radio frequency electromagnetic radiation and its field, $B_1$, is oriented horizontally using a saddle coil.

Data Processing

In Chapter 19.1 we used the following figure to describe a pulse NMR experiment. Following a pulse that is applied for 1–10 µs, the free-induction decay, FID, is recorded for a period of time that may range from as little as 0.1 seconds to as long as 10 seconds, depending on the nucleus being probed. The FID is an analog signal in the form of a voltage, typically in the µV range. This analog signal must be converted into a digital signal for data processing, which is called an analog-to-digital conversion, ADC. Two important considerations arise here: how to ensure that the signal—more specifically, the location of the peaks in the NMR spectrum—is not distorted, and how to accomplish the ADC when the frequencies are on the order of hundreds of MHz.

Analog-to-Digital Conversion

An analog-to-digital converter maps the signal onto a limited number of possible values—expressed in binary notation—and is characterized by the number of available bits. A 2-bit converter, for example, is limited to $2^2 = 4$ possible binary values of 00, 01, 10, and 11 that correspond to the decimal numbers 0, 1, 2, and 3. Having only four possible values, of course, would distort the FID pattern in Figure $4$ from a smoothly varying oscillating signal into a series of steps. Using a converter with 16 bits allows for 65,536 unique digital values, a significant improvement. Another form of distortion occurs if we do not sample the FID with sufficient frequency. Consider, for example, the simple sine wave in Figure $5a$ that is shown as a solid line. If we sample this signal only five times over a period of less than four complete cycles, as shown by the five equally-spaced dots in Figure $5a$, then the apparent signal is that shown by the dashed line.
According to the Nyquist theorem, to determine accurately the frequency of a periodic signal, we must sample the signal at least twice during each cycle or period. Given an interval of $\Delta$ between samples, the following equation

$\Delta = \frac{1}{2 \nu_\text{max}} \label{adc1}$

defines the highest frequency, $\nu_\text{max}$, that we can monitor accurately. A sampling rate of six samples per period is more than sufficient to reproduce the real signal in Figure $5$. A peak with a frequency that is greater than $\nu_\text{max}$ is not absent from the spectrum; instead, it simply appears at a different location. For example, suppose we can monitor accurately any frequency within the window shown in Figure $6$ and that we only measure frequencies within this window. A peak with a frequency that is greater than what we can measure accurately by $\Delta \nu$ appears at an apparent frequency that is $\Delta \nu$ greater than the frequency window's lower limit. This is called folding.

Managing MHz Signals

The instrument in Figure $1$ is a 400 MHz NMR. A signal at this frequency is too high for an analog-to-digital converter to handle with accuracy. The frequency window of interest to us, however, is typically 10 ppm for 1H NMR (see Chapter 19.2 to review the NMR scale). For a 400 MHz NMR this corresponds to just 4000 Hz, with the useful range running from 400.000 MHz to 400.004 MHz. Subtracting the instrument's frequency of 400 MHz from the signal's frequency limits the latter to the range of 0–4000 Hz, a range that is easy for an ADC to handle.

Signal Integrators

Integrating to determine the area under the peaks provides a way to gain some quantitative information about the sample. Figure $7$ shows the integration of the NMR of propane first seen in Chapter 19.2. Integration of the peak for the two methyl groups gives a result of 1766 and integration of the peak for the methylene group gives a result of 710. The ratio of the two is

$\frac{1766}{710} = 2.5 \nonumber$

which is somewhat smaller than the expected 3:1 ratio.
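The folding behavior described earlier in this section is easy to demonstrate with a simulated signal. In the following sketch, a Python illustration with arbitrary frequencies that are not taken from the text, a sine wave below $\nu_\text{max}$ is recovered at its true frequency, but one above $\nu_\text{max}$ appears at a lower, folded frequency.

```python
# A minimal sketch of the sampling limit discussed above under the Nyquist
# theorem. With a sampling interval of delta, the highest frequency that can be
# monitored accurately is 1/(2*delta); a signal above that limit reappears at a
# lower, apparent frequency (folding). The frequencies here are arbitrary.
import numpy as np

delta = 1e-3                          # sampling interval (s)
nu_max = 1 / (2 * delta)              # 500 Hz for this interval
t = np.arange(0, 1.0, delta)

def apparent_frequency(true_hz):
    """Frequency of the strongest peak in the sampled signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * true_hz * t)))
    freqs = np.fft.rfftfreq(t.size, d=delta)
    return freqs[np.argmax(spectrum)]

print(f"nu_max = {nu_max:.0f} Hz")
print(apparent_frequency(200.0))   # below nu_max: recovered correctly (200 Hz)
print(apparent_frequency(700.0))   # above nu_max: folds back to 300 Hz
```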
Proton (1H) NMR finds use for both qualitative analyses and quantitative analyses; in this section we briefly consider each of these areas.

Identification of Compounds

Proton NMR is an essential tool for the qualitative analysis of organic, inorganic, and biochemical compounds. Figure $1$ provides a simple example that shows the relationship between structure and a 1H NMR's peaks. The spectra in this figure are for a set of four simple organic molecules, each of which has a chain of three carbons and an oxygen: 1-propanol, CH3CH2CH2OH, 2-propanol, CH3CH(OH)CH3, propanal, CH3CH2CHO, and propanoic acid, CH3CH2COOH. The first two of these molecules are alcohols, the third is an aldehyde, and the last is an acid. The main spectrum runs from 0–14 ppm, with insets showing the spectra over a narrower range of 0–5 ppm. Each of these molecules has a terminal –CH3 group that gives the most upfield peak in its spectrum, appearing between 0.94 and 1.20 ppm. Each of these molecules also has a hydrogen that either is bonded to the oxygen or is bonded to the same carbon as the oxygen. The hydrogens in the –OH groups of the two alcohols have similar shifts of 2.16 ppm and 2.26 ppm, but the aldehyde hydrogen in the –CHO group and the acid hydrogen in –COOH are shifted further downfield, appearing at 9.793 ppm and 11.73 ppm, respectively. The hydrogens in the two –CH2– groups of 1-propanol have very different shifts, with the one adjacent to the –OH group appearing more downfield at 3.582 ppm than the one next to the –CH3 group at 1.57 ppm. Not surprisingly, the –CH– hydrogen in 2-propanol, which is adjacent to the –OH group, appears at 4.008 ppm. Comparisons such as this make it possible to build tables of chemical shifts—see Table 19.2.1 in Chapter 19.2 for an example—that can help in determining the identity of the molecule giving rise to a particular NMR spectrum. As this receives extensive coverage in other courses, particularly courses in organic chemistry, we will not provide more detailed coverage here.

Quantitative Analysis

A quantitative analysis requires a method of standardization, which for NMR usually makes use of an internal standard. A good internal standard should have high purity and should have a relatively simple NMR spectrum with peaks that do not overlap with the analyte or other species present in the sample. If we are interested in only the relative concentrations of the analyte and the internal standard, then we can use the following formula

$\frac{M_a}{M_{is}} = \frac{I_a}{I_{is}} \times \frac{N_{is}}{N_a} \label{quant1}$

where $M$ is the molar concentration of the analyte or internal standard, $I$ is the intensity of the NMR peak for the analyte or internal standard, and $N$ is the number of nuclei giving rise to the NMR peak for the analyte and the internal standard. Even if we don't know the exact concentration of the internal standard, if we know that its concentration is the same in all samples, then we can determine the relative concentration of analyte in a collection of samples.
If we are interested in determining the absolute concentration of analyte in a sample, then we must know the absolute concentration of the internal standard; if we do, then Equation \ref{quant1} becomes

$M_a = \frac{I_a}{I_{is}} \times \frac{N_{is}}{N_a} \times M_{is} \label{quant2}$

To determine the purity of an analyte, $P_a$, in a sample, we can use the equation

$P_a = \frac{I_a}{I_{is}} \times \frac{N_{is}}{N_a} \times \frac{M_a}{M_{is}} \times \frac{W_{is}}{W_a} \times P_{is} \label{quant3}$

where $W$ is the weight of the internal standard or the sample that contains our analyte.
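The arithmetic behind Equation \ref{quant2} is straightforward to carry out. The sketch below, a Python illustration with hypothetical peak intensities and a hypothetical internal standard concentration, shows the calculation for a single sample.

```python
# A brief sketch of Equation (quant2) above: the analyte's concentration from
# the ratio of integrated peak intensities, the number of nuclei giving rise
# to each peak, and the known concentration of the internal standard. All of
# the numerical values below are hypothetical.
def analyte_concentration(I_a, I_is, N_a, N_is, M_is):
    """M_a = (I_a / I_is) x (N_is / N_a) x M_is"""
    return (I_a / I_is) * (N_is / N_a) * M_is

# e.g., an analyte peak from 2 equivalent protons, an internal standard peak
# from 9 equivalent protons, and a 0.050 M internal standard
M_a = analyte_concentration(I_a=1250., I_is=2050., N_a=2, N_is=9, M_is=0.050)
print(f"analyte concentration = {M_a:.3f} M")
```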
The relatively slow development of instrumentation for 13C NMR is the result of its limited sensitivity compared to 1H NMR. This difference in sensitivity is due to two key differences between the nuclei 1H and 13C: their relative abundances and their relative magnetogyric ratios. While 1H comprises 99% of all hydrogen, 13C accounts for just 1% of all carbon. The strength of an NMR signal also depends on the difference in energy, $\Delta E$, between the ground state and the excited state, which is a function of the magnetogyric ratio, $\gamma$

$\Delta E = h \nu = \frac{\gamma h B_0}{2 \pi} \label{carbon1}$

The greater the difference in energy, the greater the difference in the population between the ground and the excited states, and the greater the signal. The magnetogyric ratio, $\gamma$, for 1H is $4 \times$ greater than that for 13C. As a result of these two factors, 1H NMR is approximately $6400 \times$ more sensitive than 13C NMR. The development of magnets with higher field strengths and the capability for signal averaging (see Chapter 5 on signals and noise) when using Fourier transforms to gather and analyze data make 13C NMR feasible.

Figure $1$ shows the 13C NMR spectra for three related molecules: p-nitrophenol, o-nitrophenol, and m-nitrophenol. There are three things to make note of from this figure. First, each spectrum consists of a set of peaks, each of which is a singlet, suggesting that no spin-spin coupling is taking place. Second, the number of peaks in each spectrum is the same as the number of unique types of carbon—four unique carbons for p-nitrophenol and six each for m-nitrophenol and o-nitrophenol—which suggests that chemical shifts in 13C provide useful information about the environment of the carbon atoms and, therefore, the molecule's structure. And, third, unlike 1H, there is no relationship between the intensity of a 13C peak and the number of carbon atoms. This is particularly evident when comparing the intensity of the peaks for the carbon bonded to the –NO2 group and the carbon bonded to the –OH group, which are significantly less intense than the peaks for other carbons. We will consider each of these observations in the remainder of this section.

Proton Decoupling

In 13C NMR there is no coupling between adjacent carbon atoms because it is unlikely that both are 13C, the only isotope of carbon that is NMR active (the odds that two adjacent carbons are both 13C is $0.01 \times 0.01$, or $0.0001$, or 0.01%). Coupling does take place between 13C and 1H when the hydrogen atoms are attached to the carbon atom. Such coupling follows the same N+1 rule as in 1H NMR; thus, a quaternary carbon (R4C) appears as a singlet, a methine carbon (R3CH) appears as a doublet, a methylene carbon (R2CH2) appears as a triplet, and a methyl carbon (RCH3) appears as a quartet. Even with the extensive range of ppm values over which 13C peaks appear—chemical shifts for 13C spectra run from 250 – 0 ppm instead of 14 – 0 ppm for 1H spectra—a compound with many different types of carbon atoms, each with 1 – 3 hydrogen atoms, results in a complex spectrum. For this reason, 13C NMR spectra are acquired in a way that prevents coupling between 13C and 1H. This is called proton decoupling. The most common method of proton decoupling is to use a second RF generator to irradiate the sample with a broad band of RF signals that spans the range of frequencies for the protons.
As described earlier in Section 19.3, the effect is to saturate the proton's ground and excited states, which prevents the protons from absorbing energy and from coupling with each other and with the carbon atoms. The 13C spectra in Figure $1$ are examples of decoupled spectra.

Qualitative Applications of 13C NMR

Just as with 1H NMR spectra, tables of chemical shifts for 13C peaks aid in determining a molecule's structure. Table $1$ provides ranges of chemical shifts for different types of carbon atom. A set of tables is available here.

Table $1$. 13C Shifts in ppm

type of carbon atom        example                  range of chemical shifts (ppm)
primary alkyl              $\ce{R-CH3}$             10 – 30
secondary alkyl            $\ce{R-CH2–R}$           15 – 55
tertiary alkyl             $\ce{R3CH}$              20 – 60
quaternary alkyl           $\ce{R4C}$               30 – 40
alkynyl                    $\ce{R–C#C–H}$           65 – 90
alkenyl                    $\ce{R–C=C–H}$           100 – 150
aromatic                   $\ce{C6H6}$              110 – 170
ester                      $\ce{R-C(=O)-O-R}$       165 – 175
amide                      $\ce{R-C(=O)-N-R2}$      165 – 175
carboxylic acid            $\ce{R–C(=O)–OH}$        175 – 185
aldehyde                   $\ce{R–C(=O)–H}$         190 – 220
ketone                     $\ce{R-C(=O)-R}$         205 – 220
attached to iodine         $\ce{C-I}$               0 – 40
attached to bromine        $\ce{C-Br}$              25 – 65
attached to chlorine       $\ce{C-Cl}$              35 – 80
attached to oxygen         $\ce{C-O}$               40 – 80
attached to nitrogen       $\ce{C-N}$               40 – 60

Nuclear Overhauser Enhancement

The most intense peak in the 13C NMR spectrum for m-nitrophenol (see Figure $1$ above) is the carbon in the benzene ring labeled as position 4, with an intensity of 1000 (the intensity scale is normalized here to a maximum value of 1000, but for this section, let's take it as an absolute value). Because the spectrum was acquired with proton decoupling turned on, the peak for this carbon appears as a singlet. If we turn proton decoupling off, then we expect the peak to appear as a doublet as this carbon has one hydrogen attached to it. We might reasonably expect to find that each peak has an intensity of 500, giving a total intensity of 1000. The actual intensities of the peaks for this carbon, however, are smaller than expected. Put another way, when we turn proton decoupling on, the intensity of a 13C line increases more than expected, and the more hydrogens, the greater the effect. This is called nuclear Overhauser enhancement (NOE). NOE is the result of the relative populations of the ground and excited states. The technical details are more than we will consider here, but the extent of the total enhancement of the peak intensities is proportional to the ratio of the magnetogyric ratios of the irradiated nucleus (1H) and the observed nucleus (13C), which for a 1H decoupled 13C NMR results in a total enhancement of the intensity of approximately 200%. As magnetogyric ratios can be negative, as is the case for 15N, a decoupled spectrum can result in less intense peaks. One important consequence of NOE is that integrated peak areas are not proportional to the number of identical carbon atoms, which is a loss of information.

Note

Although our focus in this chapter is on 1H and 13C NMR, other nuclei, such as 31P, 19F, and 15N, are useful for the study of chemically and biochemically important molecules.
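As with 1H NMR, matching observed 13C peaks against the ranges in Table $1$ is a routine part of interpreting a spectrum. The following sketch, a Python illustration that uses a handful of the ranges from the table and hypothetical peak positions, shows one way to automate that lookup; because the ranges overlap, more than one type of carbon may match a given shift.

```python
# A small sketch showing how the chemical shift ranges in Table 1 can suggest
# the type of carbon responsible for a 13C peak. Only a few entries from the
# table are included, and because the ranges overlap the function returns
# every type that matches.
shift_ranges_ppm = {
    "primary alkyl":    (10, 30),
    "secondary alkyl":  (15, 55),
    "aromatic":         (110, 170),
    "carboxylic acid":  (175, 185),
    "ketone":           (205, 220),
}

def candidate_types(shift_ppm):
    return [name for name, (low, high) in shift_ranges_ppm.items()
            if low <= shift_ppm <= high]

for peak in (21.3, 128.5, 178.0):          # hypothetical 13C peaks
    print(f"{peak} ppm -> {candidate_types(peak)}")
```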
The 1H and 13C spectra up to this point are shown in one dimension (1D), which is the frequency absorbed by the analyte's nuclei expressed in ppm. These spectra were acquired by applying a brief RF pulse to the sample, recording the resulting FID, and then using a Fourier transform to obtain the NMR spectrum. In addition to 1D experiments, there are a host of 2D experiments in which we apply a sequence of two or more pulses, recording the resulting FID after applying the last pulse. In this section we will consider one example of a 2D NMR experiment in some detail: 1H – 1H correlation spectroscopy, or 1H – 1H COSY.

Figure $1$ shows the basic experimental details for the COSY experiment. The pulse train is shown in (a) and consists of a first pulse that is followed by a delay time, $t_1$, in which the nuclear spins are allowed to precess and relax. This is followed by the application of a second pulse and the measurement of the resulting FID during the acquisition period, $t_2$, that consists of $n_{t_2}$ individual data points. The COSY experiment consists of a sequence of $n_{t_1}$ individual pulse trains, each with a different $t_1$. The result (b) is a matrix with $n_{t_1}$ rows and $n_{t_2}$ columns. To process the data, each row in this matrix is Fourier transformed, the resulting $n_{t_1} \times n_{t_2}$ matrix is transposed to an $n_{t_2} \times n_{t_1}$ matrix, and each row of this new matrix is Fourier transformed again to give (c) an $n_{t_2} \times n_{t_1}$ matrix that shows the intensity of the signal for all possible combinations of applied frequencies.

Figure $2$ shows the 1H – 1H COSY spectrum for ethyl acetate. Instead of just annotating the two axes with numerical values of the frequencies, each axis displays the 1D 1H NMR spectrum for ethyl acetate in ppm. The points that fall on the diagonal line are just the three frequencies where ethyl acetate has 1H peaks. Of more interest are the points that fall on either side of the diagonal line—these are called cross peaks—as these show pairs of hydrogens that are coupled (or correlated) to each other. The cross peaks in the COSY spectrum are symmetrical about the diagonal line and show the correlation between the hydrogens on the methyl carbon and the methylene carbon that are adjacent to each other (the ethyl part of ethyl acetate). The remaining methyl group is not coupled to the other hydrogens in the ethyl acetate, so there is no cross peak at the intersection of $\delta = 2.038 \text{ ppm}$ and $\delta = 1.260 \text{ ppm}$. The information about coupling from cross peaks assists in interpreting complex 1H NMR spectra.

COSY is one example of a homonuclear 2D NMR experiment because it examines coupling between identical nuclei, such as 1H in Figure $2$. There are many other 2D NMR experiments, each of which uses a sequence of pulses—some that use two pulses, as in COSY, and others that use three or more pulses—and a data analysis algorithm. Table $1$ provides details on some of these methods.

Table $1$. Selected examples of 2D NMR experiments.
method; information obtained from cross peaks
correlation spectroscopy (COSY); coupling between two protons (1H – 1H) that are within three chemical bonds of each other
total correlation spectroscopy (TOCSY); coupling between all protons (1H) in the molecule
heteronuclear correlation spectroscopy (HETCOR); coupling between a proton (1H) and another nucleus, such as carbon (1H – 13C) or nitrogen (1H – 15N)
nuclear Overhauser and exchange spectroscopy (NOESY); coupling between two protons (1H – 1H) that are within approximately 5 Å of each other
heteronuclear single quantum correlation (HSQC); coupling between a proton (1H) and another nucleus, such as carbon (1H – 13C) or nitrogen (1H – 15N)
heteronuclear multiple bond coherence spectroscopy (HMBC); coupling between a proton and a carbon (1H – 13C) that are two or three bonds apart
incredible natural abundance double-quantum transfer (INADEQUATE); coupling between adjacent carbon atoms (13C – 13C)
double quantum filtered correlation spectroscopy (DQF–COSY); suppresses signals from water
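The row-transform, transpose, transform-again processing scheme described above maps directly onto array operations. The sketch below, in Python with NumPy, uses a small synthetic FID; the matrix dimensions, frequencies, and decay constant are invented for illustration and are not values from the text. A real instrument would supply the measured $t_1 \times t_2$ data matrix.

```python
# A minimal sketch of the COSY data-processing scheme described above. The
# synthetic FID, its size, and its frequencies are hypothetical choices.
import numpy as np

n_t1, n_t2 = 128, 512            # number of t1 increments and of t2 points (assumed)
t1 = np.arange(n_t1) * 1e-3      # evolution times, s (illustrative dwell time)
t2 = np.arange(n_t2) * 1e-3      # acquisition times, s

# simulate a single signal that evolves at f_a during t1 and at f_b during t2
f_a, f_b = 50.0, 120.0           # Hz; hypothetical resonance frequencies
fid = (np.exp(2j * np.pi * f_a * t1)[:, None] *
       np.exp(2j * np.pi * f_b * t2)[None, :] *
       np.exp(-t2 / 0.2)[None, :])          # simple decay during t2

# step 1: Fourier transform each row (the t2 dimension)
step1 = np.fft.fft(fid, axis=1)

# step 2: transpose so the t1 dimension becomes the rows
step2 = step1.T

# step 3: Fourier transform each row again (the t1 dimension)
spectrum = np.fft.fft(step2, axis=1)

print(spectrum.shape)            # (n_t2, n_t1): intensity vs. both frequency axes
```

The final array has $n_{t_2}$ rows and $n_{t_1}$ columns, matching the dimensions described above for the processed COSY data.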
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/19%3A_Nuclear_Magnetic_Resonance_Spectroscopy/19.06%3A_Two-Dimensional_Fourier_Transform_NMR.txt
• 20.1: Molecular Mass Spectra In Chapter 11 we considered the use of mass spectrometry in the analysis of atoms. In this chapter we turn our attention to the use of mass spectrometry for the analysis of molecules.
• 20.2: Ion Sources Since a mass spectrum shows the relative abundance of ions with different mass-to-charge ratios, a mass spectrometer must include a way to generate ions. More specifically, it needs a method that generates the initial ion as it, once formed, will undergo fragmentation without additional help from the analyst. In this section we consider several common ion sources.
• 20.3: Mass Spectrometers A mass spectrometer has four essential elements: a means for introducing the sample to the instrument, a means for generating a mixture of ions, a means for separating the ions, and a means for counting the ions. In Chapter 20.2 we introduced some of the most important ways to generate ions. In this section we turn our attention to sample inlet systems and to separating and counting ions.
• 20.4: Applications of Molecular Mass Spectrometry In a qualitative analysis our interest is in determining the identity of a substance of interest to us. By itself, mass spectrometry is a powerful tool for determining the identity of pure compounds. The analysis of mixtures, however, is possible if we use a mass spectrometer as both a qualitative and quantitative detector for a separation technique, such as gas chromatography, or if we string together two or more mass analyzers in sequence.

20: Molecular Mass Spectrometry

Figure $1$ shows a mass spectrum for p-nitrophenol, C6H5NO3, which has a nominal (integer) mass of 139 daltons. If we send a beam of energetic electrons through a gas phase sample of p-nitrophenol, it loses an electron, which we write as the reaction

$\ce{C6H5NO3} + e^{-} \rightarrow \ce{C6H5NO3^{+•}} + 2e^{-} \label{pnp1}$

where the product is a radical cation that has a charge of $+1$ and that retains the nominal mass of 139 daltons. We call this the molecular ion—highlighted here in green—and it has a mass-to-charge ratio (m/z) of 139.

Note

Some of the terminology in this chapter was covered earlier in Chapter 11 on atomic mass spectrometry. See the first section of that chapter for a discussion of atomic mass units (amu) and daltons (Da), and of mass-to-charge ratios.

If reaction \ref{pnp1} is all that happens when p-nitrophenol interacts with an energetic electron, then it would not provide much in the way of useful information. The radical cation $\ce{C6H5NO3^{+•}}$, however, retains sufficient excess energy from the initial electron-molecule collision that it is in an excited state. In returning to its ground state, the molecular ion undergoes a series of fragmentations that result in the formation of ions—called daughter ions—with different mass-to-charge ratios. A plot that shows the relative intensity of these ions as a function of their mass-to-charge ratios is called a mass spectrum. The most abundant fragment in the spectrum—shown here in red and which is called the base peak—is assigned a relative intensity of 100; the intensity of all other ions is reported relative to the base peak. A molecule's fragmentation pattern provides rich information about its structure. Figure $2$ compares the mass spectra for o-nitrophenol, m-nitrophenol, and p-nitrophenol. All three molecules have clusters of fragment ions at similar mass-to-charge ratios, but the relative abundance of the ions in these clusters varies quite a bit from molecule to molecule.
For example, the pink rectangle in each spectrum highlights peaks with mass-to-charge ratios from approximately 104 to 115. All three molecules produce fragment ions with these mass-to-charge ratios; the relative abundance of the fragment ions, however, varies substantially between the three molecules, with o-nitrophenol and p-nitrophenol having a major peak at m/z = 109, but m-nitrophenol showing no more than a trace peak at m/z = 109. We will return to the use of mass spectrometry for determining structural information later in this chapter.
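As a small illustration of the base-peak convention described above, the sketch below rescales a set of raw ion abundances so that the most intense peak has a relative intensity of 100. The m/z values and raw counts are invented for illustration and are not data from the figures.

```python
# Rescale raw ion abundances so that the base peak has a relative intensity
# of 100, following the convention described in the text. Values are made up.
mz     = [65, 81, 93, 109, 123, 139]
counts = [420, 310, 880, 1500, 260, 980]

base = max(counts)                          # the base peak's raw abundance
relative = [100 * c / base for c in counts]

for m, r in zip(mz, relative):
    print(f"m/z = {m:3d}  relative intensity = {r:5.1f}")
```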
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/20%3A_Molecular_Mass_Spectrometry/20.01%3A_Molecular_Mass_Spectra.txt
Since a mass spectrum shows the relative abundance of ions with different mass-to-charge ratios, a mass spectrometer must include a way to generate ions. More specifically, it needs a method that generates the initial ion, which, once formed, undergoes fragmentation without additional help from the analyst (which does not mean the analyst cannot assist in that fragmentation; see the discussion of tandem mass spectrometry in Section 20.4). In this section we consider several common ion sources. We can describe these sources using two characteristic properties: (a) the physical state of the species that is initially ionized (gas, liquid, or solid phase), and (b) whether ionization favors the formation of fragment ions or the formation of molecular ions (hard sources or soft sources).

Electron Ionization Sources (gas phase/hard source)

The electron ionization (EI) source, also known as an electron impact source, uses a beam of energetic electrons to ionize the analyte. As shown in Figure $1$, the sample is volatilized prior to entering the ion source as gas phase molecules, M(g). A heated tungsten filament is used to generate electrons, which are pulled toward a positively charged anode. This electron beam intersects with the gas phase molecules at 90° where ionization occurs

$\ce{M}(g) + e^- \rightarrow \ce{M^{+•}}(g) + 2e^- \label{ei1}$

The molecular ions, $\ce{M^{+•}}(g)$, are then swept into the mass analyzer using a set of accelerating plates (not shown here). The electron beam in Figure $1$ has a lot of energy due to the significant potential difference between the cathode and the anode, which may be as much as 70 V. On a molar basis, the kinetic energy of these electrons is equivalent to the product of the electron's charge in Coulombs, the applied potential in volts, and Avogadro's number

$e \times V \times N_A = (1.6 \times 10^{-19} \text{ C}) \times (70 \text{ V}) \times (6.022 \times 10^{23} \text{ mol}^{-1}) = 6.7 \times 10^{6} \text{ J/mol} \nonumber$

or 6,700 kJ/mol. This energy is much greater than typical bond energies, which range from approximately 150–600 kJ/mol for single bonds, from approximately 500–750 kJ/mol for double bonds, and from approximately 800–1100 kJ/mol for triple bonds. The significant difference between the energy of the electrons and bond energies explains why electron ionization spectra are rich in fragment ions, as we saw earlier in Section 20.1 for o-nitrophenol, m-nitrophenol, and p-nitrophenol. This extensive fragmentation is useful in determining an analyte's structure—which is an advantage of a hard ionization method—but at the possible cost of the loss of the molecular ion peak for some analytes. For example, Figure $2$ shows the electron ionization mass spectrum for 1-decanol, C10H22O, which has a nominal mass of 158 daltons. The small peak at m/z = 157 is for the fragment ion C10H21O+; the molecular ion is not observed in this spectrum.
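The per-mole energy calculation above is easy to verify numerically. In the sketch below, the 70 V accelerating potential is the value discussed in the text, and the 400 kJ/mol comparison is a representative single-bond energy chosen from within the 150–600 kJ/mol range quoted above; the function name is an illustrative choice.

```python
# Check of the electron-energy calculation above: kinetic energy, per mole,
# of electrons accelerated through 70 V, compared with a typical single bond.
E_CHARGE = 1.602e-19      # electron charge, C
AVOGADRO = 6.022e23       # Avogadro's number, mol^-1

def electron_energy_kJ_per_mol(volts):
    """Kinetic energy (kJ/mol) gained by electrons accelerated through `volts`."""
    return E_CHARGE * volts * AVOGADRO / 1000.0

ke = electron_energy_kJ_per_mol(70)
print(f"70 eV electrons carry about {ke:.0f} kJ/mol")   # close to the 6,700 kJ/mol quoted above
print(f"ratio to a 400 kJ/mol single bond: {ke / 400:.0f}x")
```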
Chemical Ionization Sources (gas phase/soft source)

Electron ionization is a hard source because the electron beam's energy results in easy fragmentation. In chemical ionization (CI), we introduce a reagent molecule, such as methane, into the electron ionization source so that it is present at a level that is $1000 \times$ to $10,000 \times$ greater than the analyte. At this higher concentration, it is the reagent molecule that is ionized; for example, when using CH4 as the reagent gas, ions such as $\ce{CH4+}$ and $\ce{CH3+}$ form. These ions then react with additional methane molecules

$\ce{CH4+}(g) + \ce{CH4}(g) \rightarrow \ce{CH5+}(g) + \ce{CH3}(g) \label{ci1}$

$\ce{CH3+}(g) + \ce{CH4}(g) \rightarrow \ce{C2H5+}(g) + \ce{H2}(g) \label{ci2}$

to form $\ce{CH5+}$ and $\ce{C2H5+}$, species that are sufficiently reactive that they easily transfer a hydrogen to a molecule of the analyte, MH

$\ce{CH5+}(g) + \ce{MH}(g) \rightarrow \ce{MH2+}(g) + \ce{CH4}(g) \label{ci3}$

to give a molecular ion that we identify as [M + H]+ and that has a mass that is one amu greater than that for M. Alternatively, they can easily remove a hydrogen from a molecule of the analyte, MH

$\ce{C2H5+}(g) + \ce{MH}(g) \rightarrow \ce{M+}(g) + \ce{C2H6}(g) \label{ci4}$

to give a molecular ion that we identify as [M – H]+ and that has a mass that is one amu less than that for M. Because formation of the molecular ion occurs indirectly and less energetically, fragmentation is suppressed, leading to a mass spectrum with a molecular ion peak and with only a small number of other ions. Figure $3$ shows the mass spectrum for 1-decanol when using chemical ionization with $\ce{CH4}$ as the reagent gas.

Electrospray Ionization Sources (liquid phase/soft source)

Electron impact and chemical ionization are gas phase sources because the sample is volatilized before it enters the mass spectrometer's inlet. In electrospray ionization (ESI), the sample is a liquid and ions desorb from that matrix in the mass spectrometer's inlet system. The liquid sample is pulled into the spectrometer's inlet through a capillary needle, forming a mist of droplets. The application of a large potential across this inlet ensures that the droplets carry positive charges. These charged droplets then enter a chamber where they undergo desolvation, which decreases the size of the droplets and increases their charge density (see Figure $4$). As this charge density increases, the droplets eventually become unstable, for reasons that are not fully understood, and gas-phase ions desorb from the droplets and enter the mass analyzer. A typical electrospray ionization mass spectrum for a small molecule is shown in Figure $5$ for the compound (4-aminophenyl)arsonic acid. As we saw in Figure $3$ for chemical ionization, a soft ionization source results in a limited amount of fragmentation and a strong peak for the molecular ion, which here includes a proton transfer to give the [M + H]+ peak at an m/z of 218 amu.

Electrospray ionization is particularly useful for biological molecules, such as peptides and proteins, because the soft ionization ensures that molecular weight information is retained. Because these molecules are large, they readily pick up multiple protons, forming multiply charged ions of the general form [M + zH]z+ where z is the number of protons added. Figure $6$ shows a hypothetical spectrum for the molecule M and Table $1$ provides the corresponding m/z values for the mass spectrum's peaks.

Table $1$. Mass-to-charge ratios for the peaks in Figure $6$.

peak; m/z
M1; 1112
M2; 1001
M3; 910
M4; 834
M5; 770
M6; 715
M7; 667
M8; 626

If we take the mass-to-charge ratios for any two adjacent peaks, $M_i$ and $M_j$, where the peak $M_i$ has the greater value for m/z, and if we assume that $M_j$ has one additional hydrogen atom, giving it a charge that is one unit higher, then

$Z_i = \frac{M_j - 1}{M_i - M_j} \label{findz}$

where $Z_i$ is the charge on the ion $M_i$. Table $2$ shows the calculated charges for the ions $M_1$ to $M_7$.
Note

Here is a derivation for Equation \ref{findz}. Suppose the molecule of interest has a molecular weight of $m$. If the charge on the ion responsible for peak $M_i$ is $Z$, then it must be the case that the peak's mass is equal to m + Z, as it has Z extra hydrogens, and it must be the case that its mass-to-charge ratio is

$M_i = \frac{m + Z}{Z} \nonumber$

and its molecular weight, $m$, is

$m = (M_i \times Z) - Z \nonumber$

In the same way, the peak $M_j$ has a charge of $Z + 1$ and

$M_j = \frac{m + Z + 1}{Z + 1} \nonumber$

$m = (M_j \times Z) + M_j - Z - 1 \nonumber$

Setting the two equations for $m$ equal to each other and solving for $Z$ gives

$(M_i \times Z) - Z = (M_j \times Z) + M_j - Z - 1 \nonumber$

$(M_i \times Z) = (M_j \times Z) + M_j - 1 \nonumber$

$(M_i \times Z) - (M_j \times Z) = M_j - 1 \nonumber$

$Z = \frac{M_j - 1}{M_i - M_j} \nonumber$

Table $2$. Calculated charges on the ions for peaks $M_1$ to $M_7$

peak; m/z; Z
M1; 1112; 9
M2; 1001; 10
M3; 910; 11
M4; 834; 12
M5; 770; 13
M6; 715; 14
M7; 667; 15
M8; 626

The molecular weight, $m$, is given by the equation

$m = (M_i \times Z) - Z \label{findmw}$

Table $3$ shows the molecular weights for the ions $M_1$ to $M_7$ and their average value. The simulated mass spectrum was created by setting the molecular weight to 10,000 amu and with charges ranging from +9 to +16. A short computational sketch of this deconvolution appears at the end of this section.

Table $3$. Calculated molecular weights for the ions $M_1$ to $M_7$.

peak; m/z; Z; $m$
M1; 1112; 9; 9,999
M2; 1001; 10; 10,000
M3; 910; 11; 9,999
M4; 834; 12; 9,996
M5; 770; 13; 9,997
M6; 715; 14; 9,996
M7; 667; 15; 10,005
M8; 626
average molecular weight; 9,999

Matrix-Assisted Laser Desorption/Ionization Sources (solid phase/soft source)

Matrix-assisted laser desorption ionization (MALDI) is a soft ionization source for obtaining the mass spectrum for biologically important molecules, such as proteins and peptides. Figure $7$ illustrates the basic steps in obtaining a MALDI spectrum. The sample is first mixed with a small molecule—which is called the matrix—to form a solution; the matrix usually is present in a 10:1 ratio. A drop of this mixture is placed on a sample probe and allowed to dry, leaving the sample in a solid form. A pulsed laser beam ($\lambda = 337$ nm is typical) is focused on the solid sample–matrix mixture. The matrix absorbs the laser pulse and the absorption of the laser's energy volatilizes both the matrix and the sample. Ionization of the sample forms molecular ions, usually [M + H]+ ions, which are then swept into the mass analyzer. When the sample is a digestion of a protein, it is a mixture of peptides, each of which appears as a [M + H]+ peak in the resulting mass spectrum. For example, a peptide with the sequence AWSVAR (alanine–tryptophan–serine–valine–alanine–arginine) will appear as a peak with a mass of 689.8 daltons. To find this value we add together the molecular weights of the amino acids, account for the loss of a molecule of water for each peptide bond that forms, and then account for the hydrogen that gives the [M + H]+ ion. In this case we have

$\ce{[M + H]^+} = 89.1 + 204.2 + 105.1 + 117.1 + 89.1 + 174.2 - (5 \times 18.0) + 1 = 689.8 \text{ amu} \nonumber$

where the term $5 \times 18.0$ accounts for the loss of five molecules of $\ce{H2O}$ when forming the five peptide bonds.

Fast Atom Bombardment Sources (liquid phase/soft source)

Fast atom bombardment (FAB) bears some similarity to MALDI: the sample is mixed with a liquid matrix (often glycerol) and bombarded with a beam of xenon or argon atoms (instead of a laser).
Desorption of the sample from its matrix forms gas phase ions that are swept into the mass analyzer. Spectra usually contain both a molecular ion and fragmentation patterns.
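Here is the short computational sketch of the electrospray charge-state deconvolution promised above. It applies Equations \ref{findz} and \ref{findmw} to the m/z values in Table 1; the variable names and the use of simple rounding are illustrative choices, and because the tabulated m/z values are themselves rounded, the result for the last pair of peaks may differ slightly from the entry in Table 3.

```python
# Charge-state deconvolution for the electrospray example: adjacent pairs of
# peaks give the charge (Equation findz) and the molecular weight (findmw).
mz = [1112, 1001, 910, 834, 770, 715, 667, 626]   # peaks M1 ... M8 from Table 1

masses = []
for mi, mj in zip(mz, mz[1:]):          # adjacent pairs, with Mi the larger m/z
    z = round((mj - 1) / (mi - mj))     # Equation \ref{findz}
    m = mi * z - z                      # Equation \ref{findmw}
    masses.append(m)
    print(f"m/z = {mi:5d}  z = {z:2d}  M = {m:6d}")

print(f"average molecular weight = {sum(masses) / len(masses):.0f}")
```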
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/20%3A_Molecular_Mass_Spectrometry/20.02%3A_Ion_Sources.txt
A mass spectrometer has four essential elements: a means for introducing the sample to the instrument, a means for generating a mixture of ions, a means for separating the ions, and a means for counting the ions. In Chapter 20.2 we introduced some of the most important ways to generate ions. In this section we turn our attention to sample inlet systems and to separating and counting ions. You may wish to review Chapter 11 where we considered these topics in the context of atomic mass spectrometry. As we noted in Chapter 11, a mass spectrometer must operate under a vacuum to ensure that ions can travel long distances without undergoing undesired collisions that affect their energy.

Sample Inlet Systems

When the sample is a gas or a volatile liquid, it is easy to transfer a portion of the sample into a reservoir as a gas maintained at a relatively low pressure. The sample is then allowed to enter the mass spectrometer's ion source through a diaphragm that contains a pin-hole, drawn in by holding the ion source at a lower pressure. Solids and non-volatile liquids are sampled by inserting them directly into the ion source through a vacuum lock that allows the mass spectrometer to remain under vacuum except for the ion source where the sample is inserted. The sample is placed in a capillary tube or a small cup at the end of a sample probe, and then moved into the ion source. The sample probe includes a heating coil that is used, along with the instrument's vacuum, to help volatilize the sample. Of particular importance are inlet systems that couple a chromatographic or an electrophoretic instrument to a mass spectrometer, providing a way to separate a complex mixture into its individual components and then using the mass spectrometer to determine the composition of those components. The interface between a gas chromatograph and a mass spectrometer (GC-MS) must account for the significant drop in pressure from atmospheric pressure to a pressure of $10^{-8}$ torr; the interfaces for LC-MS and for CE-MS must provide a way to remove the liquid eluent, to volatilize the sample, and to account for the drop in pressure. See Chapters 27, 28, and 30 for more details.

Mass Analyzers

The purpose of the mass analyzer is to separate the ions by their mass-to-charge ratios. Ideally we want the mass analyzer to allow us to distinguish between small differences in mass and to do so with a strong signal-to-noise ratio. As we learned in Chapter 7 when introducing optical spectroscopy, these two desires usually are in tension with each other, with improvements in resolution often coming with an increase in noise.

Resolution

The resolution between two peaks, $R$, in mass spectrometry is defined as the ratio of their average mass to the difference in their masses

$R = \frac{\overline{m}}{\Delta m} \label{resolution}$

The following table shows how resolution varies as a function of the average mass and the difference in mass. A resolution of 1,000, for example, is sufficient to resolve two ions with an average mass of 100 amu that differ by 0.1 amu, or two ions that have an average mass of 1,000 amu that differ by 1 amu.

Table $1$. Resolution for stated values of $\overline{m}$ and $\Delta m$.

$\overline{m}$; 100 amu; 1000 amu; 10,000 amu
$\Delta m$ = 0.1 amu; 1,000; 10,000; 100,000
$\Delta m$ = 1 amu; 100; 1,000; 10,000
$\Delta m$ = 10 amu; 10; 100; 1,000
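Equation \ref{resolution} is simple enough that a few lines of code reproduce Table 1; the sketch below is only a restatement of that definition, with the function name chosen for illustration.

```python
# Resolution needed to separate two peaks with average mass mbar that differ
# by dm, following the definition in the text. Reproduces the entries of Table 1.
def resolution(mbar, dm):
    """R = mbar / dm for two peaks with average mass mbar separated by dm."""
    return mbar / dm

for mbar in (100, 1000, 10000):
    for dm in (0.1, 1, 10):
        print(f"mbar = {mbar:6d} amu, dm = {dm:4.1f} amu  ->  R = {resolution(mbar, dm):9.0f}")
```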
Magnetic Sector Mass Analyzers

When a beam of ions passes through a magnetic field, its path is altered, as we see in Figure $1$. The ions experience an acceleration as they exit the ion source and enter the mass analyzer with a kinetic energy that is given by the equations

$\ce{KE} = z e V \label{msa1}$

$\ce{KE} = \frac{1}{2} mv^2 \label{msa2}$

where $z$ is the ion's charge (usually +1), $e$ is the electronic charge in Coulombs, $V$ is the applied voltage responsible for the acceleration, $m$ is the ion's mass, and $v$ is the ion's velocity after acceleration. Equation \ref{msa1} shows us that all ions with the same charge have the same kinetic energy. Equation \ref{msa2}, then, tells us that ions with a greater mass will move more slowly. An ion's path through the magnetic field is determined by two forces. The first of these forces is the magnetic force, $F_M$, that acts on the ion, which is

$F_M = B z e v \label{msa3}$

where $B$ is the magnetic field strength. The second of these forces is the centripetal force, $F_C$, that acts on the ion as it moves along its curved path, which is

$F_C = \frac{mv^2}{r} \label{msa4}$

where $r$ is the magnet's radius of curvature. An ion can only navigate these opposing forces if $F_M$ and $F_C$ are equal to each other. This requires that

$B z e v = \frac{mv^2}{r} \label{msa5}$

Solving for $v$ gives

$v = \frac{B z e r}{m} \label{msa6}$

Substituting back into Equation \ref{msa2} and solving for the mass-to-charge ratio gives

$\frac{m}{z} = \frac{B^2 r^2 e}{2V} \label{msa7}$

Equation \ref{msa7} tells us that for any combination of magnetic field strength, $B$, and accelerating voltage, $V$, only one mass-to-charge ratio has the correct value of $r$ to reach the detector. Ions that are too heavy or too light will collide with the sides of the mass analyzer before they reach the detector. The mass spectrum is recorded by holding $V$ and $r$ constant and varying the magnetic field strength, $B$. The resolution of a magnetic sector instrument is usually less than 2000.

Double-Focusing Mass Analyzers

The resolution of a magnetic sector instrument suffers from limitations that affect its ability to narrow the range of kinetic energies—and, thus, velocities—possessed by the ions when they exit the ion source and enter the mass analyzer. The double-focusing mass analyzer in Figure $2$ compensates for this by placing an electrostatic analyzer before the magnetic analyzer, separating the two by a slit. The electrostatic analyzer consists of two curved metal plates, one of which is held at a positive potential and one at a negative potential. As ions pass between the plates, those ions that have too much energy and those that have too little energy fail to pass through the slit that separates the electrostatic analyzer from the magnetic analyzer. In this way, the distribution of energies—and, thus, velocities—is tightened, improving the resolution achieved by the magnetic sector analyzer. Depending on its design, a double-focusing analyzer can achieve a resolution as large as 100,000.
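The following sketch evaluates Equation \ref{msa7} for a singly charged ion, showing how scanning the magnetic field strength scans the transmitted m/z. The radius, accelerating voltage, and field strengths are illustrative values and do not come from the text.

```python
# Transmitted m/z (in amu, assuming z = +1) for a magnetic sector analyzer,
# using the relationship m/z = B^2 r^2 e / (2V) derived above.
E_CHARGE = 1.602e-19     # electronic charge, C
AMU      = 1.6605e-27    # atomic mass unit, kg

def transmitted_mz(B, r, V):
    """m/z (amu) of the singly charged ion that reaches the detector."""
    return (B**2 * r**2 * E_CHARGE) / (2 * V) / AMU

r, V = 0.25, 3000.0       # 25 cm radius and 3 kV accelerating voltage (assumed)
for B in (0.2, 0.4, 0.6, 0.8):
    print(f"B = {B:.1f} T  ->  m/z = {transmitted_mz(B, r, V):6.1f} amu")
```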
Quadrupole Mass Analyzers

The quadrupole mass analyzer was introduced in Chapter 11 and the treatment here is largely the same. A quadrupole mass analyzer is compact in size, low in cost, easy to use, and easy to maintain. As shown in Figure $3$, it consists of four cylindrical rods, two of which are connected to the positive terminal of a variable direct current (dc) power supply and two of which are connected to the power supply's negative terminal; the two positive rods are positioned opposite of each other and the two negative rods are positioned opposite of each other. Each pair of rods is also connected to a variable alternating current (ac) source operated such that the alternating currents are 180° out-of-phase with each other. An ion beam from the source is drawn into the channel between the quadrupoles and, depending on the applied dc and ac voltages, only ions with one mass-to-charge ratio successfully travel the length of the mass analyzer and reach the transducer; all other ions collide with one of the four rods and are destroyed.

To understand how a quadrupole mass analyzer achieves this separation of ions, it helps to consider the movement of an ion relative to just two of the four rods, as shown in Figure $4$ for the poles that carry a positive dc voltage. When the ion beam enters the channel between the rods, the ac voltage causes the ion to begin to oscillate. If, as in the top diagram, the ion is able to maintain a stable oscillation, it will pass through the mass analyzer and reach the transducer. If, as in the middle diagram, the ion is unable to maintain a stable oscillation, then the ion eventually collides with one of the rods and is destroyed. When the rods have a positive dc voltage, as they do here, ions with larger mass-to-charge ratios will be slow to respond to the alternating ac voltage and will pass through to the transducer. The result is shown in the figure at the bottom (and repeated in Figure $5a$) where we see that ions with a sufficiently large mass-to-charge ratio successfully pass through to the transducer; ions with smaller mass-to-charge ratios do not. In this case, the quadrupole mass analyzer acts as a high-pass filter.

We can extend this to the behavior of the ions when they interact with rods that carry a negative dc voltage. In this case, the ions are attracted to the rods, but those ions that have a sufficiently small mass-to-charge ratio are able to respond to the alternating current's voltage and remain in the channel between the rods. The ions with larger mass-to-charge ratios move more sluggishly and eventually collide with one of the rods. As shown in Figure $5b$, in this case, the quadrupole mass analyzer acts as a low-pass filter. Together, as we see in Figure $5c$, a quadrupole mass analyzer operates as both a high-pass and a low-pass filter, allowing a narrow band of mass-to-charge ratios to pass through to the transducer. By varying the applied dc voltage and the applied ac voltage, we can obtain a full mass spectrum. Quadrupole mass analyzers provide a modest mass-to-charge resolution of about 1 amu and extend to $m/z$ ratios of approximately 2000.

Time-Of-Flight Mass Analyzers

In a time-of-flight mass analyzer, Figure $6$, ions are created in small clusters by applying a periodic pulse of energy to the sample using a laser beam or a beam of energetic particles to ionize the sample. The small cluster of ions is then drawn into a tube by applying an electric field and then allowed to drift through the tube in the absence of any additional applied field; the tube, for obvious reasons, is called a drift tube.
All of the ions in the cluster enter the drift tube with the same kinetic energy, KE, which we define as

$\text{KE} = \frac{1}{2} m v^2 = z e V \label{tof1}$

The time, $T$, that it takes the ion to travel the distance, $L$, to the detector is

$T = \frac{L}{v} \label{tof2}$

Solving Equation \ref{tof1} for the velocity and substituting into Equation \ref{tof2} gives

$T = L \times \sqrt{\frac{m}{z}} \times \sqrt{\frac{1}{2eV}} \label{tof3}$

which shows us that the time it takes an ion to travel through the drift tube is proportional to the square root of its mass-to-charge ratio. As a result, lighter ions move more quickly than heavier ions. Flight times are typically less than 30 µs. A time-of-flight mass analyzer provides better resolution than a quadrupole mass analyzer, but is limited to sources that can be pulsed. A linear time-of-flight analyzer, such as that in Figure $6$, provides a resolution of approximately 4,000; other configurations can achieve resolutions of 10,000 or better. The time-of-flight analyzer is well-suited for MALDI ionization as the time between pulses of the laser provides the time needed for detection to occur.

Ion Trap Mass Analyzers

Figure $7$ provides an illustration of an ion trap mass analyzer, which consists of three electrodes—a central ring electrode and two conical end cap electrodes—that create a cavity into which ions are drawn. The ions in the cavity experience stabilizing and destabilizing forces that affect their movement within the cavity. Ions that adopt stable orbits remain in the cavity. By varying the potentials applied to the electrodes, ions with different mass-to-charge ratios enter into destabilizing orbits and exit through a small hole at the bottom of the trap. An ion trap typically provides a resolution of 1,000.

Ion Cyclotron Resonance Mass Analyzer

The ion cyclotron resonance (ICR) analyzer is a form of an ion trap but operates in a way that retains all ions within the trap. When a gas phase ion is placed within an applied magnetic field, the ion moves in a circular orbit that is perpendicular to the applied field (Figure $8$). In discussing the magnetic sector analyzer, we showed that the velocity, $v$, of an ion in an applied magnetic field with a strength of $B$ is a function of the radius of the ion's motion, $r$, and its charge

$v = \frac{B z e r}{m} \label{icr1}$

Solving for the ratio $v / r$ gives the ion's cyclotron frequency, $w_c$, as

$w_c = \frac{v}{r} = \frac{z e B}{m} \label{icr2}$

When an ion moving in a circular orbit, as shown by the smaller of the two circular orbits in Figure $8a$, absorbs energy at its cyclotron frequency, $w_c$, its velocity, $v$, and the radius of its orbit, $r$, both increase to maintain a constant value for $w_c$; the result is an ion that moves in a circular orbit of greater radius. As $w_c$ depends on the mass-to-charge ratio, all ions of equal $m/z$ experience the same change in their orbit, while ions with other mass-to-charge ratios are unaffected. Ions in the larger orbits eventually return to their original circular orbit as a result of collisions in which they lose energy. The trap itself, as seen in Figure $8b$, is defined by two pairs of plates (four in all). The transmitter plates are used to apply the potential that alters the orbits of the ions. Movement of the ions generates a current in the receiver plates that serves as the signal, as seen in Figure $8c$, that is positive when the ion is closer to one receiver plate and negative when it is closer to the other receiver plate.
The initial magnitude of the current is proportional to the number of ions with that mass-to-charge ratio. The ion cyclotron resonance analyzer is usually operated by applying a short pulse of energy whose frequency varies linearly with time. This sets all ions into motion, with each mass-to-charge ratio yielding a current response similar to that in Figure $8c$. Collectively, these individual current-time curves give a time domain spectrum that we can convert into a frequency domain spectrum by taking the Fourier transform. The frequency domain spectrum yields the mass spectrum through Equation \ref{icr2}. FT-ICR instruments are capable of achieving resolutions of 1,000,000.
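As one more numerical illustration, the sketch below uses Equation \ref{icr2} to convert a measured cyclotron frequency into an m/z value, treating $w_c$ as an angular frequency in rad/s (an assumption about units, since the text does not specify them). The 7 T field and the list of frequencies are assumed values chosen only so the results work out to round m/z values.

```python
# Convert a measured cyclotron frequency (Hz) into an m/z value (amu) by
# rearranging w_c = z e B / m, the relationship derived above.
import math

E_CHARGE = 1.602e-19     # C
AMU      = 1.6605e-27    # kg

def mz_from_frequency(f_hz, B, z=1):
    """m/z (amu) for an ion of charge z whose cyclotron frequency is f_hz in a field of B tesla."""
    omega = 2 * math.pi * f_hz              # angular cyclotron frequency, rad/s
    mass_kg = z * E_CHARGE * B / omega      # rearranged Equation icr2
    return mass_kg / (z * AMU)

B = 7.0                                     # assumed magnetic field strength, T
for f in (1.075e6, 5.375e5, 2.150e5):       # assumed measured frequencies, Hz
    print(f"f = {f:10.3e} Hz  ->  m/z = {mz_from_frequency(f, B):7.1f} amu")
```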
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/20%3A_Molecular_Mass_Spectrometry/20.03%3A_Mass_Spectrometers.txt
Qualitative Applications

In a qualitative analysis our interest is in determining the identity of a substance of interest to us. By itself, mass spectrometry is a powerful tool for determining the identity of pure compounds. The analysis of mixtures, however, is possible if we use a mass spectrometer as the detector for a separation technique, such as gas chromatography, or if we string together two or more mass analyzers in sequence.

Identification of Pure Compounds

There are several ways to use a mass spectrum to identify a compound, including identifying its molecular weight, using isotopic ratios, examining fragmentation patterns, and searching through databases.

Using Molecular Weight Information. A molecular ion peak, M+•, or, when present, a [M + H]+ peak or a [M – H]+ peak, provides information about the compound's molecular weight. When using a low resolution mass analyzer, this may be sufficient to distinguish between molecular ions with, for example, a nominal mass of 95 amu and a nominal mass of 96 amu, but insufficient to distinguish between molecular ions with more precise masses of 96.0399 amu and 96.0575 amu. When using a high resolution mass analyzer, distinguishing between the last pair of molecular ions may be feasible.

Using Isotopic Ratios. The molecule cycloheptene has the formula C7H12 and a nominal mass of 96 amu, and the molecule cyclohexenone has the formula C6H8O and a nominal mass of 96 amu. Although both molecules will produce a molecular ion with the same nominal mass-to-charge ratio, each will also have a peak with a nominal mass of M + 1 due to the presence of isotopes of carbon, hydrogen, and oxygen. Because cycloheptene and cyclohexenone have different chemical formulas, the relative heights of their M + 1 peaks are different. Here is how we can work this out. For every 100 atoms of 12C there are 1.08 atoms of 13C (that is, 1.08% of the carbon atoms are 13C), for every 100 atoms of 1H there are 0.015 atoms of 2H, and for every 100 atoms of 16O there are 0.04 atoms of 17O. For cycloheptene, this means that the relative height of its M + 1 peak to its M peak is

$(7 \times 1.08) + (12 \times 0.015) = 7.74 \nonumber$

and for cyclohexenone we have

$(6 \times 1.08) + (8 \times 0.015) + (1 \times 0.04) = 6.64 \nonumber$

Here we see that a careful examination of the relative height of the M + 1 peak provides a way to distinguish between C7H12 and C6H8O even though they have the same nominal mass. On-line calculators are available—this link provides one example—that you can use to calculate the full isotopic abundance patterns, including M + 2, M + 3, and other peaks. Isotopic patterns are particularly useful for identifying the presence of chlorine and bromine in a molecule because each has one isotope with a significant abundance: for chlorine, 37Cl has an abundance of 32.5% relative to 35Cl, and for bromine, 81Br has an abundance of 98.0% relative to 79Br.
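The M + 1 estimate just worked out for cycloheptene and cyclohexenone is easy to generalize. The sketch below uses the per-atom percentages quoted above; the function name and the dictionary representation of a formula are illustrative choices.

```python
# Expected relative height of the M + 1 peak (with M = 100) from the numbers
# of C, H, and O atoms, using the per-atom percentages quoted in the text.
M_PLUS_1_PERCENT = {"C": 1.08, "H": 0.015, "O": 0.04}   # per 100 atoms of the light isotope

def m_plus_1(formula):
    """Relative height of the M + 1 peak for a formula given as {element: count}."""
    return sum(n * M_PLUS_1_PERCENT[el] for el, n in formula.items())

cycloheptene  = {"C": 7, "H": 12}           # C7H12
cyclohexenone = {"C": 6, "H": 8, "O": 1}    # C6H8O

print(f"C7H12:  M+1 = {m_plus_1(cycloheptene):.2f}")    # 7.74
print(f"C6H8O:  M+1 = {m_plus_1(cyclohexenone):.2f}")   # 6.64
```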
Using Fragmentation Patterns. Figure $1$ shows the mass spectrum of p-nitrophenol, which we first considered in Section 20.1. A molecule's mass spectrum is unique and contains information that we can use to deduce its structure. Interpretation of a mass spectrum relies on identifying possible sources for the loss of mass, such as a $\Delta m$ of 30 amu corresponding to the loss of NO, or a $\Delta m$ of 46 amu corresponding to the loss of NO2. Some mass-to-charge ratios are recognized as evidence for a particular ion, such as C5H5+ at a mass-to-charge ratio of 65. The interpretation of fragmentation patterns is covered elsewhere in the curriculum, particularly in organic chemistry, and is not given more consideration here.

Using Computer Searching. Large databases of mass spectra are available (see here for a source from NIST). A peak table of mass-to-charge ratios and peak intensities for a sample is entered into an algorithm that searches the database and identifies the most likely matches.

Analysis of Mixtures Using MS as a Detector for a GC or LC Separation

Mass spectrometry is a powerful analytical technique when the sample we are analyzing is pure (or if impurities are of sufficiently low concentration that they have little effect on the mass spectrum). For a mixture of two or more analytes, however, the interpretation of the mass spectrum is difficult, if not impossible. To analyze such a mixture, we need a means of separating the analytes from each other. One approach is to interface a mass spectrometer to a gas chromatograph or a liquid chromatograph. The GC or LC separates the mixture into its component parts, with the mass spectrometer serving as the detector. See Chapter 27 and Chapter 28 for further details about GC-MS and LC-MS.

Analysis of Mixtures Using Tandem Mass Spectrometry

Another approach to working with a complex sample is to use two or more mass analyzers in what is called tandem mass spectrometry. For example, if we place three quadrupole mass analyzers in a sequence, we can use a soft ionization source to generate mostly molecular ions of the form [M + H]+ for each of the sample's analytes, and then let the first quadrupole separate these molecular ions by the differences in their mass-to-charge ratios. The [M + H]+ molecular ions for one of the analytes are then selectively passed into the second quadrupole, where they undergo fragmentation by collision with a gas, such as He. Finally, these fragment ions are passed along to the third quadrupole where the mass spectrum is obtained. By sequentially passing each of the molecular ions from the first quadrupole through the second and third quadrupoles, we are able to obtain mass spectra for each molecule in the mixture.

Quantitative Applications

As a detector for other instrumental methods, such as gas chromatography and liquid chromatography, mass spectrometry provides for a quantitative analysis by monitoring either the total ion count or the ions of a single mass-to-charge ratio, which is known as selective ion monitoring. As an independent method for determining an analyte's concentration, mass spectrometry is less attractive due to the difficulty of controlling the amount of sample or standard introduced into the instrument and the effect of the sample's matrix on fragmentation. The use of an internal standard improves precision and accuracy.
textbooks/chem/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/20%3A_Molecular_Mass_Spectrometry/20.04%3A_Applications_of_Molecular_Mass_Spectrometry.txt