Blank Subtraction
1. The signal at 588.9 nm = 43756 counts, a combination of signal and background. The signal at 589.1 nm = 1210 counts, all background. Thus the 588.9 nm signal is 43756 - 1210 = 42546 counts above background, and the sensitivity of the measurement is 42546 counts/100 ppb or 425.5 counts/ppb. Thus a 10 ppb sample should have a total signal of 10 × 425.5 = 4255 counts (signal) + 1210 counts (background) = 5465 counts. If we hadn't measured background, the sensitivity would have seemed to be 43756/100 = 437.6 counts/ppb, and 5465 counts/(437.6 counts/ppb) gives 12.5 ppb, a 25% overestimation.
2. Now the signal at 588.9 nm is on top of a smaller background than we previously computed because the signal at 589.1 nm includes 1% stray light from the 588.9 nm data. Thus 438 counts of the signal at the longer wavelength is from stray light and the true background is 1210-438 = 772 counts. The signal due to sodium at the shorter wavelength is 43756-772 = 42984 counts. 10 ppb then will generate 4298 counts from sodium plus 772 counts from background or 5070 counts. If we had completely ignored background, the apparent concentration of the 10 ppb solution would be 5070/43756*100 ppb = 11.6 ppb, still a 16% error, but less than before. Note that the measurement of stray light has to be done using an emission line much narrower than the resolution of the instrument so that we can be sure that we're not looking at the wings of the emission line. It is tricky to determine the difference between plasma background and proportionate/stray light background.
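The stray-light correction described above can be sketched as a short calculation; the numbers are those given in the problem:

```python
# Blank subtraction with a stray-light correction, using the counts above.
signal_na = 43756    # counts at 588.9 nm (Na emission + background)
apparent_bkg = 1210  # counts at 589.1 nm
stray_frac = 0.01    # 1% of the 588.9 nm signal appears as stray light

stray = round(stray_frac * signal_na)     # 438 counts of stray light
true_bkg = apparent_bkg - stray           # 772 counts of real background
net = signal_na - true_bkg                # 42984 counts due to sodium
sensitivity = net / 100                   # counts per ppb (100 ppb standard)

# Expected total signal for a 10 ppb sample:
total_10ppb = 10 * sensitivity + true_bkg
print(round(total_10ppb))                 # 5070
```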
3. 4.01% - 0.02% = 3.99%. We have no information on the precision of any of the numbers, so we have no idea how many of the figures are significant.
Problems for Curve Fitting Strategies
1. Plotting the data, the working curve looks linear to the unaided eye. Doing a linear regression, one finds
Signal = (169±9) + (1199±7) C.
Quadratic regression gives
Signal = (149±4) + (1258±8) C - (24±3) C²
Inspection of the equations generates no insight. Here are residual plots for the data:
There's a pattern to the residuals for the linear fit, while the residuals are random for the quadratic fit. The quadratic curve is preferred. Useful working range for the linear curve is 0.25 ppm to 2 ppm. Useful range for the quadratic curve is only DEMONSTRATED from 0.5 ppm to 2.5 ppm. It is likely we could work at higher concentration, but one of the easily avoided errors in any analytical method is working outside the range over which one has validated calibration data.
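For reference, the two fitted working curves quoted above can be compared numerically. The code below simply evaluates the reported fits (uncertainties dropped) at a few concentrations:

```python
# Evaluate the two fitted working curves reported above (uncertainties dropped).
def linear_fit(C):
    return 169 + 1199 * C

def quadratic_fit(C):
    return 149 + 1258 * C - 24 * C**2

# The fits agree closely at mid-range concentrations but diverge toward the
# ends of the working range, where the quadratic term matters most.
for C in (0.25, 1.0, 2.5):
    print(C, round(linear_fit(C), 1), round(quadratic_fit(C), 1))
```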
2. Take the ratio of Ca emission/Y emission and plot vs. [Ca2+]. The resulting working curve is ICa/IY = (0.7430±0.0009) + (1.2491±0.0015) CCa. The first "gotcha" in the analysis of the unknown is that there's only 39 mL of Y-containing solution, not 40, so the Y raw intensity needs to be scaled up by 40/39 from the raw value to be able to use the original working curve. Thus, in computing the intensity ratio, use IY = 3315 × 40/39 = 3400. Now we find CCa = (4988/3400 - 0.7430)/1.2491 = 0.580 ppm. But that's the concentration aspirated into the ICP. What we care about is the concentration in the blood serum. Since we diluted 1 mL of serum into 100 mL before determination, the actual concentration in serum is 58.0 ppm.
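The volume-corrected internal-standard calculation can be sketched as:

```python
# Internal-standard correction, using the intensities from the problem above.
I_Ca = 4988        # raw Ca emission intensity
I_Y_raw = 3315     # raw Y emission intensity
I_Y = I_Y_raw * 40 / 39   # only 39 mL of Y solution used, so scale to 3400

intercept, slope = 0.7430, 1.2491          # working curve: ICa/IY vs C_Ca
C_aspirated = (I_Ca / I_Y - intercept) / slope  # ppm aspirated into the ICP
C_serum = C_aspirated * 100                # 1 mL serum diluted to 100 mL

print(f"{C_aspirated:.3f} ppm aspirated, {C_serum:.1f} ppm in serum")
```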
Time Gating
If the mirror rotates through an angle θ, the reflected beam rotates through 2θ, since the angle of incidence equals the angle of reflection. Thus for a rotation rate H, the beam sweeps out a distance of 4πDH each second at a distance D from the mirror. Given the pixel width p, the time the beam dwells on a given pixel is p/(4πDH). A rule of thumb is that data must be separated by 3 pixels to ensure distinctness, so the useful time resolution is 3p/(4πDH). For a 25 µm pixel CCD, 1 m focal length, and 60 revolutions per second (i.e. the mirror is driven by a synchronous AC motor on the North American power grid), the time resolution = 3 × 2.5×10⁻⁵ m/(4π × 1 m × 60 s⁻¹) ≈ 0.1 µs. Note that one may readily spin a mirror at higher speeds, and one may also use smaller pixels. At 1 kHz (the mechanical stresses make this a fairly high speed, at least in air) and with 6 µm pixels, the time resolution improves to about 1.4 ns.
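The attainable time resolution can be computed directly, including the factor of two from the optical lever (the reflected beam turns through 2θ when the mirror turns through θ):

```python
from math import pi

# Time resolution of the rotating-mirror time-gating scheme. The reflected
# beam sweeps at twice the mirror's angular rate, so the spot speed at
# distance D from the mirror is 4*pi*D*H for rotation rate H (rev/s).
def time_resolution(pixel_width, D, H, separation_pixels=3):
    return separation_pixels * pixel_width / (4 * pi * D * H)

# 25 um pixels, 1 m focal length, 60 rev/s synchronous motor:
print(f"{time_resolution(25e-6, 1.0, 60):.2e} s")   # ~1e-7 s (0.1 us)
# 6 um pixels at 1 kHz:
print(f"{time_resolution(6e-6, 1.0, 1000):.2e} s")  # ~1.4e-9 s
```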
AC Arc Plasma created by a power supply or function generator of alternating polarity, typically excited at twice the frequency of the electrical grid (50 Hz in most of the world; 60 Hz in North America). While not continuous, the period of low current as the polarity of the driving voltage commutes (switches sign) is brief compared to the overall experiment time.
Arc Current-carrying plasma. See also AC Arc, DC arc
Anode Electrode attractive to negative particles (electrons or anions). Thus, electrode at positive voltage.
Atomization Conversion of molecular or condensed forms of an analyte into free atoms.
Background Data value in a datastream in the absence of any analyte or interference.
Blackbody (or Planck) Radiation Intensity distribution of radiation as a function of frequency in equilibrium with a body at a particular temperature.
Blaze Angle a) Tilt of an echellette with respect to the plane of a diffraction grating. b) Angle of incidence and diffraction at which the efficiency of a grating is maximum i.e. where light intensity diffracted/incident intensity is maximum.
Boltzmann Distribution Statistical behavior of particles of various energies in equilibrium. The probability that a particle will occupy an energy level E at a temperature T is proportional to e^(-E/kT), with k the Boltzmann constant, ~1.38×10⁻²³ Joule Kelvin⁻¹. If energy is due exclusively to kinetic energy of point-like particles so that E = (1/2)mv², the distribution is dubbed a Maxwell-Boltzmann distribution.
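As an illustration of the distribution, the following sketch computes a relative level population; the energy level and plasma temperature are assumed for this example, not taken from the text:

```python
from math import exp

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(E, T):
    """Relative probability of occupying a level E (joules) above the
    ground state at temperature T (kelvin); degeneracies ignored."""
    return exp(-E / (k * T))

# Illustrative: an upper level ~3.37e-19 J above ground (the energy of a
# 589 nm photon) in an assumed 6000 K plasma is populated at roughly the
# percent level relative to the ground state.
print(f"{boltzmann_factor(3.37e-19, 6000):.2e}")
```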
Boxcar Integration Observation of a periodically-generated signal at a controlled time after each repetition is initiated. By averaging many repetitions at a given delay after event initiation, one obtains a more precise measurement of the behavior of the repetitive event's time course than could be obtained with a single observation.
Cathode Electrode attractive to positive particles (cations). Thus, electrode at negative voltage.
CCD, Charge-coupled Device, Charge-coupled Array Multi-pixel detector in which potential bias supplied to adjacent portions of a semiconductor create wells in which electrons may be stored during exposure and between which electrons may be transferred during readout.
Chemometrics The subset of statistical methods useful in extracting information from chemically-derived data.
DC Arc Plasma with current running in a single direction. Commonly used for analysis of powdered solids or metals. Cathode is primarily sampled electrode.
DCP or Direct Current Plasma A two or three electrode DC arc optimized for analysis of aerosols and sprayed solutions.
Detection Limit Smallest amount of a substance that generates a signal with magnitude at least 3 times the noise in the background.
Diffraction Grating Transmitting or reflecting optical element with regularly spaced scribes or grooves on its surface, designed to use phase-dependent constructive interference to separate light by dispersing it at wavelength-dependent angles.
Dispersion a) Change in refractive index with wavelength. b) Change in propagation direction or location of a light beam with wavelength.
Dynamic Range Ratio of the largest quantity reliably measured by a technique to the smallest quantity so measured. A method with detection limit 1 ppb and upper concentration of working curve linearity of 100 ppm has a dynamic range of 100 ppm / 0.001 ppm = 10^5.
Echelle A diffraction grating designed to operate with incident and diffracted light at an angle greater than 45° from the grating normal.
Echellette A single scribed groove on a diffraction grating.
Electrode Metallic or semiconducting solid used to inject current into a plasma, to act as a sample specimen to be eroded by a plasma, or to probe ionic behavior within a plasma.
Flame Exothermic gas phase reaction, spatially confined, capable of emitting light or stimulating particles passing through it to emit light.
Glow Discharge Low-pressure (typically millitorr) discharge that expands to cover essentially the entire surface of an electrode rather than condensing into a narrow arc channel.
Grating See Diffraction Grating.
Grotrian diagram Graphical display of energy levels of an atom or molecule.
Inductively Coupled Plasma (ICP) A plasma receiving energy from an induction coil powered by a radio-frequency power supply (typically 27.12 MHz, but 13.56 MHz and 40.68 MHz are not uncommon). Because electrodes do not contact the plasma, contamination by elements in the power source and wiring is negligible. Common support gases are Ar, He, or N2.
Interference Any substance or process other than the analyte or desired signal generation process that influences the detected signal other than by providing a constant offset.
Interferometer Any apparatus to measure the phase shift of light traversing a variable path compared to a fixed reference path. Typically, a device to measure light of multiple wavelengths simultaneously using a moving mirror to produce phase shifts of fixed distance for all wavelengths. The fixed distance corresponds to variable phase shift since 2πd/λ is different for each wavelength.
Internal Standard A substance, added to a specimen, presumed not otherwise to be present in the specimen, which after addition acts similarly to but does not interfere with the sought-for substance. By monitoring a signal due to the internal standard simultaneously with that of the analyte, signal changes due to incomplete sample uptake or due to variations in sample transport can be compensated.
Lockin Amplification Extraction of information of a particular frequency from a datastream by multiplying a periodic waveform of the desired frequency times the raw datastream, followed by signal averaging.
Matrix All components of a sample except the sought-for species. If one is attempting to analyze all components of a sample, then every component is both an analyte and (for all the other components) part of the matrix.
Microwave Plasma Plasma sustained by radiation of microwave frequency (roughly 1 GHz and above). Typically 2.45 GHz, the same frequency as used in microwave ovens (λ ≈ 12.2 cm).
Nebulize Convert a bulk liquid into a spray of small drops.
Noise Random fluctuations in an observable datastream, typically time-varying, that obscure elements of the datastream related to the sought-for substance or quantity. Noise reduction typically lowers the detection limit.
Photomultiplier Tube Vacuum tube with a negatively-biased photocathode that, with useful quantum efficiency, transduces light to a ballistically-launched electron. The electron ricochets to a series of dynodes at less negative potentials, impacting the dynodes and generating an avalanche of electrons. The electrons are collected at an anode so that each detected photon produces a pulse of electrons a few nanoseconds after the initial photon hit the photocathode.
Plasma Partially or totally ionized gas i.e. gas atoms or molecules are dissociated into cations and either electrons or anions, allowing facile conduction of electric current.
Quantum Efficiency Ratio of the number of events of a desired type to the number of stimuli for that type of event. Typically, the ratio of electrons produced by a transducer to the number of incident photons, though it may also mean the ratio of the number of photons fluoresced to the number absorbed or the number of photochemical reactions initiated per absorbed photon.
Refraction Change of direction of light as it passes between dissimilar materials.
Resolution Separation of signals that should, under some circumstance, be distinguishable. Relative resolution (unitless) is the value of the distinguishable parameter (say, wavelength) divided by the smallest value difference that an experiment can actually distinguish. Absolute resolution (has units) is the smallest difference that can be discerned in a measured quantity.
Rydberg a) Energy unit: the energy required to remove a ground-state electron orbiting an infinitely massive, gravitationless point positive charge to the edge of the universe with zero residual kinetic energy. 109737.316 cm-1. b) A Rydberg state is a highly excited state of an atom or molecule such that the electron, for all practical purposes, behaves as if it is orbiting a hydrogen atom.
Self-absorption Line broadening due to saturation of emission in the center of an emission line as the light output reaches the maximum output for a black body at the temperature and wavelength in question.
Self-reversal Diminution of light intensity in the center of an emission line due to atomic absorption by cold atoms of a given element surrounding the hot plasma producing the original emission.
Signal-to-noise Ratio Ratio of the magnitude of the portion of the observed datastream due to the sought-for substance or quantity to the magnitude of the random fluctuations (noise) unrelated to the sought-for measurement.
Solar-blind Characteristic of a detector (typically a photomultiplier tube) that responds only to ultraviolet light, not visible or infrared radiation.
Spark Transient plasma triggered by imposing so high a voltage across a small gap between conducting electrodes that the intervening gas ionizes.
Spectrograph Instrument to display light of various wavelengths at separated physical positions simultaneously.
Spectrometer Any instrument used to study light as a function of wavelength.
Spectrophotometer An instrument to quantitatively measure the intensity of light as a function of wavelength, typically for making absorbance measurements.
Term Symbol Summary of the momentum, parity, and spin for an atomic or molecular energy level.
Time Gating Observing only a portion of a datastream by selecting time intervals, subsets of the overall datastream, for recording.
Considering that emission spectroscopy has been part of chemical analysis at least since Kirchhoff and Bunsen in the 1860s, no bibliography of reasonable size can be complete. Here are some of the authors' favorite citations. The ICP section focuses on books available through Amazon.com; non-ICP topics point to the primary literature.
A. Overview of ICP Emission and Related
P. W. J. M. Boumans, Inductively-Coupled Plasma Emission Spectroscopy. Part I. Methodology, Instrumentation, and Performance. Part II. Applications and Fundamentals, Wiley-Interscience (New York, 1987).
M. Thompson and J. N. Walsh, A Handbook of Inductively Coupled Plasma Spectrometry, Kluwer Academic Publishers (Amsterdam, 1988).
A. Montaser, Inductively-Coupled Plasmas in Analytical Atomic Spectrometry, Wiley-VCH (New York, 1992).
A. Montaser, Inductively-Coupled Plasma Mass Spectrometry, John Wiley (New York, 1998).
I. B. Brenner and A. T. Zander, "Axially and Radially Viewed Inductively Coupled Plasmas - a Critical Review," Spectrochim. Acta Part B: Atomic Spectrosc., 55(8), 1195-1240 (2000).
C. I. M. Beenakker, "A cavity for microwave-induced plasmas operated in helium and argon at atmospheric pressure," Spectrochim. Acta Part B: Atomic Spectrosc., 31, 483-486 (1976).
Qinhan Jin, Chu Zhu, Matthew W. Border, Gary M. Hieftje, "A microwave plasma torch assembly for atomic emission spectrometry," Spectrochim. Acta Part B: Atomic Spectrosc., 46, 417-430 (1991).
Seiichi Murayama, Hiromitsu Matsuno, Manabu Yamamoto, "Excitation of solutions in a 2450 MHz discharge," Spectrochim. Acta Part B: Atomic Spectrosc., 23, 513-514 (1968).
Patricia G. Brandl, Jon W. Carnahan, "Charge transfer in analytical helium plasmas," Spectrochim. Acta Part B: Atomic Spectrosc., 49, 105-115 (1994).
Valente, Stephen E.; Schrenk, William G. "Design and some emission characteristics of an economical dc arc plasmajet excitation source for solution analysis." Appl. Spectros. 24, 197-205 (1970).
P. Pohl, "Hydride Generation - Recent Advances in Atomic Emission Spectrometry," TrAC, Trends in Analyt. Chem. 23(2), 87-101 (2004).
H. Wiltsche, I. B. Brenner, G. Knapp, and K. Prattes, "Simultaneous Determination of As, Bi, Se, Sn and Te in High Alloy Steels - Re-evaluation of Hydride Generation Inductively Coupled Plasma Atomic Emission Spectrometry," J. Analyt. Atomic Spectrom. 22(9), 1083-1088 (2007).
Journals containing vast amounts of information on ICP techniques include: Analytical Chemistry, Applied Spectroscopy, Journal of Analytical Atomic Spectrometry, and Spectrochimica Acta Part B: Atomic Spectroscopy.
B. Arc, Spark, and Laser Emission
P. W. J. M. Boumans, Theory of Spectrochemical Excitation, Plenum Press (New York, 1966).
J. P. Walters, "Historical Advances in Spark Emission Spectroscopy," Appl. Spectrosc. 23(4), 317-332 (1969).
J. P. Walters, "Source Parameters and Excitation in a Spark Discharge," Appl. Spectrosc. 26(1), 17-40 (1972).
J. P. Walters, "Formation and Growth of a Stabilized Spark Discharge," Appl. Spectrosc. 26(3), 323-353 (1972).
J. Zynger, and S. R. Crouch, "Miniature, Low Energy Spark Discharge System for Emission Spectrochemical Analysis of Solutions," Appl. Spectrosc. 29(3), 244-255 (1975).
J. P. Walters, "Spark Discharge: Application to Multielement Spectrochemical Analysis," Science, 198(4319), 787-797 (1977).
R. J. Klueppel, D. M. Coleman, W. S. Eaton, S. A. Goldstein, R. D. Sacks, and J. P. Walters, "A Spectrometer for Time-gated, Spatially-resolved Study of Repetitive Electrical Discharge" Spectrochim. Acta, Part B: Atomic Spectrosc. 33B(1-2), 1-30 (1978).
D. M. Coleman, M. A. Sainz, and H. T. Butler, Anal. Chem. 52, 746-753 (1980).
J. W. Carr and G. Horlick, "Laser Vaporization of Solid Metal Samples into an Inductively Coupled Plasma," Spectrochim. Acta, Part B: Atomic Spectrosc. 37B(1), 1-15 (1982).
J. W. Olesik and J. P. Walters, "Statistical Mapping of Alloy Samples by Spark Excited Optical Emission Spectroscopy," Appl. Spectrosc. 37(2), 105-119 (1983).
L. J. Radziemski, T. R. Loree, D. A. Cremers, and N. M. Hoffman, "Time-resolved Laser-induced Breakdown Spectrometry of Aerosols," Anal. Chem. 55(8), 1246-1252 (1983).
K. G. Johnson, "Memorial of Lester William Strock", Am. Mineral. 70, 209-211 (1985).
A. Scheeline, "Sampling Processes in Emission Spectroanalytical Chemistry: A Fundamental Review," Mikrochim. Acta,I, 247-285 (1990).
C. A. Bye and A. Scheeline, "Analyte Matrix Excitation Investigations in the High Voltage Spark Discharge Using an Echelle/CCD System," Appl. Spectrosc. 47, 2031-2035 (1993).
A. W. Miziolek, V. Palleschi, and I. Schechter, Laser-induced Breakdown Spectroscopy, Cambridge University Press (Cambridge, 2006).
D. A. Cremers and L. J. Radziemski, Handbook of Laser-induced Breakdown Spectroscopy, John Wiley (New York, 2006).
M. Baudelet, M. Boueri, J. Yu, S. S. Mao, V. Piscitelli, X. Mao, and R. E. Russo, "Time-resolved Ultraviolet Laser-induced Breakdown Spectroscopy for Organic Material Analysis," Spectrochim. Acta, Part B: Atomic Spectrosc. 62B(12), 1329-1334 (2007).
C. Instruments, Electronics, and Optics
J. V. Sweedler, K. L. Ratzlaff, and M. B. Denton, Charge Transfer Devices in Spectroscopy, J. Wiley and Sons (New York, 1994).
A. P. Thorne, U. Litzén, and S. Johansson, Spectrophysics: Principles and Applications, Springer Verlag (Berlin, 1999).
A. T. Zander, R. L. Chien, C. B. Cooper, and P. V. Wilson, "An Image-mapped Detector for Simultaneous ICP-AES," Anal. Chem. 71(16) 3332-3340 (1999).
S. Svanberg, Atomic and Molecular Spectroscopy: Basic Aspects and Practical Applications, Springer-Verlag (Berlin, 2001).
N. V. Tkachenko, Optical Spectroscopy: Methods and Instrumentations, Elsevier Science (Amsterdam, 2006).
An excellent on-line resource for learning how to specify and use optical components may be found at Melles Griot. Other online resources include Newport Optical and Jobin Yvon. The last of these lists many CCD cameras that compete with e.g. Andor Technologies and numerous other competitors. Listing of specific commercial links does not imply advocacy by the authors, does not imply preference for linked products vs. other, unlinked products, does not imply serviceability of any product for any application, and does not imply endorsement by the authors' employers.
D. Data Reduction
J. Miller and J. Miller, Statistics and Chemometrics for Analytical Chemistry, Pearson Education Ltd., (Harlow, Essex, 4th edn., 2000).
Most textbooks on Quantitative Analysis, Instrumental Analysis, and specific methods listed elsewhere in this bibliography include chapters on data optimization.
Quite detailed simulations may be obtained from Ed Voigtman at the University of Massachusetts.
E. Reference Data on the Web
Atomic emission wavelengths and transition probabilities can be found here.
F. Our Favorite Miscellaneous Papers
J. D. Winefordner, V. Svoboda, and L. J. Cline, "Critical Comparison of Atomic Emission, Atomic Absorption, and Atomic Fluorescence Flame Spectrometry," Crit. Rev. Anal. Chem. 1(2), 233-274 (1970).
R. C. L. Ng and G. Horlick, "Practical Aspects of Fourier Transform and Correlation Based Processing of Spectrochemical Data," Spectrochim. Acta Part B: Atomic Spectrosc., 36B(6), 529-542 (1981).
A. Scheeline, C. A. Bye, D. L. Miller, S. W. Rynders, and R. C. Owen, Jr., "Design and Characterization of an Echelle Spectrometer for Fundamental and Applied Emission Spectrochemical Analysis," Appl. Spectrosc. 45, 334-341 (1991).
T. M. Spudich, C. K. Utz, J. M. Kuntz, R. A. DeVerse, R. M. Hammaker, and D. L. McCurdy, "Potential for Using a Digital Micromirror Device as a Signal Multiplexer in Visible Spectroscopy," Appl. Spectrosc. 57(7), 733-736 (2004).
Capillary electrophoresis is an analytical technique that separates ions based on their electrophoretic mobility with the use of an applied voltage. The electrophoretic mobility depends on the charge of the molecule, the viscosity of the solvent, and the ion's radius. The rate at which a particle moves is directly proportional to the applied electric field: the greater the field strength, the faster the migration. Neutral species are not affected; only ions move with the electric field. Of two ions of the same size, the one with the greater charge moves fastest; of ions with the same charge, the smaller particle experiences less friction and migrates faster overall. Capillary electrophoresis is used predominantly because it gives fast results with high-resolution separations, and a large range of detection methods is available.1
Introduction
Endeavors in capillary electrophoresis (CE) began as early as the late 1800s, with experiments using glass U-tubes and trials of both gel and free solutions.1 In 1930, Arne Tiselius first showed the capability of electrophoresis in an experiment separating proteins in free solution.2 His work went largely unnoticed until Hjertén introduced the use of capillaries in the 1960s. Even then, the technique was not widely recognized until Jorgenson and Lukacs published papers showing the ability of capillary electrophoresis to perform separations that had seemed unachievable. Employing a capillary solved some common problems of traditional electrophoresis. For example, the thin dimensions of the capillaries greatly increased the surface-to-volume ratio, which eliminated overheating at high voltages. The increased efficiency and remarkable separating capability of capillary electrophoresis spurred growing interest among the scientific community in developing the technique further.
Instrumental Setup
A typical capillary electrophoresis system consists of a high-voltage power supply, a sample introduction system, a capillary tube, a detector, and an output device. Some instruments include a temperature control device to ensure reproducible results, because separation depends on electrophoretic mobility, and the viscosity of the solution decreases as the column temperature rises.3 Each side of the high-voltage power supply is connected to an electrode. These electrodes induce the electric field that drives migration of the sample from the anode to the cathode through the capillary tube. The capillary is made of fused silica and is often coated with polyimide.3 Each end of the capillary tube is dipped in a vial containing an electrode and an electrolytic solution, or aqueous buffer. Before the sample is introduced to the column, the capillary must be flushed with the desired buffer solution. There is usually a small window near the cathodic end of the capillary that allows UV-VIS light to pass through the analyte so that absorbance can be measured, with a photomultiplier tube at the cathodic end transducing the transmitted light into the recorded signal. Alternatively, the capillary outlet can be interfaced to a mass spectrometer, providing information about the mass-to-charge ratio of the ionic species.
Theory
Electrophoretic Mobility
Electrophoresis is the process in which sample ions move under the influence of an applied voltage. The ion undergoes a force that is equal to the product of the net charge and the electric field strength. It is also affected by a drag force that is equal to the product of $f$, the translational friction coefficient, and the velocity. This leads to the expression for electrophoretic mobility:
$\mu_{EP} = \dfrac{q}{f} = \dfrac{q}{6\pi \eta r} \label{1}$
where $f$, the friction coefficient of a spherical particle, is given by Stokes' law; η is the viscosity of the solvent, and $r$ is the radius of the ion. The rate at which these ions migrate is thus dictated by their charge-to-size ratio. The actual velocity of the ions is directly proportional to E, the magnitude of the electrical field, and can be determined by the following equation4:
$v = \mu_{EP} E \label{2}$
This relationship shows that a greater voltage will quicken the migration of the ionic species.
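Equations \ref{1} and \ref{2} can be combined in a short sketch; the ion parameters below (charge, radius, viscosity, field) are illustrative assumptions, not values from the text:

```python
from math import pi

def electrophoretic_mobility(q, eta, r):
    """Eq. (1): mobility of a spherical ion of charge q (C) and radius r (m)
    in a solvent of viscosity eta (Pa*s)."""
    return q / (6 * pi * eta * r)

# Assumed, illustrative values:
q = 1.602e-19   # C, a singly charged cation
eta = 8.9e-4    # Pa*s, water near 25 C
r = 2.0e-10     # m, assumed hydrodynamic radius

mu = electrophoretic_mobility(q, eta, r)  # m^2 V^-1 s^-1
E = 3.0e4                                 # V/m, e.g. 30 kV over a 1 m capillary
v = mu * E                                # Eq. (2): migration velocity, m/s
print(f"mu = {mu:.2e} m^2/(V s), v = {v:.2e} m/s")
```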
Electroosmotic Flow
The electroosmotic flow (EOF) is caused by applying high-voltage to an electrolyte-filled capillary.4 This flow occurs when the buffer running through the silica capillary has a pH greater than 3 and the SiOH groups lose a proton to become SiO- ions. The capillary wall then has a negative charge, which develops a double layer of cations attracted to it. The inner cation layer is stationary, while the outer layer is free to move along the capillary. The applied electric field causes the free cations to move toward the cathode creating a powerful bulk flow. The rate of the electroosmotic flow is governed by the following equation:
$\mu_{EOF} = \dfrac{\epsilon}{4\pi\eta} E\zeta \label{3}$
where ε is the dielectric constant of the solution, η is the viscosity of the solution, E is the field strength, and ζ is the zeta potential. Because the electroosmotic flow is usually stronger than the electrophoretic migration, even negatively charged particles, which are naturally attracted to the positively charged anode, are carried to the cathode and separate out as well. The EOF works best with a large zeta potential between the cation layers, a large diffuse layer of cations to drag more molecules towards the cathode, low resistance from the surrounding solution, and a buffer pH of 9 so that all the SiOH groups are ionized.1
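In SI units the 4π factor of Equation \ref{3} is absorbed into the permittivity, giving the Helmholtz-Smoluchowski form v = εζE/η. A sketch of the magnitude involved, with all buffer parameters below being illustrative assumptions:

```python
# Helmholtz-Smoluchowski estimate of electroosmotic velocity (SI form of
# Eq. (3)); every parameter value here is an illustrative assumption.
eps0 = 8.854e-12   # F/m, vacuum permittivity
eps_r = 78.5       # relative permittivity of water near 25 C
zeta = 0.05        # V, assumed zeta potential of the silica wall
eta = 8.9e-4       # Pa*s, viscosity of water
E = 3.0e4          # V/m, applied field

v_eof = eps0 * eps_r * zeta * E / eta   # m/s
print(f"{v_eof:.2e} m/s")               # on the order of mm/s
```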
Capillary Electroseparation Methods
There are six types of capillary electroseparation available: capillary zone electrophoresis (CZE), capillary gel electrophoresis (CGE), micellar electrokinetic capillary chromatography (MEKC), capillary electrochromatography (CEC), capillary isoelectric focusing (CIEF), and capillary isotachophoresis (CITP). They can be classified into continuous and discontinuous systems as shown in Figure 3. A continuous system has a background electrolyte acting throughout the capillary as a buffer. This can be broken down into kinetic (constant electrolyte composition) and steady-state (varying electrolyte composition) processes. A discontinuous system keeps the sample in distinct zones separated by two different electrolytes.6
Capillary Zone Electrophoresis (CZE)
Capillary Zone Electrophoresis (CZE), also known as free-solution capillary electrophoresis, is the most commonly used of the six methods. A mixture in solution can be separated into its individual components quickly and easily. The separation is based on differences in electrophoretic mobility, which is directly proportional to the charge on the molecule and inversely proportional to the viscosity of the solvent and the radius of the ion. The velocity at which an ion moves is directly proportional to the electrophoretic mobility and the magnitude of the electric field.1
The fused silica capillaries have silanol groups that become ionized in the buffer. The negatively charged SiO- ions attract positively charged cations, which form two layers—a stationary and a diffuse cation layer. In the presence of an applied electric field, the diffuse layer migrates towards the negatively charged cathode, creating an electroosmotic flow that drags bulk solvent along with it. Anions in solution are attracted to the positively charged anode, but get swept to the cathode as well. Cations with the largest charge-to-mass ratios separate out first, followed by cations with reduced ratios, neutral species, anions with smaller charge-to-mass ratios, and finally anions with greater ratios. The electroosmotic velocity can be adjusted by altering pH, the viscosity of the solvent, ionic strength, voltage, and the dielectric constant of the buffer.1
Capillary Gel Electrophoresis (CGE)
CGE uses separation based on the difference in solute size as the particles migrate through the gel. Gels are useful because they minimize solute diffusion that causes zone broadening, prevent the capillary walls from absorbing the solute, and limit the heat transfer by slowing down the molecules. A commonly used gel apparatus for the separation of proteins is capillary SDS-PAGE. It is a highly sensitive system and only requires a small amount of sample.1
Micellar Electrokinetic Capillary Chromatography (MEKC)
MEKC is a separation technique that is based on solutes partitioning between micelles and the solvent. Micelles are aggregates of surfactant molecules that form when a surfactant is added to a solution above the critical micelle concentration. The aggregates have polar, negatively charged surfaces and are naturally attracted to the positively charged anode. Because of the electroosmotic flow toward the cathode, the micelles are pulled to the cathode as well, but at a slower rate. Hydrophobic molecules spend the majority of their time in the micelle, while hydrophilic molecules migrate more quickly through the solvent. When micelles are not present, neutral molecules migrate with the electroosmotic flow and no separation occurs. The presence of micelles bounds the elution window between a time t0, where a solute has little micelle interaction, and a time tmc, where a solute interacts strongly. Neutral molecules are separated at times between t0 and tmc. Factors that affect the electroosmotic flow in MEKC are: pH, surfactant concentration, additives, and polymer coatings of the capillary wall.1
Capillary Electrochromatography (CEC)
CEC uses a packed column, similar to chromatography. The mobile liquid passes over the silica wall and the packed particles, and electroosmotic flow occurs because of the charges on the stationary surface. CEC is similar to CZE in that both have a plug-type flow, in contrast to the pumped parabolic flow that increases band broadening.1
Capillary Isoelectric Focusing (CIEF)
CIEF is a technique commonly used to separate peptides and proteins. These molecules are called zwitterionic compounds because they contain both positive and negative charges. The charge depends on the functional groups attached to the main chain and the surrounding pH of the environment. In addition, each molecule has a specific isoelectric point (pI). When the surrounding pH is equal to this pI, the molecule carries no net charge. To be clear, it is not the pH value where a protein has all bases deprotonated and all acids protonated, but rather the value where positive and negative charges cancel out to zero. At a pH below the pI, the molecule is positive, and then negative when the pH is above the pI. Because the charge changes with pH, a pH gradient can be used to separate molecules in a mixture. During a CIEF separation, the capillary is filled with the sample in solution and typically no EOF is used (EOF is removed by using a coated capillary). When the voltage is applied, the ions will migrate to a region where they become neutral (pH=pI). The anodic end of the capillary sits in acidic solution (low pH), while the cathodic end sits in basic solution (high pH). Compounds of equal isoelectric points are “focused” into sharp segments and remain in their specific zone, which allows for their distinct detection.6
Calculating pI
An amino acid with $n$ ionizable groups having pKa values $pK_1, pK_2, \dots, pK_n$ has a pI approximated by the average of those values: $pI = (pK_1 + pK_2 + \dots + pK_n)/n$. Most proteins have many ionizable side chains in addition to their amino- and carboxy-terminal groups. The pI is different for each protein, and it can be calculated theoretically from the Henderson-Hasselbalch approximation if the amino acid composition of the protein is known. To determine a protein's pI experimentally, two-dimensional electrophoresis (2-DE) can be used: the proteins of a cell lysate are applied to an immobilized pH gradient strip and, upon electrophoresis, migrate to their pI within the strip. The second dimension of 2-DE separates the proteins by molecular weight using an SDS gel.
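A minimal sketch of the averaging rule above, using textbook pKa values for glycine (carboxyl about 2.34, amino about 9.60). For molecules with many ionizable groups this simple average is only a rough estimate.

```python
# Sketch of the pI averaging rule above, with textbook pKa values for
# glycine. The simple average is only a rough estimate for molecules
# with many ionizable groups.

def isoelectric_point(pkas):
    """Estimate pI as the mean of the given pKa values."""
    return sum(pkas) / len(pkas)

print(round(isoelectric_point([2.34, 9.60]), 2))  # glycine -> 5.97
```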
Capillary Isotachophoresis (CITP)
CITP is the only CE mode performed in a discontinuous buffer system. The analyte migrates in consecutive zones, and each zone's length can be measured to find the quantity of sample present.1
Capillary Electrophoresis versus High Performance Liquid Chromatography (HPLC)
1. CE has a flat flow, compared to the pumped parabolic flow of the HPLC. The flat flow results in narrower peaks and better resolution (Figure $4$).
2. CE has a greater peak capacity than HPLC; CE separations can involve millions of theoretical plates.
3. HPLC is more thoroughly developed and has many mobile and stationary phases that can be implemented.
4. HPLC has more complex instrumentation, while CE is simpler for the operator.
5. HPLC offers a wide variety of column lengths and packings, whereas CE is limited to thin capillaries.
6. Both techniques use similar modes of detection.
7. The two techniques can be used as complements to one another.
Figure $4$: HPLC versus CE flow profiles
Problems
1. Calculate µEP if q = +1, η = 3.7 × 10⁻⁵ lb·s/ft², and the radius of the ion is 2 nm.
2. How does buffer pH affect the capillary?
3. How does hydrophilicity affect MEKC?
4. What advantages does capillary electrophoresis provide over liquid chromatography?
5. Give reasons why “Analyte A” migrated first, while “Analyte D” migrated last.
Capillary Electrophoresis
When an electric field is applied to ions in a medium, the ions move under the influence of the field. The motion of an ion is characterized by its charge, shape, and size: the ion migrates through the medium with a characteristic velocity that depends on both the properties of the ion and the properties of the medium. This phenomenon is called electrophoresis.
Introduction
Many different biochemical techniques use the principles of electrophoresis to identify compounds of interest in a sample and to understand interactions at the molecular and ionic level. The basic concept is to apply an electric field that attracts or repels ions and to measure the mobility of the ions through the medium (Figure $1$). The cations move toward the cathode, and the anions move toward the anode. Heavier or bulkier ions move more slowly through the medium. Visual patterns formed on the various media (through staining techniques, such as Coomassie Brilliant Blue G-250 dye) can be analyzed for useful information.
The velocity of the ion is given by the following equation:
$v = \dfrac{E \times q}{f} \label{1}$
where:
• $v$ is the velocity,
• $f$ is the coefficient of friction,
• $E$ is the applied electric field (Volts/cm), and
• $q$ is the net charge on the ion
The coefficient of friction ($f$) in Equation $\ref{1}$ is introduced because, as the ion moves through the medium, the medium exerts a frictional force that inhibits the ion's movement. The applied voltage is usually held constant, so the mobility ($\mu$) of the ion can be measured; it is defined as follows:

$\mu = \dfrac{v}{E} \label{2}$
where:
• $v$ is the velocity
• $E$ is the applied electric field (Volts/cm)

The electric field's properties determine the separation of ions and are affected by resistance, current, and voltage.
The higher the current, the faster the migration of the ions (due to increased Coulombic attraction). Because current is affected by voltage, as the difference in potential between the electrodes increases, the rate of migration increases. The resistance depends on the properties of the medium: a denser or more saturated medium will retard migration, as will a longer or narrower medium. Media with different densities can be made from the same compound and are quite useful for separating and identifying molecules (through molecular sieving, for example). The most common media are agar and polyacrylamide gels.
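Equations $\ref{1}$ and $\ref{2}$ can be sketched numerically. The Stokes friction coefficient $f = 6\pi\eta r$ for a spherical ion is an added assumption here (it is the usual choice for small ions in solution), and all numeric inputs are illustrative.

```python
import math

# Sketch of Equations (1) and (2): v = E*q/f and mu = v/E. The Stokes
# friction f = 6*pi*eta*r for a spherical ion is an assumption added
# here; all numeric inputs are illustrative, not measured values.

def drift_velocity(E, q, f):
    """Equation (1): ion velocity under an applied field."""
    return E * q / f

def mobility(v, E):
    """Equation (2): mobility as velocity per unit field."""
    return v / E

E = 2.5e4                  # V/m, assumed applied field
q = 1.602e-19              # C, charge of a +1 ion
eta = 1.0e-3               # Pa*s, viscosity of water
r = 2.0e-9                 # m, assumed hydrodynamic radius
f = 6 * math.pi * eta * r  # Stokes friction coefficient

v = drift_velocity(E, q, f)
print(f"v = {v:.3e} m/s, mu = {mobility(v, E):.3e} m^2/(V*s)")
```

Note that the mobility $\mu = q/f$ depends only on the ion and the medium, not on the applied field, which is why it is a useful characteristic quantity.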
Of key importance is the buffer used. Different compounds have different stability conditions, and the buffer must be chosen carefully so as not to “harm” the molecules’ native states. The main factors to consider when choosing a buffer are its concentration and its pH. Concentration in terms of electrophoresis buffer refers to the ionic strength of the buffer; a buffer with a higher ionic strength will conduct the current more than the sample, leading to slower migration of the molecules. For compounds that have different ionization forms, pH is a major factor in choosing a buffer. The buffer must have a pH that matches the specific ionization forms’ pH range.
• Sher Butt
Chromatography is a method by which a mixture is separated by distributing its components between two phases. The stationary phase remains fixed in place while the mobile phase carries the components of the mixture through the medium being used. The stationary phase acts as a constraint on many of the components in a mixture, slowing them down to move slower than the mobile phase. The movement of the components in the mobile phase is controlled by the significance of their interactions with the mobile and/or stationary phases. Because of the differences in factors such as the solubility of certain components in the mobile phase and the strength of their affinities for the stationary phase, some components will move faster than others, thus facilitating the separation of the components within that mixture.
• Chromatographic Columns
Chromatography is an analytical technique that separates components in a mixture. Chromatographic columns are part of the instrumentation that is used in chromatography. Five chromatographic methods that use columns are gas chromatography (GC), liquid chromatography (LC), ion-exchange chromatography (IEC), size exclusion chromatography (SEC), and chiral chromatography. The basic principles of chromatography can be applied to all five methods.
• Chromatography
Chromatography is a method by which a mixture is separated by distributing its components between two phases. The stationary phase remains fixed in place while the mobile phase carries the components of the mixture through the medium being used. The stationary phase acts as a constraint on many of the components in a mixture, slowing them down to move slower than the mobile phase.
• Gas Chromatography
Gas chromatography is a term used to describe the group of analytical separation techniques used to analyze volatile substances in the gas phase. In gas chromatography, the components of a sample are dissolved in a solvent and vaporized in order to separate the analytes by distributing the sample between two phases: a stationary phase and a mobile phase. The mobile phase is a chemically inert gas that serves to carry the molecules of the analyte through the heated column.
• High Performance Liquid Chromatography
High Performance Liquid Chromotagraphy (HPLC) is an analytical technique used for the separation of compounds soluble in a particular solvent.
• Liquid Chromatography
Liquid chromatography is a technique used to separate a sample into its individual parts. This separation occurs based on the interactions of the sample with the mobile and stationary phases. Because there are many stationary/mobile phase combinations that can be employed when separating a mixture, there are several different types of chromatography that are classified based on the physical states of those phases. Liquid-solid column chromatography is the most popular chromatography technique.
Thumbnail: Two-dimensional chromatograph GCxGC-TOFMS at Chemical Faculty of GUT Gdańsk, Poland, 2016. (CC BY-SA 4.0; LukaszKatlewa).
Chromatography
Chromatography is an analytical technique that separates components in a mixture. Chromatographic columns are part of the instrumentation that is used in chromatography. Five chromatographic methods that use columns are gas chromatography (GC), liquid chromatography (LC), ion-exchange chromatography (IEC), size exclusion chromatography (SEC), and chiral chromatography. The basic principles of chromatography can be applied to all five methods.
Gas Chromatographic Columns
In gas chromatography the mobile phase is a gas, and the columns are usually between 1 and 100 meters long. In gas-liquid chromatography (GLC), the liquid stationary phase is bonded or adsorbed onto the surface of an open tubular (capillary) column, or onto a packed solid support inside the column. Matching the polarities of the analyte and stationary phase is not an exact science, but the two should have similar polarities. The thickness of the stationary phase ranges between 0.1 and 8 µm; the thicker the layer, the more volatile the analyte can be.
High Performance Liquid Chromatographic Columns
High performance liquid chromatography (HPLC) is a type of liquid chromatography that uses a liquid mobile phase. The same basic principles from gas chromatography apply to liquid chromatography. There are three basic types of liquid chromatographic columns: liquid-liquid, liquid-solid, and ion-exchange. In liquid-liquid chromatographic columns, the liquid stationary phase is bonded or adsorbed to the surface of the column or packing material; these columns are not as popular because they have limited stability and are inconvenient. Partitioning occurs between the two different liquids of the mobile and stationary phases. In liquid-solid chromatographic columns the stationary phase is a solid, and the analyte adsorbs onto the stationary phase, which separates the components of the mixture. In ion-exchange chromatographic columns the stationary phase is an ion-exchange resin, and partitioning occurs through ion exchange between the analyte and the stationary phase.
Usually HPLC has a guard column ahead of the analytical column to protect and extend the life of the analytical column. The guard column removes particulate matter, contaminants, and molecules that bind irreversibly to the column. The guard column has a stationary phase similar to the analytical column.
The most common HPLC columns are made from stainless steel, but they can also be made of thick glass, polymers such as polyetheretherketone (PEEK), a combination of stainless steel and glass, or a combination of stainless steel and polymers. Typical HPLC analytical columns are between 3 and 25 cm long and have a diameter of 1 to 5 mm. The columns are usually straight, unlike GC columns. Particles that pack the columns have a typical diameter between 3 and 5 µm. Liquid chromatographic columns increase in efficiency as the diameter of the packed particles decreases.
Packing Material
HPLC columns are usually packed with pellicular or porous particles. Pellicular particles are made from polymer or glass beads surrounded by a thin uniform layer of silica, polystyrene-divinylbenzene synthetic resin, alumina, or another type of ion-exchange resin. The diameter of the pellicular beads is between 30 and 40 µm. Porous particles are more commonly used and have diameters between 3 and 10 µm. Porous particles are made up of silica, polystyrene-divinylbenzene synthetic resin, alumina, or another type of ion-exchange resin; silica is the most common porous packing material.
Partition HPLC uses liquid bonded phase columns, where the liquid stationary phase is chemically bonded to the packing material. The packing material is usually hydrolyzed silica which reacts with the bond-phase coating. Common bond phase coatings are siloxanes. The relative structure of the siloxane is shown in Figure $1$.
Table $1$: This table shows the R groups that can be attached to the siloxane and what chromatographic method it is commonly applied to.
R group attached to siloxane Chromatography method application
Alkyl Reverse phase
Fluoroalkyl Reverse phase
Cyano Normal and reverse phase
Amide Reverse phase
Amino Normal and reverse phase
Dimethylamine Weak anion exchanger
Quaternary amine Strong anion exchanger
Sulfonic acid Strong cation exchanger
Carboxylic acid Weak cation exchanger
Diol Reverse phase
Phenyl Reverse phase
Carbamate Reverse phase
Reverse and Normal Phase HPLC
A polar stationary phase and a non-polar mobile phase are used for normal phase HPLC. In normal phase, the most common R groups attached to the siloxane are: diol, amino, cyano, inorganic oxides, and dimethylamino. Normal phase is also a form of liquid-solid chromatography. The most non-polar compounds will elute first when doing normal phase HPLC.
Reverse phase HPLC uses a polar mobile phase and a non-polar stationary phase, and it is the most common liquid chromatography method used. The R groups usually attached to the siloxane for reverse phase HPLC are C8, C18, or another hydrocarbon chain. Reverse phase can also use water as the mobile phase, which is advantageous because water is cheap, nontoxic, and invisible in the UV region. The most polar compounds will elute first when performing reverse phase HPLC.
Ion Exchange Chromatographic Columns
Ion exchange columns are used to separate ions and molecules that can be easily ionized. Separation of the ions depends on each ion's affinity for the stationary phase, which creates an ion-exchange system. The electrostatic interactions between the analytes, the mobile phase, and the stationary phase contribute to the separation of ions in the sample. Only positively or negatively charged complexes can interact with their respective cation or anion exchangers. Common packing materials for ion-exchange columns are amines, sulfonic acid, diatomaceous earth, styrene-divinylbenzene, and cross-linked polystyrene resins. Some of the first ion exchangers used were inorganic, made from aluminosilicates (zeolites), although aluminosilicates are no longer widely used as ion-exchange resins.
Size Exclusion Chromatographic Columns
Size Exclusion Chromatographic columns separate molecules based upon their size, not molecular weight. A common packing material for these columns is molecular sieves. Zeolites are a common molecular sieve that is used. The molecular sieves have pores that small molecules can go into, but large molecules cannot. This allows the larger molecules to pass through the column faster than the smaller ones. Other packing materials for size exclusion chromatographic columns are polysaccharides and other polymers, and silica. The pore size for size exclusion separations varies between 4 and 200 nm.
Chiral Columns
Chiral columns are used to separate enantiomers. Separation of chiral molecules is based upon stereochemistry. These columns have a stationary phase that selectively interacts with one enantiomer over the other, which makes them very useful for separating racemic mixtures. Some stationary phases used to separate enantiomers are shown in Table $2$.
Table $2$: This table shows some stationary phases that are used to separate enantiomers and the corresponding chromatographic methods that they are applied to.
Stationary Phase Method(s) Used
Metal Chelates GC, LC
Amino Acid Derivatives GC, LC
Proteins LC
Helical Polymers LC
Cyclodextrin Derivatives GC, LC
Column Efficiency
Peak or band broadening makes a column less efficient. Ideally, the peaks should be sharp and well resolved. The longer a substance stays in the column, the wider its peak becomes. Lengthening the column is one way to improve the separation of different species, and a column usually needs to remain at a constant temperature to stay efficient. Plate height and the number of theoretical plates determine the efficiency of the column; efficiency improves as the number of plates increases and the plate height decreases.
The number of plates can be determined from the equation:
$N=L/H \nonumber$
where L is the length of the column and H is the height of each plate. N can also be determined from the equation:
$N=16\left( \dfrac{t_R}{W}\right)^2 \nonumber$
or
$N=5.54\left(\dfrac{t_R}{W_{1/2}}\right)^2 \nonumber$
where $t_R$ is the retention time, $W$ is the width of the peak and $W_{1/2}$ is half the width of the peak.
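The plate-count formulas above can be sketched numerically; the retention time, peak width, and column length below are assumed example values.

```python
# Sketch of the plate-count formulas above; retention time, peak widths,
# and column length are assumed example values.

def plates_from_base_width(t_r, w):
    """N = 16 * (t_R / W)^2, using the peak width at the base."""
    return 16 * (t_r / w) ** 2

def plates_from_half_width(t_r, w_half):
    """N = 5.54 * (t_R / W_1/2)^2, using the width at half height."""
    return 5.54 * (t_r / w_half) ** 2

L = 25.0               # cm, column length (assumed)
t_r, w = 120.0, 6.0    # s, retention time and base peak width (assumed)

N = plates_from_base_width(t_r, w)
H = L / N              # plate height in cm, from N = L/H above
print(f"N = {N:.0f} plates, H = {H * 1e4:.1f} um per plate")
```

Narrower peaks at the same retention time give a larger $N$ and hence a smaller plate height, i.e. a more efficient column.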
Height equivalent to a theoretical plate (HETP) is determined from the equation:
$H=L/N \nonumber$
or HETP can also be determined by the van Deemter equation:
$H=A+\dfrac{B}{u}+Cu \nonumber$
where H equals HETP, A is the term for eddy diffusion, B is the term for longitudinal diffusion, C is the coefficient for mass transfer between the stationary and mobile phases, and u is the linear velocity. The equation for HETP is often used to describe the efficiency of the column; an efficient column has a minimum HETP value. Gas chromatographic columns have plate heights that are at least one order of magnitude greater than those of liquid chromatographic columns. However, GC columns are much longer, which makes them more efficient overall: LC columns have a maximum length of about 25 cm, whereas GC columns can be 100 meters long.
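The GC-versus-LC comparison can be made concrete with $N = L/H$. The plate heights below are assumed, order-of-magnitude values consistent with the statement that GC plate heights are roughly an order of magnitude larger than those of LC columns.

```python
# Sketch of the GC-vs-LC comparison using N = L/H. The plate heights
# are assumed order-of-magnitude values, not measured data.

def plate_count(length_cm, plate_height_cm):
    """Number of theoretical plates, N = L/H."""
    return length_cm / plate_height_cm

N_lc = plate_count(25.0, 0.001)         # 25 cm LC column, H ~ 10 um (assumed)
N_gc = plate_count(100 * 100.0, 0.01)   # 100 m GC column, H ~ 100 um (assumed)
print(f"LC: {N_lc:.0f} plates, GC: {N_gc:.0f} plates")
```

Even with a tenfold larger plate height, the much longer GC column yields far more total plates.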
Chromatography is a method by which a mixture is separated by distributing its components between two phases. The stationary phase remains fixed in place while the mobile phase carries the components of the mixture through the medium being used. The stationary phase acts as a constraint on many of the components in a mixture, slowing them down to move slower than the mobile phase. The movement of the components in the mobile phase is controlled by the significance of their interactions with the mobile and/or stationary phases. Because of the differences in factors such as the solubility of certain components in the mobile phase and the strength of their affinities for the stationary phase, some components will move faster than others, thus facilitating the separation of the components within that mixture.
Theory
The distribution of a solute between the mobile and stationary phases in chromatography is described by $\kappa$, the partition coefficient, defined by:
$\kappa = \dfrac{C_s }{C_m} \nonumber$
where $C_s$ is the concentration of solute in the stationary phase and $C_m$ is the concentration of the solute in the mobile phase. The mobile phase serves to carry the sample molecules through the chromatographic column. As the sample molecules are transported through the column, each analyte is retained according to that compound's characteristic affinity for the stationary phase. The time that passes between the sample injection and the peak maximum is called the retention time. The area underneath each peak is proportional to the amount of the corresponding analyte in solution.
Retention Time
The retention time, $t_R$, is given in seconds by:
$t_R = t_S + t_M \nonumber$
where $t_S$ is the time the analyte spends in the stationary phase and $t_M$ is the time spent in the mobile phase. $t_M$ is often referred to as the dead, or void time, as all components spend $t_M$ in the mobile phase.
Theory of Band Broadening and Column Efficiency
Column efficiency is affected by the amount of band broadening that occurs as the sample passes through the column. Rate theory describes the shapes of the peaks in quantitative terms and is based upon the infinite number of paths that the sample may take in order to elute from the column. Some molecules travel through the column quickly due to their accidental inclusion in the mobile phase, while other molecules lag severely behind because of their accidental inclusion in the stationary phase. The result of these effects is a typical Gaussian-shaped chromatographic band with a spread of velocities around the mean value. Furthermore, the width of the peak increases as it moves down the column because of the increased opportunity for spreading.
Two additional, undesirable chromatographic features are fronting and tailing. With fronting, the front of the peak is drawn out and the tail is steepened; the opposite is true for tailing. Both effects can be caused by a distribution constant that varies with concentration. These non-ideal effects are unwanted because they lead to poor separations.
The two terms used to measure column efficiency are plate height, $H$, and plate count, $N$. These two terms related by the following equation where $L$ is the length of the column:
$N = L / H \nonumber$
Greater column efficiency is characterized by a large plate count $N$ and a small plate height $H$. Both $H$ and $N$ can be determined experimentally using the following two equations:
$H = \dfrac{L W^2}{16\, t_R^2} \nonumber$
$N = 16 (t_R / W)^2 \nonumber$
where $L$ is the length of the column packing, $W$ is the width of the base of the peak (approximated as a triangle), and $t_R$ is the retention time of the analyte. Using the theory of band broadening, the efficiency of chromatographic columns can be approximated by the van Deemter equation:
$H = A + \dfrac{B}{u} + C_Su + C_Mu \nonumber$
where $H$ is the plate height in centimeters and u is the linear velocity of the mobile phase in centimeters per second. The term $A$ describes the multiple path effect, or eddy diffusion, $B$ describes the longitudinal diffusion coefficient and $C_Su$ and $C_Mu$ are the mass-transfer coefficients for the stationary and mobile phases, respectively.
The van Deemter Equation
In chromatography, it is important that the components in solution are adequately separated so that each can be collected in its purest form. This becomes easier as the separation between the bands of the individual components increases. It is also ideal to have the bands of the individual components as narrow as possible; that is, it is best to have each component occupy as little space as possible within the column:
From this figure it can be seen that a better separation between narrow bands of components is ideal for easier collection of the individual samples. Band broadening is an especially important factor for this type of chromatography when separating colored compounds. When the bands of the components are narrow, most of the particles of that component are in close proximity with one another, which makes it easier to see the color of the bands. As the particles diffuse away from one another and broaden the component's band, the color of the band fades and can become more difficult to see, which may also make it harder to collect pure samples of the mixture's components.
The extent of band broadening in chromatography is described by the van Deemter equation. This equation relates the efficiency of the chromatography procedure to three different factors. The van Deemter equation is shown below:
$H = Au^{1/3} + \dfrac{B}{u} + Cu \nonumber$
where H is the height equivalent to a theoretical plate (HETP) and u is the velocity (flow rate) of the mobile phase. The lower the resulting value of $H$, the greater the efficiency of the procedure; ideally, all three terms should be minimized in order to minimize $H$. The remaining terms refer to factors that come into play while the chromatography is performed.
The A factor is determined by a phenomenon called eddy diffusion. This is also called the multi-path term, as molecular particles of a certain compound have a multitude of options when it comes to finding a pathway through a packed column. The following figure helps in visualizing eddy diffusion:
Because there is an almost infinite number of different paths that a particle can take through a column, some paths will be longer than others. The particles that find the shortest path through the column will be eluted more quickly than those that travel a longer way. In the figure, particle $B$ will be eluted before particle $C$, and both will be eluted before particle $A$. Since it is improbable for all particles of one compound to find the shortest path, there will be fractions of the component that behave like particles $A$, $B$, and $C$. This leads to the broadening of the band. There is little a scientist can do to minimize the eddy diffusion factor, as it is influenced by the nature of the column being used and by the particles' movement through that column. The A term is only loosely affected by the flow rate of the mobile phase, and sometimes its effect is negligible. It is for this reason that the van Deemter equation is sometimes written as:
$H = A + \dfrac{B}{u} + Cu \nonumber$
The $B/u$ term is called the longitudinal diffusion term, and is caused by the components' natural migration from a place of high concentration (the center of the band) to a place of lower concentration (either side of the band) within the column. Diffusion occurs because molecules in a place of high concentration tend to spread out to areas of lower concentration to achieve equilibrium. Given enough time, diffusion will result in equilibrium of the diffusing fluid via random molecular motion. The figure below helps to visualize this phenomenon:
At time zero in the figure above, the particles of a compound are generally localized in a narrow band within the separating column. If the mobile phase flow rate is too small or if the system is left at rest, the particles begin to separate from one another. This causes a spread in the concentration distribution of that compound within the column, thus bringing about band broadening for the band of that particular compound. As the time that the system is left still approaches infinity, the compound reaches complete concentration equilibrium throughout the entire column. At this point, there is no definitive band for that component, as a single concentration of that compound is present throughout the entire column. Longitudinal diffusion is a chief cause of band broadening in Gas Chromatography, as the diffusion rates of gaseous species are much higher than those of liquids. It is for this reason that longitudinal diffusion is less of an issue in liquid chromatography. The magnitude of the term $B/u$ can be minimized by increasing the flow rate of the mobile phase. Increasing the velocity of the mobile phase does not allow the components in the column to reach equilibrium, and so will hamper longitudinal diffusion. The flow rate of the mobile phase should not be increased in excess, however, as the term $Cu$ is maximized when u is increased.
Cu is referred to as the mass transfer term. Mass transfer refers to particles adhering so strongly to the stationary phase that the mobile phase passes over them without carrying them along, leaving particles of a component behind. Since it is likely that more than a single particle of any given compound will undergo this occurrence, band broadening results. This produces a phenomenon called tailing, in which a fraction of a component lags behind a more concentrated frontal band. Non-equilibrium effects can be caused by two phenomena: laminar flow and turbulent flow. Laminar flow occurs in tubular capillaries, and so is most prominent in capillary electrophoresis. Turbulent flow occurs as a result of particles becoming overwhelmed by the stationary phase and is more common in column chromatography. This occurrence can be visualized by observing the figure below:
In the above figure, particles of the adsorbent solid become occupied by particles of the sample. If too many particles of the adsorbent are occupied, particle A will have nothing hindering it from flowing through the column. So, the particles of a single compound separate from one another. Also, as the mobile phase moves through the column, particles of the sample leave the stationary phase and migrate with the mobile phase. However, if the flow rate of the mobile phase is too high, many of the sample particles are unable to leave the stationary phase and so get left behind. These occurrences result in band broadening, as the individual particles of a single compound become less closely packed. The high flow rate of the mobile phase makes it more difficult for the components within the column to reach equilibrium between the stationary and mobile phase. It is for this reason that the Cu term is also called the non-equilibrium factor. Minimization of this factor can be achieved by decreasing the flow rate of the mobile phase. Decreasing the flow rate of the mobile phase gives sample components more time to leave the stationary phase and move with the mobile phase, thus reaching equilibrium.
By observing the Van Deemter equation, it can be deduced that an ideal mobile phase flow rate must be determined to yield the best (lowest) value of H. Decreasing the flow rate too much will result in an increase of the longitudinal diffusion factor B/u, while exceedingly increasing the flow rate will increase the significance of the mass transfer term Cu. So, H can be minimized to a finite limit depending on the various parameters involved in the chromatography being performed.
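The trade-off described above has a closed-form optimum: setting $dH/du = 0$ for $H = A + B/u + Cu$ gives $u_{opt} = \sqrt{B/C}$ and $H_{min} = A + 2\sqrt{BC}$. The coefficients below are assumed example values, not data for any particular column.

```python
import math

# The trade-off between longitudinal diffusion (B/u) and mass transfer
# (C*u) has a closed-form optimum: dH/du = -B/u^2 + C = 0 gives
# u_opt = sqrt(B/C) and H_min = A + 2*sqrt(B*C).
# The coefficients below are assumed example values.

def van_deemter(u, A, B, C):
    """Plate height H = A + B/u + C*u."""
    return A + B / u + C * u

A, B, C = 0.05, 0.40, 0.02   # cm, cm^2/s, s (assumed coefficients)

u_opt = math.sqrt(B / C)
H_min = A + 2 * math.sqrt(B * C)
print(f"u_opt = {u_opt:.2f} cm/s, H_min = {H_min:.3f} cm")
```

Flow rates below $u_{opt}$ are dominated by the $B/u$ term and flow rates above it by the $Cu$ term, which is exactly the behavior the paragraph describes.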
Contributors
• Sean Gottlieb (UCD), Jessica Hosfelt (UCD)
Gas chromatography is a term used to describe the group of analytical separation techniques used to analyze volatile substances in the gas phase. In gas chromatography, the components of a sample are dissolved in a solvent and vaporized in order to separate the analytes by distributing the sample between two phases: a stationary phase and a mobile phase. The mobile phase is a chemically inert gas that serves to carry the molecules of the analyte through the heated column. Gas chromatography is one of the few forms of chromatography that does not use the mobile phase to interact with the analyte. The stationary phase is either a solid adsorbent, termed gas-solid chromatography (GSC), or a liquid on an inert support, termed gas-liquid chromatography (GLC).
Introduction
In the early 1900s, chromatography was developed by Mikhail Semenovich Tsvett as a technique for separating compounds. In organic chemistry, liquid-solid column chromatography is often used to separate organic compounds in solution. Among the various types of gas chromatography, gas-liquid chromatography is the method most commonly used to separate organic compounds. The combination of gas chromatography and mass spectrometry is an invaluable tool for the identification of molecules. A typical gas chromatograph consists of an injection port, a column, carrier-gas flow control equipment, ovens and heaters for maintaining the temperatures of the injection port and the column, an integrator/chart recorder, and a detector.
To separate compounds in gas-liquid chromatography, a solution sample containing the organic compounds of interest is injected into the sample port, where it is vaporized. The vaporized sample is then carried by an inert gas, often helium or nitrogen, through a glass column packed with silica coated with a liquid. Materials that are less soluble in the liquid elute faster than materials with greater solubility.
In GLC, the liquid stationary phase is adsorbed onto a solid inert packing or immobilized on the capillary tubing walls. The column is considered packed if the glass or metal column tubing is packed with small spherical inert supports. The liquid phase adsorbs onto the surface of these beads in a thin layer. In a capillary column, the tubing walls are coated with the stationary phase or with an adsorbent layer capable of supporting the liquid phase. The GSC method has limited application in the laboratory and is rarely used, owing to severe peak tailing and the semi-permanent retention of polar compounds within the column. Therefore, the term gas-liquid chromatography is simply shortened to gas chromatography and will be referred to as such here. The purpose of this module is to provide a better understanding of GC separation and measurement techniques and their applications.
Instrumentation
Sample Injection
A sample port is necessary for introducing the sample at the head of the column. Modern injection techniques often employ heated sample ports through which the sample can be injected and vaporized in a nearly simultaneous fashion. A calibrated microsyringe is used to deliver a sample volume in the range of a few microliters through a rubber septum and into the vaporization chamber. Most separations require only a small fraction of the initial sample volume, and a sample splitter is used to direct excess sample to waste. Commercial gas chromatographs often allow for both split and splitless injections when alternating between packed columns and capillary columns. The vaporization chamber is typically heated about 50 °C above the boiling point of the least volatile component of the sample; the vaporized sample is then mixed with the carrier gas, which transports it into the column.
Carrier Gas
The carrier gas plays an important role and varies with the GC used. The carrier gas must be a dry, oxygen-free, chemically inert mobile phase. Helium is most commonly used because it is safer than, but comparable in efficiency to, hydrogen, has a larger range of usable flow rates, and is compatible with many detectors. Nitrogen, argon, and hydrogen are also used, depending on the desired performance and the detector. Both hydrogen and helium, which are commonly used with traditional detectors such as flame ionization (FID), thermal conductivity (TCD), and electron capture (ECD), provide shorter analysis times and lower elution temperatures because of their higher flow rates and low molecular weights. For instance, hydrogen or helium as the carrier gas gives the highest sensitivity with TCD because the difference in thermal conductivity between the organic vapor and hydrogen or helium is greater than with other carrier gases. Other detectors, such as mass spectrometry, use nitrogen or argon, whose higher molecular weights improve vacuum-pump efficiency.
All carrier gases are available in pressurized tanks, and pressure regulators, gauges, and flow meters are used to control the flow rate of the gas precisely. Most gas supplies should fall in the 99.995%-99.9995% purity range and contain low levels (< 0.5 ppm) of oxygen and total hydrocarbons. The carrier-gas system contains a molecular sieve to remove water and other impurities; traps are another option for keeping the system pure and at optimum sensitivity by removing traces of water and other contaminants. Two-stage pressure regulation is required to minimize pressure surges, and a flow or pressure regulator is also required on both the tank and the chromatograph gas inlet; different gas types use different types of regulators. The carrier gas is preheated and filtered with a molecular sieve to remove impurities and water before being introduced to the vaporization chamber. The carrier gas flows through the injector and pushes the gaseous components of the sample onto the GC column and toward the detector (see the detector section for more detail).
Column Oven
The thermostatted oven serves to control the temperature of the column within a few tenths of a degree to conduct precise work. The oven can be operated in two manners: isothermal programming or temperature programming. In isothermal programming, the temperature of the column is held constant throughout the entire separation. The optimum column temperature for isothermal operation is about the middle point of the boiling range of the sample. However, isothermal programming works best only if the boiling point range of the sample is narrow. If a low isothermal column temperature is used with a wide boiling point range, the low boiling fractions are well resolved but the high boiling fractions are slow to elute with extensive band broadening. If the temperature is increased closer to the boiling points of the higher boiling components, the higher boiling components elute as sharp peaks but the lower boiling components elute so quickly there is no separation.
In the temperature programming method, the column temperature is either increased continuously or in steps as the separation progresses. This method is well suited to separating a mixture with a broad boiling point range. The analysis begins at a low temperature to resolve the low boiling components and increases during the separation to resolve the less volatile, high boiling components of the sample. Rates of 5-7 °C/minute are typical for temperature programming separations.
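The temperature-programming arithmetic above is simple enough to sketch in code. The start temperature, ramp rate, and maximum temperature below are hypothetical illustrative values, not recommendations for a real method:

```python
def oven_temp(t_min, t0=40.0, rate=6.0, t_max=280.0):
    """Oven temperature (deg C) after t_min minutes of a linear ramp
    starting at t0 deg C, capped at the column maximum t_max deg C.
    All defaults are hypothetical example values."""
    return min(t0 + rate * t_min, t_max)

# A 6 deg C/min ramp, within the 5-7 deg C/min range quoted in the text:
for t in (0, 10, 20, 50):
    print(t, oven_temp(t))  # temperature rises linearly, then holds at 280
```

Low-boiling components elute early, while the ramp is still near t0; high boilers elute later, once the oven has reached a temperature closer to their boiling points.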
Open Tubular Columns and Packed Columns
Open tubular columns, also known as capillary columns, come in two basic forms: the wall-coated open tubular (WCOT) column and the support-coated open tubular (SCOT) column. WCOT columns are capillary tubes with a thin layer of the stationary phase coated along the column walls. In SCOT columns, the column walls are first coated with a thin layer (about 30 micrometers thick) of an adsorbent solid, such as diatomaceous earth, a material consisting of the skeletal remains of single-celled algae (diatoms). The adsorbent solid is then treated with the liquid stationary phase. Although SCOT columns can hold a greater volume of stationary phase than WCOT columns, and therefore have greater sample capacity, WCOT columns still have greater column efficiencies.
Most modern WCOT columns are made of glass, but T316 stainless steel, aluminum, copper and plastics have also been used. Each material has its own relative merits depending upon the application. Glass WCOT columns have the distinct advantage of chemical etching, which is usually achieved by gaseous or concentrated hydrochloric acid treatment. The etching process gives the glass a rough surface and allows the bonded stationary phase to adhere more tightly to the column surface.
One of the most popular types of capillary columns is a special WCOT column called the fused-silica wall-coated (FSWC) open tubular column. The walls of fused-silica columns are drawn from purified silica containing minimal metal oxides. These columns are much thinner than glass columns, with diameters as small as 0.1 mm and lengths as long as 100 m. To protect the column, a polyimide coating is applied to the outside of the tubing, and the tubing is bent into coils to fit inside the thermostatted oven of the gas chromatography unit. FSWC columns are commercially available and are currently replacing older column types because of their increased chemical inertness, greater column efficiency, and smaller sample-size requirements. It is possible to achieve up to 400,000 theoretical plates with a 100 m WCOT column; the record for the largest number of theoretical plates is over 2 million plates for a 1.3 km section of column.
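The plate counts quoted above translate directly into plate heights through HETP = L/N (Equation 2 later in this module). A quick check of the 100 m figure, using only the numbers in the text:

```python
def hetp_mm(length_m, n_plates):
    """Height equivalent to a theoretical plate, H = L/N, in millimeters."""
    return length_m * 1000.0 / n_plates

# 100 m WCOT column with 400,000 theoretical plates:
print(hetp_mm(100, 400_000))  # -> 0.25 (mm per plate)
```

So each theoretical plate occupies only a quarter of a millimeter of column, which is what makes such long capillary columns so efficient.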
Packed columns are made of a glass or a metal tubing which is densely packed with a solid support like diatomaceous earth. Due to the difficulty of packing the tubing uniformly, these types of columns have a larger diameter than open tubular columns and have a limited range of length. As a result, packed columns can only achieve about 50% of the efficiency of a comparable WCOT column. Furthermore, the diatomaceous earth packing is deactivated over time due to the semi-permanent adsorption of impurities within the column. In contrast, FSWC open tubular columns are manufactured to be virtually free of these adsorption problems.
Different types of columns are suited to different applications; depending on the type of sample, some GC columns work better than others. For example, the FSWC column shown in Figure 5 is designed specifically for blood alcohol analysis. It produces fast run times with baseline resolution of key components in under 3 minutes, and it displays enhanced resolution of the ethanol and acetone peaks, which helps in determining blood alcohol content (BAC) levels. This particular column, known as the Zebron-BAC, is made with a polyimide coating on the outside and an inner layer of fused silica, with inner diameters ranging from 0.18 mm to 0.25 mm. Many other Zebron brand columns are designed for other purposes.
Another example of a Zebron GC column is the Zebron-Inferno. Its outer layer is coated with a special type of polyimide designed to withstand high temperatures, and, as shown in Figure 6, it contains an extra layer inside. It can withstand temperatures up to 430 °C and is designed to provide true boiling-point separation of hydrocarbons in distillation methods. It is also used for acidic and basic samples.
Detection Systems
The detector is the device located at the end of the column which provides a quantitative measurement of the components of the mixture as they elute in combination with the carrier gas. In theory, any property of the gaseous mixture that is different from the carrier gas can be used as a detection method. These detection properties fall into two categories: bulk properties and specific properties. Bulk properties, which are also known as general properties, are properties that both the carrier gas and analyte possess but to different degrees. Specific properties, such as detectors that measure nitrogen-phosphorous content, have limited applications but compensate for this by their increased sensitivity.
Each detector has two main parts that together serve as a transducer, converting the detected property change into an electrical signal that is recorded as a chromatogram. The first part is the sensor, placed as close to the column exit as possible to optimize detection. The second is the electronic equipment used to digitize the analog signal so that a computer may analyze the acquired chromatogram. The sooner the analog signal is converted into a digital signal, the better the signal-to-noise ratio, because analog signals are susceptible to many types of interference.
An ideal GC detector is distinguished by several characteristics. The first requirement is adequate sensitivity to provide a high-resolution signal for all components in the mixture; this is an idealized statement, since an arbitrarily small sample would require infinite sensitivity to detect. In modern instruments, detector sensitivities are in the range of 10⁻⁸ to 10⁻¹⁵ g of solute per second. Furthermore, the response must be reproducible, and many detectors will distort peaks if not enough sample is injected. An ideal detector should also be chemically inert, should not alter the sample in any way, and should be able to withstand temperatures from -200 °C to at least 400 °C. In addition, it should have a short, linear response time that is independent of flow rate and extends over several orders of magnitude, and it should be reliable, predictable, and easy to operate.
Understandably, no detector can meet all of these requirements. The next subsections discuss some of the more common types of gas chromatography detectors and the relative advantages and disadvantages of each.
Table 7: Typical gas chromatography detectors and their detection limits.
| Type of Detector | Applicable Samples | Detection Limit |
|---|---|---|
| Mass spectrometer (MS) | Tunable for any sample | 0.25 to 100 pg |
| Flame ionization (FID) | Hydrocarbons | 1 pg/s |
| Thermal conductivity (TCD) | Universal | 500 pg/mL |
| Electron capture (ECD) | Halogenated hydrocarbons | 5 fg/s |
| Atomic emission (AED) | Element-selective | 1 pg |
| Chemiluminescence (CS) | Oxidizing reagent | Dark current of PMT |
| Photoionization (PID) | Vapor and gaseous compounds | 0.002 to 0.02 µg/L |
Mass Spectrometry Detectors
Mass spectrometer (MS) detectors are the most powerful of all gas chromatography detectors. In a GC/MS system, the mass spectrometer scans the masses continuously throughout the separation. When the sample exits the chromatography column, it is passed through a transfer line into the inlet of the mass spectrometer. The sample is then ionized and fragmented, typically by an electron-impact ion source. In this process, the sample is bombarded by energetic electrons, which ionize the molecules by knocking out an electron; further bombardment causes the ions to fragment. The ions are then passed into a mass analyzer, where they are sorted according to their m/z value, or mass-to-charge ratio. Most ions are only singly charged.
The chromatogram indicates the retention times, and the mass spectrometer uses the peaks to determine what kinds of molecules are present in the mixture. The figure below represents a typical mass spectrum of water, with peaks at the appropriate m/z ratios.
Instrumentation
One of the most common types of mass analyzer in GC/MS is the quadrupole ion-trap analyzer, which allows gaseous anions or cations to be held for long periods of time by electric and magnetic fields. A simple quadrupole ion-trap consists of a hollow ring electrode with two grounded end-cap electrodes as seen in figure #. Ions are allowed into the cavity through a grid in the upper end cap. A variable radio-frequency is applied to the ring electrode and ions with an appropriate m/z value orbit around the cavity. As the radio-frequency is increased linearly, ions of a stable m/z value are ejected by mass-selective ejection in order of mass. Ions that are too heavy or too light are destabilized and their charge is neutralized upon collision with the ring electrode wall. Emitted ions then strike an electron multiplier which converts the detected ions into an electrical signal. This electrical signal is then picked up by the computer through various programs. As an end result, a chromatogram is produced representing the m/z ratio versus the abundance of the sample.
GC/MS units are advantageous because they allow the immediate determination of the mass of the analyte and can be used to identify the components of incomplete separations. They are rugged, easy to use, and can analyze the sample almost as quickly as it is eluted. The disadvantages of mass spectrometry detectors are the tendency of samples to degrade thermally before detection and the destruction of the sample by fragmentation.
Flame Ionization Detectors
Flame ionization detectors (FID) are the most generally applicable and most widely used detectors. In a FID, the sample is directed at an air-hydrogen flame after exiting the column. At the high temperature of the air-hydrogen flame, the sample undergoes pyrolysis, or chemical decomposition through intense heating. Pyrolized hydrocarbons release ions and electrons that carry current. A high-impedance picoammeter measures this current to monitor the sample's elution.
FID is advantageous because the detector is unaffected by flow rate, noncombustible gases, and water; these properties give FID high sensitivity and low noise. The unit is both reliable and relatively easy to use. However, the technique does require flammable gas and destroys the sample.
Thermal Conductivity Detectors
Thermal conductivity detectors (TCD) were among the earliest detectors developed for gas chromatography. A TCD works by measuring the change in carrier-gas thermal conductivity caused by the presence of the sample, which has a different thermal conductivity from that of the carrier gas. The design is relatively simple: an electrically heated source is maintained at constant power, and its temperature depends on the thermal conductivities of the surrounding gases. The source is usually a thin wire made of platinum, gold, or tungsten. The resistance of the wire depends on its temperature, which in turn depends on the thermal conductivity of the gas.
TCDs usually employ two detectors, one of which is used as a reference for the carrier gas while the other monitors the thermal conductivity of the carrier gas and sample mixture. Carrier gases such as helium and hydrogen have very high thermal conductivities, so the addition of even a small amount of sample is readily detected.
The advantages of TCDs are the ease and simplicity of use, the devices' broad application to inorganic and organic compounds, and the ability of the analyte to be collected after separation and detection. The greatest drawback of the TCD is the low sensitivity of the instrument in relation to other detection methods, in addition to flow rate and concentration dependency.
Chromatogram
Figure 13 represents a standard chromatogram produced by a TCD detector. In a standard chromatogram regardless of the type detector, the x-axis is the time and the y-axis is the abundance or the absorbance. From these chromatograms, retention times and the peak heights are determined and used to further investigate the chemical properties or the abundance of the samples.
Electron-capture Detectors
Electron-capture detectors (ECD) are highly selective detectors commonly used for environmental samples because the device selectively detects organic compounds with moieties such as halogens, peroxides, quinones, and nitro groups, and gives little to no response for all other compounds. This method is therefore best suited to applications where trace quantities of chemicals such as pesticides must be detected and other chromatographic methods are unfeasible.
The simplest form of ECD uses gaseous electrons from a radioactive β emitter in an electric field. As the analyte leaves the GC column, it passes over this β emitter, which typically consists of nickel-63 or tritium. Electrons from the β emitter ionize the nitrogen carrier gas, causing it to release a burst of electrons. In the absence of organic compounds, a constant standing current is maintained between two electrodes. When organic compounds with electronegative functional groups are introduced, the current decreases significantly as the functional groups capture the electrons.
The advantages of ECDs are the high selectivity and sensitivity towards certain organic species with electronegative functional groups. However, the detector has a limited signal range and is potentially dangerous owing to its radioactivity. In addition, the signal-to-noise ratio is limited by radioactive decay and the presence of O2 within the detector.
Atomic Emission Detectors
Atomic emission detectors (AED), among the newest additions to the gas chromatographer's arsenal, are element-selective detectors that use a plasma, a partially ionized gas, to atomize all of the elements in a sample and excite their characteristic atomic emission spectra. AED is an extremely powerful alternative with wide applicability because it is based on the detection of atomic emission. There are three ways of generating the plasma: microwave-induced plasma (MIP), inductively coupled plasma (ICP), or direct-current plasma (DCP). MIP is the most commonly employed form and is used with a positionable diode array to simultaneously monitor the atomic emission spectra of several elements.
Instrumentation
The components of an atomic emission detector include: 1) an interface that carries the effluent of the capillary GC column into the plasma chamber, 2) a microwave chamber, 3) a cooling system, 4) a diffraction grating with associated optics, and 5) a position-adjustable photodiode array interfaced to a computer.
GC Chemiluminescence Detectors
Chemiluminescence spectroscopy (CS) is a technique in which both qualitative and quantitative properties can be determined from the optical emission of excited chemical species. It is very similar to AES, except that the light is emitted by species energized in a chemical reaction rather than by an external excitation source, and chemiluminescence can occur in either solution or the gas phase, whereas AES is designed for the gas phase. The light for chemiluminescence comes from chemical reactions that produce light energy as a product; this emission is used instead of a separate light source such as a light beam.
Like other methods, CS has its limitations. The major limitation on its detection limits concerns the photomultiplier tube (PMT) used to detect the light emitted from the analyte: the dark current of the PMT sets a floor on the smallest detectable signal.
Photoionization Detectors
Another kind of GC detector, the photoionization detector, exploits the principles of chemiluminescence spectroscopy. The photoionization detector (PID) is a portable vapor and gas detector that selectively determines aromatic hydrocarbons, organo-heteroatom compounds, inorganic species, and other organic compounds. A PID comprises an ultraviolet lamp that emits photons, which are absorbed in an ionization chamber by the compounds exiting a GC column. Only a small fraction of the analyte molecules are actually ionized, so the method is essentially nondestructive and allows analytical results to be confirmed with other detectors. In addition, PIDs are available in portable hand-held models and in a number of lamp configurations, and results are almost immediate. PID is commonly used to detect VOCs in soil, sediment, air, and water, and is often used to detect contaminants in ambient air and soil. A disadvantage of PID is that it cannot detect certain low-molecular-weight hydrocarbons, such as methane and ethane.
Limitations
1. Not suitable for detecting semi-volatile compounds.
2. Only indicates whether volatile organic compounds are present.
3. High concentrations of methane reduce the detector response.
4. Frequent calibration is required.
5. Detection is limited to the parts-per-million range.
6. Environmental interferences, especially water vapor.
7. Strong electrical fields, rapid temperature variation at the detector, and naturally occurring compounds may affect the instrument signal.
Applications
Gas chromatography is a physical separation method in which volatile mixtures are separated. It is used in many fields, including pharmaceuticals, cosmetics, and environmental toxin analysis. Since the samples must be volatile, human breath, blood, saliva, and other secretions containing large amounts of organic volatiles can be easily analyzed using GC. Knowing how much of which compound is present in a given sample is a great advantage in studying effects on human health and on the environment.
Air samples can be analyzed using GC. Most of the time, air quality control units use GC coupled with FID to determine the components of a given air sample. Although other detectors are useful as well, FID is the most appropriate because of its sensitivity and resolution and because it can detect very small molecules.
GC/MS is another useful method, which can determine the components of a given mixture using the retention times and the abundances of the samples. This method can be applied to many pharmaceutical purposes, such as identifying the amount of chemicals in drugs. Moreover, cosmetic manufacturers also use this method to measure how much of each chemical is used in their products.
Equations
The height equivalent to a theoretical plate (HETP) relates the column length (L) to the total number of theoretical plates (N). In some applications, the HETP concept is used in industrial practice to convert a number of theoretical plates into a packing height. HETP can also be expressed with the Van Deemter equation, which is given by
$HETP= A + \dfrac{B}{u} + Cu \tag{1}$
where A, B, and C are constants and u is the linear velocity (carrier-gas flow rate).
• A is the "eddy diffusion" term, which broadens the solute band.
• B is the "longitudinal diffusion" term: the analyte diffuses outward from the center of the band toward its edges, broadening the band.
• C is the "resistance to mass transfer" term, which also broadens the analyte band.
$HETP= \dfrac{L}{N} \tag{2}$
where L is the length of the column and N is the number of theoretical plates. N can in turn be calculated from the retention time tR and the width ω of the elution peak at its base:
$N= 16 \left (\dfrac{t_R}{\omega} \right)^2 \tag{3}$
More plates give better resolution and greater efficiency. Resolution can be determined by $R= 2\left[ \dfrac{(t_R)_B - (t_R)_A}{ W_A +W_B}\right] \tag{4}$ A relationship between plate number and resolution is given by $R= \dfrac{\sqrt{N}}{4} \left( \dfrac{\alpha -1}{\alpha}\right) \left( \dfrac{k'_B}{1+ k'_B}\right) \tag{5}$ where α is the selectivity factor and k'_B is the capacity factor of the more strongly retained of the two solutes. The selectivity and capacity factors can be controlled to improve the separation, for example by changing the mobile- or stationary-phase composition, changing the column temperature, or exploiting a special chemical effect.
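Equations 3 and 4 can be applied directly to a chromatogram. The retention times and peak widths below are hypothetical, chosen only to illustrate the arithmetic:

```python
def plates(t_r, w):
    """Eq. 3: N = 16 * (tR / w)**2."""
    return 16 * (t_r / w) ** 2

def resolution(t_ra, t_rb, w_a, w_b):
    """Eq. 4: R = 2 * (tR_B - tR_A) / (W_A + W_B)."""
    return 2 * (t_rb - t_ra) / (w_a + w_b)

# Hypothetical peaks at 8.0 and 8.8 minutes, base widths 0.40 and 0.44 min.
print(plates(8.0, 0.40))                           # 6400 plates
print(round(resolution(8.0, 8.8, 0.40, 0.44), 2))  # R near 1.9
```

An R of about 1.5 or greater is usually taken as baseline resolution, so these two hypothetical peaks would be fully separated.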
Contributors and Attributions
• Kyaw Thet (UC Davis), Nancy Woo (UC Davis)
High Performance Liquid Chromatography (HPLC) is an analytical technique used for the separation of compounds soluble in a particular solvent.
History of HPLC
Liquid chromatography was initially discovered as an analytical technique in the early twentieth century and was first used as a method of separating colored compounds. This is where the name chromatography (chroma, "color," and graphy, "writing") was derived. A Russian botanist named Mikhail S. Tswett used a rudimentary form of chromatographic separation to purify mixtures of plant pigments into their pure constituents. He separated the pigments based on their interaction with a stationary phase, which is essential to any chromatographic separation. The stationary phase he used was powdered chalk and alumina; the mobile phase in his separation was the solvent. After the solid stationary phase was packed into a glass column (essentially a long, hollow glass tube), he poured the mixture of plant pigments and solvent into the top of the column. He then poured additional solvent into the column until the samples were eluted at the bottom. The crucial result of this process was that the plant pigments separated into bands of pure components as they passed through the stationary phase. Modern high performance liquid chromatography (HPLC) has its roots in this separation, the first form of liquid chromatography. The chromatographic process has been significantly improved over the last hundred years, yielding greater separation efficiency, versatility, and speed.
Affinities for Mobile and Stationary Phases
All chromatographic separations, including HPLC operate under the same basic principle; every compound interacts with other chemical species in a characteristic manner. Chromatography separates a sample into its constituent parts because of the difference in the relative affinities of different molecules for the mobile phase and the stationary phase used in the separation.
Distribution Constant
All chemical reactions have a characteristic equilibrium constant. For the reaction
$A_{aq} + B_s \rightleftharpoons AB_s \label{1}$
There is a chemical equilibrium constant Keq that dictates what percentage of compound A will be in solution and what percentage will be bound to the stationary compound B. During a chromatographic separation, there is similar relationship between compound A and the solvent, or mobile phase, C. This will yield an overall equilibrium equation which dictates the quantity of A that will be associated with the stationary phase and the quantity of A that will be associated with the mobile phase.
$A_{mobile} \rightleftharpoons A_{stationary} \label{2}$
The equilibrium between the mobile phase and stationary phase is given by the constant Kc.
$K_c = \dfrac{(a_A )_S}{(a_A )_M} \approx \dfrac{c_S}{c_M} \label{3}$
where Kc, the distribution constant, is the ratio of the activity of compound A in the stationary phase to its activity in the mobile phase. In most separations, which involve low concentrations of the species to be separated, the activity of A in each phase is approximately equal to its concentration in that phase. The distribution constant indicates the amount of time that compound A spends adsorbed to the stationary phase, as opposed to the amount of time it spends solvated by the mobile phase. This relationship determines the amount of time it takes for compound A to travel the length of the column: the more time A spends adsorbed to the stationary phase, the longer it takes. The amount of time between the injection of a sample and its elution from the column is known as the retention time; it is given the symbol tR.
The amount of time required for a sample that does not interact with the stationary phase, or has a Kc equal to zero, to travel the length of the column is known as the void time, tM. No compound can be eluted in less than the void time.
Retention Factor
Because retention depends on the particular column geometry and solvent flow rate, a quantitative measure of the affinity of a compound for a particular set of mobile and stationary phases that does not depend on those factors is useful. The retention factor, k, can be derived from Kc and is independent of the column size and the solvent flow rate.
$k_C = \dfrac{K_C V_S }{V_M } \label{4}$
The retention factor is calculated by multiplying the distribution constant by the volume of stationary phase in the column and dividing by the volume of mobile phase in the column.
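As a quick sketch of Equation \ref{4}, the snippet below computes a retention factor from an assumed distribution constant and assumed phase volumes; all numerical values are hypothetical, chosen only for illustration.

```python
# Hypothetical values chosen only for illustration -- not from any real column.
K_c = 10.0   # distribution constant (c_S / c_M)
V_S = 0.4    # volume of stationary phase in the column (mL)
V_M = 2.0    # volume of mobile phase in the column (mL)

# Retention factor (Eq. 4): k = K_c * V_S / V_M
k = K_c * V_S / V_M
print(k)  # → 2.0
```

A larger k means the compound spends proportionally more time in the stationary phase and elutes later.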
Selectivity
In order to separate two compounds, their respective retention factors must be different; otherwise both compounds would be eluted simultaneously. The selectivity factor is the ratio of the retention factors.
$\alpha = \dfrac{k_B }{k_A} \label{5}$
Where B is the compound that is retained more strongly by the column and A is the compound with the faster elution time.
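Equation \ref{5} is a simple ratio; with hypothetical retention factors for two analytes it can be evaluated as follows.

```python
# Hypothetical retention factors, chosen only for illustration.
k_A = 2.0   # compound eluting first
k_B = 3.0   # compound retained more strongly by the column

alpha = k_B / k_A   # selectivity factor (Eq. 5); >= 1 by convention
print(alpha)        # → 1.5
```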
Band Broadening
As a compound passes through the column it slowly diffuses away from the initial injection band, which is the area of greatest concentration. The initial, narrow, band that contained all of the sample becomes broader the longer the analyte remains in the column. This band broadening increases the time required for complete elution of a particular compound and is generally undesirable. It must be minimized so that overly broad elution bands do not overlap with one another. We will see how this is measured quantitatively when we discuss peak resolution momentarily.
Separation Efficiency
The overriding purpose of a chromatographic separation is just that, to separate two or more compounds contained in solution. In analytical chemistry, a quantitative metric of every experimental parameter is desired, and so separation efficiency is measured in plates. The concept of plates as a separation metric arose from the original method of fractional distillation, where compounds were separated based on their volatilities through many simultaneous simple distillations, each occurring on one of many distillation plates. In chromatography, no actual plates are used, but the concept of a theoretical plate, as a distinct region where a single equilibrium is maintained, remains. In a particular liquid chromatographic separation, the number of theoretical plates and the height equivalent to a theoretical plate (HETP) are related simply by the length of the column
$N = \dfrac{L}{H} \label{6}$
Where N is the number of theoretical plates, L is the length of the column, and H is the height equivalent to a theoretical plate. The plate height is given by the variance (standard deviation squared) of an elution peak divided by the length of the column.
$H = \dfrac{\sigma ^2}{L} \label{7}$
The standard deviation of an elution peak can be approximated by treating a Gaussian elution peak as roughly triangular; in that case the plate height is given by the width of the elution peak squared, times the length of the column, divided by sixteen times the square of that peak's retention time.
$H = \dfrac{LW^2 }{16t_R^2} \label{8}$
Using the relationship between plate height and number of plates, the number of plates can also be found in terms of retention time and peak width.
$N = 16 \left( \dfrac{t_R}{W} \right)^2\label{9}$
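Equations \ref{6} and \ref{9} can be combined into a short calculation; the column length, retention time, and peak width below are hypothetical values chosen only for illustration.

```python
# Hypothetical chromatogram values, chosen only for illustration.
L   = 25.0   # column length (cm)
t_R = 8.0    # retention time of the peak (min)
W   = 0.4    # baseline width of the peak (min)

N = 16 * (t_R / W) ** 2   # number of theoretical plates (Eq. 9)
H = L / N                 # height equivalent to a theoretical plate (Eq. 6)
print(N, H)               # → 6400.0 plates, H ≈ 0.0039 cm
```

Narrower peaks at the same retention time give more plates and hence a smaller plate height, i.e. a more efficient column.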
In order to optimize separation efficiency, it is necessary to maximize the number of theoretical plates, which requires reducing the plate height. The plate height is related to the flow rate of the mobile phase, so for a fixed set of mobile phase, stationary phase, and analytes, separation efficiency can be maximized by optimizing the flow rate as dictated by the van Deemter equation.
$H = A + \dfrac{B}{v} + Cv \label{10}$
The three constants in the van Deemter equation describe possible causes of band broadening in a particular separation. $A$ is a constant that represents the different possible paths an analyte can take through the stationary phase; it decreases as the particle size of the column packing is made smaller. $B$ is a constant that describes the longitudinal diffusion that occurs in the system. $C$ is a constant that describes the rate of adsorption and desorption of the analyte to the stationary phase. $A$, $B$, and $C$ are constant for any given system (fixed analyte, stationary phase, and mobile phase), so the flow rate must be optimized accordingly. If the flow rate is too low, the longitudinal diffusion term ($\dfrac{B}{v}$) increases significantly, which increases the plate height: at low flow rates the analyte spends more time at rest in the column, so longitudinal diffusion is a more significant problem. If the flow rate is too high, the mass transfer term ($Cv$) increases and reduces column efficiency: at high flow rates, adsorption of the analyte to the stationary phase causes some of the sample to lag behind, which also leads to band broadening.
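One way to see the trade-off in Equation \ref{10} is to evaluate the plate height over a range of flow rates and keep the smallest value. The $A$, $B$, and $C$ coefficients below are hypothetical, since real values depend on the specific column, analyte, and mobile phase.

```python
# Hypothetical van Deemter coefficients (H in cm, v in cm/s); real values
# depend on the particular column, analyte, and mobile phase.
A, B, C = 0.005, 0.01, 0.02

def plate_height(v):
    """Plate height from the van Deemter equation: H = A + B/v + C*v (Eq. 10)."""
    return A + B / v + C * v

# Evaluate H over a grid of linear flow rates and keep the best one.
rates = [0.1 * i for i in range(1, 31)]   # 0.1 to 3.0 cm/s
v_best = min(rates, key=plate_height)
print(v_best, plate_height(v_best))
```

With these coefficients the minimum falls near 0.7 cm/s: slower flow lets the $B/v$ term dominate, faster flow lets the $Cv$ term dominate.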
Resolution
The resolution of an elution is a quantitative measure of how well two elution peaks can be differentiated in a chromatographic separation. It is defined as the difference in retention times between the two peaks, divided by the combined widths of the elution peaks.
$R_S = \dfrac{2\left[ {\left( {t_R } \right)_B - \left( {t_R } \right)_A } \right]}{W_B + W_A} \label{11}$
Where B is the species with the longer retention time, and tR and W are the retention time and elution peak width respectively. If the resolution is greater than one, the peaks can usually be differentiated successfully.
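Equation \ref{11} can be evaluated directly; the retention times and baseline peak widths below are hypothetical values chosen only for illustration.

```python
# Hypothetical retention times and baseline peak widths (same time units).
t_RA, W_A = 7.5, 0.5    # earlier-eluting compound A
t_RB, W_B = 8.5, 0.5    # later-eluting compound B

R_s = 2 * (t_RB - t_RA) / (W_B + W_A)   # resolution (Eq. 11)
print(R_s)   # → 2.0, comfortably above 1, so the peaks are well separated
```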
HPLC as a solution to efficiency problems
While all of these basic principles hold true for all chromatographic separations, HPLC was developed as a method to solve some of the shortcomings of standard liquid chromatography. Classic liquid chromatography has several severe limitations as a separation method. When the solvent is driven by gravity, the separation is very slow, and if the solvent is driven by vacuum in a standard packed column, the plate height increases and the benefit of the vacuum is negated. The limiting factor in liquid chromatography was originally the size of the column packing; once columns could be packed with particles as small as 3 µm, faster separations could be performed in smaller, narrower columns. High pressure was required to force the mobile phase and sample through these new columns, and previously unneeded apparatus was required to maintain reproducibility of results in these new instruments. The use of high pressures in a narrow column allowed a more effective separation to be achieved in much less time than was required for previous forms of liquid chromatography.
Apparatus
Specialized apparatus is required for an HPLC separation because of the high pressures and low tolerances under which the separation occurs. If the results are to be reproducible, then the conditions of the separation must also be reproducible. Thus HPLC equipment must be of high quality; it is therefore expensive.
Solvent
The mobile phase, or solvent, in HPLC is usually a mixture of polar and non-polar liquid components whose respective concentrations are varied depending on the composition of the sample. Because the solvent is passed through a very narrow bore column, any contaminants could at worst plug the column, or at the very least add variability to the retention times between repeated trials. Therefore the HPLC solvent must be kept free of particulates and of dissolved gases, which could come out of solution mid-separation.
Column
In the HPLC column, the components of the sample separate based on their differing interactions with the column packing. If a species interacts more strongly with the stationary phase in the column, it will spend more time adsorbed to the column's adsorbent and will therefore have a greater retention time. Columns can be packed with solids such as silica or alumina; these columns are called homogeneous columns. If the stationary phase in the column is a liquid, the column is deemed a bonded column. Bonded columns contain a liquid stationary phase bonded to a solid support, which is again usually silica or alumina. In HPLC, the value of the constant C described in the van Deemter equation is proportional to the diameter of the particles that constitute the column's packing material.
Pump
The HPLC pump drives the solvent and sample through the column. To reduce variation in the elution, the pump must maintain a constant, pulse-free flow rate; this is achieved with multi-piston pumps. The presence of two pistons allows the flow rate to be controlled by one piston as the other recharges. A syringe pump can be used for even greater control of flow rate; however, the syringe pump is unable to produce as much pressure as a piston pump, so it cannot be used in all HPLC applications.
Detector
The HPLC detector, located at the end of the column, must register the presence of various components of the sample, but must not detect the solvent. For that reason there is no universal detector that works for all separations. A common HPLC detector is a UV absorption detector, as most medium to large molecules absorb UV radiation. Detectors that measure fluorescence and refractive index are also used for special applications. A relatively new development is the combination of an HPLC separation with an NMR detector. This allows the pure components of the sample to be identified and quantified by nuclear magnetic resonance after having been separated by HPLC, in one integrated process.
Technique
Normal Phase vs. Reverse Phase
If the stationary phase is more polar than the mobile phase, the separation is deemed normal phase. If the stationary phase is less polar than the mobile phase, the separation is reverse phase. In reverse phase HPLC the retention time of a compound increases with decreasing polarity of the particular species. The key to an effective and efficient separation is to determine the appropriate ratio between polar and non-polar components in the mobile phase. The goal is for all the compounds to elute in as short a time as possible, while still allowing for the resolution of individual peaks. Typical columns for normal phase separation are packed with alumina or silica. Alkyl, aliphatic or phenyl bonded phases are typically used for reverse phase separation.
Gradient Elution vs. Isocratic Elution
If the composition of the mobile phase remains constant throughout the HPLC separation, the separation is deemed an isocratic elution. Often the only way to elute all of the compounds in the sample in a reasonable amount of time, while still maintaining peak resolution, is to change the ratio of polar to non-polar compounds in the mobile phase during the sample run. Known as gradient chromatography, this is the technique of choice when a sample contains components of a wide range of polarities. For a reverse phase gradient, the solvent starts out relatively polar and slowly becomes more non-polar. The gradient elution offers the most complete separation of the peaks, without taking an inordinate amount of time. A sample containing compounds of a wide range of polarities can be separated by a gradient elution in a shorter time period without a loss of resolution in the earlier peaks or excessive broadening of later peaks. However, gradient elution requires more complex and expensive equipment and it is more difficult to maintain a constant flow rate while there are constant changes in mobile phase composition. Gradient elution, especially at high speeds, brings out the limitations of lower quality experimental apparatus, making the results obtained less reproducible in equipment already prone to variation. If the flow rate or mobile phase composition fluctuates, the results will not be reproducible.
Applications
HPLC can be used in both qualitative and quantitative applications, that is, for both compound identification and quantification. Normal phase HPLC is only rarely used now; almost all HPLC separations can be performed in reverse phase. Reverse phase HPLC (RPLC) is ineffective for only a few separation types: it cannot separate inorganic ions (they can be separated by ion exchange chromatography); it cannot separate polysaccharides (they are too hydrophilic for any solid phase adsorption to occur) or polynucleotides (they adsorb irreversibly to the reverse phase packing); and extremely hydrophobic compounds cannot be separated effectively by RPLC (there is little selectivity). Aside from these few exceptions, RPLC is used for the separation of almost all other compound varieties. RPLC can effectively separate similar simple and aromatic hydrocarbons, even those that differ only by a single methylene group. RPLC effectively separates simple amines, sugars, lipids, and even pharmaceutically active compounds. RPLC is also used in the separation of amino acids, peptides, and proteins, as well as other molecules of biological origin. The determination of caffeine content in coffee products is routinely done by RPLC in commercial applications in order to guarantee the purity and quality of ground coffee. HPLC is a useful addition to an analytical arsenal, especially for the separation of a sample before further analysis.
Contributors and Attributions
• Matthew Barkovich (UCD)
Liquid chromatography is a technique used to separate a sample into its individual parts. This separation occurs based on the interactions of the sample with the mobile and stationary phases. Because there are many stationary/mobile phase combinations that can be employed when separating a mixture, there are several different types of chromatography that are classified based on the physical states of those phases. Liquid-solid column chromatography, the most popular chromatography technique and the one discussed here, features a liquid mobile phase which slowly filters down through the solid stationary phase, bringing the separated components with it.
General Scheme
Components within a mixture are separated in a column based on each component's affinity for the mobile phase. So, if the components have different polarities and a mobile phase of a distinct polarity is passed through the column, one component will migrate through the column faster than the other. Because molecules of the same compound generally move in groups, the compounds separate into distinct bands within the column. If the components being separated are colored, their corresponding bands can be seen. Otherwise, as in high performance liquid chromatography (HPLC), the presence of the bands is detected using other instrumental analysis techniques, such as UV-VIS spectroscopy. The following figure shows the migration of two components within a mixture:
In the first step, the mixture of components sits atop the wet column. As the mobile phase passes through the column, the two components begin to separate into bands. In this example, the red component has a stronger affinity for the mobile phase while the blue component remains relatively fixed in the stationary phase. As each component is eluted from the column, it can be collected separately and analyzed by whatever method is favored. The relative polarities of these two compounds are determined by the polarities of the stationary and mobile phases. If this experiment were done as normal phase chromatography, the red component would be less polar than the blue component; the same result obtained by reverse phase chromatography would instead show that the red component is more polar than the blue component.
History of Liquid Chromatography
The first known chromatographic separation is traditionally attributed to the Russian botanist Mikhail Tswett, who used columns of calcium carbonate to separate plant compounds during his research on chlorophyll in the early twentieth century (around 1901). Chromatography developed further when Archer John Porter Martin and Richard Laurence Millington Synge were awarded the Nobel Prize in 1952; they established the basics of partition chromatography and also developed plate theory.
Column Chromatography
The stationary phase in column chromatography is most typically a fine adsorbent solid: a solid that is able to hold onto gas or liquid particles on its outer surface. The column typically used in column chromatography looks similar to a Pasteur pipette (Pasteur pipettes are in fact used as columns in small-scale column chromatography). The narrow exit of the column is first plugged with glass wool or a porous plate in order to support the column packing material and keep it from escaping the tube. Then the adsorbent solid (usually silica) is tightly packed into the glass tube to make the separating column. The packing of the stationary phase into the glass column must be done carefully to create a uniform distribution of material; a uniform distribution of adsorbent is important to minimize the presence of air bubbles and/or channels within the column. To finish preparing the column, the solvent to be used as the mobile phase is passed through the dry column. The column is then said to be "wetted," and it must remain wet throughout the entire experiment. Once the column is correctly prepared, the sample to be separated is placed at the top of the wet column.
Components
Chromatography is effective because different components within a mixture are attracted to the adsorbent surface of the stationary phase to varying degrees, depending on each component's polarity, its unique structural characteristics, and its interaction with the mobile phase. The separation achieved using column chromatography is based on factors associated with the sample: a component that is more attracted to the stationary phase migrates down the separating column at a slower rate than a component that has a higher affinity for the mobile phase. The efficacy of the separation also depends on the nature of the adsorbent solid used and the polarity of the mobile phase solvent.
Stationary Phase
The type of adsorbent material used as the stationary phase is vital for efficient separation of components in a mixture. Several different solids may be employed; adsorbent material can be chosen based on particle size and activity of the solid. The activity of the adsorbent is represented by its activity grade, which is a measure of an adsorbent's attraction for solutes in the sample solution. The solids with the highest activity grading are those that are completely anhydrous. Silica gel and alumina are among the most popular adsorbents used. Alumina caters well to samples that require specific conditions to separate adequately. However, the use of non-neutral stationary phases should be approached with great caution: an increase or decrease of pH in the alumina stationary phase may allow chemical reactions within the components of the mixture. Silica gel is less active than alumina and can generally be used as an all-around adsorbent for most components in solution. Silica is also preferred because of its high sample capacity, making it one of the most popular adsorbent materials.
Mobile Phase
The proper mobile phase must also be chosen for the best separation of the components in an unknown mixture. This eluent is chosen based on its polarity relative to the sample and the stationary phase. With a strongly polar adsorbent such as alumina as the stationary phase, a polar solvent used as the mobile phase will be adsorbed by the stationary phase, which may displace molecules of the sample and cause the sample components to elute very quickly. This provides little separation, so it is best to start elution with a solvent of lower polarity to elute first the components that are weakly adsorbed to the stationary phase. The solvent may also be changed during the separation in order to change the polarity and therefore elute the various components separately in a more timely manner. This method is very similar to the gradient method of separation used in high performance liquid chromatography (HPLC).
Types of Chromatography
• Normal Phase Chromatography: The components in a mixture will elute at different rates depending on each one's polarity relative to the next. When the column to be used for the separation is more polar than the mobile phase, the experiment is said to be a normal phase method. In normal phase chromatography, the stationary phase is polar, and so the more polar solutes being separated will adhere more to the stationary adsorbent phase. When the solvent or gradient of solvents is passed through the column, the less polar components will be eluted faster than the more polar ones. The components can then be collected separately, assuming adequate separation was achieved, in order of increasing polarity. This method of chromatography is not unique to liquid-solid column chromatography and is often used when performing High Performance Liquid Chromatography (HPLC). Although HPLC is an example of liquid-liquid chromatography, in which both the stationary and mobile phases are liquid, normal phase elution is achieved by coating the solid adsorbent column with a polar liquid.
• Reverse Phase Chromatography: In reverse phase chromatography, the polarities of the mobile and stationary phases are opposite to what they are in normal phase chromatography. Instead of choosing a non-polar mobile phase solvent, a polar solvent will be chosen. Or, if the experiment requires a solvent polarity gradient, the gradient must be carried out with the most polar solvent first and the least polar solvent last (the reverse order of normal phase chromatography). Common polar solvents and solvent mixtures include water, methanol, and acetonitrile. It is slightly more difficult and expensive to obtain a column where the stationary phase is non-polar, as all solid adsorbents are polar by nature. The non-polar stationary phase can be prepared by coating silanized silica gel with a non-polar liquid; silanizing the silica gel reduces its ability to adsorb polar molecules. Common non-polar liquid phases include silicone and various hydrocarbons. An alternative to this type of column is used in HPLC, in which a bonded liquid phase is used as the stationary phase: the less polar liquid is chemically bonded to the polar silica gel in the column. Using reverse phase, the most polar compounds in the sample solution are eluted first, with the following components eluting in order of decreasing polarity.
• Flash Chromatography: Because the elution rate of the mobile phase in regular column chromatography as described above is controlled primarily by gravity, chromatographic runs can potentially take a very long time to complete. Flash chromatography is a modified method of column chromatography in which the mobile phase moves faster through the column with the help of either pressurized air or a vacuum. In the vacuum variant, a vacuum line is attached to the bottom of the separating column; this pulls the mobile phase solvent, and the components in the mobile phase, through the column at a faster rate than gravity does. Flash chromatography powered by compressed air or air pumps works by pushing the mobile phase through the column, achieving faster mobile phase flow rates just as vacuum-facilitated flash chromatography does. For this method, a pressurized air line is attached to the top of the separating column; for this reason flash chromatography is also referred to as medium pressure chromatography. An inert gas is used so as not to interact with the mobile or stationary phase or the component mixture; nitrogen is commonly used for this method of chromatography. Many instruments are available to perform flash chromatography as efficiently as possible: expensive columns, pumps, and flow controllers that maintain a constant and precise air pressure or vacuum to the column in order to obtain a steady flow rate of the mobile phase and favorable separation of the samples in solution. However, less expensive alternatives are available, as flow controllers can be made so that pressurized air can be used to facilitate flash chromatography.
By using such a home-made flow controller, purchasing expensive air pumps can be avoided. This approach is useful only to an extent: since the flow rate of the pressurized gas is controlled manually, it is more difficult to quantify the flow rate and keep it constant. Commercial flash chromatography instruments can set flow rates digitally and keep them constant.
Flash chromatography is similar to HPLC in that the mobile phase is moved through the column by applying pressure to the solvent in order to achieve a quicker result. However, in flash chromatography, only medium pressure is applied to the system within the solution. In HPLC, pressures as high as 5000 psi can be applied in the column by high performance pumps.
Other Varieties of Liquid Chromatography
• Partition Chromatography: In this method, both the stationary phase and the mobile phase are liquid. The stationary phase liquid would be an immiscible liquid with the mobile phase.
• Liquid-Solid Chromatography: This method is similar to partition chromatography only that the stationary phase has been replaced with a bonded rigid silica or silica based component onto the inside of the column. Sometimes the stationary phase may be alumina. The analytes that are in the mobile phase that have an affinity for the stationary phase will be adsorbed onto it and those that do not will pass through having shorter retention times. Both normal and reverse phases of this method are applicable.
• Ion Exchange or Ion Chromatography: This is a type of chromatography that is applied to separate and determine ions on columns that have a low ion exchange capacity. It is based on the equilibrium of ion exchange between the ions in solution and the counter-ions that pair with oppositely charged functional groups fixed to the stationary phase. The stationary phase has either positive or negative functional groups affixed to it, usually sulfonate (-SO3-) or a quaternary amine (-N(CH3)3+), giving a cation and an anion exchanger respectively.
• Size Exclusion Chromatography: Size exclusion chromatography separates molecules by their size. This is done by packing the stationary phase with small particles of silica or polymer to form uniform pores. The smaller molecules get trapped in the pores of the particles and elute from the column more slowly than the larger molecules; thus, the retention time depends on the size of the molecules. Larger molecules are swept along in the mobile phase and therefore have a shorter retention time. Notice also that in this type of chromatography there is no interaction, physical or chemical, between the analyte and the stationary phase.
• Affinity Chromatography: This type of chromatography involves binding a reagent to the analyte molecules in a sample. After the binding, only the molecules that have this ligand are retained in the column; the unbound analyte passes through in the mobile phase. The stationary phase is usually agarose or a porous glass bead that is able to immobilize the bonded molecule. It is possible to change the elution conditions by manipulating the pH or the ionic strength of the binding ligand. This method is often used in biochemistry for the purification of proteins: a ligand tag is bonded to the protein, and after the separation the tag is removed, yielding the pure protein.
• Chiral Chromatography: Chiral chromatography enables the use of liquid chromatography to separate a racemic mixture into its enantiomers. A chiral additive can be added to the mobile phase, or a stationary phase that has chiral properties can be used; a chiral stationary phase is the most popular option. The stationary phase has to be chiral in order to recognize the chirality of the analyte; this creates attractive forces between the analyte and the stationary phase and can also form inclusion complexes.
Plate Theory and Rate Theory
Plate theory and rate theory are two theories that are applicable to chromatography. Plate theory describes a chromatography system as being in equilibrium between the stationary and mobile phases, viewing the column as divided into a number of imaginary theoretical plates. This is significant because as the number of plates in a column increases (that is, as the height equivalent to a theoretical plate, or HETP, decreases), so does the separation of components. Plate theory also provides an equation that describes the elution curve, or chromatogram, of a solute, and it can be used to find the elution volume and the column efficiency.
$HETP = \dfrac{L}{N} \nonumber$
where L= column length and N= number of theoretical plates
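As a quick numerical check of this relationship, with hypothetical values for the column length and plate count:

```python
# Hypothetical column, chosen only for illustration.
L = 30.0      # column length (cm)
N = 12000     # number of theoretical plates

HETP = L / N  # height equivalent to a theoretical plate (cm)
print(HETP)   # → 0.0025
```

For a fixed column length, a smaller HETP means more plates and therefore better separation.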
Rate theory, on the other hand, describes the migration of molecules in a column, including band shape, broadening, and the diffusion of a solute. Rate theory follows the van Deemter equation, which is the most appropriate for predicting dispersion in liquid chromatography columns. It does this by taking into account the various pathways that a sample must travel through a column. Using the van Deemter equation, it is possible to find the optimum velocity and the minimum plate height.
$H=A+\dfrac{B}{u} + Cu \nonumber$
where $A$ = Eddy-Diffusion, $B$ = Longitudinal Diffusion, $C$ = mass transfer, $u$ = linear velocity
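Setting the derivative of the van Deemter equation to zero (dH/du = −B/u² + C = 0) gives the optimum linear velocity u = √(B/C) and the minimum plate height H = A + 2√(BC). A short sketch, using hypothetical coefficients chosen only for illustration:

```python
import math

# Hypothetical van Deemter coefficients, chosen only for illustration.
A, B, C = 0.005, 0.01, 0.02

u_opt = math.sqrt(B / C)          # optimum linear velocity (cm/s)
H_min = A + 2 * math.sqrt(B * C)  # minimum plate height (cm)
print(u_opt, H_min)
```

Note that the eddy-diffusion term A sets a floor on the plate height: no choice of velocity can reduce H below A.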
Instrumentation
This schematic shows the basic instrumentation of a liquid-solid chromatograph. The solvent inlet brings in the mobile phase, which is then pumped through the inline solvent filter and passed through the injection valve. This is where the mobile phase mixes with the injected sample. It then passes through another filter and through the column, where the sample is separated into its components. The detector registers the separation of the analytes, and the recorder, usually a computer, records this information. The sample then passes through a backpressure regulator and into waste.
A basic LC system consists of (a) a solvent inlet filter, (b) pump, (c) inline solvent filter, (d) injection valve, (e) precolumn filter, (f) column, (g) detector, (h) recorder, (i) backpressure regulator, and a (j) waste reservoir.
Advantages / Disadvantages
Liquid-solid column chromatography is an effective separation technique when all appropriate parameters and equipment are used. This method is especially effective when the compounds within the mixture are colored, as this gives the scientist the ability to see the separation of the bands for the components in the sample solution. Even if the bands are not visible, certain components can be observed by other visualization methods. One method that may work for some compounds is irradiation with ultraviolet light. This makes it relatively easy to collect samples one after another. However, if the components within the solution are not visible by any of these methods, it can be difficult to determine the efficacy of the separation that was performed. In this case, separate collections from the column are taken at specified time intervals. Since the human eye is the primary detector for this procedure, it is most effective when the bands of the distinct compounds are visible.
Liquid-solid column chromatography is also a less expensive procedure than other methods of separation (HPLC, GC, etc.). This is because the most basic forms of column chromatography do not require the help of expensive machinery like the high-pressure solvent pumps used in HPLC. In methods besides flash chromatography, the flow of the mobile phase, the detection of each separation band, and the collection of each component are all done manually by the scientist. Although this introduces many potential instances of experimental error, this method of separation can be very effective when done correctly. Also, the glassware used for liquid-solid column chromatography is relatively inexpensive and readily available in many laboratories. Burets are commonly used as the separating column, which in many cases will work just as well as an expensive pre-prepared column. For smaller scale chromatography, Pasteur pipettes are often used.
Flash chromatography has the potential to be more costly than the previous methods of separation, especially when sophisticated air pumps and vacuum pumps are needed. When these pieces of machinery are not needed, however, a vacuum line can instead be connected to an aspirator on a water faucet. Also, home-made pressurized air flow controllers can be made as shown previously.
Contributors and Attributions
• Jennifer Betancourt (UC Davis), Sean Gottlieb (UC Davis)
An introduction to various forms of chromatography: thin layer, column, high performance liquid (HPLC), gas-liquid and paper.
• A. Introducing Chromatography: Thin Layer Chromatography
This page is an introduction to chromatography using thin layer chromatography as an example. Although if you are a beginner you may be more familiar with paper chromatography, thin layer chromatography is equally easy to describe and more straightforward to explain.
• B. Column Chromatography
This page shows how the same principles used in thin layer chromatography can be applied on a larger scale to separate mixtures in column chromatography. Column chromatography is often used to purify compounds made in the lab.
• C. High Performance Liquid Chromatography (HPLC)
High performance liquid chromatography is a powerful tool in analysis. This page looks at how it is carried out and shows how it uses the same principles as in thin layer chromatography and column chromatography.
• D. Gas-Liquid Chromatography
• E. Paper Chromatography
This page is an introduction to paper chromatography - including two way chromatography.
V. Chromatography
Chromatography is used to separate mixtures of substances into their components. All forms of chromatography work on the same principle. They all have a stationary phase (a solid, or a liquid supported on a solid) and a mobile phase (a liquid or a gas). The mobile phase flows through the stationary phase and carries the components of the mixture with it. Different components travel at different rates.
Thin layer chromatography is done exactly as it says - using a thin, uniform layer of silica gel or alumina coated onto a piece of glass, metal or rigid plastic. The silica gel (or the alumina) is the stationary phase. The stationary phase for thin layer chromatography also often contains a substance which fluoresces in UV light - for reasons you will see later. The mobile phase is a suitable liquid solvent or mixture of solvents.
We'll start with a very simple case - just trying to show that a particular dye is in fact a mixture of simpler dyes.
A pencil line is drawn near the bottom of the plate and a small drop of a solution of the dye mixture is placed on it. Any labelling on the plate to show the original position of the drop must also be in pencil. If any of this was done in ink, dyes from the ink would also move as the chromatogram developed. When the spot of mixture is dry, the plate is stood in a shallow layer of solvent in a covered beaker. It is important that the solvent level is below the line with the spot on it.
The reason for covering the beaker is to make sure that the atmosphere in the beaker is saturated with solvent vapor. To help this, the beaker is often lined with some filter paper soaked in solvent. Saturating the atmosphere in the beaker with vapor stops the solvent from evaporating as it rises up the plate. As the solvent slowly travels up the plate, the different components of the dye mixture travel at different rates and the mixture is separated into different coloured spots.
The diagram shows the plate after the solvent has moved about half way up it. The solvent is allowed to rise until it almost reaches the top of the plate. That will give the maximum separation of the dye components for this particular combination of solvent and stationary phase.
Measuring Rf values
If all you wanted to know is how many different dyes made up the mixture, you could just stop there. However, measurements are often taken from the plate in order to help identify the compounds present. These measurements are the distance traveled by the solvent, and the distance traveled by individual spots. When the solvent front gets close to the top of the plate, the plate is removed from the beaker and the position of the solvent is marked with another line before it has a chance to evaporate.
These measurements are then taken:
The Rf value for each dye is then worked out using the formula:
$R_f= \dfrac{\text{distance traveled by sample}}{\text{distance traveled by solvent}} \nonumber$
For example, if the red component traveled 1.7 cm from the base line while the solvent had traveled 5.0 cm, then the $R_f$ value for the red dye is:
\begin{align*}R_f &= \dfrac{1.7}{5.0} \\[4pt] &= 0.34 \end{align*}
If you could repeat this experiment under exactly the same conditions, then the Rf values for each dye would always be the same. For example, the Rf value for the red dye would always be 0.34. However, if anything changes (the temperature, the exact composition of the solvent, and so on), that is no longer true. You have to bear this in mind if you want to use this technique to identify a particular dye. We'll look at how you can use thin layer chromatography for analysis further down the page.
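The Rf arithmetic above is simple enough to capture in a short function - here is a sketch using the red dye figures from the worked example:

```python
def rf_value(sample_distance_cm, solvent_distance_cm):
    """Rf = distance travelled by sample / distance travelled by solvent.
    Always between 0 and 1, since a spot cannot outrun the solvent front."""
    if not 0 <= sample_distance_cm <= solvent_distance_cm:
        raise ValueError("sample distance must lie between 0 and the solvent distance")
    return sample_distance_cm / solvent_distance_cm

# The red dye from the text: spot travels 1.7 cm while the solvent travels 5.0 cm
print(round(rf_value(1.7, 5.0), 2))  # 0.34
```

Because Rf is a ratio, it does not matter how far you happen to let the solvent run on a given day - only that temperature, solvent and stationary phase stay the same.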
What if the substances you are interested in are colorless?
There are two simple ways of getting around this problem.
Using fluorescence
You may remember that I mentioned that the stationary phase on a thin layer plate often has a substance added to it which fluoresces when exposed to UV light. That glow is masked at the positions where the spots sit on the final chromatogram - even if those spots are invisible to the eye. So if you shine UV light on the plate, the whole plate glows apart from where the spots are, and the spots show up as darker patches.
While the UV is still shining on the plate, you obviously have to mark the positions of the spots by drawing a pencil circle around them. As soon as you switch off the UV source, the spots will disappear again.
Showing the spots up chemically
In some cases, it may be possible to make the spots visible by reacting them with something which produces a coloured product. A good example of this is in chromatograms produced from amino acid mixtures. The chromatogram is allowed to dry and is then sprayed with a solution of ninhydrin. Ninhydrin reacts with amino acids to give coloured compounds, mainly brown or purple.
In another method, the chromatogram is again allowed to dry and then placed in an enclosed container (such as another beaker covered with a watch glass) along with a few iodine crystals. The iodine vapor in the container may either react with the spots on the chromatogram, or simply stick more to the spots than to the rest of the plate. Either way, the substances you are interested in may show up as brownish spots.
Using thin layer chromatography to identify compounds
Suppose you had a mixture of amino acids and wanted to find out which particular amino acids the mixture contained. For simplicity we'll assume that you know the mixture can only possibly contain five of the common amino acids. A small drop of the mixture is placed on the base line of the thin layer plate, and similar small spots of the known amino acids are placed alongside it. The plate is then stood in a suitable solvent and left to develop as before. In the diagram, the mixture is M, and the known amino acids are labelled 1 to 5.
The left-hand diagram shows the plate after the solvent front has almost reached the top. The spots are still invisible. The second diagram shows what it might look like after spraying with ninhydrin. There is no need to measure the Rf values because you can easily compare the spots in the mixture with those of the known amino acids - both from their positions and their colours. In this example, the mixture contains the amino acids labelled as 1, 4 and 5. And what if the mixture contained amino acids other than the ones we have used for comparison? There would be spots in the mixture which didn't match those from the known amino acids. You would have to re-run the experiment using other amino acids for comparison.
How does thin layer chromatography work?
The stationary phase - silica gel
Silica gel is a form of silicon dioxide (silica). The silicon atoms are joined via oxygen atoms in a giant covalent structure. However, at the surface of the silica gel, the silicon atoms are attached to -OH groups. So, at the surface of the silica gel you have Si-O-H bonds instead of Si-O-Si bonds. The diagram shows a small part of the silica surface.
The surface of the silica gel is very polar and, because of the -OH groups, can form hydrogen bonds with suitable compounds around it as well as van der Waals dispersion forces and dipole-dipole attractions.
The other commonly used stationary phase is alumina - aluminium oxide. The aluminium atoms on the surface of this also have -OH groups attached. Anything we say about silica gel therefore applies equally to alumina.
What separates the compounds as a chromatogram develops?
As the solvent begins to soak up the plate, it first dissolves the compounds in the spot that you have put on the base line. The compounds present will then tend to get carried up the chromatography plate as the solvent continues to move upwards. How fast the compounds get carried up the plate depends on two things:
• How soluble the compound is in the solvent. This will depend on how much attraction there is between the molecules of the compound and those of the solvent.
• How much the compound sticks to the stationary phase - the silica gel, for example. This will depend on how much attraction there is between the molecules of the compound and the silica gel.
Suppose the original spot contained two compounds - one of which can form hydrogen bonds, and one of which can only take part in weaker van der Waals interactions. The one which can hydrogen bond will stick to the surface of the silica gel more firmly than the other one. We say that one is adsorbed more strongly than the other. Adsorption is the name given to one substance forming some sort of bonds to the surface of another one.
Adsorption isn't permanent - there is a constant movement of a molecule between being adsorbed onto the silica gel surface and going back into solution in the solvent. Obviously the compound can only travel up the plate during the time that it is dissolved in the solvent. While it is adsorbed on the silica gel, it is temporarily stopped - the solvent is moving on without it. That means that the more strongly a compound is adsorbed, the less distance it can travel up the plate.
In the example we started with, the compound which can hydrogen bond will adsorb more strongly than the one dependent on van der Waals interactions, and so won't travel so far up the plate.
What if both components of the mixture can hydrogen bond?
It is very unlikely that both will hydrogen bond to exactly the same extent, and be soluble in the solvent to exactly the same extent. It isn't just the attraction of the compound for the silica gel which matters. Attractions between the compound and the solvent are also important - they will affect how easily the compound is pulled back into solution away from the surface of the silica. However, it may be that the compounds don't separate out very well when you make the chromatogram. In that case, changing the solvent may well help - including perhaps changing the pH of the solvent. This is to some extent just a matter of trial and error - if one solvent or solvent mixture doesn't work very well, you try another one. (Or, more likely, given the level you are probably working at, someone else has already done all the hard work for you, and you just use the solvent mixture you are given and everything will work perfectly!)
This page shows how the same principles used in thin layer chromatography can be applied on a larger scale to separate mixtures in column chromatography. Column chromatography is often used to purify compounds made in the lab.
The column
In thin layer chromatography, the stationary phase is a thin layer of silica gel or alumina on a glass, metal or plastic plate. Column chromatography works on a much larger scale by packing the same materials into a vertical glass column. Various sizes of chromatography columns are used, and if you follow a link at the bottom of the page to the Organic Chemistry section of the Colorado University site, you will find photographs of various columns. In a school lab, it is often convenient to use an ordinary burette as a chromatography column.
Using the Column
Suppose you wanted to separate a mixture of two colored compounds - one yellow, one blue. The mixture looks green. You would make a concentrated solution of the mixture preferably in the solvent used in the column. First you open the tap to allow the solvent already in the column to drain so that it is level with the top of the packing material, and then add the solution carefully to the top of the column. Then you open the tap again so that the colored mixture is all absorbed into the top of the packing material, so that it might look like this:
Next you add fresh solvent to the top of the column, trying to disturb the packing material as little as possible. Then you open the tap so that the solvent can flow down through the column, collecting it in a beaker or flask at the bottom. As the solvent runs through, you keep adding fresh solvent to the top so that the column never dries out. The next set of diagrams shows what might happen over time.
Explaining what is happening
This assumes that you have read the explanation for what happens during thin layer chromatography. If you haven't, follow the very first link at the top of the page and come back to this point afterwards.
The blue compound is obviously more polar than the yellow one - it perhaps even has the ability to hydrogen bond. You can tell this because the blue compound doesn't travel through the column very quickly. That means that it must adsorb more strongly to the silica gel or alumina than the yellow one. The less polar yellow one spends more of its time in the solvent and therefore washes through the column much faster. The process of washing a compound through a column using a solvent is known as elution. The solvent is sometimes known as the eluent.
What if you want to collect the blue compound as well?
It is going to take ages to wash the blue compound through at the rate it is travelling at the moment! However, there is no reason why you can't change the solvent during elution. Suppose you replace the solvent you have been using by a more polar solvent once the yellow has all been collected. That will have two effects, both of which will speed the blue compound through the column.
• The polar solvent will compete for space on the silica gel or alumina with the blue compound. Any space temporarily occupied by solvent molecules on the surface of the stationary phase isn't available for blue molecules to stick to and this will tend to keep them moving along in the solvent.
• There will be a greater attraction between the polar solvent molecules and the polar blue molecules. This will tend to attract any blue molecules sticking to the stationary phase back into solution.
The net effect is that with a more polar solvent, the blue compound spends more time in solution, and so moves faster.
So why not use this alternative solvent in the first place? The answer is that if both of the compounds in the mixture travel quickly through the column right from the beginning, you probably won't get such a good separation.
What if everything in your mixture is colorless?
If you were going to use column chromatography to purify the product of an organic preparation, it is quite likely that the product that you want will be colorless even if one or more of the impurities is colored. Let's assume the worst case that everything is colorless.
How do you know when the substance you want has reached the bottom of the column?
There is no quick and easy way of doing this! What you do is collect what comes out of the bottom of the column in a whole series of labelled tubes. How big each sample is will obviously depend on how big the column is - you might collect 1 cm³ samples or 5 cm³ samples or whatever is appropriate.
You can then take a drop from each solution and make a thin layer chromatogram from it. You would place the drop on the base line alongside a drop from a pure sample of the compound that you are making. By doing this repeatedly, you can identify which of your samples collected at the bottom of the column contain the desired product, and only the desired product.
Once you know this, you can combine all of the samples which contain your pure product, and then remove the solvent. (How you would separate the solvent from the product isn't directly relevant to this topic and would vary depending on their exact nature - so I'm not even going to attempt a generalisation.)
High performance liquid chromatography is a powerful tool in analysis. This page looks at how it is carried out and shows how it uses the same principles as in thin layer chromatography and column chromatography.
Carrying out HPLC
Introduction
High performance liquid chromatography is basically a highly improved form of column chromatography. Instead of a solvent being allowed to drip through a column under gravity, it is forced through under high pressures of up to 400 atmospheres. That makes it much faster.
It also allows you to use a very much smaller particle size for the column packing material which gives a much greater surface area for interactions between the stationary phase and the molecules flowing past it. This allows a much better separation of the components of the mixture.
The other major improvement over column chromatography concerns the detection methods which can be used. These methods are highly automated and extremely sensitive.
The column and the solvent
Confusingly, there are two variants in use in HPLC depending on the relative polarity of the solvent and the stationary phase.
Normal phase HPLC
This is essentially just the same as you will already have read about in thin layer chromatography or column chromatography. Although it is described as "normal", it isn't the most commonly used form of HPLC.
The column is filled with tiny silica particles, and the solvent is non-polar - hexane, for example. A typical column has an internal diameter of 4.6 mm (and may be less than that), and a length of 150 to 250 mm.
Polar compounds in the mixture being passed through the column will stick longer to the polar silica than non-polar compounds will. The non-polar ones will therefore pass more quickly through the column.
Reversed phase HPLC
In this case, the column size is the same, but the silica is modified to make it non-polar by attaching long hydrocarbon chains to its surface - typically with either 8 or 18 carbon atoms in them. A polar solvent is used - for example, a mixture of water and an alcohol such as methanol.
In this case, there will be a strong attraction between the polar solvent and polar molecules in the mixture being passed through the column. There won't be as much attraction between the hydrocarbon chains attached to the silica (the stationary phase) and the polar molecules in the solution. Polar molecules in the mixture will therefore spend most of their time moving with the solvent.
Non-polar compounds in the mixture will tend to form attractions with the hydrocarbon groups because of van der Waals dispersion forces. They will also be less soluble in the solvent because of the need to break hydrogen bonds as they squeeze in between the water or methanol molecules, for example. They therefore spend less time in solution in the solvent and this will slow them down on their way through the column.
That means that now it is the polar molecules that will travel through the column more quickly.
Reversed phase HPLC is the most commonly used form of HPLC.
Looking at the whole process
A flow scheme for HPLC
Injection of the sample
Injection of the sample is entirely automated, and you wouldn't be expected to know how this is done at this introductory level. Because of the pressures involved, it is not the same as in gas chromatography (if you have already studied that).
Retention time
The time taken for a particular compound to travel through the column to the detector is known as its retention time. This time is measured from the time at which the sample is injected to the point at which the display shows a maximum peak height for that compound. Different compounds have different retention times. For a particular compound, the retention time will vary depending on:
• the pressure used (because that affects the flow rate of the solvent)
• the nature of the stationary phase (not only what material it is made of, but also particle size)
• the exact composition of the solvent
• the temperature of the column
That means that conditions have to be carefully controlled if you are using retention times as a way of identifying compounds.
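Identifying a compound by retention time is essentially a lookup within some tolerance. The sketch below assumes hypothetical reference times measured under identical conditions; the compound names, values and the 0.05-minute tolerance are all invented for illustration:

```python
def identify_by_retention(observed_min, references, tolerance_min=0.05):
    """Return the names of reference compounds whose retention time
    (measured under the same conditions) matches the observed peak."""
    return [name for name, rt in references.items()
            if abs(rt - observed_min) <= tolerance_min]

# Hypothetical reference retention times (minutes) for one fixed set of conditions
references = {"caffeine": 3.42, "aspirin": 5.10, "paracetamol": 2.75}

print(identify_by_retention(3.40, references))  # ['caffeine']
print(identify_by_retention(4.00, references))  # [] - no match under these conditions
```

The tolerance is the weak point: if pressure, solvent composition or column temperature drift, every reference time shifts and the lookup silently breaks - which is exactly why the conditions must be controlled.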
The detector
There are several ways of detecting when a substance has passed through the column. A common method which is easy to explain uses ultra-violet absorption.
Many organic compounds absorb UV light of various wavelengths. If you have a beam of UV light shining through the stream of liquid coming out of the column, and a UV detector on the opposite side of the stream, you can get a direct reading of how much of the light is absorbed.
The amount of light absorbed will depend on the amount of a particular compound that is passing through the beam at the time.
You might wonder why the solvents used don't absorb UV light. They do! But different compounds absorb most strongly in different parts of the UV spectrum.
Methanol, for example, absorbs at wavelengths below 205 nm, and water below 190 nm. If you were using a methanol-water mixture as the solvent, you would therefore have to use a wavelength greater than 205 nm to avoid false readings from the solvent.
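Choosing a detection wavelength from the solvent cutoffs can be sketched as a simple rule: go above the highest cutoff in the mixture, plus a small safety margin. The cutoff values come from the text; the 5 nm margin is an assumption, not a standard figure:

```python
def min_detection_wavelength(solvent_cutoffs_nm, margin_nm=5):
    """Shortest usable detection wavelength: just above the highest solvent
    UV cutoff in the mixture, plus a safety margin (the margin is a guess)."""
    return max(solvent_cutoffs_nm.values()) + margin_nm

# Cutoffs quoted in the text: methanol absorbs below 205 nm, water below 190 nm
cutoffs = {"methanol": 205, "water": 190}
print(min_detection_wavelength(cutoffs))  # 210
```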
Interpreting the output from the detector
The output will be recorded as a series of peaks - each one representing a compound in the mixture passing through the detector and absorbing UV light. As long as you were careful to control the conditions on the column, you could use the retention times to help to identify the compounds present - provided, of course, that you (or somebody else) had already measured them for pure samples of the various compounds under those identical conditions.
But you can also use the peaks as a way of measuring the quantities of the compounds present. Let's suppose that you are interested in a particular compound, X.
If you injected a solution containing a known amount of pure X into the machine, not only could you record its retention time, but you could also relate the amount of X to the peak that was formed.
The area under the peak is proportional to the amount of X which has passed the detector, and this area can be calculated automatically by the computer linked to the display. The area it would measure is shown in green in the (very simplified) diagram.
If the solution of X was less concentrated, the area under the peak would be less - although the retention time will still be the same. For example:
This means that it is possible to calibrate the machine so that it can be used to find how much of a substance is present - even in very small quantities.
Be careful, though! If you had two different substances in the mixture (X and Y) could you say anything about their relative amounts? Not if you were using UV absorption as your detection method.
In the diagram, the area under the peak for Y is less than that for X. That may be because there is less Y than X, but it could equally well be because Y absorbs UV light at the wavelength you are using less than X does. There might be large quantities of Y present, but if it only absorbed weakly, it would only give a small peak.
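To compare amounts of X and Y you would first calibrate each compound separately, giving each a response factor (peak area per unit amount). The sketch below uses invented numbers to show how a smaller peak can still mean more material:

```python
def amount_from_area(peak_area, response_factor):
    """Amount = area / response factor, where the response factor (area per
    unit amount) comes from calibrating with a pure standard of that compound."""
    return peak_area / response_factor

# Hypothetical calibration: X gives 1200 area units per microgram, Y only 300
rf_X, rf_Y = 1200.0, 300.0

area_X, area_Y = 2400.0, 1800.0  # Y's peak area is the smaller of the two...
print(amount_from_area(area_X, rf_X))  # 2.0 µg of X
print(amount_from_area(area_Y, rf_Y))  # 6.0 µg of Y - more Y despite the smaller peak
```

This is exactly the caveat above: without a separate calibration for each compound, relative peak areas tell you nothing reliable about relative amounts.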
Coupling HPLC to a mass spectrometer
This is where it gets really clever! When the detector is showing a peak, some of what is passing through the detector at that time can be diverted to a mass spectrometer. There it will give a fragmentation pattern which can be compared against a computer database of known patterns. That means that the identity of a huge range of compounds can be found without having to know their retention times.
Gas-liquid chromatography (often just called gas chromatography) is a powerful tool in analysis. It has all sorts of variations in the way it is done - if you want full details, a Google search on gas chromatography will give you scary amounts of information if you need it! This page just looks in a simple introductory way at how it can be carried out.
Carrying out gas-liquid chromatography
All forms of chromatography involve a stationary phase and a mobile phase. In all the other forms of chromatography you will meet at this level, the mobile phase is a liquid. In gas-liquid chromatography, the mobile phase is a gas such as helium and the stationary phase is a high boiling point liquid absorbed onto a solid. How fast a particular compound travels through the machine will depend on how much of its time is spent moving with the gas as opposed to being attached to the liquid in some way.
A flow scheme for gas-liquid chromatography
Injection of the sample
Very small quantities of the sample that you are trying to analyse are injected into the machine using a small syringe. The syringe needle passes through a thick rubber disc (known as a septum) which reseals itself again when the syringe is pulled out.
The injector is contained in an oven whose temperature can be controlled. It is hot enough so that all the sample boils and is carried into the column as a gas by the helium (or other carrier gas).
How the column works
The packing material
There are two main types of column in gas-liquid chromatography. One of these is a long thin tube packed with the stationary phase; the other is even thinner and has the stationary phase bonded to its inner surface.
To keep things simple, we are just going to look at the packed column.
The column is typically made of stainless steel and is between 1 and 4 metres long with an internal diameter of up to 4 mm. It is coiled up so that it will fit into a thermostatically controlled oven.
The column is packed with finely ground diatomaceous earth, which is a very porous rock. This is coated with a high boiling liquid - typically a waxy polymer.
The column temperature
The temperature of the column can be varied from about 50°C to 250°C. It is cooler than the injector oven, so that some components of the mixture may condense at the beginning of the column.
In some cases, as you will see below, the column starts off at a low temperature and then is made steadily hotter under computer control as the analysis proceeds.
How separation works on the column
One of three things might happen to a particular molecule in the mixture injected into the column:
• It may condense on the stationary phase.
• It may dissolve in the liquid on the surface of the stationary phase.
• It may remain in the gas phase.
None of these things is necessarily permanent.
A compound with a boiling point higher than the temperature of the column will obviously tend to condense at the start of the column. However, some of it will evaporate again in the same way that water evaporates on a warm day - even though the temperature is well below 100°C. The chances are that it will then condense again a little further along the column.
Similarly, some molecules may dissolve in the liquid stationary phase. Some compounds will be more soluble in the liquid than others. The more soluble ones will spend more of their time absorbed into the stationary phase; the less soluble ones will spend more of their time in the gas.
The process where a substance divides itself between two immiscible solvents because it is more soluble in one than the other is known as partition. Now, you might reasonably argue that a gas such as helium can't really be described as a "solvent". But the term partition is still used in gas-liquid chromatography.
You can say that a substance partitions itself between the liquid stationary phase and the gas. Any molecule in the substance spends some of its time dissolved in the liquid and some of its time carried along with the gas.
Retention Time
The time taken for a particular compound to travel through the column to the detector is known as its retention time. This time is measured from the time at which the sample is injected to the point at which the display shows a maximum peak height for that compound. Different compounds have different retention times. For a particular compound, the retention time will vary depending on:
• the boiling point of the compound. A compound which boils at a temperature higher than the column temperature is going to spend nearly all of its time condensed as a liquid at the beginning of the column. So high boiling point means a long retention time.
• the solubility in the liquid phase. The more soluble a compound is in the liquid phase, the less time it will spend being carried along by the gas. High solubility in the liquid phase means a high retention time.
• the temperature of the column. A higher temperature will tend to excite molecules into the gas phase - either because they evaporate more readily, or because they are so energetic that the attractions of the liquid no longer hold them. A high column temperature shortens retention times for everything in the column.
For a given sample and column, there isn't much you can do about the boiling points of the compounds or their solubility in the liquid phase - but you do have control over the temperature.
The lower the temperature of the column, the better the separation you will get - but it could take a very long time to get the compounds through which are condensing at the beginning of the column!
On the other hand, using a high temperature, everything will pass through the column much more quickly - but less well separated out. If everything passed through in a very short time, there isn't going to be much space between their peaks on the chromatogram.
The answer is to start with the column relatively cool, and then gradually and very regularly increase the temperature.
At the beginning, compounds which spend most of their time in the gas phase will pass quickly through the column and be detected. Increasing the temperature a bit will encourage the slightly "stickier" compounds through. Increasing the temperature still more will force the very "sticky" molecules off the stationary phase and through the column.
Detectors
There are several different types of detector in use. The flame ionisation detector described below is commonly used and is easier to describe and explain than the alternatives.
A flame ionization detector
In terms of reaction mechanisms, the burning of an organic compound is very complicated. During the process, small amounts of ions and electrons are produced in the flame. The presence of these can be detected. The whole detector is enclosed in its own oven which is hotter than the column temperature. That stops anything condensing in the detector.
If there is nothing organic coming through from the column, you just have a flame of hydrogen burning in air. Now suppose that one of the compounds in the mixture you are analysing starts to come through.
As it burns, it will produce small amounts of ions and electrons in the flame. The positive ions will be attracted to the cylindrical cathode. Negative ions and electrons will be attracted towards the jet itself which is the anode.
This is much the same as what happens during normal electrolysis.
At the cathode, the positive ions will pick up electrons from the cathode and be neutralised. At the anode, any electrons in the flame will transfer to the positive electrode; and negative ions will give their electrons to the electrode and be neutralised.
This loss of electrons from one electrode and gain at the other will result in a flow of electrons in the external circuit from the anode to the cathode. In other words, you get an electric current.
The current won't be very big, but it can be amplified. The more of the organic compound there is in the flame, the more ions will be produced, and so the higher the current will be. As a reasonable approximation, especially if you are talking about similar compounds, the current you measure is proportional to the amount of compound in the flame.
Disadvantages of the flame ionisation detector
The main disadvantage is that it destroys everything coming out of the column as it detects it. If you wanted to send the product to a mass spectrometer, for example, for further analysis, you couldn't use a flame ionisation detector.
Interpreting the output from the detector
The output will be recorded as a series of peaks - each one representing a compound in the mixture passing through the detector. As long as you were careful to control the conditions on the column, you could use the retention times to help to identify the compounds present - provided, of course, that you (or somebody else) had already measured them for pure samples of the various compounds under those identical conditions.
But you can also use the peaks as a way of measuring the relative quantities of the compounds present. This is only accurate if you are analysing mixtures of similar compounds - for example, of similar hydrocarbons.
The areas under the peaks are proportional to the amount of each compound which has passed the detector, and these areas can be calculated automatically by the computer linked to the display. The areas it would measure are shown in green in the (very simplified) diagram.
Note that it isn't the peak height that matters, but the total area under the peak. In this particular example, the left-hand peak is both tallest and has the greatest area. That isn't necessarily always so. There might be a lot of one compound present, but it might emerge from the column in relatively small amounts over quite a long time. Measuring the area rather than the peak height allows for this.
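The area-based bookkeeping described above is easy to automate. The Python sketch below (the trace points and peak areas are invented for illustration, not data from the text) integrates a detector trace by the trapezoid rule and converts peak areas to percent composition - which, as noted, is only a fair estimate when the compounds give similar detector responses:

```python
def trapezoid_area(times, signal):
    """Integrate a detector trace (signal vs time) by the trapezoid rule."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (signal[i] + signal[i - 1]) * (times[i] - times[i - 1])
    return area

def percent_composition(peak_areas):
    """Convert integrated peak areas to percent of total; only reliable
    for similar compounds with similar detector response factors."""
    total = sum(peak_areas)
    return [100.0 * a / total for a in peak_areas]

# Hypothetical integrated areas for three peaks:
print(percent_composition([450.0, 300.0, 250.0]))  # [45.0, 30.0, 25.0]
```

Real instruments apply a response factor per compound before this normalization; the equal-response assumption here is the same one made in the text.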
Coupling a gas chromatograph to a mass spectrometer
This can't be done with a flame ionization detector, which destroys everything passing through it. Assuming you are using a non-destructive detector, some of what is passing through the detector while it is showing a peak can be diverted to a mass spectrometer. There it will give a fragmentation pattern which can be compared against a computer database of known patterns. That means that the identity of a huge range of compounds can be found without having to know their retention times.
Chromatography is used to separate mixtures of substances into their components. All forms of chromatography work on the same principle. They all have a stationary phase (a solid, or a liquid supported on a solid) and a mobile phase (a liquid or a gas). The mobile phase flows through the stationary phase and carries the components of the mixture with it. Different components travel at different rates. We'll look at the reasons for this further down the page. In paper chromatography, the stationary phase is a very uniform absorbent paper. The mobile phase is a suitable liquid solvent or mixture of solvents.
Producing a paper chromatogram
You probably used paper chromatography as one of the first things you ever did in chemistry to separate out mixtures of colored dyes - for example, the dyes which make up a particular ink. That's an easy example to take, so let's start from there.
Suppose you have three blue pens and you want to find out which one was used to write a message. Samples of each ink are spotted on to a pencil line drawn on a sheet of chromatography paper. Some of the ink from the message is dissolved in the minimum possible amount of a suitable solvent, and that is also spotted onto the same line. In the diagram, the pens are labeled 1, 2 and 3, and the message ink as M.
The paper is suspended in a container with a shallow layer of a suitable solvent or mixture of solvents in it. It is important that the solvent level is below the line with the spots on it. The next diagram doesn't show details of how the paper is suspended because there are too many possible ways of doing it and it clutters the diagram. Sometimes the paper is just coiled into a loose cylinder and fastened with paper clips top and bottom. The cylinder then just stands in the bottom of the container.
The reason for covering the container is to make sure that the atmosphere in the beaker is saturated with solvent vapour. Saturating the atmosphere in the beaker with vapour stops the solvent from evaporating as it rises up the paper.
As the solvent slowly travels up the paper, the different components of the ink mixtures travel at different rates and the mixtures are separated into different colored spots.
The diagram shows what the plate might look like after the solvent has moved almost to the top.
It is fairly easy to see from the final chromatogram that the pen that wrote the message contained the same dyes as pen 2. You can also see that pen 1 contains a mixture of two different blue dyes - one of which might be the same as the single dye in pen 3.
Rf values
Some compounds in a mixture travel almost as far as the solvent does; some stay much closer to the base line. The distance travelled relative to the solvent is a constant for a particular compound as long as you keep everything else constant - the type of paper and the exact composition of the solvent, for example.
The distance travelled relative to the solvent is called the Rf value. For each compound it can be worked out using the formula:

$R_f = \dfrac{\text{distance travelled by the compound}}{\text{distance travelled by the solvent}} \nonumber$

For example, if one component of a mixture travelled 9.6 cm from the base line while the solvent had travelled 12.0 cm, then the Rf value for that component is:

$R_f = \dfrac{9.6 \text{ cm}}{12.0 \text{ cm}} = 0.80 \nonumber$
In the example we looked at with the various pens, it wasn't necessary to measure Rf values because you are making a direct comparison just by looking at the chromatogram.
You are making the assumption that if you have two spots in the final chromatogram which are the same color and have travelled the same distance up the paper, they are most likely the same compound. It isn't necessarily true of course - you could have two similarly colored compounds with very similar Rf values. We'll look at how you can get around that problem further down the page.
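The comparison logic is simple enough to write down explicitly. In this Python sketch (the 0.02 tolerance is an arbitrary illustrative choice, not a standard value), matching Rf values flag spots that *might* be the same compound - never proof of identity, for the reason just given:

```python
def rf_value(spot_distance_cm, solvent_front_cm):
    """Rf = distance moved by the compound / distance moved by the solvent."""
    return spot_distance_cm / solvent_front_cm

def possible_match(rf_a, rf_b, tolerance=0.02):
    """Spots with Rf values within the tolerance *might* be the same
    compound; matching Rf values never prove identity."""
    return abs(rf_a - rf_b) <= tolerance

rf1 = rf_value(9.6, 12.0)         # 0.80, as in the 9.6 cm / 12.0 cm example
print(possible_match(rf1, 0.81))  # True
```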
What if the substances you are interested in are colorless?
In some cases, it may be possible to make the spots visible by reacting them with something which produces a colored product. A good example of this is in chromatograms produced from amino acid mixtures.
Suppose you had a mixture of amino acids and wanted to find out which particular amino acids the mixture contained. For simplicity we'll assume that you know the mixture can only possibly contain five of the common amino acids. A small drop of a solution of the mixture is placed on the base line of the paper, and similar small spots of the known amino acids are placed alongside it. The paper is then stood in a suitable solvent and left to develop as before. In the diagram, the mixture is M, and the known amino acids are labeled 1 to 5.
The position of the solvent front is marked in pencil and the chromatogram is allowed to dry and is then sprayed with a solution of ninhydrin. Ninhydrin reacts with amino acids to give colored compounds, mainly brown or purple.
The left-hand diagram shows the paper after the solvent front has almost reached the top. The spots are still invisible. The second diagram shows what it might look like after spraying with ninhydrin.
There is no need to measure the Rf values because you can easily compare the spots in the mixture with those of the known amino acids - both from their positions and their colors. In this example, the mixture contains the amino acids labeled as 1, 4 and 5. And what if the mixture contained amino acids other than the ones we have used for comparison? There would be spots in the mixture which didn't match those from the known amino acids. You would have to re-run the experiment using other amino acids for comparison.
Two way paper chromatography
Two way paper chromatography gets around the problem of separating out substances which have very similar Rf values. I'm going to go back to talking about colored compounds because it is much easier to see what is happening. You can perfectly well do this with colorless compounds - but you have to use quite a lot of imagination in the explanation of what is going on!
This time a chromatogram is made starting from a single spot of mixture placed towards one end of the base line. It is stood in a solvent as before and left until the solvent front gets close to the top of the paper.
In the diagram, the position of the solvent front is marked in pencil before the paper dries out. This is labeled as SF1 - the solvent front for the first solvent. We shall be using two different solvents.
If you look closely, you may be able to see that the large central spot in the chromatogram is partly blue and partly green. Two dyes in the mixture have almost the same Rf values. They could equally well, of course, both have been the same color - in which case you couldn't tell whether there was one or more dye present in that spot.
What you do now is to wait for the paper to dry out completely, and then rotate it through 90°, and develop the chromatogram again in a different solvent.
It is very unlikely that the two confusing spots will have the same Rf values in the second solvent as well as the first, and so the spots will move by a different amount.
The next diagram shows what might happen to the various spots on the original chromatogram. The position of the second solvent front is also marked.
You wouldn't, of course, see these spots in both their original and final positions - they have moved! The final chromatogram would look like this:
Two way chromatography has completely separated out the mixture into four distinct spots. If you want to identify the spots in the mixture, you obviously can't do it with comparison substances on the same chromatogram as we looked at earlier with the pens or amino acids examples. You would end up with a meaningless mess of spots. You can, though, work out the Rf values for each of the spots in both solvents, and then compare these with values that you have measured for known compounds under exactly the same conditions.
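That final comparison step can be sketched as a lookup against reference (Rf in solvent 1, Rf in solvent 2) pairs. The dye names, Rf values, and tolerance below are all hypothetical; the point is only that a spot must match in *both* solvents:

```python
# Reference Rf pairs measured (hypothetically) under the same two-solvent
# conditions; dye names, values, and tolerance are all invented.
REFERENCE = {
    "dye A": (0.45, 0.70),
    "dye B": (0.47, 0.30),   # nearly the same Rf as dye A in solvent 1
}

def identify(rf1, rf2, tolerance=0.03):
    """Return the reference compounds consistent with both Rf values."""
    return [name for name, (r1, r2) in REFERENCE.items()
            if abs(rf1 - r1) <= tolerance and abs(rf2 - r2) <= tolerance]

print(identify(0.46, 0.71))  # ['dye A']
```

A spot at Rf1 = 0.46 would match either dye in the first solvent alone; the second solvent resolves the ambiguity.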
How does paper chromatography work?
Although paper chromatography is simple to do, it is quite difficult to explain compared with thin layer chromatography. The explanation depends to some extent on what sort of solvent you are using, and many sources gloss over the problem completely. If you haven't already done so, it would be helpful if you could read the explanation for how thin layer chromatography works (link below). That will save me a lot of repetition, and I can concentrate on the problems.
The essential structure of paper
Paper is made of cellulose fibres, and cellulose is a polymer of the simple sugar, glucose.
The key point about cellulose is that the polymer chains have -OH groups sticking out all around them. To that extent, it presents the same sort of surface as silica gel or alumina in thin layer chromatography.
It would be tempting to try to explain paper chromatography in terms of the way that different compounds are adsorbed to different extents on to the paper surface. In other words, it would be nice to be able to use the same explanation for both thin layer and paper chromatography. Unfortunately, it is more complicated than that!
The complication arises because the cellulose fibres attract water vapour from the atmosphere as well as any water that was present when the paper was made. You can therefore think of paper as being cellulose fibres with a very thin layer of water molecules bound to the surface.
It is the interaction with this water which is the most important effect during paper chromatography.
Paper chromatography using a non-polar solvent
Suppose you use a non-polar solvent such as hexane to develop your chromatogram.
Non-polar molecules in the mixture that you are trying to separate will have little attraction for the water molecules attached to the cellulose, and so will spend most of their time dissolved in the moving solvent. Molecules like this will therefore travel a long way up the paper carried by the solvent. They will have relatively high Rf values.
On the other hand, polar molecules will have a high attraction for the water molecules and much less for the non-polar solvent. They will therefore tend to dissolve in the thin layer of water around the cellulose fibres much more than in the moving solvent.
Because they spend more time dissolved in the stationary phase and less time in the mobile phase, they aren't going to travel very fast up the paper.
The tendency for a compound to divide its time between two immiscible solvents (solvents such as hexane and water which won't mix) is known as partition. Paper chromatography using a non-polar solvent is therefore a type of partition chromatography.
Paper chromatography using water and other polar solvents
A moment's thought will tell you that partition can't be the explanation if you are using water as the solvent for your mixture. If you have water as the mobile phase and the water bound on to the cellulose as the stationary phase, there can't be any meaningful difference between the amount of time a substance spends in solution in either of them. All substances should be equally soluble (or equally insoluble) in both.
And yet the first chromatograms that you made were probably of inks using water as your solvent.
If water works as the mobile phase as well being the stationary phase, there has to be some quite different mechanism at work - and that must be equally true for other polar solvents like the alcohols, for example. Partition only happens between solvents which don't mix with each other. Polar solvents like the small alcohols do mix with water.
In researching this topic, I haven't found any easy explanation for what happens in these cases. Most sources ignore the problem altogether and just quote the partition explanation without making any allowance for the type of solvent you are using. Other sources quote mechanisms which have so many strands to them that they are far too complicated for this introductory level. I'm therefore not taking this any further - you shouldn't need to worry about this at UK A level, or its various equivalents. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Chromatography/V._Chromatography/E._Paper_Chromatography.txt |
In 1949, Lyman Craig introduced an improved method for separating analytes with similar distribution ratios.1 The technique, which is known as a countercurrent liquid–liquid extraction, is outlined in Figure A16.1 and discussed in detail below. In contrast to a sequential liquid–liquid extraction, in which we repeatedly extract the sample containing the analyte, a countercurrent extraction uses a serial extraction of both the sample and the extracting phases. Although countercurrent separations are no longer common—chromatographic separations are far more efficient in terms of resolution, time, and ease of use—the theory behind a countercurrent extraction remains useful as an introduction to the theory of chromatographic separations.
To track the progress of a countercurrent liquid-liquid extraction we need to adopt a labeling convention. As shown in Figure A16.1, in each step of a countercurrent extraction we first complete the extraction and then transfer the upper phase to a new tube containing a portion of the fresh lower phase. Steps are labeled sequentially beginning with zero. Extractions take place in a series of tubes that also are labeled sequentially, starting with zero. The upper and lower phases in each tube are identified by a letter and number, with the letters U and L representing, respectively, the upper phase and the lower phase, and the number indicating the step in the countercurrent extraction in which the phase was first introduced. For example, U0 is the upper phase introduced at step 0 (during the first extraction), and L2 is the lower phase introduced at step 2 (during the third extraction). Finally, the partitioning of analyte in any extraction tube results in a fraction p remaining in the upper phase, and a fraction q remaining in the lower phase. Values of q are calculated using equation A16.1, which is identical to equation 7.26 in Chapter 7.
$(q_{aq})_1 = \dfrac{(\text{moles aq})_1 }{(\text{moles aq})_0 }= \dfrac{V_{aq}}{(DV_{org} + V_{aq})} \tag{A16.1}$
The fraction p, of course, is equal to 1 – q. Typically Vaq and Vorg are equal in a countercurrent extraction, although this is not a requirement.
Let’s assume that the analyte we wish to isolate is present in an aqueous phase of 1 M HCl, and that the organic phase is benzene. Because benzene has the smaller density, it is the upper phase, and 1 M HCl is the lower phase. To begin the countercurrent extraction we place the aqueous sample containing the analyte in tube 0 along with an equal volume of benzene. As shown in Figure A16.1a, before the extraction all the analyte is present in phase L0. When the extraction is complete, as shown in Figure A16.1b, a fraction p of the analyte is present in phase U0, and a fraction q is in phase L0. This completes step 0 of the countercurrent extraction. If we stop here, there is no difference between a simple liquid–liquid extraction and a countercurrent extraction.
After completing step 0, we remove phase U0 and add a fresh portion of benzene, U1, to tube 0 (see Figure A16.1c). This, too, is identical to a simple liquid-liquid extraction. Here is where the power of the countercurrent extraction begins—instead of setting aside the phase U0, we place it in tube 1 along with a portion of analyte-free aqueous 1 M HCl as phase L1 (see Figure A16.1c). Tube 0 now contains a fraction q of the analyte, and tube 1 contains a fraction p of the analyte. Completing the extraction in tube 0 results in a fraction p of its contents remaining in the upper phase, and a fraction q remaining in the lower phase. Thus, phases U1 and L0 now contain, respectively, fractions pq and q2 of the original amount of analyte. Following the same logic, it is easy to show that the phases U0 and L1 in tube 1 contain, respectively, fractions p2 and pq of analyte. This completes step 1 of the extraction (see Figure A16.1d). As shown in the remainder of Figure A16.1, the countercurrent extraction continues with this cycle of phase transfers and extractions.
In a countercurrent liquid–liquid extraction, the lower phase in each tube remains in place, and the upper phase moves from tube 0 to successively higher numbered tubes. We recognize this difference in the movement of the two phases by referring to the lower phase as a stationary phase and the upper phase as a mobile phase. With each transfer some of the analyte in tube r moves to tube r + 1, while a portion of the analyte in tube r – 1 moves to tube r. Analyte introduced at tube 0 moves with the mobile phase, but at a rate that is slower than the mobile phase because, at each step, a portion of the analyte transfers into the stationary phase. An analyte that preferentially extracts into the stationary phase spends proportionally less time in the mobile phase and moves at a slower rate. As the number of steps increases, analytes with different values of q eventually separate into completely different sets of extraction tubes.
We can judge the effectiveness of a countercurrent extraction using a histogram showing the fraction of analyte present in each tube. To determine the total amount of analyte in an extraction tube we add together the fraction of analyte present in the tube’s upper and lower phases following each transfer. For example, at the beginning of step 3 (see Figure A16.1g) the upper and lower phases of tube 1 contain fractions pq2 and 2pq2 of the analyte, respectively; thus, the total fraction of analyte in the tube is 3pq2. Table A16.1 summarizes this for the steps outlined in Figure A16.1. A typical histogram, calculated assuming distribution ratios of 5.0 for analyte A and 0.5 for analyte B, is shown in Figure A16.2. Although four steps is not enough to separate the analytes in this instance, it is clear that if we extend the countercurrent extraction to additional tubes, we will eventually separate the analytes.
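The transfer-and-extract cycle is straightforward to simulate. The Python sketch below is a direct transcription of the bookkeeping just described - partition each tube (fraction p to the upper phase, q to the lower), then shift every upper phase one tube to the right - with equal phase volumes assumed by default:

```python
def countercurrent(D, n_steps, v_ratio=1.0):
    """Fraction of analyte in each tube at the start of step n_steps,
    for distribution ratio D and volume ratio Vorg/Vaq = v_ratio."""
    q = 1.0 / (D * v_ratio + 1.0)  # equation A16.1
    p = 1.0 - q
    lower, upper = [1.0], [0.0]    # all analyte starts in L0 of tube 0
    for _ in range(n_steps):
        totals = [lo + up for lo, up in zip(lower, upper)]
        lower = [q * t for t in totals]          # fraction q stays below
        upper = [0.0] + [p * t for t in totals]  # fraction p moves right
        lower.append(0.0)                        # fresh lower phase in new tube
    return [lo + up for lo, up in zip(lower, upper)]

# Distribution for analyte A (D = 5.0) after three steps; the tube totals
# match row n = 3 of Table A16.1: q^3, 3pq^2, 3p^2q, p^3.
frac_A = countercurrent(D=5.0, n_steps=3)
```

Running the same simulation for D = 0.5 and overlaying the two lists reproduces the kind of histogram shown in Figure A16.2.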
Table A16.1: Fraction of Analyte Remaining in Tube
| n \ r | 0 | 1 | 2 | 3 |
|-------|-------|---------|---------|-------|
| 0 | 1 | | | |
| 1 | $q$ | $p$ | | |
| 2 | $q^2$ | $2pq$ | $p^2$ | |
| 3 | $q^3$ | $3pq^2$ | $3p^2q$ | $p^3$ |
Figure A16.1 and Table A16.1 show how an analyte’s distribution changes during the first four steps of a countercurrent extraction. Now we consider how we can generalize these results to calculate the amount of analyte in any tube, at any step during the extraction. You may recognize the pattern of entries in Table A16.1 as following the binomial distribution
$f(r,n) = \dfrac{n!}{(n−r)!r!} p^rq^{n−r} \tag{A16.2}$
where f(r, n) is the fraction of analyte present in tube r at step n of the countercurrent extraction, with the upper phase containing a fraction p×f(r, n) of analyte and the lower phase containing a fraction q×f(r, n) of the analyte.
Example $\PageIndex{A1}$:
The countercurrent extraction shown in Figure A16.2 is carried out through step 30. Calculate the fraction of analytes A and B in tubes 5, 10, 15, 20, 25, and 30.
Solution
To calculate the fraction, q, of each analyte in the lower phase we use equation A16.1. Because the volumes of the lower and upper phases are equal, we get

qA = 1 / (DA + 1) = 1 / (5.0 + 1) = 0.167

qB = 1 / (DB + 1) = 1 / (0.5 + 1) = 0.667

Because we know that p + q = 1, we also know that pA is 0.833 and that pB is 0.333. For analyte A, the fractions in tubes 5, 10, 15, 20, 25, and 30 after the 30th step are
$f(5,30) = \dfrac{30!}{(30-5)!\,5!}(0.833)^5(0.167)^{25} = 2.1 \times 10^{-15} \approx 0$

$f(10,30) = \dfrac{30!}{(30-10)!\,10!}(0.833)^{10}(0.167)^{20} = 1.4 \times 10^{-9} \approx 0$

$f(15,30) = \dfrac{30!}{(30-15)!\,15!}(0.833)^{15}(0.167)^{15} = 2.2 \times 10^{-5} \approx 0$

$f(20,30) = \dfrac{30!}{(30-20)!\,20!}(0.833)^{20}(0.167)^{10} = 0.013$

$f(25,30) = \dfrac{30!}{(30-25)!\,25!}(0.833)^{25}(0.167)^{5} = 0.192$

$f(30,30) = \dfrac{30!}{(30-30)!\,30!}(0.833)^{30}(0.167)^{0} = 0.004$
The fraction of analyte B in tubes 5, 10, 15, 20, 25, and 30 is calculated in the same way, yielding respective values of 0.023, 0.153, 0.025, 0, 0, and 0. Figure A16.3, which provides the complete histogram for the distribution of analytes A and B, shows that 30 steps is sufficient to separate the two analytes.
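Equation A16.2 is simple to evaluate directly with Python's `math.comb`. The sketch below reproduces the analyte A fractions for tubes 20, 25, and 30 (0.013, 0.192, and 0.004), using the exact p = 5/6 rather than the rounded 0.833:

```python
import math

def f(r, n, p):
    """Equation A16.2: fraction of analyte in tube r at step n."""
    q = 1.0 - p
    return math.comb(n, r) * p**r * q**(n - r)

p_A = 1.0 - 1.0 / (5.0 + 1.0)   # D_A = 5.0 with equal phase volumes
for r in (20, 25, 30):
    print(r, round(f(r, 30, p_A), 3))   # 0.013, 0.192, 0.004
```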
Constructing a histogram using equation A16.2 is tedious, particularly when the number of steps is large. Because the fraction of analyte in most tubes is approximately zero, we can simplify the histogram’s construction by solving equation A16.2 only for those tubes containing an amount of analyte exceeding a threshold value. For a binomial distribution, we can use the mean and standard deviation to determine which tubes contain a significant fraction of analyte. The properties of a binomial distribution were covered in Chapter 4, with the mean, μ, and the standard deviation, s, given as
$\mu = np$

$\sigma = \sqrt{np(1-p)} = \sqrt{npq}$
Furthermore, if both np and nq are greater than 5, the binomial distribution is closely approximated by the normal distribution and we can use the properties of a normal distribution to determine the location of the analyte and its recovery.2
Example $\PageIndex{A2}$:
Two analytes, A and B, with distribution ratios of 9 and 4, respectively, are separated using a countercurrent extraction in which the volumes of the upper and lower phases are equal. After 100 steps determine the 99% confidence interval for the location of each analyte.
Solution
The fraction, q, of each analyte remaining in the lower phase is calculated using equation A16.1. Because the volumes of the lower and upper phases are equal, we find that
qA = 1 / (DA + 1) = 1 / (9 + 1) = 0.10
qB = 1 / (DB + 1) = 1 / (4 + 1) = 0.20
Because we know that p + q = 1, we also know that pA is 0.90 and pB is 0.80. After 100 steps, the mean and the standard deviation for the distribution of analytes A and B are
µA = npA = (100)(0.90) = 90 and σA = √(npAqA) = √((100)(0.90)(0.10)) = 3
µB = npB = (100)(0.80) = 80 and σB = √(npBqB) = √((100)(0.80)(0.20)) = 4
Given that npA, npB, nqA, and nqB are all greater than 5, we can assume that the distribution of analytes follows a normal distribution and that the confidence interval for the tubes containing each analyte is
r = µ ± zσ
where r is the tube’s number and the value of z is determined by the desired significance level. For a 99% confidence interval the value of z is 2.58 (Appendix 4); thus,
rA = 90 ± (2.58)(3) = 90 ± 8
rB = 80 ± (2.58)(4) = 80 ± 10
Because the two confidence intervals overlap, a complete separation of the two analytes is not possible using a 100 step countercurrent extraction. The complete distribution of the analytes is shown in Figure A16.4.
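The normal-approximation arithmetic in this example can be sketched in a few lines; the numbers (μA = 90, σA = 3, μB = 80, σB = 4) match the values computed above:

```python
import math

def tube_interval(D, n, z=2.58):
    """Mean tube, standard deviation, and z-sigma interval after n steps
    (equal phase volumes assumed, so q = 1/(D + 1))."""
    q = 1.0 / (D + 1.0)
    p = 1.0 - q
    mu = n * p
    sigma = math.sqrt(n * p * q)
    return mu, sigma, (mu - z * sigma, mu + z * sigma)

mu_A, s_A, ci_A = tube_interval(9, 100)   # 90, 3, roughly (82, 98)
mu_B, s_B, ci_B = tube_interval(4, 100)   # 80, 4, roughly (70, 90)
overlap = ci_A[0] <= ci_B[1]              # True: separation is incomplete
```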
Example $\PageIndex{A3}$:
For the countercurrent extraction in Example A16.2, calculate the recovery and separation factor for analyte A if the contents of tubes 85–99 are pooled together.
Solution
From Example A16.2 we know that after 100 steps of the countercurrent extraction, analyte A is normally distributed about tube 90 with a standard deviation of 3. To determine the fraction of analyte A in tubes 85–99, we use the single-sided normal distribution in Appendix 3 to determine the fraction of analyte in tubes 0–84, and in tube 100. The fraction of analyte A in tube 100 is determined by calculating the deviation z
z = (r − µ) / σ = (99 − 90) / 3 = 3
and using the table in Appendix 3 to determine the corresponding fraction. For z = 3 this corresponds to 0.135% of analyte A. To determine the fraction of analyte A in tubes 0–84 we again calculate the deviation
z = (r − µ) / σ = (85 − 90) / 3 = –1.67
From Appendix 3 we find that 4.75% of analyte A is present in tubes 0–84. Analyte A’s recovery, therefore, is
100% – 4.75% – 0.135% ≈ 95%
To calculate the separation factor we determine the recovery of analyte B in tubes 85–99 using the same general approach as for analyte A, finding that approximately 89.4% of analyte B remains in tubes 0–84 and that essentially no analyte B is in tube 100. The recovery of B, therefore, is
100% – 89.4% – 0% ≈ 10.6%
and the separation factor is
SB,A = RB / RA = 10.6 / 95 = 0.112
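The same recoveries can be computed from the error function instead of the printed table in Appendix 3. In this sketch the tube cutoffs (85 and 99) and the Gaussian parameters follow the worked example:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pooled_recovery(mu, sigma, low_tube, high_tube):
    """Fraction of a normally distributed analyte collected when tubes
    low_tube through high_tube are pooled (cutoffs as in the example)."""
    below = norm_cdf((low_tube - mu) / sigma)          # tubes 0 to low_tube-1
    beyond = 1.0 - norm_cdf((high_tube - mu) / sigma)  # past tube high_tube
    return 1.0 - below - beyond

R_A = pooled_recovery(90, 3, 85, 99)   # about 0.95
R_B = pooled_recovery(80, 4, 85, 99)   # about 0.106
S_BA = R_B / R_A                       # about 0.11
```

The small differences from the text's values (95% vs 95.1%, 0.112 vs 0.111) come from rounding in the printed table.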
References
1. Craig, L. C. J. Biol. Chem. 1944, 155, 519–534.
2. Mark, H.; Workman, J. Spectroscopy 1990, 5(3), 55–56. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Countercurrent_Separations.txt |
Cyclic Voltammetry (CV) is an electrochemical technique which measures the current that develops in an electrochemical cell under conditions where voltage is in excess of that predicted by the Nernst equation. CV is performed by cycling the potential of a working electrode, and measuring the resulting current.
Introduction
The potential of the working electrode is measured against a reference electrode which maintains a constant potential, and the resulting applied potential produces an excitation signal such as that of figure 1.² In the forward scan of figure 1, the potential first scans negatively, starting from a greater potential (a) and ending at a lower potential (d). The potential extremum (d) is called the switching potential, and is the point where the voltage is sufficient to have caused an oxidation or reduction of an analyte. The reverse scan occurs from (d) to (g), and is where the potential scans positively. Figure 1 shows a typical reduction occurring from (a) to (d) and an oxidation occurring from (d) to (g). It is important to note that some analytes undergo oxidation first, in which case the potential would first scan positively. This cycle can be repeated, and the scan rate can be varied. The slope of the excitation signal gives the scan rate used.
A cyclic voltammogram is obtained by measuring the current at the working electrode during the potential scans.² Figure 2 shows a cyclic voltammogram resulting from a single electron reduction and oxidation. Consider the following reversible reaction:
$M^+ + e^- \rightleftharpoons M \nonumber$
In Figure 2, the reduction process occurs from (a) the initial potential to (d) the switching potential. In this region the potential is scanned negatively to cause a reduction. The resulting current is called cathodic current (ipc). The corresponding peak potential occurs at (c), and is called the cathodic peak potential (Epc). The Epc is reached when all of the substrate at the surface of the electrode has been reduced. After the switching potential has been reached (d), the potential scans positively from (d) to (g). This results in anodic current (ipa) as oxidation occurs. The peak potential at (f) is called the anodic peak potential (Epa), and is reached when all of the substrate at the surface of the electrode has been oxidized.
Useful Equations for Reversible Systems
Electrode potential ($E$):
$E = E_i + vt \tag{1}$
where
• $E_i$ is the initial potential in volts,
• $v$ is the sweep rate in volts/s, and
• $t$ is the time in seconds.
When the direction of the potential sweep is switched, the equation becomes,
$E = E_s - vt \tag{2}$
where $E_s$ is the potential at the switching point.

Electron stoichiometry ($n$):
$E_{pa} - E_{pc} \approx \dfrac{0.0592}{n} \tag{3}$
where
• $E_{pa}$ is the anodic peak potential,
• $E_{pc}$ is the cathodic peak potential, and
• $n$ is the number of electrons participating in the redox reactions.
Formal Reduction Potential (E°’) is the mean of the $E_{pc}$ and $E_{pa}$ values:
$E°’ = \dfrac{E_{pa} + E_{pc}}{2}. \nonumber$
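The relations above can be collected into a short sketch. The sign handling in `potential` generalizes the two sweep equations so the scan can run in either direction; `n_electrons` inverts the reversible-couple relation ΔEp = Epa − Epc ≈ 0.0592/n V at 25 °C, which assumes a reversible system. The peak potentials at the bottom are hypothetical values chosen only to illustrate the arithmetic:

```python
def potential(t, E_i, E_s, v):
    """Triangular excitation signal: linear sweep from E_i toward the
    switching potential E_s at rate v, then back again."""
    t_switch = abs(E_s - E_i) / v
    sign = 1.0 if E_s > E_i else -1.0
    if t <= t_switch:
        return E_i + sign * v * t             # forward sweep
    return E_s - sign * v * (t - t_switch)    # reverse sweep

def formal_potential(E_pa, E_pc):
    """E deg-prime as the mean of the anodic and cathodic peak potentials."""
    return (E_pa + E_pc) / 2.0

def n_electrons(E_pa, E_pc, delta=0.0592):
    """Electron count from the reversible peak separation (25 C assumed)."""
    return delta / (E_pa - E_pc)

# Hypothetical peak potentials in volts (not values from the text):
E_pa, E_pc = 0.273, 0.215
print(round(formal_potential(E_pa, E_pc), 3))  # 0.244
print(round(n_electrons(E_pa, E_pc)))          # 1
```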
Concentration Profiles at the Electrode Surface
In an unstirred solution, mass transport of the analyte to the electrode surface occurs by diffusion alone.¹ Fick’s Law for mass transfer diffusion relates the distance from the electrode (x), time (t), and the reactant concentration (CA) to the diffusion coefficient (DA).
$\dfrac{\partial c_A}{\partial t} = D_A \dfrac{\partial^2c_A}{\partial x^2} \tag{4}$
During a reduction, the current increases until it reaches a peak: when all of the M+ exposed to the surface of the electrode has been reduced to M. At this point, additional M+ to be reduced can reach the surface of the electrode by diffusion alone, and as the concentration of M increases, the distance M+ has to travel also increases. During this process the current, which has peaked, begins to decline as smaller and smaller amounts of M+ approach the electrode. It is not practical to obtain limiting currents $i_{pa}$ and $i_{pc}$ in a system in which the solution has not been stirred because the currents continually decrease with time.¹
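The depletion behavior just described can be illustrated numerically. Below is a sketch (not the author's model; all parameters hypothetical) that solves Equation 4 by explicit finite differences with the surface concentration held at zero, showing the diffusion-limited current decaying with time in an unstirred solution:

```python
# Illustrative sketch: explicit finite-difference solution of Fick's second law
# with c = 0 at the electrode surface. All numbers are hypothetical.
D = 1e-5             # diffusion coefficient, cm^2/s
dx, dt = 1e-4, 4e-4  # grid spacing (cm) and time step (s); D*dt/dx^2 = 0.4 < 0.5 (stable)
c = [1.0] * 200      # normalized bulk concentration; c[0] is the electrode surface

currents = []
for step in range(2000):
    c[0] = 0.0                         # surface species consumed instantly
    flux = D * (c[1] - c[0]) / dx      # current is proportional to the gradient at x = 0
    currents.append(flux)
    new_c = c[:]
    for i in range(1, len(c) - 1):     # far boundary stays at bulk concentration
        new_c[i] = c[i] + D * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
    c = new_c

print(currents[10] > currents[100] > currents[1000])  # current decays with time
```

The printed comparison confirms the qualitative claim in the text: without stirring, the current keeps falling as the depletion layer grows.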
In a stirred solution, a Nernst diffusion layer, ~$10^{-2}$ cm thick, lies adjacent to the electrode surface. Beyond this region is a laminar flow region, followed by a turbulent flow region which contains the bulk solution.¹ Because diffusion is limited to the narrow Nernst diffusion region, the reacting analytes cannot diffuse into the bulk solution, and therefore Nernstian equilibrium is maintained and diffusion-controlled currents can be obtained. In this case, Fick’s Law for mass transfer diffusion can be simplified to give the peak current
$i_p = (2.69 \times 10^5) \; n^{3/2} \; S \; D_A^{1/2} \; v^{1/2} \; C_A^* \tag{5}$
Here, $n$ is the number of electrons gained in the reduction, $S$ is the surface area of the working electrode in cm², $D_A$ is the diffusion coefficient in cm²/s, $v$ is the sweep rate in V/s, and $C_A^*$ is the concentration of A in the bulk solution in mol/cm³, which gives $i_p$ in amperes.
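Equation 5 is straightforward to evaluate numerically. A minimal sketch (all electrode and analyte values hypothetical; with $S$ in cm², $D_A$ in cm²/s, $v$ in V/s and $C_A^*$ in mol/cm³ the current comes out in amperes):

```python
# Sketch of Equation 5; the numbers below are hypothetical.
def peak_current(n, area_cm2, d_cm2_s, sweep_v_s, conc_mol_cm3):
    """Peak current (A) for a reversible, diffusion-controlled wave."""
    return 2.69e5 * n**1.5 * area_cm2 * d_cm2_s**0.5 * sweep_v_s**0.5 * conc_mol_cm3

# 1-electron couple, 0.07 cm^2 electrode, D = 1e-5 cm^2/s,
# 0.1 V/s sweep, 1 mM = 1e-6 mol/cm^3 bulk concentration
ip = peak_current(1, 0.07, 1e-5, 0.1, 1e-6)
print(f"{ip*1e6:.1f} microamps")
```

For these values the peak current works out to roughly 19 microamps, a typical order of magnitude for a millimolar analyte at a small electrode.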
Instrumentation
A CV system consists of an electrolysis cell, a potentiostat, a current-to-voltage converter, and a data acquisition system. The electrolysis cell consists of a working electrode, counter electrode, reference electrode, and electrolytic solution. The working electrode’s potential is varied linearly with time, while the reference electrode maintains a constant potential. The counter electrode conducts electricity from the signal source to the working electrode. The purpose of the electrolytic solution is to provide ions to the electrodes during oxidation and reduction. A potentiostat is an electronic device which uses a dc power source to produce a potential which can be maintained and accurately determined, while allowing small currents to be drawn into the system without changing the voltage. The current-to-voltage converter measures the resulting current, and the data acquisition system produces the resulting voltammogram.
Applications
Cyclic voltammetry can be used to obtain qualitative information about electrochemical processes under various conditions, such as the presence of intermediates in oxidation-reduction reactions and the reversibility of a reaction. CV can also be used to determine the electron stoichiometry of a system, the diffusion coefficient of an analyte, and the formal reduction potential, which can be used as an identification tool. In addition, because concentration is proportional to current in a reversible, Nernstian system, the concentration of an unknown solution can be determined by generating a calibration curve of current vs. concentration.
Contributors and Attributions
• Amanda Quiroga (UCD)
When an X-ray is shined on a crystal, it diffracts in a pattern characteristic of the structure.
• Bragg's Law
The structures of crystals and molecules are often identified using x-ray diffraction studies, which are explained by Bragg’s Law. The law describes the relationship between an x-ray beam incident on a crystal surface and its reflection from that surface.
• Powder X-ray Diffraction
When an X-ray is shined on a crystal, it diffracts in a pattern characteristic of the structure. In powder X-ray diffraction, the diffraction pattern is obtained from a powder of the material, rather than an individual crystal. Powder diffraction is often easier and more convenient than single crystal diffraction since it does not require individual crystals be made. Powder X-ray diffraction (XRD) also obtains a diffraction pattern for the bulk material of a crystalline solid, rather than of a single crystal, which doesn't necessarily represent the overall material.
• X-ray Crystallography
X-ray Crystallography is a scientific method used to determine the arrangement of atoms of a crystalline solid in three dimensional space. This technique takes advantage of the interatomic spacing of most crystalline solids by employing them as a diffraction grating for x-ray light, which has wavelengths on the order of 1 angstrom ($10^{-8}$ cm).
• X-ray Diffraction
The construction of a simple powder diffractometer was first described by Hull in 1917 (1), shortly after the discovery of X-rays by Wilhelm Conrad Röntgen in 1895 (2). A diffractometer measures the angles at which X-rays are reflected and thus obtains the structural information they contain. The resolution of this technique has since improved significantly, and it is widely used as a tool to analyze phase information and solve the crystal structures of solid-state materials.
Thumbnail: Photo of an X-Ray Diffraction machine. Photo from the Australian Microscopy & Microanalysis Research Facility Website.
Diffraction Scattering Techniques
The structures of crystals and molecules are often identified using x-ray diffraction studies, which are explained by Bragg’s Law. The law describes the relationship between an x-ray beam incident on a crystal surface and its reflection from that surface.
Introduction
Bragg’s Law was introduced by Sir W.H. Bragg and his son Sir W.L. Bragg. The law states that when an x-ray is incident onto a crystal surface, its angle of incidence, $\theta$, will reflect back with the same angle of scattering, $\theta$. And, when the path difference, $2d\sin\theta$, is equal to a whole number, $n$, of wavelengths, constructive interference will occur.
Consider a single crystal with aligned planes of lattice points separated by a distance d. Monochromatic X-rays A, B, and C are incident upon the crystal at an angle θ. They reflect off atoms X, Y, or Z.
The path difference between the ray reflected at atom X and the ray reflected at atom Y can be seen to be 2YX. From basic trigonometry we can express the distance YX in terms of the lattice distance and the X-ray incident angle: $YX = d\sin\theta$, so the path difference is $2d\sin\theta$.
If the path difference is equal to an integer multiple of the wavelength, then X-rays A and B (and by extension C) will arrive at atom X in the same phase. In other words, given the condition $2d\sin\theta = n\lambda$,
then the scattered radiation will undergo constructive interference and thus the crystal will appear to have reflected the X-radiation. If, however, this condition is not satisfied, then destructive interference will occur.
Bragg’s Law
$n\lambda = 2d\sin\theta \nonumber$
where:
• $\lambda$ is the wavelength of the x-ray,
• $d$ is the spacing of the crystal layers (interplanar distance),
• $\theta$ is the incident angle (the angle between incident ray and the scatter plane), and
• $n$ is an integer
The principle of Bragg’s law is applied in the construction of instruments such as the Bragg spectrometer, which is often used to study the structure of crystals and molecules.
When an X-ray is shined on a crystal, it diffracts in a pattern characteristic of the structure. In powder X-ray diffraction, the diffraction pattern is obtained from a powder of the material, rather than an individual crystal. Powder diffraction is often easier and more convenient than single crystal diffraction since it does not require individual crystals be made. Powder X-ray diffraction (XRD) also obtains a diffraction pattern for the bulk material of a crystalline solid, rather than of a single crystal, which doesn't necessarily represent the overall material. A diffraction pattern plots intensity against the angle of the detector, $2\theta$.
Introduction
Since most materials have unique diffraction patterns, compounds can be identified by using a database of diffraction patterns. The purity of a sample can also be determined from its diffraction pattern, as well as the composition of any impurities present. A diffraction pattern can also be used to determine and refine the lattice parameters of a crystal structure. A theoretical structure can also be refined using a method known as Rietveld refinement. The particle size of the powder can also be determined by using the Scherrer formula, which relates the particle size to the peak width. The Scherrer formula is
$t = \dfrac{0.9 \lambda}{\sqrt{B^2_M-B^2_S} \cos \theta} \nonumber$
with
• $\lambda$ is the x-ray wavelength,
• $B_M$ is the observed peak width,
• $B_S$ is the peak width of a crystalline standard, and
• $\theta$ is the angle of diffraction.
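The Scherrer calculation above is easy to script. A sketch assuming the peak widths are measured in degrees of $2\theta$ and converted to radians; all values are hypothetical:

```python
import math

# Sketch of the Scherrer formula; widths must end up in radians.
def scherrer_size(wavelength_A, b_obs_deg, b_std_deg, two_theta_deg):
    """Crystallite size (angstroms) from an observed peak width."""
    b = math.radians(math.sqrt(b_obs_deg**2 - b_std_deg**2))  # corrected width, radians
    theta = math.radians(two_theta_deg / 2)                   # pattern is plotted in 2*theta
    return 0.9 * wavelength_A / (b * math.cos(theta))

# Cu K-alpha wavelength with made-up observed and standard widths
size = scherrer_size(1.5418, 0.50, 0.10, 40.0)
print(f"{size/10:.0f} nm")
```

For these invented widths the size comes out in the tens of nanometers, a common regime for nanocrystalline powders.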
To the left is an example XRD pattern for $Ba_{24}Ge_{100}$. The x axis is $2\theta$ and the y axis is the intensity.
Bragg's Law
X-rays are partially scattered by atoms when they strike the surface of a crystal. The part of the X-ray that is not scattered passes through to the next layer of atoms, where again part of the X-ray is scattered and part passes through to the next layer. This causes an overall diffraction pattern, similar to how a grating diffracts a beam of light. In order for an X-ray to diffract, the sample must be crystalline and the spacing between atom layers must be close to the radiation wavelength.
If beams diffracted by two different layers are in phase, constructive interference occurs and the diffraction pattern shows a peak; however, if they are out of phase, destructive interference occurs and there is no peak. Diffraction peaks only occur if
$\sin \theta = \dfrac{n\lambda}{2d} \nonumber$
where
• $\theta$ is the angle of incidence of the X-ray,
• $n$ is an integer,
• $\lambda$ is the wavelength, and
• $d$ is the spacing between atom layers.
Since a highly regular structure is needed for diffraction to occur, only crystalline solids will diffract; amorphous materials will not show up in a diffraction pattern.
Instrumentation
A powder X-ray diffractometer consists of an X-ray source (usually an X-ray tube), a sample stage, a detector and a way to vary angle θ. The X-ray is focused on the sample at some angle θ, while the detector opposite the source reads the intensity of the X-ray it receives at 2θ away from the source path. The incident angle is then increased over time while the detector angle always remains 2θ above the source path.
X-ray Tubes
While other sources such as radioisotopes and secondary fluorescence exist, the most common source of X-rays is an X-ray tube. The tube is evacuated and contains a copper block with a metal target anode, and a tungsten filament cathode with a high voltage between them. The filament is heated by a separate circuit, and the large potential difference between the cathode and anode fires electrons at the metal target. The accelerated electrons knock core electrons out of the metal, and electrons in the outer orbitals drop down to fill the vacancies, emitting X-rays. The X-rays exit the tube through a beryllium window. Due to massive amounts of heat being produced in this process, the copper block must usually be water cooled.
X-ray Detectors
While older machines used film as a detector, most modern equipment uses transducers that produce an electrical signal when exposed to radiation. These detectors are often used as photon counters, so intensities are determined by the number of counts in a certain amount of time.
Gas-Filled Transducers
A gas-filled transducer consists of a metal chamber filled with an inert gas, with the walls of the chamber as a cathode and a long anode in the center of the chamber. As an X-ray enters the chamber, its energy ionizes many molecules of the gas. The free electrons then migrate towards the anode and the cations towards the cathode, with some recombining before they reach the electrodes. The electrons that reach the anode cause current to flow, which can be detected. The sensitivity and dead time (when the transducer will not respond to radiation) both depend on the voltage the transducer is operated at. At high voltage, the transducer will be very sensitive but have a long dead time, and at low voltage the transducer will have a short dead time but low sensitivity.
Scintillation Counters
In a scintillation counter, a phosphor is placed in front of a photomultiplier tube. When X-rays strike the phosphor, it produces flashes of light, which are detected by the photomultiplier tube.
Semiconductor Transducers
A semiconductor transducer has a gold coated p-type semiconductor layered on a lithium containing semiconductor intrinsic zone, followed by an n-type semiconductor on the other side of the intrinsic zone. The semiconductor is usually composed of silicon; germanium is used if the radiation wavelength is very short. The n-type semiconductor is coated by an aluminum contact, which is connected to a preamplifier. The entire crystal has a voltage applied across it. When an X-ray strikes the crystal, it elevates many electrons in the semiconductor into the conduction band, which causes a pulse of current.
References
1. Dann, S.E. Reactions and Characterization of SOLIDS. Royal Society of Chemistry, USA (2002).
2. Skoog, D.A.; Holler, F.J.; Crouch, S.R. Principles of Instrumental Analysis. Sixth Edition, Thomson Brooks/Cole, USA (2007).
Exercise $1$
Copper emits radiation at 1.5418 Å. If a diffraction pattern taken with a copper X-ray tube source shows a peak at 40°, what is the corresponding d spacing? (Hint: don't forget that diffraction patterns are plotted in $2\theta$, not $\theta$.)
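One way to work the exercise numerically, assuming first-order diffraction ($n = 1$):

```python
import math

# Remember the detector angle is 2*theta, so theta = 40 deg / 2 = 20 deg.
def d_spacing(wavelength_A, two_theta_deg, n=1):
    """Interplanar spacing from Bragg's law, d = n*lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength_A / (2 * math.sin(theta))

print(f"{d_spacing(1.5418, 40.0):.3f} angstroms")  # prints 2.254 angstroms
```

With $\theta = 20°$ the spacing works out to about 2.25 Å.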
X-ray Crystallography is a scientific method used to determine the arrangement of atoms of a crystalline solid in three dimensional space. This technique takes advantage of the interatomic spacing of most crystalline solids by employing them as a diffraction grating for x-ray light, which has wavelengths on the order of 1 angstrom ($10^{-8}$ cm).
Introduction
In 1895, Wilhelm Röntgen discovered x-rays. The nature of x-rays, whether they were particles or electromagnetic radiation, was a topic of debate until 1912. If the wave idea was correct, researchers knew that the wavelength of this light would need to be on the order of 1 angstrom (Å) ($10^{-8}$ cm). Diffraction and measurement of such small wavelengths would require a grating with spacing on the same order of magnitude as the light.
In 1912, Max von Laue, at the University of Munich in Germany, postulated that atoms in a crystal lattice had a regular, periodic structure with interatomic distances on the order of 1 Å. Without having any evidence to support his claim on the periodic arrangement of atoms in a lattice, he further postulated that the crystalline structure could be used to diffract x-rays, much like a grating in an infrared spectrometer can diffract infrared light. His postulate was based on the following assumptions: the atomic lattice of a crystal is periodic, x-rays are electromagnetic radiation, and the interatomic distances of a crystal are on the same order of magnitude as x-ray light. Laue's predictions were confirmed when two researchers, Friedrich and Knipping, successfully photographed the diffraction pattern associated with the x-ray radiation of crystalline $CuSO_4 \cdot 5H_2O$. The science of x-ray crystallography was born.
The arrangement of the atoms needs to be in an ordered, periodic structure in order for them to diffract the x-ray beams. A series of mathematical calculations is then used to produce a diffraction pattern that is characteristic to the particular arrangement of atoms in that crystal. X-ray crystallography remains to this day the primary tool used by researchers in characterizing the structure and bonding of organometallic compounds.
Diffraction
Diffraction is a phenomena that occurs when light encounters an obstacle. The waves of light can either bend around the obstacle, or in the case of a slit, can travel through the slits. The resulting diffraction pattern will show areas of constructive interference, where two waves interact in phase, and destructive interference, where two waves interact out of phase. Calculation of the phase difference can be explained by examining Figure 1 below.
In the figure below, two parallel waves, BD and AH, are striking a grating at an angle $\theta_o$. The incident wave BD travels farther than AH by a distance of CD before reaching the grating. The scattered wave (depicted below the grating), HF, travels farther than the scattered wave DE by a distance of HG. So the total path difference between path AHGF and path BCDE is CD - HG. To observe a wave of high intensity (one created through constructive interference) at the angle $\theta$, the difference CD - HG must equal an integer number of wavelengths: $CD - HG = n\lambda$, where $\lambda$ is the wavelength of the light. Applying some basic trigonometric properties, the following two equations can be shown about the lines:
$CD = x \cos(\theta_o) \nonumber$
and
$HG = x \cos (θ) \nonumber$
where $x$ is the distance between the points where the diffraction repeats. Combining the two equations,
$x(\cos θ_o - \cos θ) = n \lambda \nonumber$
Bragg's Law
Diffraction of an x-ray beam occurs when the light interacts with the electron cloud surrounding the atoms of the crystalline solid. Due to the periodic crystalline structure of a solid, it is possible to describe it as a series of planes with an equal interplanar distance. As an x-ray beam hits the surface of the crystal at an angle $\theta$, some of the light will be diffracted at that same angle away from the solid (Figure 2). The remainder of the light will travel into the crystal and some of that light will interact with the second plane of atoms. Some of that light will be diffracted at an angle $\theta$, and the remainder will travel deeper into the solid. This process will repeat for the many planes in the crystal. The x-ray beams travel different path lengths before hitting the various planes of the crystal, so after diffraction, the beams will interact constructively only if the path length difference is equal to an integer number of wavelengths (just like in the normal diffraction case above). In the figure below, the difference in path lengths of the beam striking the first plane and the beam striking the second plane is equal to BG + GF. So, the two diffracted beams will constructively interfere (be in phase) only if $BG + GF = n \lambda$. Basic trigonometry tells us that each segment is equal to the interplanar distance times the sine of the angle $\theta$:
$BG = GF = d \sin \theta \label{1}$
Thus,
$2d \sin \theta = n \lambda \label{2}$
This equation is known as Bragg's Law, named after W. H. Bragg and his son, W. L. Bragg, who discovered this geometric relationship in 1912. Bragg's Law relates the distance between two planes in a crystal and the angle of reflection to the x-ray wavelength. The x-rays that are diffracted off the crystal have to be in phase in order to produce a signal. Only certain angles that satisfy the following condition will register:
$\sin \theta = \dfrac{n \lambda}{2d} \label{3}$
For historical reasons, the resulting diffraction spectrum is represented as intensity vs. $2θ$.
Instrument Components
The main components of an x-ray instrument are similar to those of many optical spectroscopic instruments. These include a source, a device to select and restrict the wavelengths used for measurement, a holder for the sample, a detector, and a signal converter and readout. However, for x-ray diffraction, only a source, sample holder, and signal converter/readout are required.
The Source
X-ray tubes provide a means for generating x-ray radiation in most analytical instruments. An evacuated tube houses a tungsten filament which acts as a cathode opposite a much larger, water-cooled anode made of copper with a metal plate on it. The metal plate can be made of any of the following metals: chromium, tungsten, copper, rhodium, silver, cobalt, and iron. A high voltage is passed through the filament and high energy electrons are produced. The machine needs some way of controlling the intensity and wavelength of the resulting light. The intensity of the light can be controlled by adjusting the amount of current passing through the filament, essentially acting as a temperature control. The wavelength of the light is controlled by setting the proper accelerating voltage of the electrons. The voltage placed across the system will determine the energy of the electrons traveling towards the anode. X-rays are produced when the electrons hit the target metal. Because the energy of light is inversely proportional to wavelength ($E = h\nu = hc/\lambda$), controlling the energy controls the wavelength of the x-ray beam.
X-ray Filter
Monochromators and filters are used to produce monochromatic x-ray light. This narrow wavelength range is essential for diffraction calculations. For instance, a zirconium filter can be used to cut out unwanted wavelengths from a molybdenum metal target (see figure 4). The molybdenum target will produce x-rays with two wavelengths. A zirconium filter can be used to absorb the unwanted emission with wavelength Kβ, while allowing the desired wavelength, Kα to pass through.
Needle Sample Holder
The sample holder for an x-ray diffraction unit is simply a needle that holds the crystal in place while the x-ray diffractometer takes readings.
Signal Converter
In x-ray diffraction, the detector is a transducer that counts the number of photons that collide into it. This photon counter gives a digital readout in number of photons per unit time. Below is a figure of a typical x-ray diffraction unit with all of the parts labeled.
Fourier Transform
In mathematics, a Fourier transform is an operation that converts one function into another. In the case of FTIR, a Fourier transform is applied to a function in the time domain to convert it into the frequency domain. One way of thinking about this is to draw an analogy with music written down on a sheet of paper. Each note is in a so-called "sheet" domain. These same notes can also be expressed by playing them. The process of playing the notes can be thought of as converting the notes from the "sheet" domain into the "sound" domain. Each note played represents exactly what is on the paper, just in a different way. This is precisely what the Fourier transform process does to the data collected in an x-ray diffraction experiment. This is done in order to determine the electron density around the crystalline atoms in real space. The following equations can be used to determine the electrons' position:
$p(x,y,z) = \sum_h \sum_k \sum_l F(hkl) e ^{-2\pi i (hx+ky+lz)} \label{1A}$
$F(hkl) = \int _0^1 \int _0^1 \int _0^1 p(x,y,z) e ^{2\pi i (hx+ky+lz)} dx\;dy\;dz \label{2B}$
$F(q) = | F(q) | e^{i \phi(q)} \label{3C}$
where $p(x,y,z)$ is the electron density function in real space and $F(hkl)$ is the structure factor in reciprocal space. Equation 1 represents the Fourier expansion of the electron density function. To solve for $F(hkl)$, Equation 1 is inverted by integrating over all values of x, y, and z, resulting in Equation 2. The resulting function $F(hkl)$ is generally expressed as a complex number (as seen in Equation 3 above) with $| F(q)|$ representing the magnitude of the function and $\phi$ representing the phase.
Crystallization
In order to run an x-ray diffraction experiment, one must first obtain a crystal. In organometallic chemistry, a reaction might work but when no crystals form, it is impossible to characterize the products. Crystals are grown by slowly cooling a supersaturated solution. Such a solution can be made by heating a solution to decrease the amount of solvent present and to increase the solubility of the desired compound in the solvent. Once made, the solution must be cooled gradually. Rapid temperature change will cause the compound to crash out of solution, trapping solvent and impurities within the newly formed matrix. Cooling continues as a seed crystal forms. This crystal is a point where solute can deposit out of the solution and into the solid phase. Solutions are generally placed into a freezer (-78 ºC) in order to ensure all of the compound has crystallized. One way to ensure gradual cooling in a -78 ºC freezer is to place the container housing the compound into a beaker of ethanol. The ethanol will act as a temperature buffer, ensuring a slow decrease in the temperature gradient between the flask and the freezer. Once crystals are grown, it is imperative that they remain cold as any addition of energy will cause a disruption of the crystal lattice, which will yield bad diffraction data. The result of an organometallic chromium compound crystallization can be seen below.
Mounting the Crystal
Due to the air-sensitivity of most organometallic compounds, crystals must be transported in a highly viscous organic compound called paratone oil (Figure $7$). Crystals are abstracted from their respective Schlenks by dabbing the end of a spatula with the paratone oil and then sticking the crystal onto the oil. Although there might be some exposure of the compounds to air and water, crystals can withstand more exposure than solution (of the preserved protein) before degrading. On top of serving to protect the crystal, the paratone oil also serves as the glue to bind the crystal to the needle.
Rotating Crystal Method
To describe the periodic, three dimensional nature of crystals, the Laue equations are employed:
$a(\cos \alpha_o - \cos \alpha) = h\lambda \label{eq1}$
$b(\cos \beta_o - \cos \beta) = k\lambda \label{eq2}$
$c(\cos \gamma_o - \cos \gamma) = l\lambda \label{eq3}$
where $a$, $b$, and $c$ are the three axes of the unit cell; $\alpha_o$, $\beta_o$, $\gamma_o$ are the angles of the incident radiation with respect to those axes; and $\alpha$, $\beta$, $\gamma$ are the angles of the diffracted radiation. A diffraction signal (constructive interference) will arise when $h$, $k$, and $l$ are integer values. The rotating crystal method employs these equations. X-ray radiation is shone onto a crystal as it rotates around one of its unit cell axes. The beam strikes the crystal at a 90 degree angle. Using the first equation above, we see that if $\alpha_o$ is 90 degrees, then $\cos \alpha_o = 0$. For the equation to hold true, we can set $h = 0$, granted that $\alpha = 90°$. The three equations will be satisfied at various points as the crystal rotates. This gives rise to a diffraction pattern (shown in the image below as multiple h values). The cylindrical film is then unwrapped and developed. The following equation can be used to determine the length of the axis around which the crystal was rotated:
$a = \dfrac{h \lambda}{\sin \left( \tan^{-1}(y/r) \right)} \nonumber$
where $a$ is the length of the axis, $y$ is the distance from $h=0$ to the $h$ of interest, $r$ is the radius of the film, and $\lambda$ is the wavelength of the x-ray radiation used. The first axis length can be determined with ease, but the other two require far more work, including remounting the crystal so that it rotates around that particular axis.
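A sketch of the layer-line arithmetic, taking the relation as $a = h\lambda/\sin(\tan^{-1}(y/r))$ with made-up film measurements:

```python
import math

# Hypothetical film measurements for the first layer line (h = 1).
def axis_length(h, wavelength_A, y_mm, radius_mm):
    """Unit-cell axis length (angstroms) from a rotation photograph."""
    phi = math.atan(y_mm / radius_mm)   # elevation angle of layer line h
    return h * wavelength_A / math.sin(phi)

print(f"{axis_length(1, 1.5418, 9.0, 30.0):.2f} angstroms")
```

For these invented numbers the axis length comes out near 5.4 Å, a plausible unit-cell edge.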
X-ray Crystallography of Proteins
The crystals that form are frozen in liquid nitrogen and taken to the synchrotron, a high-powered tunable x-ray source. They are mounted on a goniometer and hit with a beam of x-rays. Data is collected as the crystal is rotated through a series of angles. The angle depends on the symmetry of the crystal.
Proteins are among the many biological molecules that are used for x-ray Crystallography studies. They are involved in many pathways in biology, often catalyzing reactions by increasing the reaction rate. Most scientists use x-ray Crystallography to solve the structures of protein and to determine functions of residues, interactions with substrates, and interactions with other proteins or nucleic acids. Proteins can be co - crystallized with these substrates, or they may be soaked into the crystal after crystallization.
Protein Crystallization
Proteins will solidify into crystals under certain conditions. These conditions are usually made up of salts, buffers, and precipitating agents. This is often the hardest step in x-ray crystallography. Hundreds of conditions varying the salts, pH, buffer, and precipitating agents are combined with the protein in order to crystallize the protein under the right conditions. This is done using 96 well plates, each well containing a different condition, and crystals form over the course of days, weeks, or even months. The pictures below are crystals of APS Kinase D63N from Penicillium chrysogenum taken at the Chemistry building at UC Davis after crystals formed over a period of a week.
The construction of a simple powder diffractometer was first described by Hull in 1917,¹ shortly after the discovery of X-rays by Wilhelm Conrad Röntgen in 1895.² A diffractometer measures the angles at which X-rays are reflected and thus obtains the structural information they contain. The resolution of this technique has since improved significantly, and it is widely used as a tool to analyze phase information and solve the crystal structures of solid-state materials.
Introduction
Since the wavelength of X-rays is similar to the distance between crystal layers, incident X-rays will be diffracted by certain crystalline layers, and diffraction patterns containing important structural information about the crystal can be obtained. The diffraction pattern is considered the fingerprint of the crystal because each crystal structure produces a unique diffraction pattern and every phase in a mixture produces its diffraction pattern independently. In powder X-ray diffraction (XRD), the bulk sample is ground into fine powders, typically under 10 µm.² Unlike the single crystal X-ray diffraction (X-ray Crystallography) technique, the sample is distributed evenly over every possible orientation, and powder XRD collects one-dimensional information, a diagram of diffracted beam intensity vs. Bragg angle θ, rather than three-dimensional information.
Theoretical consideration
In this section, let us take a look at the theoretical basis of the powder X-ray diffraction technique (e.g. lattice structures and how X-rays interact with crystal structures).
Unit cells
“Crystals are built up of regular arrangements of atoms in three dimensions; these arrangements can be represented by a repeat unit or motif called the unit cell.”² In crystallography, all crystal unit cells can be classified into 230 space groups. Some basic knowledge of crystallography is necessary for a good understanding of the powder XRD technique. In crystallography, the basic classifications are: 6 crystal families, 7 crystal systems, 5 centering positions, 14 Bravais lattices and 32 crystal classes.
Based on the angles and the lengths of the axes, unit cells can be divided into 6 crystal families, which are cubic, tetragonal, hexagonal, orthorhombic, monoclinic and triclinic. As the hexagonal family can have two different appearances, we can divide it into two systems, the trigonal lattice and the hexagonal lattice. That is how the 7 crystal systems are generated. If we set aside the shape of the lattice and just consider the atoms' positions, we can divide the lattices into primitive and non-primitive ones. A primitive lattice (also called simple) is the lattice with the smallest possible atomic coordination number,² e.g. when eight atoms lie at the eight corners. All other lattices are called non-primitive. Based on the three-dimensional positions of the atoms in the unit cell, we can divide the non-primitive lattices into four types: face centered (F), side centered (C), body centered (I) and rhombohedral (R).
A Bravais lattice is a “combination of lattice type and crystal system”1. A chart showing examples of all 14 Bravais lattices can be found in the external links.
The 32 crystal classes refer to the 32 crystallographic point groups classified by their possible symmetry operations: rotation, reflection and inversion. You may wonder why there are only 32 possible point groups. The answer is the crystallographic restriction, which means a crystal system can only have 5 kinds of rotation axes: 1-fold, 2-fold, 3-fold, 4-fold and 6-fold. Simply put, only these permissible rotation axes allow unit cells to fill space uniformly without any gaps between them.
The 230 space groups are combinations of the 14 Bravais lattices and the 32 crystal classes. The space groups are generated from translations of the related Bravais lattice and the glide planes and/or screw axes of the relevant crystal class. They are represented in Hermann-Mauguin notation. For example, space group No. 62, Pnma, is derived from the D2h crystal class. P indicates a primitive structure, and n, m and a stand for a diagonal glide plane, a mirror plane and an axial glide plane, respectively. This space group belongs to the orthorhombic crystal family.
Miller Indices and Reciprocal Lattice
Miller indices and the reciprocal lattice are essential to understanding the geometry of lattice planes and the X-ray diffraction technique, because they are widely used to index planes and orientations in crystallography and allow data to be handled in a simple, mathematical way. To assign Miller indices (h,k,l) to a certain set of parallel planes, defined as a plane family, first find the plane nearest to the one passing through the origin. Then find the three intercepts of this plane on the unit cell vectors a, b, c. “The Miller indices would be the reciprocals of the fractional intersections.”1 Why do we want the reciprocals of the fractions instead of the fractions themselves? To answer that, we need to understand what the reciprocal lattice is.
“Geometrically, the planes can be specified by two quantities: (1) their orientation in the crystal and (2) their d-spacing.”4 (The d-spacing is the interplanar distance.) This allows us to represent a certain family of planes by a vector d*, which is normal to the planes and whose length is inversely proportional to the d-spacing, i.e. d* = K/d 4. d* is called the reciprocal lattice vector, and in three dimensions the reciprocal lattice vector d*hkl stands for the (h,k,l) family. The end points of the reciprocal lattice vectors form a grid or lattice, the reciprocal lattice4. The reciprocal lattice cell vectors a*, b*, c* are the reciprocal forms of the direct unit cell vectors a, b, c, and it is then easy to see that d*hkl = ha* + kb* + lc*. Because the Miller indices are the reciprocals of the fractional intercepts, the two notation systems are consistent and straightforward for indexing the crystal lattice.
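The relation d*hkl = ha* + kb* + lc* can be checked numerically. The sketch below is illustrative only: the orthorhombic cell dimensions are made up, and the crystallographic convention K = 1 is assumed, so that |d*| = 1/d.

```python
import numpy as np

# Direct lattice vectors for a hypothetical orthorhombic cell (Angstroms).
a = np.array([4.0, 0.0, 0.0])
b = np.array([0.0, 6.0, 0.0])
c = np.array([0.0, 0.0, 8.0])

V = np.dot(a, np.cross(b, c))   # unit cell volume
a_star = np.cross(b, c) / V     # reciprocal lattice vectors (K = 1 convention)
b_star = np.cross(c, a) / V
c_star = np.cross(a, b) / V

def d_star(h, k, l):
    """Reciprocal lattice vector d*_hkl = h a* + k b* + l c*."""
    return h * a_star + k * b_star + l * c_star

# |d*| = 1/d, so the d-spacing of the (1,0,0) planes equals |a| here.
d_100 = 1.0 / np.linalg.norm(d_star(1, 0, 0))
print(d_100)  # 4.0
```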
Bragg’s Law
Bragg’s law is the theoretical basis of the X-ray diffractometer. Consider the crystal as built up of planes. As shown in the diagram, an X-ray beam shines onto the planes and is reflected by different planes. The beam reflected by the lower plane travels an extra distance (shown in Figure 2.2.1 in red) compared with that reflected by the upper one, equal to 2d sinθ. If that distance equals nλ (n is an integer), we get constructive interference, which corresponds to the bright contrast in the diffraction pattern. The Bragg equation,

$n\lambda = 2d\sin\theta$

thus defines the positions at which constructive diffraction occurs at different orders.
The d-spacing, the interplanar distance d in Bragg’s equation, is determined by the lattice parameters a, b, c and the Miller indices. For a cubic cell, for example,

$d_{hkl} = \dfrac{a}{\sqrt{h^2+k^2+l^2}}$
So after finding the d-spacing from the detected Bragg angle, we can determine the lattice parameters, which contain vital structural information. We can also reconstruct an unknown structure by finding all the possible d-spacings. Powder XRD can identify the phases contained in a mixture by separating and recognizing their characteristic diffraction patterns.
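The two steps above can be sketched numerically. The wavelength and angle below are example values chosen for illustration, not data from the text: Bragg's law nλ = 2d sinθ is solved for d, and for a cubic cell the lattice parameter follows from d = a/√(h²+k²+l²).

```python
import math

def d_spacing_from_bragg(wavelength, theta_deg, n=1):
    """Solve Bragg's law n*lambda = 2*d*sin(theta) for the d-spacing."""
    return n * wavelength / (2.0 * math.sin(math.radians(theta_deg)))

def cubic_lattice_parameter(d, h, k, l):
    """For a cubic cell, d = a / sqrt(h^2 + k^2 + l^2), so a = d * sqrt(...)."""
    return d * math.sqrt(h**2 + k**2 + l**2)

# Example: Cu K-alpha radiation (1.5406 A) diffracted at theta = 22.3 deg
# by the (1,1,1) planes of a cubic crystal.
d = d_spacing_from_bragg(1.5406, 22.3)   # about 2.03 A
a = cubic_lattice_parameter(d, 1, 1, 1)  # about 3.52 A
```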
Structure factor
The sample in powder X-ray diffraction is distributed evenly over every possible orientation, so after diffraction the pattern appears as concentric circles rather than the discrete spots seen in single crystal diffraction patterns. Circles with smaller radius correspond to smaller h, k, l. In certain types of unit cells, not all lattice planes have their diffraction observed; this is usually called systematic absence, and it occurs because the diffracted beams may happen to be out of phase by 180° so that the overall intensity is zero. The structure factor Fhkl determines the systematic absences and intensities. Systematic absences arise when F = 0, so no diffraction is observed. For example:
For an fcc crystal, $F_{hkl} = f\{1 + e^{\pi i(h+l)} + e^{\pi i(k+l)} + e^{\pi i(h+k)}\}$. When h, k, l are all odd or all even, F = 4f; otherwise F = 0 and the diffraction intensity is also zero. The structure factor is important in the structure determination step because it relates the Miller indices to the intensities of the diffraction peaks. The other common rules for a reflection to be observed are listed as follows:
Table 2.3.1: Systematic absence due to lattice type.2
Lattice type: Rule for reflection to be observed
Primitive, P: none
Body centered, I: hkl: h+k+l = 2n
Face centered, F: hkl: h, k, l either all odd or all even
Side centered, C: hkl: h+k = 2n
Rhombohedral, R: hkl: -h+k+l = 3n (or h-k+l = 3n)
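The rules in Table 2.3.1 follow directly from the structure factor. The sketch below is illustrative only: the atom basis positions are the standard centering positions, the atomic scattering factor f is set to 1, and the rhombohedral case is omitted for brevity. It computes F_hkl by summing exp(2πi(hx + ky + lz)) over the basis atoms and confirms that F vanishes exactly where the table predicts an absence.

```python
import cmath

# Fractional coordinates of the basis atoms for common lattice centerings.
BASES = {
    "P": [(0, 0, 0)],
    "I": [(0, 0, 0), (0.5, 0.5, 0.5)],
    "F": [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)],
    "C": [(0, 0, 0), (0.5, 0.5, 0)],
}

def structure_factor(lattice, h, k, l, f=1.0):
    """F_hkl = f * sum over basis atoms of exp(2*pi*i*(h*x + k*y + l*z))."""
    return f * sum(cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
                   for x, y, z in BASES[lattice])

def allowed_by_rule(lattice, h, k, l):
    """Systematic-absence rules from Table 2.3.1."""
    if lattice == "P":
        return True
    if lattice == "I":
        return (h + k + l) % 2 == 0
    if lattice == "F":
        return h % 2 == k % 2 == l % 2   # all odd or all even
    if lattice == "C":
        return (h + k) % 2 == 0

# The structure factor vanishes exactly where the table predicts an absence.
for lattice in BASES:
    for h in range(3):
        for k in range(3):
            for l in range(3):
                observed = abs(structure_factor(lattice, h, k, l)) > 1e-9
                assert observed == allowed_by_rule(lattice, h, k, l)
```

For example, the fcc (1,1,1) reflection gives |F| = 4f, while the mixed-parity (1,0,0) reflection is absent.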
Instrumentation
A powder X-ray diffractometer consists of three components: an X-ray source, a sample holder and a detector.
Source
Possible X-ray sources include X-ray tubes, synchrotron radiation and cyclotron radiation. An X-ray tube equipped with a filter is commonly used in laboratory diffractometers. Synchrotron radiation is a brighter source and as a result can increase the resolution.
The cathode of the X-ray tube generates electrons under an electric current. The electrons travel from cathode to anode through a high accelerating voltage, typically 30–150 kV. In this process, most of the energy is released as heat, and X-rays account for only approximately 1% of the total energy. The X-ray tube needs circulating cooling water to protect it from overheating while working. After the electrons hit the anode (red part in the schematic), the anode generates characteristic X-rays, which arise when excited electrons fall to lower electron shells and correspond to the energy difference between shells. In the Bruker D8 diffractometer the anode is made of Cu, so the X-ray source is Cu-Kα1 and Cu-Kα2. K means the electron falls to the K shell from a higher shell; α means the excited electron comes from the L shell, one shell above the K shell. If the excited electron comes from the M shell, two shells higher, the emission is defined as Kβ. The difference between Cu-Kα1 and Cu-Kα2 is that they come from different subshells: Cu-Kα1 corresponds to the 2p3/2 → 1s transition, while Cu-Kα2 corresponds to 2p1/2 → 1s.
In a reflection geometry instrument, the X-ray tube usually contains a side window made of Be to allow the generated X-rays to exit at the desired angle. Be is used as the X-ray window because its fluorescence yield (the ratio of characteristic X-ray emission to Auger emission) is close to zero, which ensures that the X-ray source remains monochromatic and does not contain introduced characteristic X-rays from other metals.
Sample Holder
There are many holder options to hold all kinds of samples and meet different requirements. Usually, evenly ground sample powder is dispersed in an organic solvent such as acetone, or pressed into a flat layer on a glass slide, to make sure the sample surface is flat. The sample holder has a press ring to fix the slide. At low angles the background can be relatively large, degrading the signal-to-noise ratio; a zero-background sample holder, usually made of single crystal silicon, can be used to avoid this.
Detector
In earlier methods, the Debye-Scherrer camera and Guinier camera methods2, photographic film served as the detector. The film is placed around the sample in a circle and records the diffracted X-ray beams; the positions of the diffraction lines correspond to the Bragg angles. Photographic film can record both the reflected and transmitted X-ray beams. Nowadays, one tends to select only the reflected beam and use a radiation counter as the detector. Compared with film, a scintillation counter can measure diffraction intensities and Bragg angles more accurately, and it is convenient to analyze the data by computer. To prevent the X-ray beam from going through the sample, more powder sample is needed.
Application
Phase Analysis
The X-ray diffractometer is most widely used for phase analysis because, compared with other characterization methods, XRD gives a fast and reliable measurement (the measurement time is determined by the step size, angle range and number of seconds per step) and easy sample preparation (well-ground powder). After collecting raw data, one opens the data in XRD data-handling software (JADE, WinXPOW, etc.) and compares the raw data with the standard patterns in the ICDD database. In many cases the database may not contain the pattern of the specific compound being studied; then a calculated pattern can easily be generated from a crystal information file or from the space group and lattice parameters. The diffraction pattern of a well-prepared sample should be very reliable (all peaks should match the peaks in the reference pattern) and contains much information. When the sample contains impurities, each kind of substance generates its own pattern independently, which allows separate analysis and helps to control and optimize reactions. Other factors that can influence the pattern include the X-ray source and the sample crystallinity (smaller crystallite size broadens the peaks).
The XRD technique is also capable of quantitative analysis of mixtures. The raw XRD data do not give quantitative information directly, because intensity is not directly related to weight percentage. However, we can make a series of controlled samples with different weight percentages of the impurity, perform XRD on each of these samples, and obtain a linear calibration curve of intensity ratio vs. weight percentage. The weight percentage in an unknown sample can then be determined from the curve. If all the atomic and crystalline information is known, we can also conduct quantitative analysis with the Rietveld method. A least-squares approximation is applied to modify all the parameters so that the difference between the experimental points and the fitted pattern is minimized. During this process, the scale factors can also be determined. The Rietveld method is widely used for samples containing more than one impurity.
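The calibration-curve procedure described above can be sketched as follows. The intensity-ratio data are entirely hypothetical, invented for illustration; a linear least-squares fit gives the calibration line, which is then inverted for an unknown sample.

```python
import numpy as np

# Hypothetical calibration data: weight % of impurity vs. measured
# XRD intensity ratio (impurity peak / main phase peak).
weight_pct = np.array([5.0, 10.0, 20.0, 40.0])
intensity_ratio = np.array([0.051, 0.102, 0.199, 0.401])

# Fit intensity ratio = slope * weight% + intercept by linear least squares.
slope, intercept = np.polyfit(weight_pct, intensity_ratio, 1)

# Invert the calibration line for an unknown sample's measured ratio.
unknown_ratio = 0.25
unknown_pct = (unknown_ratio - intercept) / slope  # roughly 25 wt%
```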
Structure determination
Powder X-ray diffraction can be used not only to analyze phase information but also to determine the structure of unknown substances. However, since powder XRD gives only one-dimensional rather than three-dimensional information, the resolution in powder XRD is much lower than that of the single crystal method and the data refinement process is more involved. “If representative single crystals are available, then single crystal diffraction is the preferred method.”1
As shown in Figure 4.1, each Bragg ring corresponds to a certain Miller plane. With the detected Bragg angles and equation 2.1, we can determine the lattice parameters. Common methods for handling the data are the direct method, the Patterson method and the Fourier method2. The least-squares Rietveld method can also be used to refine the data, and it has increased the attainable resolution considerably.
Common single crystal methods are the Laue method, the four-circle diffractometer and the rotating-crystal method.
Reference
1. W. I. F. David, K. Shankland, L. B. McCusker, Ch. Baerlocher; Structure Determination from Powder Diffraction Data; Oxford; New York : Oxford University Press, 2002
2. Anthony R. West; Basic Solid State Chemistry, Second Edition; New York : John Wiley & Sons, c1999
3. David B. Williams, C. Barry Carter; Transmission Electron Microscopy, Volume 1; Springer, 2009
4. Christopher Hammond; The Basics of Crystallography and Diffraction, Second Edition; Oxford Science Publications, 2001
Problems
1. Draw the lattice planes (1,1,1) (0,1,2) and (1,0,1) in a cubic lattice.
2. If we use Cu Kα radiation as the X-ray source and the first order Bragg diffraction peak is found at the semi-angle 35°, calculate the d-spacing of the crystal.
3. X-rays with wavelength 1.54 Å are reflected from the (2,1,1) planes of a cubic crystal. The d-spacing is found to be 5.12 Å. Calculate the lattice parameter.
4. Prove that for a body centered cubic lattice, reflections can be observed only when h+k+l = 2n.
5. If you need to conduct powder XRD on an air-sensitive crystal, find a suitable sample holder online.
Learning Objectives
The basic theory of lasers will be presented with emphasis on:
• laser radiation properties
• laser components and design
• laser light generation
• common laser types
This module discusses basic concepts related to lasers. Lasers are light sources that produce electromagnetic radiation through the process of stimulated emission. Laser light has properties different from more common light sources, such as incandescent bulbs and fluorescent lamps. Typically, laser radiation spans a small range of wavelengths and is emitted in a beam that is spatially narrow. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Lasers are ubiquitous in our lives and are broadly applied in areas that include scientific research, medicine, engineering, telecommunications, industry and business (see the Applications page for examples). This module is aimed at presenting the most basic principles of lasers and discussing aspects of common types. Properties of laser radiation and laser optical components are introduced.
Introduction to Lasers
• Laser development has an exciting history and includes a fair bit of controversy, some of which remains unresolved [1-4].
• Charles Townes [5] laid groundwork for the laser in the 1950s by demonstrating amplification of electromagnetic waves by stimulated emission. He was awarded the Nobel Prize in Physics in 1964.
• The first working laser was demonstrated in 1960 by Theodore Maiman [6] at Hughes Research Laboratories.
• The first laser was constructed from a small ruby rod. It was excited by an intense xenon lamp and emitted light at 694.3 nm [1,4].
• The development of gas and semiconductor lasers followed soon after [2,4].
01 Common Devices
• Supermarket barcode readers, CD and DVD players, and laser pointers used in presentations are examples of commonly encountered devices that rely on lasers.
• Lasers are essential for signal transmission in modern telecommunications, including telephone, ethernet and digital TV.
• Early applications of lasers were in medicine. The ability to direct a narrow, bright and focusable beam of light to specific regions in tissue and bone has led to advances in, for example, dental and surgical procedures, vision correction (e.g., LASIK surgery), dermatological treatments and disease diagnosis.
• High power lasers are employed in applications that include welding, drilling, cutting, and surface modification through machining and promotion of phase transitions and chemical reactions.
• Lasers are used widely in scientific research. In addition, laser systems are under study to increase the wavelengths accessible and provide greater power and compactness. Experiments with lasers push the limits of the time and spatial dimensions probed.
03 Applications
01 Barcode Readers, CD Players and Laser Pointers
• Take advantage of the brightness and highly directional properties of laser radiation and often employ diode lasers.
• Supermarket barcode readers a - As the light beam is scanned across the barcode, the white and black regions produce a modulation in the reflected light intensity. Lasers enable supermarket scanners to read barcodes rapidly and when presented over a range of angles.
• CD (and DVD) players b,c - Lasers are used to both encode and read information on CDs and DVDs. A smaller beam focus enables a greater density of information to be stored. Blu-ray technology uses a shorter wavelength than earlier technologies and hence provides greater information storage.
02 Signal Transmission
Telecommunications a,b,c
• Lasers are used in communications to encode digital signals for transmission along an optical fiber in a manner similar to electrical transmission of binary data on a metal wire.
• Laser signal transmission is central to modern telephone, ethernet and digital TV communications.
• Lasers provide the power necessary to transmit signals long distances around the globe.
• The narrow bandwidth attainable by lasers allows for signal transmission through optical fiber using multiple wavelengths. Each wavelength provides a channel for data transmission, and narrow laser bandwidths allow for a greater number of channels by enabling channels to be packed closely on a fiber.
• Diode lasers d,e are key to communication systems, as they can be designed to transmit at narrow bandwidth and at photon energies that span a range of wavelengths within the optical fiber transmission window a,b,c.
03 Medicine
Lasers in Medicine a, 8
• Many applications developed from the ability of a laser to produce localized heating. In surgical procedures, for example, the heat from a laser can cauterize and thereby reduce bleeding as tissue is cut.
• The LASIK eye surgical procedure employs a laser to reshape corneal tissue by ablation.
• In the treatment of pigmented regions of tissue or bone, the laser wavelength is sometimes selected to maximize the absorption of radiation.
• Lasers assist in disease diagnosis. They are employed in tissue biopsy and are essential parts of instruments used in screening. Laser microscopy b,c,d produces images that enable microscopic areas of interest to be visualized.
• Medical applications take advantage of the broad spectrum of laser types. The varied procedures have different requirements for wavelength, power and pulsed versus continuous wave output.
04 High Power
High Power Lasers a
• Metals and polymers are frequent targets for modification by high power lasers. Power densities in the range of 10³–10⁵ W/cm² are often required in manufacturing applications.
• Sample modification takes place when laser radiation is absorbed and the energy transformed into heat.
• In addition to heating a small spot by a tightly focused beam, the techniques of masking, beam scanning and beam shaping and the use of laser diode arrays can be applied to create heated lines and patterned regions.
• Common high power lasers include CO2, Nd:YAG and, more recently, compact diode systems a,b,c.
05 Scientific Research
Some Examples of Lasers in Research
• Ultrafast lasers are used to probe properties of chemical bonds in molecules and changes that take place in molecular and electronic structure during chemical reactions 8, a, b.
• Lasers are employed in chemical sensing c, d, imaging c, d and in experimental strategies aimed at single molecule detection d, e, f.
• Laser beams are used as molecular tweezers in optical trapping g experiments.
• Lasers are used as sensitive probes of surfaces and interfaces e, h.
Imaging protein expression in a live cell d
Lasers as probes of interfaces h | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Introduction_to_Lasers/02_History.txt |
Laser Radiation Properties I
• Laser radiation is nearly monochromatic. Monochromatic refers to a single wavelength, or “one color” of light. Laser radiation contains a narrow band of wavelengths and can be produced closer to monochromatic than light from other sources.
• Laser radiation is highly directional. The radiation is produced in a beam that is spatially narrow and has low divergence relative to other light sources.
• Laser radiation is highly coherent, which means the waves of light emitted have a constant relative phase. The waves of light in a laser beam are thought of as in phase with one another at every point. The degree of coherence increases as the range of wavelengths in the light beam narrows, i.e. with the beam’s monochromaticity. Laser radiation has both spatial and temporal coherence, characterized by the coherence length and the coherence time.
Coherence
• Temporal coherence is the ability of light to maintain a constant phase at one point in space at two different times, separated by delay τ. Temporal coherence characterizes how well a wave can interfere with itself at two different times and increases as a source becomes more monochromatic.
• A coherence time (τcor) and coherence length (c × τcor, where c is the speed of light) can be calculated from the spread of wavelengths (Δλ), or frequencies (Δν), in a beam. Expressed in terms of Δν, or “bandwidth”:
$\tau_{cor} = \dfrac{1}{2 \pi Δν}$
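Using this formula, a short calculation gives the coherence time and coherence length. This is an illustrative sketch; the 1.5 GHz figure is the HeNe gain bandwidth quoted later in this module.

```python
import math

def coherence_time(bandwidth_hz):
    """tau_cor = 1 / (2 * pi * delta_nu)"""
    return 1.0 / (2.0 * math.pi * bandwidth_hz)

c = 2.998e8  # speed of light, m/s

# HeNe gain bandwidth of 1.5 GHz:
tau = coherence_time(1.5e9)  # about 1.06e-10 s
length = c * tau             # about 3.2 cm
```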
Laser Radiation Properties II
• Laser radiation has high brightness, a quantity defined as the power emitted per unit surface area per unit solid angle. Because laser light is emitted as a narrow beam with small divergence, the brightness of a 1 mW laser pointer, for example, is more than 1,000 times greater than that of the sun, which emits more than 10²⁵ W of radiant power a.
• Laser output can be continuous or pulsed. Continuous wave (CW) lasers are characterized by their average power, whereas peak power, energy per pulse and pulse repetition rate are figures of merit that apply to pulsed lasers. Pulse widths in the ns-ps range are employed more routinely than fs pulses, and attosecond pulses can be generated. A 10 fs pulse with only 10 mJ energy has a peak power of 10¹² W, or 1 TW!
Laser Radiation Properties III
• The narrow range of frequencies, or wavelengths, emitted is referred to as the laser bandwidth. This output is determined by the spectral emission properties of the gain medium and the modes supported by the cavity.
• When the bandwidth of the gain medium is larger than the cavity mode spacing, the laser output consists of a series of narrow spectral bands (see the following figure and “Laser Radiation Properties IV” below).
• Cavity modes develop as a consequence of the properties of light reflection and interference. In the simplest case of a cavity formed by two flat mirrors, the allowable axial modes have wavelength λ = 2L/q, where L is the cavity length and q is an integer. The frequency spacing (Δν) between modes is given by Δν = c/(2L), where c is the speed of light. Parabolic mirrors produce more complex cavity modes leading to a Gaussian beam b.
Laser Radiation Properties IV
Laser bandwidth frequency (Δν) and wavelength (Δλ) are related as follows:
$\Delta \lambda \approx\left(\frac{\lambda_{0}^{2}}{c}\right) \Delta v$
where λo is the band center wavelength and c is the speed of light. A HeNe laser operating at 632.8 nm has a gain bandwidth of 1.5 GHz, or 0.002 nm. When the gain medium bandwidth is smaller than the cavity mode spacing, the laser output consists of a single mode and operates as a single frequency laser c.
• A HeNe laser with 20 cm cavity length has mode spacings of Δν = 750 MHz, or Δλ = 0.001 nm. HeNe lasers are often equipped for, and operated in, single frequency mode c, d.
• Mode locking e produces a fixed phase relationship between laser cavity modes and results in pulsed output. See Refs [2,7,8,9] for more details on mode locking and methods for producing ultra-short laser pulses and other aspects of single frequency laser operation. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Introduction_to_Lasers/03_Basic_Principles/01_Laser_Radiation_Properties.txt |
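The mode-spacing and bandwidth relations quoted above can be verified numerically for the 20 cm HeNe cavity example, using Δν = c/(2L) and Δλ ≈ (λ₀²/c)Δν:

```python
c = 2.998e8       # speed of light, m/s
L = 0.20          # cavity length, m
lam0 = 632.8e-9   # HeNe band-center wavelength, m

delta_nu = c / (2 * L)                 # axial mode spacing, about 750 MHz
delta_lam = (lam0**2 / c) * delta_nu   # equivalent wavelength spacing,
                                       # about 1e-12 m (0.001 nm)
```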
Laser Operation and Components
• The process of light stimulated emission is fundamental to laser operation.
• Laser light is produced by an active medium, or gain medium inside the laser optical cavity. The active medium is a collection of atoms, or molecules that can undergo stimulated emission. The active medium can be in a gaseous, liquid or solid form.
• For lasing to take place, the active medium must be pumped into an excited state capable of undergoing stimulated emission. The energy required for excitation is often supplied by an electric current or an intense light source, such as a flashlamp.
• To induce stimulated emission, the laser cavity must provide a means to reflect, or feedback emitted light into the gain medium.
• A laser must have an output coupler to allow a portion of the laser light to leave the optical cavity.
Laser Optical Cavity
Sketch showing the main components of a laser optical (or resonator) cavity. The optical cavity is formed by a pair of mirrors that surround the gain medium and enable feedback of light into the medium. The output coupler is a partially reflective mirror that allows a portion of the laser radiation to leave the cavity. The gain medium is excited by an external source (not shown), such as a flash lamp, electric current or another laser. The light trapped between the mirrors forms standing wave structures called modes. Although beyond the scope of this discussion, the reader interested in cavity modes can consult References 7-10 and the “Laser Radiation Properties” section.
Stimulated Emission 7-10, 12, 13
• Stimulated emission occurs when a photon of light induces an atom or molecule to lose energy by producing a second photon. The second photon has the same phase, frequency, direction of travel and polarization state as the stimulating photon.
• Since one photon induces the production of a second identical photon, stimulated emission leads to light amplification.
• Stimulated emission can be understood from an energy level diagram within the context of the competing optical processes of stimulated absorption and spontaneous emission.
• For stimulated emission to take place, a population inversion must be created in the laser gain medium.
• For more on stimulated emission, see energy level diagrams and subsequent sections.
Energy Level Diagrams
• An energy level diagram displays states of an atom, molecule or material as levels ordered vertically according to energy.
• The states contain contributions from several sources, as appropriate for the matter considered. Sources include the orbital and spin angular momentum of electrons, vibrations of nuclei, molecular rotations, and spin contributions from nuclei.
• The lowest energy level is called the ground state.
• Absorption and emission of energy occurs when matter undergoes transitions between states.
Energy level diagram showing states of a sodium atom. Each state is labeled by a term symbol and includes effects of electron orbital and spin angular momentum.
Term Symbols
• Term symbols are a shorthand for describing the angular momentum and coupling interactions among electrons in atoms and molecules.
• As a starting point for understanding a term symbol, write the electron configuration for the state considered. For Na, the electron configuration of the ground state is: 1s²2s²2p⁶3s¹
• The central letter describes the total orbital angular momentum. Only the valence electrons need to be considered. For Na, there is one valence electron, and it occupies an s-orbital. The angular momentum quantum number for an s-orbital is l = 0. The total orbital angular momentum for ground state Na is L = l = 0. Symbols are assigned to the values of L as follows: L = 0 (S), L = 1 (P), L = 2 (D), etc.
• The left superscript reflects the coupling of valence electron spin angular momentum and gives the degeneracy of spin states. For Na, s = 1/2 for the valence electron; therefore, the total spin, S = 1/2 and the degeneracy = (2 S + 1) = 2.
• The right subscript reflects the coupling between spin and orbital angular momentum. For ground state Na, J = L + S = 1/2.
• For a detailed discussion of term symbols, see Ref 11.
Absorption and Emission Processes and Transitions Between Energy States
• Stimulated absorption (a) occurs when light, or a photon of light (hν), excites matter to a higher energy (or excited) state.
• Spontaneous emission (b) is a process whereby energy is spontaneously released from matter as light.
• Molecules typically transition to vibrationally excited levels within the excited electronic state.
• Following excitation, the vibrational energy is quickly released by non-radiative pathways (c).
• In molecules, spontaneous emission known as fluorescence (b) occurs by transition from the lowest level in the excited electronic state, to upper vibrational levels of the lower electronic state.
Energy level diagram for a typical dye molecule. The vibrational levels of each electronic state, labeled by S0 and S1, are included.
Stimulated Emission - Details 7-10, 12, 13
• Laser radiation is produced when energy in atoms or molecules is released as stimulated emission (c).
• Stimulated emission requires a population inversion in the laser gain medium.
• A population inversion occurs when the number of atoms or molecules in an excited state exceeds the number in lower levels (usually the ground state).
• To create the population inversion, the gain medium must transition to a metastable state, which is long lived relative to spontaneous emission.
• The three-level diagram (below) shows excitation followed by non-radiative (nr) decay (b) to ²E states. The ²E states are long lived, because the transition to ⁴A₂ requires a change in the electron spin state.
• A photon of the same energy as the ²E → ⁴A₂ transition can stimulate the emission of a second photon (c), leading to light amplification, or lasing.
Three-level energy diagram. Simplified diagram showing transitions for Cr³⁺ in a ruby laser.
Three and Four Level Lasers 7-10, 12, 13
• Three-level lasers require intense pumping to maintain the population inversion, because the lasing transition re-populates the ground state.
• Lasers based on transitions between four energy levels (see below), can be more efficiently pumped, because the lower level of the lasing transition is not the ground state.
• Only four-level lasers provide continuous output. HeNe and Nd:YAG are common four-level lasers.
• A population inversion is necessary for lasing, because without one, the photon inducing stimulated emission would instead have a greater probability of undergoing absorption in the gain medium.
• For more in depth information about laser transitions and population inversion, Refs 7-10, 12 (pg 96) and 13 can be consulted.
Four-level energy diagram. Simplified diagram showing transitions for Nd³⁺ in a Nd:YAG laser.
See Refs. 2, 7-10 and 12-14 for more in depth coverage of the above systems, and others including excimer, free-electron, chemical and X-ray lasers.
05 Types of Lasers
01 Gas Lasers
• Examples of gas lasers include helium-neon (HeNe), nitrogen and argon-ion lasers
• The gain medium in these lasers is a gas-filled tube
• Excitation of gas molecules is achieved by the passage of an electric current or discharge through the gas
• In a HeNe laser, an electric discharge excites He atoms to excited levels. Collisions between He and Ne atoms transfer energy and produce excited Ne atoms. Lasing occurs when Ne atoms attain a population inversion.
• The lasing transition in a HeNe laser produces light at 632.8 nm.
Simplified energy level diagram showing HeNe laser transitions.
02 Solid State Lasers
• Nd:YAG and Ruby are examples of solid-state laser systems
• A flashlamp is used to excite (or “pump”) the gain medium in these lasers
• Cr3+ ions in ruby undergo transitions to produce lasing in a Ruby laser
• Nd3+ ions in a yttrium aluminum garnet (YAG) matrix are the optically active species in a Nd:YAG laser
• Transitions in ruby occur mainly between three levels, whereas Nd:YAG is referred to as a 4-level system (see the energy level diagram, below).
• The fundamental lasing transitions are at 694 nm and 1064 nm for Ruby and Nd:YAG lasers, respectively.
Four-level energy diagram. Simplified diagram showing transitions for Nd3+ in a Nd:YAG laser.
03 Diode Lasers
• Semiconductor materials layered to form a diode [2, a] serve as the gain medium.
• Excitation is achieved by the passage of electric current (forward biased) through the diode p-n junction, which forms at the interface between semiconductors with different electronic doping levels.
• Light emission occurs when electrons and holes in the vicinity of the p-n junction recombine following excitation.
• The layered structure and high refractive index of semiconductor materials enables the laser optical cavity to be formed on the diode (see drawing, right).
• Diode lasers are finding application in a wide range of areas, such as communications, medicine and chemical analysis (see Ref 2, 14, a, b).
1. http://www.rp-photonics.com/laser_diodes.html
2. www1.union.edu/newmanj/lasers...ctorLasers.htm
04 Dye Lasers
• The gain medium in a dye laser consists of organic dye molecules dissolved in a solvent.
• Light, sometimes another laser, is used to excite the gain medium
• Because fluorescence emission from organic dye molecules occurs across a broad range of wavelengths, dye lasers can be scanned (or “tuned”) to select a narrow band of emission light from across a wide spectral range
Sketch of a Nd:YAG pumped dye laser.
06 References
1. http://micro.magnet.fsu.edu/primer/l...sersintro.html
2. D. Sands, Diode Lasers, IOP Publishing, 2005.
3. http://www.bell-labs.com/history/laser/
4. http://laserstars.org/history
5. nobelprize.org/nobel_prizes/p...ownes-bio.html
6. http://micro.magnet.fsu.edu/optics/t...le/maiman.html
7. J.C. Wright, M.J. Wirth, Anal. Chem. 52, 1980, 988A and 1087A.
8. W. Demtröder, Laser Spectroscopy, Springer, Berlin, 2002 (3rd Ed).
9. P.W. Milonni, J.H. Eberly, Lasers, Wiley, NY, 1988.
10. http://www.rp-photonics.com/encyclopedia.html
11. M. Gerloch, Orbitals, Terms and States, Wiley, New York, 1986.
12. H.J.R. Dutton, Understanding Optical Communications, IBM Report SG24-5230-00, 1998, http://www.redbooks.ibm.com (History and fiber-optics)
13. http://www1.union.edu/newmanj/Physics100/index.htm
14. High Power Diode Lasers, F. Bachmann, et al. Eds.; Springer: 2007.
Additional references are included in the "Applications" section.
Author Contact Information
Carol Korzeniewski
Department of Chemistry & Biochemistry
Texas Tech University
Lubbock, TX 79409-1061
[email protected]
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser is a type of light source which has the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated or spontaneous emission), describe important components, and give some examples of types of lasers and their applications.
• Gas Lasers
Gas lasers have lasing media that are made up of one or a mixture of gases or vapors. Gas lasers can be classified in terms of the type of transitions that lead to their operation: atomic or molecular. The most common of all gas lasers is the helium-neon (He-Ne) laser.
• Laser Theory
There are four laser demands: population inversion, laser threshold, energy source and active medium.
• Overview of Lasers
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser is a type of light source which has the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated or spontaneous emission), describe important components, and give some examples of types of lasers and their applications.
• Semiconductor and Solid-state lasers
In both solid-state and semiconductor lasers the lasing medium is a solid. Aside from this similarity, however, these two laser types are very different from each other. In the case of the solid-state lasers the lasing species is typically an impurity that resides in a solid host, a crystal of some sort. The crystal modifies some of the quantized energy levels of the impurity, but still the lasing is almost atomic - similar to gas lasers.
Lasers
In these lasers the lasing medium is made up of one or a mixture of gases or vapors. Gas lasers can be classified in terms of the type of transitions that lead to their operation: atomic or molecular. The most common of all gas lasers is the helium-neon (He-Ne) laser. The presence of two atomic species (helium and neon) in this gas laser might suggest that the medium is made of molecules, but these two species of atoms do not form a stable molecule. In fact, all inert atoms like helium, argon, krypton, etc. (those in the last column of the Periodic Table) hold tightly to their own electron clouds and seldom form a molecule or react with other atoms (hence the name: inert). In the He-Ne laser the transition that produces the output light is an atomic transition. Gas lasers that employ molecular gas or vapor for their lasing medium use molecular transitions for their lasing operation. Molecular transitions tend to be more complex than atomic ones. As a consequence, the laser light produced by molecular lasers tends to have a wider and more varied collection of properties. Examples of some common molecular gas lasers are carbon monoxide (CO), carbon dioxide (CO2), excimer, and nitrogen (N2) lasers.
Helium Neon (He-Ne) Lasers
The He-Ne laser was the first continuous wave (cw) laser invented. A few months after Maiman announced his invention of the pulsed ruby laser, Ali Javan and his associates W. R. Bennett and D. R. Herriott announced their creation of a cw He-Ne laser. This gas laser is a four-level laser that uses helium atoms to excite neon atoms. It is the atomic transitions in the neon that produce the laser light. The most commonly used neon transition in these lasers produces red light at 632.8 nm. But these lasers can also produce green and yellow light in the visible as well as UV and IR (Javan's first He-Ne operated in the IR at 1152.3 nm). By using highly reflective mirrors designed for one of these many possible lasing transitions, a given He-Ne's output is made to operate at a single wavelength.
He-Ne lasers typically produce a few to tens of mW (milli-Watt, or $10^{-3}$ W) of power. They are not sources of high power laser light. Probably one of the most important features of these lasers is that they are highly stable, both in terms of their wavelength (mode stability) and intensity of their output light (low jitter in power level). For these reasons, He-Ne lasers are often used to stabilize other lasers. They are also used in applications, such as holography, where mode stability is important. Until the mid 1990's, He-Ne lasers were the dominant type of lasers produced for low power applications - from range finding to scanning to optical transmission, to laser pointers, etc. Recently, however, other types of lasers, most notably the semiconductor lasers, seem to have won the competition because of reduced costs.
The above energy level diagram shows the two excited states of the helium atom, the $2^3S$ and $2^1S$, that get populated as a result of the electromagnetic pumping in the discharge. Both of these states are metastable and do not allow de-excitations via radiative transitions. Instead, the helium atoms give off their energy to neon atoms through collisional excitation. In this way the 4s and 5s levels in neon get populated. These are the two upper lasing levels, each for a separate set of lasing transitions. Radiative decay from the 5s to the 4s levels is forbidden. So, the 4p and 3p levels serve as the lower lasing levels and rapidly decay into the metastable 3s level. In this way population inversion is easily achieved in the He-Ne. The 632.8 nm laser transition, for example, involves the 5s and 3p levels, as shown above.
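The 632.8 nm transition energy quoted above can be checked with the Planck relation $E = hc/\lambda$. The short sketch below is our own illustration; the constants are standard CODATA values, and the function name is ours, not from the text.

```python
# Photon energy of the He-Ne 632.8 nm line, from E = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return h * c / (wavelength_nm * 1e-9) / eV

if __name__ == "__main__":
    # The red He-Ne line corresponds to a ~1.96 eV gap between the
    # neon 5s and 3p levels; Javan's original IR line is lower energy.
    print(photon_energy_eV(632.8))
    print(photon_energy_eV(1152.3))
```

The same helper reproduces the ordering in the text: the 1152.3 nm IR line carries less energy per photon than the 632.8 nm red line.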
In most He-Ne lasers the gas, a mixture of 5 parts helium to 1 part neon, is contained in a sealed glass tube with a narrow (2 to 3 mm diameter) bore that is connected to a larger size tube called a ballast, as shown above. Typically the laser's optical cavity mirrors, the high reflector and the output coupler, form the two sealing caps for the narrow bore tube. High voltage electrodes create a narrow electric discharge along the length of this tube, which then leads to the narrow beam of laser light. The function of the ballast is to maintain the desired gas mixture. Since some of the atoms may get embedded in the glass and/or the electrodes as they accelerate within the discharge, in the absence of a ballast the tube would not last very long. To further prolong tube lifetime some of these lasers also use "getters", often metals such as titanium, that absorb impurities in the gas.
The photograph above shows a commercial He-Ne tube. The thicker cylinder closest to the meter-stick (shown for scale) is the ballast. The thinner tube houses the resonant cavity where the lasing occurs. Notice the two mirrors that seal the two ends of the bore. For mode stability reasons, these mirrors are concave; they serve as the output coupler and the high reflector.
A typical commercially available He-Ne produces about a few mW of 632.8 nm light with a beam width of a few millimeters at an overall efficiency of near 0.1%. This means that for every 1 Watt of input power from the power supply, 1 mW of laser light is produced. Still, because of their long operating lifetime of 20,000 hours or more and their relatively low manufacturing cost, He-Ne lasers are among the most popular gas lasers.
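The 0.1% efficiency figure above is simple arithmetic, sketched below. The function names are ours, and the 5 W input in the usage line is a hypothetical value for illustration.

```python
# Overall efficiency of a typical He-Ne, as quoted in the text:
# about 1 mW of 632.8 nm output for about 1 W of electrical input.
def overall_efficiency(p_out_W, p_in_W):
    """Ratio of optical output power to electrical input power."""
    return p_out_W / p_in_W

def output_for_input(p_in_W, efficiency=1e-3):
    """Expected optical output at the quoted ~0.1% efficiency."""
    return p_in_W * efficiency

if __name__ == "__main__":
    print(overall_efficiency(1e-3, 1.0))  # 0.001, i.e. 0.1%
    print(output_for_input(5.0))          # a 5 W supply yields ~5 mW of light
```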
Argon-ion Lasers
Another commonly used gas laser is the argon-ion laser. In these lasers, as in the He-Ne, the lasing transition type is atomic. But instead of a neutral atom, here the lasing is the result of the de-excitations of the ion. It takes more energy to ionize an atom than to excite it. By the same token, more energy can be obtained from the de-excitation of the ion. So, doubly (Ar++) and singly ionized (Ar+) argon atoms can radiate shorter wavelength light than could the neutral argon atom, Ar. Because of this, argon-ion lasers can produce UV light with a wavelength as short as 334 nm. In addition, these lasers can produce much more power than He-Ne lasers. Argon-ion lasers typically range in output power from one to as much as 20 W. At the higher power levels their output is multi-mode, i.e. contains several distinct wavelengths. Some of these wavelengths are:
• 334 nm, UV
• 351 nm, UV
• 364 nm, UV
• 458 nm, violet
• 477 nm, violet
• 488 nm, (strong) blue
• 497 nm, blue-green
• 514 nm, (strongest) green
For these two reasons, high power and multicolor output, the argon-ion is one of the most commonly used lasers in laser light shows, as well as in a variety of other applications.
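The lines listed above can be converted to photon energies. The 400 nm boundary used below to separate UV from visible is the usual convention, not something stated in the text, and the helper names are ours.

```python
# The argon-ion lines from the list above, converted to photon energies.
LINES_NM = [334, 351, 364, 458, 477, 488, 497, 514]

def energy_eV(wavelength_nm):
    """Photon energy in eV, using hc ~= 1239.842 eV*nm."""
    return 1239.842 / wavelength_nm

def is_uv(wavelength_nm):
    """Conventional UV/visible boundary at 400 nm."""
    return wavelength_nm < 400

if __name__ == "__main__":
    for lam in LINES_NM:
        band = "UV" if is_uv(lam) else "visible"
        print(f"{lam} nm -> {energy_eV(lam):.2f} eV ({band})")
```

This reproduces the grouping in the list: the first three lines fall in the UV, the rest in the visible.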
The make-up of a typical argon-ion laser is very similar to a He-Ne's, but with a few slight differences. First, these lasers are much larger in size. A typical Ar++ laser tube is about one meter long, as compared to just 20 cm for that of a He-Ne. Second, the optical cavity of these lasers is built external to the tube. This is partly because of the high power operation of the laser and partly because such an external arrangement allows for the use of optional wavelength selection optics within the optical cavity. A prism or a diffraction grating located just before the high reflecting mirror selects only one of the lasing transitions for amplification within the cavity; other wavelengths are deflected out of the resonant cavity. In this way these ion lasers can operate in a so-called single mode.
With this arrangement the two mirror holders on opposite sides of the laser tube are typically attached together with an Invar rod for thermal stabilization (Invar is a steel alloy that contains nickel). Its most valued property is that it expands and contracts very little when its temperature changes. As a result, when the laser's temperature changes as it heats up due to the large electric current within the electromagnetic pump discharge, the optical path length, and therefore the modal character of the laser output, remains relatively unchanged. Finally, because of their high power, argon-ion lasers require active cooling. This is most commonly accomplished by circulating water, either directly from the tap or from commercially produced chillers, in closed coils that surround the plasma (a gas of charged ions) tube and parts of the electric power supply. Some of the lower powered argon-ion lasers are just air cooled using a fan, which makes them less cumbersome to use.
The above two photographs show a 5 W argon-ion laser. Notice the one-meter long laser tube, the large ballast, and the umbilical cord that connects to the laser power supply. This cord contains not only the power line that supplies the laser with the electric power to generate the plasma, but also the water lines that circulate water to cool the laser.
Another type of ion laser, the krypton laser, operates very much the same as the argon-ion laser. To take advantage of all the colors available in both argon and krypton lasers, manufacturers make argon-krypton ion lasers by using a suitable mixture of these two gases. The mixed gas lasers are very useful for entertainment applications because, in addition to many colors, they can also produce a "white" beam. (Why is the word "white" in quotations?)
Carbon-dioxide ($CO_2$) and Carbon-monoxide ($CO$) Lasers
In both of these lasers the gaseous medium is made up of molecules, which, in addition to the electronic energy levels of atoms, also have both molecular vibrational and rotational energy levels. The vibrational energy levels are similar to finer spaced ladder rungs that span two rungs of the electronic energy levels. The rotational levels are still more finely spaced rungs that span the vibrational rungs! In these gas lasers the lasing transitions occur among the vibrational levels, typically belonging to different electronic levels.
The diagram above shows two electronic and several of their associated vibrational levels for a hypothetical molecule. (Electronic levels are shown as "bent rungs" because in the molecule the atoms can change their separation distance and therefore their electronic energy. Also, note that rotational levels are not shown.) A thick arrow depicts a pump that excites the molecule from its lowest vibrational level belonging to the lower electronic level to the 5th highest vibrational level of the next upper electronic level. The excited molecule can then de-excite out of this upper level into many possible vibrational levels. Each one of these de-excitations produces a photon whose energy, and therefore its wavelength, corresponds to that specific de-excitation. As a result, when a collection of these molecules is excited by this pump, it generates a number (eleven, in this drawing) of different wavelength photons.
Specifically, $CO_2$ lasers can generate an output wavelength from about 9 micrometers (µm, or microns) to about 11 microns (1 micron is one millionth of a meter, or 1000 nm). These outputs generally contain many closely spaced wavelengths if the laser is used for high power output. But for more wavelength-specific applications the optical cavity of the laser is designed to amplify just one or a few of the vibrational radiative decay lines. The wavelength range for the CO laser is lower, from about 5 to 6 µm. Another feature of these gas lasers that makes them one of the most versatile of all gas lasers is that they can be made to operate over a large range of power outputs, either in a pulsed or cw mode. The $CO_2$ laser, in particular, ranges in cw power from a few watts to kilowatts, making these lasers ideal for many industrial applications including welding and drilling.
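A quick unit check on the ranges above. The conversion factor hc ≈ 1.24 eV·µm is standard physics; the helper names are ours, not from the text.

```python
# Unit check for the quoted ranges: the CO2 band (~9-11 um) in nm,
# and the corresponding photon energies, which sit far below visible light.
def microns_to_nm(um):
    """Convert a wavelength in micrometers to nanometers."""
    return um * 1000.0

def energy_eV(um):
    """Photon energy in eV, using hc ~= 1.239842 eV*um."""
    return 1.239842 / um

if __name__ == "__main__":
    print(microns_to_nm(10.6))  # a common CO2 line, deep in the IR
    print(energy_eV(10.6))      # ~0.12 eV, versus ~2 eV for red light
```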
Molecular vibrations for $CO_2$, a linear molecule, are shown in the figure below. Other combinations of these are possible but these three are fundamental. There are different varieties of $CO_2$ lasers that flow fresh gases through the resonant cavity area in order both to remove heat and to provide lots of gas to achieve high laser powers. For these lasers in the cw mode powers can reach as high as 100 kW. These intense laser beams are essentially tremendous invisible "heat" beams that can cut through thick pieces of metal and are used extensively in industrial applications.
We mention two interesting tidbits about these lasers. First, since glass is not very transparent to IR light, the mirrors are actually made of special crystalline materials that are transparent to the IR. Second, recall that IR light is invisible to our eyes and so special precautions are needed to protect people working around these lasers. It turns out that although these lasers can easily cut through metal, they cannot pass through a thin sheet of clear plexiglass, and so often these systems are housed in a plexiglass shell to block any stray reflected IR light.
Other types of gas lasers include the nitrogen laser (N2), excimers, copper-vapor, gold-vapor, and chemical lasers. Of these the excimer lasers and the chemical lasers are the most different from the ones we have already discussed above.
Excimer Lasers
Similar to $CO_2$, $CO$, and $N_2$ lasers, these gas lasers also use molecular transitions for their lasing operation. What makes them especially different is that the molecular gas used for these lasers has no ground state! Typically these molecules include an atom belonging to the inert gas family (argon, xenon, krypton) and one from the halide group (chlorine, fluorine, and bromine). The inert gas atoms (also known as the rare gases) do not want to interact with any other atoms. On the other hand, the halide gases are highly reactive. Still, they cannot bond with the inert gases to form a molecule. But when sufficient energy is provided to these atoms they bind together in a short-lived excited state that soon (few nanoseconds) decays back into the original two separate atoms (i.e. the molecule dissociates). Because of this rapid molecular dissociation, these lasers obtain population inversion just by excitation alone! In fact, the word excimer is short for "excited dimer," although most excimer lasers do not use two identical atoms as a strict dimer would.
The excimer molecules are created from a mixture of inert gases along with one of the halides. Typically a few percent of Ar, Kr and Xe are mixed with a few percent of a halide to form excimer molecules: ArF, KrF, and XeF. The other 90% of the gas mix consists of other inert gases such as He and Ne which act simply as a buffer and do not take part in the reaction. A large electric pulse is often used for the excitation and formation of the excimer molecule. The rapid decay of the short-lived molecule then leads to a very short laser pulse lasting 10-100 ns ($10^{-8}$ to $10^{-7}$ s). So, another unusual feature of the excimers is that they do not require an optical amplifier. They are very efficiently formed in the reaction, with an efficiency of around 30%, so that the gain is extremely high. A high reflector and a glass (really quartz - why?) window are sufficient for laser light production. This means that only about 4% of the light is reflected back into the cavity at the front window, but the gain is so high that only a single pass through the cavity is needed to produce lots of UV light. Typically about 1 Joule of energy is in a 10 ns pulse, so that the pulse power is 1 J/10 ns = 100 MW. If this power were steadily produced it would be equivalent to powers generated by large power plants. However, only about 1-100 pulses are produced per second, so that the average power produced is about 1-100 W.
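The peak-power and average-power arithmetic in the paragraph above can be sketched directly (the function names are ours):

```python
# Pulse arithmetic from the text: 1 J delivered in 10 ns gives a 100 MW
# peak, while 1-100 pulses per second keeps the average power at 1-100 W.
def peak_power_W(energy_J, duration_s):
    """Peak power of a (roughly flat-topped) pulse: energy over duration."""
    return energy_J / duration_s

def average_power_W(energy_J, rep_rate_Hz):
    """Average power: pulse energy times repetition rate."""
    return energy_J * rep_rate_Hz

if __name__ == "__main__":
    print(peak_power_W(1.0, 10e-9))   # 1e8 W, i.e. 100 MW
    print(average_power_W(1.0, 100))  # 100 W
```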
Because of the highly reactive nature of the halide gas used in these lasers, excimers are not very easy to operate. The halides tend to be very corrosive and therefore add a great deal to the operational cost (as well as the danger!) of these lasers. But still these lasers are very much in use because their output wavelength is in the UV, from 350 nm down to as low as 193 nm; all with a good deal of power.
Chemical Lasers
In these lasers the energy of excitation comes from a chemical reaction that takes place in the medium itself. In this sense, then, chemical lasers are self-pumped. Even more interesting is the operation of one of the most common of these lasers, the hydrogen-fluoride laser that operates in the IR at 4.6 microns. The added intrigue is due to the chain reaction that takes place to excite the laser molecule, $\ce{HF}$. A mixture of hydrogen (molecular) gas, $\ce{H2}$, and fluorine gas, $\ce{F_2}$, is subjected to an electric discharge to start the chain reaction, which results in the production of a hydrogen-fluoride molecule in an excited vibrational level (excited state is denoted by a starred superscript: $HF^*$) and the dissociative production of H or F for the next reaction:
$\ce{F + H_2 \rightarrow HF^{*} + H} \nonumber$
$\ce{F2 + H \rightarrow HF^{*} + F} \nonumber$
$\ce{F + H2 \rightarrow HF^{*} + H} \nonumber$
$\ce{F_2 + H \rightarrow HF^{*} + F} \nonumber$
etc.
So, the medium in these lasers is used up as a fuel to generate their laser light. Therefore, practical limitations aside, the power of these lasers depends on the amount of the chemical (gas volume) that is used in the laser. Because of their conceivably limitless power output, these lasers have been studied mostly for their military applications. The most famous of these is the US Army's Mid-Infrared Advanced Chemical Laser, MIRACL, located at White Sands Missile Range, New Mexico. This is a formidable chemical laser that has been developed as part of the Department of Defense's efforts for its Strategic Defense Initiative (SDI).
MIRACL is the US's most powerful laser. It operates in a band from 3.6 to 4.2 microns, producing megawatts of CW output for as long as 70 seconds. For its fuel it burns ethylene, $C_2H_4$, with nitrogen trifluoride, $\ce{NF3}$. The resulting free fluorine atoms combine with deuterium gas that is injected into this burning fuel to form deuterium fluoride molecules (DF), which ultimately provide the laser light. In this laser the beam within the optical cavity is about 21 cm high and 3 cm wide, resulting in a 14 cm$^2$ output beam. For testing purposes this laser is used with an aiming and focusing telescope system called the SEALITE Beam Director (SLBD), which was first developed by the Navy. The moving part of the SLBD, capable of fast rotations and high accelerations, weighs about 18,000 pounds. Its telescope can focus the beam onto any target located within a range of a minimum of 400 meters up to infinity.
Energy State Populations
For reasons that we cannot explain, it appears that all things in nature prefer to go to the lowest energy state available to them. This seems to be how nature behaves. Apples fall all the way to the ground, once they are let go by the tree branches they grew on. Of course unless they fall into a hole, in which case they go even lower than the ground level - to the bottom of the hole. To raise the energy of the fallen apple someone, or something, has to intervene. A child must pick it up off the ground, for example. Otherwise, the apple will remain at its lowest point. Similarly, atoms tend to prefer to always stay in their ground state, unless some intervening force causes them to reach an excited state. In the language used in the study of lasers, any process that feeds energy into a collection of atoms or molecules and causes them to vacate their ground state is referred to as an energy pump, or simply as a pump.
In most lamps electricity is used, by one mechanism or other, to pump atoms out of their ground state into some excited state. We have already seen that incandescent bulbs produce light from radiative transitions after excitation via collisions of the electrons in an electric current with the filament atoms. As we've also seen, other light sources, like the fluorescent lamps or neon tubes, use gas or vapor for their medium of excitation instead of a solid filament. In most lamps, the pumping mechanism is electricity. This is also the case for most lasers, but in some lasers optical pumping is the mechanism that is used to cause excitation. In optical pumping a light source generates photons with enough energy that they are able to get absorbed by the atoms in the lasing medium and cause them to go into an excited state.
Evidently, all excitations occur "instantaneously", but de-excitations can lag by a measurable time interval that depends on the properties of the excited state. That is to say, an atom in an excited state does not instantaneously de-excite. The time that it spends, on average, in that excited level is called the lifetime of that state. Lifetimes can vary in duration depending on the atom and on the energy level. Excited state lifetimes are typically a few nanoseconds ($10^{-9}$ s, or a billionth of a second), but they could be as short as a picosecond ($10^{-12}$ s, or a thousand times shorter still) or as long as a few milliseconds ($10^{-3}$ s). The ground state, of course, has an infinitely long lifetime since an atom in its ground state can no longer decrease its energy. So, the most stable of states is the ground state. Long-lived states are referred to as meta-stable states.
In the case of radiative emission, atoms happen to take two very distinctly different approaches: spontaneous emission, or stimulated emission. Spontaneous emission refers to the case when the excited atom de-excites, rather randomly whenever it "feels like it", and emits a photon. This photon has an energy equal to the difference between the two energy levels of the transition, but its direction of travel and its other properties, such as polarization, are random. Stimulated emission was first theorized by Albert Einstein in 1917. For stimulated emission to occur a second, non-participating yet stimulating, photon must be present. The energy of this second photon must exactly match the allowed energy of the transition. Then the emitted photon will not only have the same energy as the stimulating one, but it will also travel in the same direction, and will be essentially identical to it.
So, independent of what type of medium is used in a laser, in the absence of a pump the atoms or molecules are almost all in their ground state. Let us imagine that we could count the number of atoms in our laser medium that are in each energy state and denote the ones which are in their ground state by $N_{gs}$, those in the first excited state by $N_1$, those in the second excited state by $N_2$, and so on. Then in the absence of a pump we are mostly certain that $N_1 = N_2 = \dots = 0$, and $N_{gs} =$ the total number of all the atoms in the medium. Once the pump is turned on it will deplete the number of atoms that were originally in the ground state and increase the number of atoms in the excited states that it is pumping to. Because excited atoms de-excite quickly and return to their ground states by spontaneous emission, in almost all lasers, even when the pump is feeding energy into the medium, the number of atoms in their ground state remains many times greater than the number in any other energy state. That is to say, almost always we could safely state that:
$N_{\text{ground state}} \gg N_{\text{any other state}}$.
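One standard way to quantify this inequality, not given in the text, is the thermal Boltzmann factor: at equilibrium the ratio of an excited-state population to the ground-state population is exp(-ΔE/kT). A rough sketch, with our own choice of a visible-range 2 eV gap:

```python
import math

# Illustration (ours, not from the text) of why excited-state populations
# are negligible without a pump: at thermal equilibrium the population
# ratio N_upper/N_lower follows the Boltzmann factor exp(-dE / (k*T)).
k_B = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_ratio(delta_E_eV, T_K):
    """Equilibrium ratio N_upper / N_lower (degeneracies ignored)."""
    return math.exp(-delta_E_eV / (k_B * T_K))

if __name__ == "__main__":
    # For a ~2 eV (visible-light) gap at room temperature the ratio is
    # astronomically small: essentially every atom sits in the ground state.
    print(boltzmann_ratio(2.0, 300))
```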
To make a laser we need to not only excite the atoms in the laser medium, but somehow encourage them to undergo a decay through stimulated emission. In stimulated emission a passer-by photon which has an energy exactly equal to the transition energy stimulates the atom to emit a photon, identical to the passer-by photon, instantly. The problem with this is that the same passer-by photon could instead get absorbed by a de-excited atom. So, aside from pumping the atoms to excited states, we need to use clever procedures to ensure that there are more excited atoms that could use the passer-by photon for stimulated emission than there are de-excited atoms which could absorb it; i.e. we need to generate a population inversion. Although Einstein predicted stimulated emission in 1917, it was not observed experimentally for over 10 years, and more than another 30 years passed before the possibility of a laser was predicted. This is basically because it was not considered possible to produce a population inversion, because of the above inequality.
Lasing in Two-Level Systems
For the sake of our studies, let's first consider a laser medium whose atoms have only two energy states: a ground state and one excited state. In such an idealized atom the only possible transitions are excitation from the ground state to the excited state, and de-excitation from the excited state back into ground state. Could such an atom be used to make a laser?
There are several important conditions that our laser must satisfy. First of all, the light that it produces must be coherent. That is to say, it must emit photons that are in-phase with one another. Secondly, it should emit monochromatic light, i.e. photons of the same frequency (or wavelength). Thirdly, it would be desirable if our laser's output were collimated, producing a sharply defined "pencil-like" beam of light (this is not crucial, but clearly a desirable condition). Lastly, it would also be desirable for our laser to be efficient, i.e. the higher the ratio of output energy to input energy, the better.
Let us begin by examining the requirements for our first condition for lasing, coherence. This condition is satisfied only when the lasing transition occurs through stimulated emission. As we have already seen, stimulated emission produces identical photons that are of equal energy and phase and travel in the same direction. But for stimulated emission to take place a "passer-by" photon whose energy is just equal to the de-excitation energy must approach the excited atom before it de-excites via spontaneous emission. Typically, a photon emitted by spontaneous emission serves as the seed to trigger a collection of stimulated emissions. Still, if the lifetime of the excited state is too short, then there will not be enough excited atoms around to undergo stimulated emission. So, the first criterion that we need to satisfy is that the upper lasing state must have a relatively long lifetime; such a long-lived level is known as a meta-stable state, with typical lifetimes in the milliseconds range. In addition to the requirement of a long lifetime, we need to ensure that the likelihood of absorption of the "passer-by" photons is minimized. This likelihood is directly related to the ratio of the atoms in their ground state versus those in the excited state. The smaller this ratio, the more likely that the "passer-by" photon will cause a stimulated emission rather than get absorbed. So, to satisfy this requirement, we need to produce a population inversion: create more atoms in the excited state than those in the ground state.
Achieving population inversion in a two-level atom is not very practical. Such a task would require a very strong pumping transition that would send any decaying atom back into its excited state. This would be similar to reversing the flow of water in a waterfall. It can be done, but is very energy costly and inefficient. In a sense, the pumping transition would have to work against the lasing transition.
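The argument above can be made quantitative with a simple rate-equation sketch. Assuming equal rates for absorption and stimulated emission (equal Einstein B coefficients, a standard simplification not spelled out in the text), the steady-state excited-state fraction of a two-level system never exceeds one half, so optical pumping alone can never invert it:

```python
# Steady state of the two-level rate equation
#     dN2/dt = W*(N1 - N2) - N2/tau
# with N1 + N2 = N fixed.  Setting dN2/dt = 0 gives N2/N = W / (2W + A),
# where A = 1/tau is the spontaneous decay rate.  The fraction approaches
# 1/2 from below as the pump rate W grows, so N2 < N1 always.
def steady_state_fraction(W, tau):
    """Excited-state fraction N2/N for pump rate W (1/s) and lifetime tau (s)."""
    A = 1.0 / tau  # spontaneous decay rate
    return W / (2.0 * W + A)

if __name__ == "__main__":
    # Even an absurdly strong pump only pushes the fraction toward 1/2:
    for W in (1e3, 1e6, 1e12):
        print(steady_state_fraction(W, tau=1e-3))
```

This is exactly the sense in which the pump "works against the lasing transition": every pump photon is as likely to be absorbed as to stimulate emission once the populations equalize.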
It is clear, from the above diagram, that in the two-level atom the pump is, in a way, the laser itself! Such a two-level laser would work only in jolts. That is to say, once the population inversion is achieved the laser would lase. But immediately it would end up with more atoms in the lower level. Such two-level lasers involve a more complicated process. We will see, in later material, examples of these in the context of excimer lasers, which are pulsed lasers. For a continuous laser action we need to consider other possibilities, such as a three-level atom.
Lasing in Three-Level Systems
In fact, the first laser that was demonstrated to operate was a three-level laser, Maiman's ruby laser.
In the above diagram of a three-level laser, the pump causes an excitation from the ground state to the second excited state. This state is a rather short-lived state, so the atom quickly decays into the first excited level. [Decays back to the ground state also occur, but these atoms can be pumped back to the second excited state again.] The first excited state is a long-lived (i.e. metastable) state which allows the atom to "wait" for the "passer-by" photon while building up a large population of atoms in this state. The lasing transition, in this laser, is due to the decay of the atom from this first excited metastable state to the ground state. If the number of atoms in the ground state exceeds the number of atoms that are pumped into the excited state, then there is a high likelihood that the "lasing photon" will be absorbed and we will not get sustained laser light. The fact that the lower level of the lasing transition is the ground state makes it rather difficult to achieve efficient population inversion. In a ruby laser this task is accomplished by providing the ruby crystal with a very strong pulsating light source, called a flash lamp. The flash lamp produces a very strong pulse of light that is designed to excite the atoms from their ground state into any short-lived upper level. In this way the ground state is depopulated and population inversion is achieved until a pulse of laser light is emitted. In the ruby laser the flash lamp light lasts for about 1/1000 of a second (1 ms) and can be repeated about every second. The duration of the laser pulse is shorter than this, typically 0.1 ms. In some pulsed lasers the pulse duration can be tailored using special methods to be much shorter than this, down to about 10 fs (where 1 fs = 10^-15 s, or one thousandth of a millionth of a millionth of a second). So, the output of a three-level laser is not continuous, but consists of pulses of laser light.
Lasing in Four-Level Systems
To achieve a continuous beam of laser light a four-level laser is required.
Here, the lower laser level is not the ground state. As a result, even a pump that may not be very efficient could produce population inversion, so long as the upper level of the laser transition is longer lived than the lower level. Of course, all attempts are made to design a pump that maximizes the number of excited atoms. A typical four-level laser is the helium-neon (He-Ne) gas laser. In these lasers electric pumping excites helium atoms to an excited state whose energy is roughly the same as the upper short-lived state in the neon atom. The sole purpose of the helium atoms is to exchange energy with neon atoms via collisional excitation. As it turns out, this is a very efficient way of getting neon atoms to lase.
Laser components
All lasers have three primary components:
• Medium
• Pump
• Resonant Cavity
The laser medium can be gaseous, liquid, or a solid. These could include atoms, molecules, or collections of atoms that would be involved in a laser transition. Typically, a laser is distinguished by its medium, even though two lasers using different media may have more in common than two which have similar media.
There are three different laser pumps: electromagnetic, optical, and chemical. Most lasers are pumped electro-magnetically, meaning via collisions with either electrons or ions. Dye lasers and many solid state lasers are pumped optically; however, solid state lasers are typically pumped with a broad band (range of wavelengths) flash-lamp type light source, or with a narrow band semiconductor laser. Chemically pumped lasers, using chemical reactions as an energy source, are not very efficient. So far, these lasers have been made to work not so much for their usefulness as for their curious operation.
Up to now in our discussion of laser theory we have not really seen how the beam is generated. We know that photons emitted by stimulated emission travel coherently in the same direction, but what is it that defines the beam direction and what allows the intensity of the laser light to get large? The answer to these two questions is coupled together in the resonant cavity. Laser resonant cavities usually have two flat or concave mirrors, one on either end, that reflect lasing photons back and forth so that stimulated emission continues to build up more and more laser light. The "back" mirror is made as close to 100% reflective as possible, while the "front" mirror typically is made only 95 - 99% reflective so that the rest of the light is transmitted by this mirror and leaks out to make up the actual laser beam outside the laser device.
The resonant cavity thus accounts for the directionality of the beam since only those photons that bounce back and forth between the mirrors lead to amplification of the stimulated emission. Once the beam escapes through the front mirror it continues as a well-directed laser beam. However, as the beam exits the laser it undergoes diffraction and does have some degree of spreading. Typically this beam divergence is as small as 0.05° but even this small amount will be apparent if the beam travels long distances.
Even more, the resonant cavity also accounts for the amplification of the light since the path through the laser medium is elongated by repeated passes back and forth. Typically this amplification grows exponentially, similar to the way compound interest works in a bank. The more money in your bank account, with compound interest, the faster you earn more interest dollars. Similarly, the more photons there are to produce stimulated emission, the larger the rate at which new coherent photons are produced. The term used for laser light is gain, or the number of additional photons produced per unit path length.
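The compound-interest analogy can be made quantitative: a constant gain g (extra photons per photon per unit path length) produces exponential growth, I(z) = I0·e^(gz). A minimal sketch; the gain value and path lengths below are illustrative, not taken from the text:

```python
import math

def amplified_intensity(i0, gain_per_cm, path_cm):
    """Small-signal exponential amplification: I(z) = I0 * exp(g * z)."""
    return i0 * math.exp(gain_per_cm * path_cm)

# Illustrative numbers: a gain of 0.05 per cm over a single 20 cm pass,
# then over five passes (100 cm of total path through the medium).
one_pass = amplified_intensity(1.0, 0.05, 20)      # e^1, roughly 2.7x
five_passes = amplified_intensity(1.0, 0.05, 100)  # e^5, roughly 150x
print(one_pass, five_passes)
```

Note how repeated passes through the same medium multiply the amplification, which is exactly why the mirrors matter.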
The last question to address in this section is: why is the resonant cavity called by that name? What does resonance have to do with having mirrors on either end of a region containing the laser medium? Recall that when we discussed resonance on a string, we spoke about the wave traveling one way along the string (say to the right) interfering with the wave reflected at the end traveling back to the left. At a resonant frequency, there are points at which the two waves exactly add or cancel all the time, leading to a standing wave. At other frequencies the waves will randomly add or cancel and the wave will not have a large amplitude. The case of a light wave traveling back and forth in the resonant cavity is exactly analogous in that only at certain resonant frequencies will the light wave be amplified. The required condition is easy to see. The mirror separation distance, L, must be equal to a multiple of half a wavelength of light, just as we saw in the case of a string. In symbols, we have that L = nλ/2, where λ is the wavelength of the light and n is some integer. In the case of light, because of the small wavelength, n is a very large number, implying that there are a huge number of resonant frequencies. On the other hand, only those resonant frequencies that are amplified by the laser medium will have large amplitudes and so usually there are only a few so-called laser modes or laser resonant frequencies present in the light from a laser, as shown in the figure.
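The standing-wave condition L = nλ/2 and the resulting mode spacing can be checked with a short calculation. The 30 cm cavity length is an illustrative assumption; 632.8 nm is the standard He-Ne wavelength, used here only as an example:

```python
c = 2.998e8  # speed of light in vacuum, m/s

def mode_number(cavity_length_m, wavelength_m):
    """Nearest integer n satisfying the standing-wave condition L = n * wavelength / 2."""
    return round(2 * cavity_length_m / wavelength_m)

def mode_spacing_hz(cavity_length_m):
    """Frequency separation c / 2L between adjacent longitudinal modes."""
    return c / (2 * cavity_length_m)

n = mode_number(0.30, 632.8e-9)   # ~9.5e5: n is indeed a very large number
dv = mode_spacing_hz(0.30)        # ~5e8 Hz between neighboring modes
print(n, dv)
```

The huge value of n confirms the claim above that a cavity supports an enormous number of resonant frequencies, of which the gain medium amplifies only a few.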
Questions on Laser Components
• What are the three primary components of a laser?
• What different types of laser medium are used?
• What is a pump? Describe the different types of pumps.
• What is a resonant cavity?
• Describe how the resonant cavity produces a collimated laser beam.
• What is the beam divergence of a laser caused by? What is a typical value for it?
• What does the term gain mean? Is it related to pain?
• Describe why a typical laser has only a few modes, while the resonant cavity produces a huge number of them. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Lasers/Laser_Theory.txt |
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser is a type of light source which has the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated or spontaneous emission), describe important components, and give some examples of types of lasers and their applications.
Introduction
The word LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. In 1916, Albert Einstein discovered the physical principle responsible for this amplification, the foundational principle called stimulated emission. It was widely accepted that the laser would represent a big leap in science and technology, even before Theodore H. Maiman built the first one in 1960. The 1964 Nobel Prize in physics was shared by Charles H. Townes, Nikolay Basov, and Aleksandr Prokhorov, with the citation "For fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser-laser principle".
The early lasers developed in the 1950s by Charles H. Townes and Arthur Schawlow were gas and solid-state lasers for use in spectroscopy. The principles of lasers were adapted from masers. MASER is an acronym that stands for Microwave Amplification by Stimulated Emission of Radiation. It uses the ideas of stimulated emission and population inversion to produce coherent, amplified radiation in the microwave region. Stimulated emission occurs when an incident photon induces an electron in an excited state to fall back to the ground state, emitting a second photon. The amplified radiation produced has the same direction and energy as the incident light. Population inversion is when there is a greater population of electrons in the excited state than in the ground state; it is achieved through various pumping mechanisms. The laser uses these same ideas except that the electromagnetic wave created is in the visible light region. When emission begins, the light oscillates within the resonant cavity and gains magnitude. Once enough light has been acquired, the laser beam is produced. This allows lasers to be used as a powerful light source. Three unique characteristics of a laser are its properties of monochromaticity, directionality, and brightness.
The monochromaticity of lasers is due to the fact that lasers are highly selective in the wavelength of light produced, which in turn is due to the resonant frequency inside the active material. Resonant frequency means that the light is oscillating in a single mode, creating a monochromatic beam of light. The property of directionality depends on the angle at which the light propagates out of the source. Since lasers have large spatial and temporal coherence, directionality is maximized. Temporal coherence means that the phase of the wave fluctuates little over time; spatial coherence means that the phase varies little across the beam profile. Like monochromaticity, directionality is dependent on the resonant cavity of the active material. The property of brightness is a result of the directionality and the coherence of the light. Due to these properties, lasers today are used in simple laser pointers, cutting devices, the development of military technologies, spectroscopy, and medical treatments. Their direct application to spectroscopy has allowed scientists to measure lifetimes of excited-state molecules, perform structural analysis, probe far regions of the atmosphere, study photochemistry, and use lasers as ionization sources.
History
One of the most important characteristics of light is that it has wave-like properties and that it is an electromagnetic wave. Experiments on blackbody radiation demonstrated a comprehensive idea of emission and absorption of electromagnetic waves. In 1900, Max Planck developed the theory that electromagnetic waves can only exist in distinct quantities of energy, which are directly proportional to a given frequency ($\nu$). In 1905, Albert Einstein proposed the dual nature of light, having both wave-like and particle-like properties. He used the photoelectric effect to show that light acts as a particle, with energy inversely proportional to the wavelength of light. This is important because the number of particles is directly related to how intense a light beam will be. In 1916, Einstein introduced the idea of stimulated emission, a key concept for lasers.
In 1957, Townes and Schawlow proposed the concept of lasers in the infrared and optical region by adapting the concept of masers to produce monochromatic and coherent radiation. In 1953, Townes was the first to build a maser, with an ammonia gas source. Masers use stimulated emission to generate microwaves. Townes and other scientists wanted to develop the optical maser to generate light. Optical masers would soon adopt the name LASER: Light Amplification by Stimulated Emission of Radiation. An optical maser would need more energy than what can be provided by microwave frequencies and a resonant cavity on the order of 1 μm or less. Townes and Schawlow proposed the use of a Fabry-Pérot interferometer equipped with parallel mirrors, where interference of radiation traveling back and forth between the parallel mirrors in the cavity allowed for selection of certain wavelengths. Townes built an optical maser with potassium gas; it failed because the mirrors degraded over time. In 1957, Gordon Gould improved upon Townes' and Schawlow's laser concept, and it was Gould who renamed the optical maser the laser. In April 1959, Gould filed a patent for the laser, and later, in March 1960, Townes and Schawlow also made a request for a patent. Since Gould's notebook was officially dated, the idea was credited as his first, but he did not receive the patent until 1977.
Components
A laser consists of three main components: a lasing medium, a resonant or optical cavity, and an output coupler. The lasing medium consists of a group of atoms, molecules, or ions in solid, liquid, or gaseous form, which acts as an amplifier for light waves. For amplification, the medium has to achieve population inversion, meaning a state in which the number of atoms in the upper energy level is greater than the number of atoms in the lower energy level. A pump serves as the energy source that establishes this population inversion between a pair of energy levels of the atomic system. When the active medium is placed inside an optical resonator, the system acts as an oscillator.
Lasing Medium
The lasing medium is the component used to achieve lasing, such as chromium in the aluminum oxide crystal, found in a ruby laser. Helium and neon gas are two materials most commonly used in gas lasers. These are only a few examples of lasing mediums or materials that have been used in the past and present states of the laser. For further information about different types of lasing mediums please refer to the section where Types of Lasers is discussed.
Optical Cavity and Output Coupler
Rays of light moving along an optical path tend to diverge over time because of diffraction, so an optical cavity is needed to refocus the light. Figure 1 represents the basics of an optical cavity where the light inside moves back and forth between two mirrors. These redirect and focus the light each time it hits the surface of the mirrors. There are two types of cavities: stable cavities and unstable cavities. A stable cavity is one in which the ray of light does not diverge far from the optical axis. An unstable cavity is one in which the ray of light bounces off and away from a mirror's surface. The importance of the cavity is that it allows the laser to have the properties of directionality, monochromaticity, and brightness.
Light oscillating between the first mirror (Mo) and the second mirror (M1), separated by distance d, will have a round-trip phase shift (RTPS) of $2\theta = 2kd = q2\pi - \phi$. In Figure 1, a round-trip can be described as the beam traveling from Mo to M1 and back to Mo. Resonance occurs in the cavity because the light propagating between the two mirrors is uniform. The ABCD law describes that an optical cavity has a field distribution, because the distribution reproduces itself as the light makes these round-trips between the two parallel mirrors. The ABCD law was first applied to a Gaussian beam with a beam parameter, q, which is described as
$q_2=\dfrac {(Aq_1+B)}{(Cq_1+D)} \nonumber$
This law states that the beam, which is oscillating through an optical system, will experience some change as it moves in the cavity. Fields that are created in an optical cavity have analogous shape and phase as they make each trip back and forth. However, the one thing that changes is the size of the field, because the electromagnetic wave is unrestricted, unlike a wave in a short-circuited coaxial cable used to build a resonator or a microwave cavity mode. Since there is a field distribution of Eo at the surface of Mo, it can be said that there is a field distribution at the surface of M1. Since there will be a change in size of the field, the electromagnetic wave will have a change in amplitude by ρ0ρ1 and a phase factor of $e^{-j2kd}$, creating additional fields. This is an example of a phasorial addition of all fields between Mo and M1, creating a total field ET (Figure 2). Phasorial addition is described by the RTPS, where each additional En will have a delay of angle ϕ which is related to kd.
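The ABCD law can be verified numerically for the two simplest optical elements. The ray-matrix entries used here (A=1, B=d, C=0, D=1 for free space; A=1, B=0, C=−1/f, D=1 for a thin lens) are standard ray optics, not taken from this text, and the numerical values are illustrative:

```python
def abcd_transform(q1, A, B, C, D):
    """Gaussian beam parameter update: q2 = (A*q1 + B) / (C*q1 + D)."""
    return (A * q1 + B) / (C * q1 + D)

q1 = complex(0.0, 0.5)                       # purely imaginary q: beam at a waist
q2 = abcd_transform(q1, 1, 0.2, 0, 1)        # free space, d = 0.2 m: q2 = q1 + d
q3 = abcd_transform(q2, 1, 0, -1 / 0.25, 1)  # thin lens, f = 0.25 m
print(q2, q3)
```

For free-space propagation the general law collapses to q2 = q1 + d, which is a quick sanity check on any implementation.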
ET will always be greater than Eo only if ρ0 and ρ1 are not greater than 1 and ϕ=0. In this case, when ϕ=0, resonance is enhanced because all factors, such as ET travelling between Mo and M1, the intensity of the electromagnetic waves, the number of photons traveling between Mo and M1, and the amount of energy that is stored, are maximized. The resonant wavelength can also be determined by using the relationship of the RTPS, because
$k=\dfrac {\omega n}{c}=\dfrac {2\pi}{\lambda}$
Using $2\theta=2kd=q2\pi$
$\dfrac{2\pi(2d)}{\lambda}=q(2\pi)$
$d=\dfrac {q\lambda}{2}$
Where the wavelength of interest is given by $\lambda=\lambda_0/n$, where n is the index of refraction and $\lambda_0$ is the free-space wavelength .
Since we are dealing with light as a wave, the light in the resonant cavity can be described in terms of frequency, ν. Where
$k2d=\omega\dfrac {2nd}{c}=2\pi \nu \dfrac {2nd}{c}=q(2\pi)$
$\nu=q\dfrac {c}{2nd}$
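The resonant-frequency formula above is easy to evaluate directly. The cavity length, refractive index, and mode number below are illustrative assumptions, chosen only to show the scale of the numbers:

```python
c = 2.998e8  # speed of light, m/s

def resonant_frequency_hz(q, n_index, d_m):
    """nu = q * c / (2 * n * d) for longitudinal mode number q."""
    return q * c / (2 * n_index * d_m)

f1 = resonant_frequency_hz(200000, 1.0, 0.10)
f2 = resonant_frequency_hz(200001, 1.0, 0.10)
spacing = f2 - f1   # equals c / (2 n d): ~1.5 GHz for a 10 cm cavity
print(f1, spacing)
```

Adjacent mode numbers q and q+1 differ in frequency by the fixed amount c/2nd, which is why the cavity modes form an evenly spaced comb.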
A Fabry-Pérot interferometer is a prime example of an optical cavity used in a laser. The Fabry-Pérot is equipped with two parallel mirrors, one that is completely reflective and the other that is partially reflective. As light is accumulating in the cavity after taking several round trips between the two mirrors, some light is transmitted through the partially reflective mirror and a laser beam is produced. The beam can be in pulsed mode or continuous-wave (CW) mode. To increase the performance of the resonant cavity, the length of the cavity (d) must be considered as a way to avoid a decrease in the laser beam intensity due to any diffraction losses. The size of the aperture of the cavity is also important because it determines the strength or the intensity of the laser beam. In fact, determining the best length of a resonant cavity will enhance the coupling conditions of the output coupler by producing a frequency that is stable, which ultimately generates a laser beam that is coherent and has high power.
There are essentially six stages in the lasing process. First is the ground state where there is no excitation of the lasing medium. Second is pumping, which is applied to the medium where spontaneous emission occurs. Then the third stage is when emitted photons collide with an excited molecule where stimulated emission occurs. In the fourth stage the photons are produced in multiples; those moving parallel in the cavity will hit a mirror and then hit the second mirror. During the fifth stage this process continues until there is an accumulation of light that is coherent and of a specific frequency. Finally, the sixth stage is when the light or laser beam exits the partially reflective mirror, which is also known as the output coupler. An output coupler is the last important component of a laser because it must be efficient to produce an output of light with maximum intensity. If the output coupler is too transparent then there is much more loss of electromagnetic waves, and this will decrease lasing significantly because population inversion will no longer be maintained. If the output coupler or partially reflective mirror is too reflective, then all the accumulated light that is built up in the resonant cavity will be trapped in the cavity. The beam will not pass through the output coupler, producing little to no light and making the laser ineffective.
Emission
Lasers create a high energy beam of light by stimulated emission or spontaneous emission. Within a molecule there are discrete energy levels. A simple molecular description has a low energy ground state (E1) and a high energy excited state (E2). When an electromagnetic wave, referred to as the incident light, irradiates a molecule there are two processes that can occur: absorption and stimulated emission.
Absorption occurs when the energy of the incident light matches the energy difference between the ground and excited state, causing the population in the ground state to be promoted to the excited state. The rate of absorption is given by the equation:
$\dfrac {dN_1}{dt}=-W_{12} N_1$
Where N1 is the population in E1, and W12 is the probability of this transition. The probability of the transition can also be related to the photon flux (intensity of incident light):
$W_{12}=\sigma_{12} F$
Where F is the photon flux and σ12 is the cross section of the transition with units of area. When absorption occurs photons are removed from the incident light and the intensity of the light is decreased.
Stimulated emission is the reverse of absorption. Stimulated emission has two main requirements: there must be population in the excited state and the energy of the incident light must match the difference between the excited and ground state. When these two requirements are met, population from the excited state will move to the ground energy level. During this process a photon is emitted with the same energy and direction as the incident light. Unlike absorption, stimulated emission adds to the intensity of the incident light. The rate of stimulated emission is similar to the rate of absorption, except that it uses the population of the higher energy level:

$\dfrac {dN_2}{dt}=-W_{21} N_2$

Like absorption, the probability of the transition is related to the photon flux of the incident light through the equation:

$W_{21}=\sigma_{21} F$
When absorption and stimulated emission occur simultaneously in a system the photon flux of the incident light can increase or decrease. The change in the photon flux is a combination of the rate equations for absorption and stimulated emission. This is given by the equation:
$dF=\sigma F(N_2-N_1 )d\tau \nonumber$
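For fixed populations, this equation integrates to F(τ) = F₀·exp[σ(N₂−N₁)τ]: the flux grows when N₂ > N₁ and decays otherwise. A sketch with illustrative numbers (the cross section and densities below are made up for the example, not from the text):

```python
import math

def flux_after(f0, sigma, n2, n1, tau):
    """Closed-form solution F(tau) = F0 * exp(sigma * (N2 - N1) * tau)
    of dF = sigma * F * (N2 - N1) * dtau with fixed populations."""
    return f0 * math.exp(sigma * (n2 - n1) * tau)

# Illustrative numbers only (cross section in m^2, densities in m^-3):
inverted = flux_after(1.0, 1e-19, 5e18, 1e18, 1.0)  # N2 > N1: flux grows
normal = flux_after(1.0, 1e-19, 1e18, 5e18, 1.0)    # N2 < N1: flux decays
print(inverted, normal)
```

The sign of N₂ − N₁ alone decides between amplification and absorption, which is the quantitative statement of why population inversion is required.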
Spontaneous emission has the same characteristics as stimulated emission except that no incident light is required to cause the transition from the excited to ground state. Population in the excited state is unstable and will decay to the ground state through several processes. Most decays involve non-radiative vibrational relaxation, but some molecules will decay while emitting a photon matching the energy of the energy difference between the two states. The rate of spontaneous emission is given by:
$\dfrac {dN_2}{dt}=-AN_2 \nonumber$
Where A is the spontaneous emission probability which depends on the transition involved. The coefficient A is an Einstein coefficient obtained from the spontaneous emission lifetime. Since spontaneous emission is not competing with absorption, the photon flux is based solely on the rate of spontaneous emission.
The population ratio of a molecule or atom is found using the Boltzmann distribution and the energy of the ground state (E1) and the excited state (E2):
$\dfrac{N_2}{N_1} = e^{-(E_2-E_1)/kT} \nonumber$
Under normal conditions, the majority, if not all, of the population is in the lower energy level (E1). This is because the energy of the excited state is greater than that of the ground state, and the normal thermal energy available (kT) is not enough to overcome the difference, so the population ratio favors the ground state. For example, if the difference in energy between two states corresponds to absorption at 500 nm, the ratio of N1 to N2 is about $5.1\times10^{41}:1$. The photon flux of the incident light is directly proportional to the difference in populations. Since the ground state has more population, the photon flux decreases: there is more absorption occurring than stimulated emission. In order to increase the photon flux there must be more population in the excited state than in the ground state, a condition generally known as a population inversion.
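The quoted ratio can be reproduced directly from the Boltzmann distribution above; a short sketch using rounded physical constants:

```python
import math

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def boltzmann_ratio(wavelength_m, temp_k):
    """N2/N1 for two levels separated by E = h*c/wavelength at temperature T."""
    delta_e = H * C / wavelength_m
    return math.exp(-delta_e / (KB * temp_k))

ratio = boltzmann_ratio(500e-9, 300)  # transition at 500 nm, room temperature
print(ratio)   # ~2e-42, i.e. N1:N2 of order 5e41 : 1
```

At room temperature the thermally excited population is, for all practical purposes, zero, which is why a pump is essential.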
In a two level energy system it is impossible to create the population inversion needed for a laser. Instead three or four level energy systems are generally used (Figure 5).
Three level processes involve pumping of population from the lowest energy level to the highest, third energy state. The population can then decay down to the second energy level or back down to the first energy level. The population that makes it to the second energy level is available for stimulated emission. Light matching the energy difference between the second and first energy level will cause a stimulated emission. Four level systems follow roughly the same process except that population is moved from the lowest state to the highest fourth level. Then it decays to the third level and lasing happens when the incident light matches the energy between the third and second level. After lasing there is decay to the first level.
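The advantage of the four-level scheme can be seen in a toy rate-equation model: a long-lived upper lasing level over a fast-draining lower level builds inversion even with a modest pump. All rate constants below are illustrative, and the simple Euler integration is only a sketch:

```python
def four_level_inversion(wp, tau32, tau21, tau10, dt=1e-9, steps=20000):
    """Euler integration of a toy four-level scheme:
    pump 0 -> 3, fast decay 3 -> 2, lasing levels 2 (upper) and 1 (lower),
    fast drain 1 -> 0. Returns the final inversion n2 - n1."""
    n0, n3, n2, n1 = 1.0, 0.0, 0.0, 0.0   # normalized populations
    for _ in range(steps):
        pump = wp * n0 * dt
        d32 = n3 / tau32 * dt
        d21 = n2 / tau21 * dt
        d10 = n1 / tau10 * dt
        n0 += d10 - pump
        n3 += pump - d32
        n2 += d32 - d21
        n1 += d21 - d10
    return n2 - n1

# A long-lived upper level (1 ms) and a fast-draining lower level (10 ns)
# sustain a positive inversion with a modest pump rate (values illustrative).
inv = four_level_inversion(wp=1e4, tau32=1e-9, tau21=1e-3, tau10=1e-8)
print(inv > 0)
```

Because the lower lasing level empties almost instantly, nearly any population reaching the upper level constitutes inversion, unlike the three-level case where the crowded ground state must first be depleted.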
Pumping Process
Pumping is the movement of population from the ground state to a higher excited state. The general rate at which this is done is given by:
$\left(\dfrac {dN_g}{dt}\right)_p=W_p N_g$
Where Ng is the population in the ground level and Wp is the pump rate. Pumping can be done optically, electronically, chemically (see chemical laser), using gases at high flow rates, and nuclear fission. Only optical and electrical pumping will be discussed in detail.
Optical Pumping
Optical pumping uses light to create the necessary population inversion for a laser. Usually high pressure xenon or krypton lamps are used to excite solid or liquid laser systems. The active material in the laser absorbs the light from the pump lamp, promoting the population from the ground state to the higher energy state. The material used in the laser can be continuously exposed to the pumping light which creates a continuous wave laser (CW). A pulsed laser can be created by using flashes of pumping light.
In optical pumping there are three types of efficiency: transfer, lamp radiative, and pump quantum efficiency. Transfer efficiency is the ratio of the energy created by the lamp and the power of the light emitted by the laser. The lamp radiative efficiency is the measure of how much electrical power is converted into light in the optical lamp. Pump quantum efficiency accounts for the ratio of population that decays to the correct energy level and population that decays either back to the ground state or another incorrect energy level. For example, the overall pumping efficiency of the first ruby laser was around 1.1%.
The average pump rate for optical pumping depends on the total efficiency of the pump ($η_p$), volume of the laser material (V), ground state population (Ng), power input (P), and frequency of the lasing transition (ν0):
$\langle W_p \rangle=\eta_p \dfrac{P}{V N_g \hbar \nu_0}$
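Plugging numbers into this expression gives a feel for the per-atom pump rate. All of the values below (efficiency, power, rod volume, density, transition frequency) are made up for illustration, not taken from the text:

```python
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def avg_pump_rate(eta_p, power_w, volume_m3, n_ground_m3, nu0_hz):
    """<Wp> = eta_p * P / (V * Ng * hbar * nu0), following the expression above."""
    return eta_p * power_w / (volume_m3 * n_ground_m3 * HBAR * nu0_hz)

# Illustrative values: 1% total efficiency, 1 kW input, a 1 cm^3 rod,
# ground-state density 1e25 m^-3, and a 4.3e14 Hz pump transition.
wp = avg_pump_rate(0.01, 1000.0, 1e-6, 1e25, 4.3e14)
print(wp)  # per-atom pump rate, s^-1 (a few tens per second here)
```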
Electrical Pumping
Electrical pumping is a much more complicated process than optical pumping. Usually used for gas and semiconducting lasers, electrical pumping uses electrical current to excite and promote the ground state population. In a simple gas laser that contains only one species (A), current passes through the gas medium and creates electrons that collide with the gas molecules to produce excited state molecules (A*):
$A+e \longrightarrow A^*+e$
During electron impact either an ion or an excited state can be created. The ability to make the excited state depends mostly on the material used in the laser and not on the electrical pumping source, making it difficult to describe the efficiency of the pumping in general. Total efficiencies have been calculated and tabulated for most active materials used in electrical pumping; they range from less than 0.1% for the N2 gas laser to about 70% for some CO2 gas lasers.
Like the pumping rate of optical pumps, the rate of electrical pumping is found using the overall efficiency of the pump, power applied, and population of the ground state. However instead of using the frequency of the ground to upper state transition, electrical pumping uses the energy of the upper state (ħωp) and the volume of the electron discharge (V):
$\langle W_p \rangle=\eta_p \dfrac{P}{V N_g \hbar \omega_p}$
Pulsed operation
Q-Switching
The technique of Q switching allows the generation of laser pulses of short duration from a few nanoseconds to a few tens of nanoseconds and high peak power from a few megawatts to a few tens of megawatts.
Suppose we put a shutter into the laser cavity. If the shutter is closed, laser action cannot occur and the population inversion can become very high. If the shutter is opened suddenly, the stored energy will be released in a short and intense light pulse. This technique is known as Q-switching. Q here denotes the ratio of the energy stored to the energy dissipated in the cavity. This technique is used in many types of solid-state lasers and CO2 lasers to get a high-power pulsed output.
To produce high inversion required for Q-switching, four requirements must be satisfied.
1. The lifetime of the upper level must be longer than cavity buildup time.
2. The pumping flux duration must be longer than the cavity build up time.
3. The initial cavity losses must be high enough during the pumping duration to prevent oscillation occurring.
4. The cavity losses must be reduced instantaneously.
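A back-of-the-envelope estimate shows where the megawatt peak powers quoted above come from: the stored energy, roughly N photons' worth at the lasing wavelength, is released over a few-nanosecond pulse. All numbers below are illustrative assumptions:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def q_switch_peak_power(stored_photons, wavelength_m, pulse_s):
    """Rough estimate: stored energy N * h * nu released in one short pulse."""
    energy_j = stored_photons * H * C / wavelength_m
    return energy_j / pulse_s

# Illustrative: 1e18 inverted atoms at 1064 nm dumped in a 10 ns pulse.
peak_w = q_switch_peak_power(1e18, 1064e-9, 10e-9)
print(peak_w)   # ~2e7 W, i.e. tens of megawatts, consistent with the text
```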
Mode-Locking
The technique of mode locking allows the generation of laser pulses of ultrashort duration, from less than a picosecond down to femtoseconds, and very high peak power, up to a few gigawatts.
Mode-locking is achieved by forcing the different longitudinal modes of a laser to oscillate with a fixed phase relationship. When electromagnetic modes with different frequencies and random phases are combined, they produce a randomly fluctuating, average output. When the modes are added in phase, they combine to produce a total amplitude and intensity output in the form of a repeated pulse train.
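The effect of locking the phases can be demonstrated by summing equal-amplitude modes: with N modes in phase, the peak intensity reaches N² times that of a single mode, while between pulses the modes cancel. A sketch (the mode count and spacing are illustrative):

```python
import math

def intensity(num_modes, mode_spacing_hz, t, phases):
    """|sum of equal-amplitude modes|^2 at time t."""
    re = sum(math.cos(2 * math.pi * n * mode_spacing_hz * t + phases[n])
             for n in range(num_modes))
    im = sum(math.sin(2 * math.pi * n * mode_spacing_hz * t + phases[n])
             for n in range(num_modes))
    return re * re + im * im

N, dv = 10, 1e8
locked = [0.0] * N                        # identical phases: mode-locked
peak = intensity(N, dv, 0.0, locked)      # at the pulse peak: N^2 = 100
mid = intensity(N, dv, 0.5 / dv, locked)  # halfway between pulses: ~0
print(peak, mid)
```

The pulses repeat every 1/Δν seconds and narrow as more modes are locked, which is why broad-gain media support the shortest pulses.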
Figure 8: Laser mode structure
Types of Lasers
There are many different types of lasers with a wide range of applications; below is a brief description of some of the main types.
Solid State Lasers
A solid-state laser is one that uses a solid active medium generally in a rod shape. Inside the active material is a dopant that acts as the light emitting source. Optical pumping is used to create population inversion of the active material. Solid-state lasers generally use stimulated emission as the mechanism for creating the high energy beam.
Ruby Laser
The ruby laser was the first operating laser and was built in 1960. It has a three-level (Figure 6) energy system that uses aluminum oxide with some of the aluminum atom replaced with chromium as its active material. The chromium in the aluminum oxide crystal is the active part of the laser. Electrons in the ground state of chromium absorb the incident light and become promoted to higher energy states. The short lived excited state relaxes down to a metastable state with a longer lifetime. Laser emission happens when there is relaxation from the metastable state back to the ground state.
A xenon flash lamp emitting light at wavelengths of 6600 Å and 4000 Å (matching the energy needed to excite the chromium atoms) serves as the pumping source. In order to create resonance of the incident light in the active material, silver plating was applied at both ends of the ruby rod. One end was completely covered while the other was partially covered so the lasing light could exit the system.
Nd: YAG Laser
Nd:YAG lasers are the most popular type of solid-state laser. The laser medium is a crystal of Y3Al5O12, commonly called YAG, an acronym for yttrium aluminum garnet. A simplified energy-level scheme for Nd:YAG is shown in Figure 9. The λ = 1.06 μm laser transition is the strongest of the ${}^4F_{3/2} \rightarrow {}^4I_{11/2}$ transitions.
The major application of the Nd laser is in various forms of material processing: drilling, spot welding, and laser marking. Because they can be focused to a very small spot, these lasers are also used in resistor trimming, circuit-mask and memory repair, and in cutting out specialized circuits. Medical applications include many types of surgery; many of these take advantage of low-loss optical fiber delivery systems that can be inserted into the body wherever needed. Nd lasers are also used in military applications such as range finding and target designation. High-power pulsed versions are also used as sources in the X-ray spectral region. In addition, Nd lasers are used in scientific laboratories as good sources for pumping dye lasers and other types of lasers.
Semiconductor Laser
The semiconductor laser is another type of solid-state laser that uses a semiconducting material such as germanium or silicon. When the temperature of the semiconducting material is increased, electrons move from the valence band to the conduction band, creating holes in the valence band (Figure 7). Between the conduction band and valence band is a region with no energy levels, called the band gap. Applying a voltage to the semiconductor causes electrons to move to the conduction band, creating a population inversion. Irradiating the semiconductor with incident light whose energy matches the band gap stimulates transitions from the conduction band to the valence band, amplifying the incident light.
Gas Lasers
A gas laser contains active material composed of a mixture of gases with similar energy states inside a small gas chamber. Electrical pumping is used to create the population inversion where one gas is excited through collisions with electrons and in turn excites the other gas through collisions.
Helium-Neon Laser
The helium-neon laser was the first gas laser. It consists of a long narrow tube that contains He and Ne gas. Mirrors are placed at both ends of the gas tube to form the resonant cavity, with one of the mirrors partially reflecting the incident light. Stimulated emission of the gas mixture is achieved by first exciting the He gas to a higher energy state through collisions with electrons from the pumping source (electrical pumping). The excited He atoms then collide with the Ne atoms, transferring their energy and exciting them to a higher energy level. The Ne atoms in the higher energy level then relax to a lower metastable energy state. Lasing occurs when relaxation from the metastable state to a lower energy state causes spontaneous emission. The Ne gas then returns to the ground state when it collides with the outer walls of the gas tube (Figure 8).
Carbon Dioxide Laser
The carbon dioxide laser is a gas laser that uses the energy difference between rotational-vibrational energy levels. Within the vibrational levels of CO2 there are rotational sub-energy levels. A mixture of N2 and CO2 gas is placed inside a chamber. The N2 molecules are excited through an electrical pumping mechanism. The excited molecules then collide with the CO2 molecules, transferring energy and raising the CO2 to a higher vibrational level. The excited CO2 molecules then undergo spontaneous emission as they relax to lower rotational-vibrational levels, increasing the signal of the incident light (Figure 9). Carbon dioxide lasers are efficient (on the order of 20%) and powerful compared to other gas lasers, making them useful for welding and cutting.
Liquid Lasers
Liquid lasers consist of a liquid active material usually composed of an organic dye compound. The most common type of liquid laser uses rhodamine 6G (Figure 10) dye mixed with alcohol and is excited by different types of lasers, such as an argon-ion laser or a nitrogen laser. Organic dyes are large compounds that have absorption bands in the UV or visible region with a strong, intense fluorescence spectrum. The free π electrons of the dye are excited using an optical pumping source, and the transition from the S1 to the S0 state creates the lasing light (see Jablonski diagrams). Liquids are generally used because they can easily be tuned to emit a certain wavelength by changing the resonant frequency within the cavity; wavelengths from the visible to the infrared can be covered. Liquid lasers have several benefits: they can be cooled in a relatively short amount of time, they cannot be damaged the way a solid-state laser can, and their production is cost-effective. The efficiency of liquid lasers is low because the lifetime of the excited state is relatively short, there are many non-radiative decay processes, and the material degrades over time. Liquid lasers tend to be used only as pulsed lasers when tunability is required. They can be used for high-resolution spectroscopy since they are easily tuned over a wide range of wavelengths, and they are also convenient because the dye concentration is easily adjusted when dissolved in liquids or solids.
Chemical Lasers
Chemical lasers are different from other lasers because the population inversion is the direct product of a chemical reaction when energy is released as a result of an exothermic reaction. Usually reactions involve gases where the energy created is used to make vibrationally excited molecules. Light used for lasing is then created from vibrational-rotational relaxation like in the CO2 gas laser. An example of a chemical laser is the HF gas laser. Inside the gas chamber fluorine and hydrogen react to form an excited HF molecule:
F + H2 → HF + H
The excess energy from the reaction leaves HF in its excited state. As it relaxes, light is emitted through spontaneous emission. Deuterium can also be used in place of hydrogen; deuterium fluoride is used for applications that require high power. For example, MIRACL was built for military research and was known to produce 2.2 megawatts of power. The uniqueness of a chemical laser is that the power required for lasing is produced by the reaction itself.
Laser Applications
The applications of lasers are numerous and cover scientific and technological fields. In general, these applications are a direct consequence of the special characteristics of lasers. Below are a few examples of laser applications; for a complete list please go to en.Wikipedia.org/wiki/List_of...ons_for_lasers
Lidar
Lidar is short for light detection and ranging, an optical remote sensing technology that can be used for monitoring the environment. A typical lidar system involves a transmitter of laser radiation and a receiver for the detection and analysis of backscattered light. A beam expander is usually used at the transmitter to reduce the divergence of the laser beam before it propagates into the atmosphere. The receiver includes a wavelength filter, a photodetector, and computers and electronics for data acquisition and analysis.
Lidar systems date back to the 1930s, but with the advent of the laser, lidar has become one of the primary tools in atmospheric and environmental research. Lidar has also been put to various other uses. In agriculture, it can be used to create topographic maps that help farmers decide the appropriate amount of fertilizer for a better crop yield. In archaeology, it can be used to build geographic information systems that help archaeologists find sites. In transportation, lidar has been used in autonomous cruise control systems to prevent road accidents, and police use lidar speed guns to enforce speed limits.
Laser in Material Processing
The beam of a laser is usually a few millimeters in diameter. For most material processing applications, lenses are used to increase the intensity of the beam. The beam from a laser is either plane or spherical. After passing through a lens, the beam should focus to a point, but in actual practice diffraction effects have to be taken into consideration, and the incoming beam focuses into a region of finite radius. If λ is the wavelength of the laser light, a is the radius of the beam, and f is the focal length of the lens, then the radius b of the focused region is
$b =\dfrac {\lambda f }{ a }$
If P represents the power of the laser beam, the intensity I, obtained at the focused region would be given by,
$I =\dfrac { P }{ \pi b^2 }=\dfrac { P a^2 }{ \pi \lambda^2 f^2 }$
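As a quick worked example of the two formulas above (the wavelength, focal length, beam radius, and power below are illustrative assumed values, not figures from the text):

```python
import math

# Assumed example: a focused CO2 laser beam.
lam = 10.6e-6   # wavelength, m (CO2 laser line)
f = 0.10        # lens focal length, m (assumed)
a = 5.0e-3      # beam radius at the lens, m (assumed)
P = 500.0       # beam power, W (assumed)

b = lam * f / a                 # focused spot radius:  b = lambda*f/a
I = P / (math.pi * b ** 2)      # intensity:  I = P/(pi b^2) = P a^2/(pi lambda^2 f^2)

print(f"spot radius b = {b*1e6:.0f} um")
print(f"intensity I = {I:.2e} W/m^2")
```

Even a modest 500 W beam reaches intensities of a few GW/m² at the focus, which is why focused lasers can cut and weld metal.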
High-power (P > 100 W) lasers are widely used in material processing such as welding, drilling, cutting, surface treatment, and alloying. The main advantages of the laser beam can be summarized as follows: (1) The heating produced by the laser is less than in conventional processes, so material distortion is considerably reduced. (2) The possibility of working in inaccessible regions: any region that can be seen can be processed by a laser. (3) The process can be better controlled and easily automated. Against all these advantages, the disadvantages are: (1) the high cost of the laser system; (2) reliability and reproducibility problems of the laser system; (3) safety problems.
Laser in Medicine
In the field of medicine, the major use of lasers is in surgery, such as laser eye surgery, commonly known as LASIK. Besides that, there are a few diagnostic applications, such as the clinical use of flow microfluorometers, Doppler velocimetry to measure blood velocity, and laser fluorescence bronchoscopy to detect tumors in their early phase.
For surgery, the laser beam is used instead of a conventional scalpel. The infrared beam from the CO2 laser is strongly absorbed by water molecules in the tissue, producing rapid evaporation of these molecules and consequently cutting the tissue. The main advantages of laser surgery can be summarized as follows: (1) High precision: the incision can be made with high precision, particularly when the beam is directed by means of a microscope. (2) The possibility of operating in inaccessible regions: laser surgery can be performed in any region of the body that can be observed by means of an optical system. (3) Limited damage to blood vessels and adjacent tissue. However, the disadvantages are: (1) considerable cost; (2) the smaller velocity of the laser scalpel; (3) reliability and safety problems associated with the laser procedure.
Problems
1. Determine the free-space wavelength ($\lambda_0$) in Å and the frequency of the resonant cavity modes for beam parameters q1 = 632,110 and q2 = 632,111 in a helium-neon gas laser at 1 atm. The index of refraction n is 1.00, the length of the resonant cavity is 20 cm, and the wavelength region of interest is 6328 Å.
2. What wavelength of light will be released by the spontaneous emission of Ne gas, where the energy difference between the excited and ground states is 9.9 x 10-19 J?
3. What is the population ratio for the above question at 300 K?
Answers
1. For q1, $\lambda_0 = 6328.0125$ Å; for q2, $\lambda_0 = 6328.0025$ Å; and $\nu = 474$ THz for both q values
2. 200 nm
3. N2/N1 = 1.6 x 10-104
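The answers above can be checked numerically. The sketch below uses CODATA constants and only the values given in the problems: the standing-wave condition $q \lambda_0 / (2n) = L$ for problem 1, $E = hc/\lambda$ for problem 2, and the Boltzmann factor $N_2/N_1 = e^{-\Delta E / k_B T}$ for problem 3 (the last comes out near 1.6 x 10-104).

```python
import math

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

# Problem 1: standing-wave condition  q * lambda0 / (2 n) = L
n, L = 1.00, 0.20     # index of refraction, cavity length in m
for q in (632110, 632111):
    lam0 = 2 * n * L / q      # free-space wavelength, m
    print(f"q = {q}: lambda0 = {lam0*1e10:.4f} A, nu = {c/lam0/1e12:.0f} THz")

# Problem 2: emission wavelength from  E = h c / lambda
E = 9.9e-19           # energy difference, J
lam = h * c / E
print(f"lambda = {lam*1e9:.1f} nm")

# Problem 3: Boltzmann population ratio  N2/N1 = exp(-dE / (kB T))
T = 300.0             # K
ratio = math.exp(-E / (kB * T))
print(f"N2/N1 = {ratio:.1e}")
```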
Contributors and Attributions
• Greg Allen (UCD), Arpana Vaniya (UCD), Zheng Zhang (UC Davis)
In both solid-state and semiconductor lasers the lasing medium is a solid. Aside from this similarity, however, these two laser types are very different from each other. In the case of the solid-state lasers the lasing species is typically an impurity that resides in a solid host, a crystal of some sort. The crystal modifies some of the quantized energy levels of the impurity, but still the lasing is almost atomic - similar to gas lasers. The physics of the quantization, in the case of semiconductor lasers, is very different. In addition, solid-state lasers are always optically pumped, whereas semiconductor lasers are excited by the passage of electric current through them.
We have already seen how atomic energy levels become modified when two or more atoms bind to form a molecule. The energy quantization picture becomes more complex when atoms bind together to form a solid. Below, we will first examine energy quantization in solids before we begin a more detailed examination of solid-state and semiconductor lasers.
Crystals and Energy Bands
You may recall from high school chemistry that a model that is very successful in explaining the electronic composition of stable atomic species is called the shell model. In this model, very similar to the planetary model, the atom is thought to be made of shells of different types. Each shell-type can accommodate a certain number of electrons. For example, an "s-shell" can be occupied by no more than two electrons, whereas p-shells can have up to six electrons in them, etc. To "make stable atoms" we start by filling electrons in these shells starting in "level" 1, then level-2, etc., following the specified hierarchy: 1s; 2s, 2p; 3s, 3p; 4s, 3d, 4p; 5s, 4d, 5p; etc. (Remember that the chemistry of materials is due to electromagnetic interactions alone, so for this we only need to concern ourselves with electrons.) We start with the innermost shell, an s-shell. Our first atom-type has one electron in its s-shell; that's of course the hydrogen atom, H. The next atom has two electrons in its s-shell; that's He. The third atom on our list (see the Periodic Table) has three electrons, two in its 1s-shell and then one in its 2s-shell; and so on. According to this model, which incidentally is very well supported by the more rigorous and fundamental laws of Quantum Mechanics, atoms prefer to fill their outermost shell. Inert gases, for example, all (except for He) have a completely filled outer p-shell. Those atoms that manage this are energetically very stable. Those that don't would then prefer to interact with another atom of the correct type to allow an exchange of electrons so that they come closer to fulfilling this desire for closed shells in order to reduce their overall energy. A bond between two types of atoms by this give-and-take is strongest when both atoms get the most fulfillment - both end up with closed shells.
A good example of this is the case of table salt: NaCl. Sodium atoms, Na, are hydrogenic (hydrogen-like); i.e. they have one electron in their outermost shell, which is an s-shell just outside an inert gas shell. So, to emulate the rare gases, all they need to do is to get rid of this electron. They cannot do this on their own (why?), but they can bind with an atom that wants an additional electron to complete its outermost shell. This happens to be true for the chlorine atom, which has 5 electrons in its outermost shell, a p-shell. By sharing this electron with chlorine, the sodium atom still remains neutral, but now it, as well as the Cl, both end up with "filled" outer shells. In more ways than one, this is a happy marriage. Of course, in a grain of salt there are many Na and Cl atoms. (One grain of salt has roughly about 1 mg of mass. One proton is about 10-27 kg. How many atoms are then in one grain?) These atoms arrange themselves in a very organized pattern, called a crystal. These are rows and columns of atoms, as in a three-dimensional array, with alternating Na and Cl species. In a salt crystal, a Na atom shares its electron with several Cl atoms, and yet because of the geometry of the crystal each Cl atom ends up with a net total of one extra electron and each Na atom with only one fewer electron (how?).
The electric attraction of the electrons to the atom's positively charged nucleus is the primary force that holds the atom together as a whole. So, in our shell model of the atom we must also account for the difference in attraction of electrons in different shells as well as the repulsion among the electrons themselves. Electrons in outer shells are further away from the nucleus, so their force of attraction is weaker than the inner shell electrons. In this respect, those electrons that belong to the unfilled outermost shell are the most loosely bound electrons in the atom. These electrons are the ones that are mostly responsible for the atom's interaction with another atom, so they are given a special name: the valence electrons.
How is the shell model related to our picture of atomic energy levels? It is primarily the valence electrons that connect these two pictures. Each atom species has its own unique energy levels, but its shell structure is not unique. When it is in its ground state, then its shells are filled according to the shell model's recipe, mentioned above. But once the atom is excited, then its electron configuration will change. The first excited state, for example, could require the valence electron to jump into the next higher shell. Further excitation of the atom could lead to even more drastic changes in its shell structure. But each of the excited energy levels corresponds to a different occupation of the accessible shells by the atom's electrons.
If atoms were made of (stationary) electrostatic charges, then most atomic species would have no reason to get together to make molecules or solids. But fortunately for us, this is not the case. Two neutral atoms can attract each other simply by changing their charge distribution so that they each become polar, i.e. the electron cloud of charges shifts over so that its center is not at the positive nucleus (see the figure below). In this sense even atoms of the same species find it energetically favorable to form a molecule. Because of this, two totally neutral atoms of nitrogen prefer to form an N2 molecule even in the gas form. See the model drawing, below, which shows three time frames. The top frame shows two atoms with spherically symmetric charge distributions. These atoms do not initially interact with each other. But a momentary polarization of one atom can cause the other atom to polarize as well. This is the picture in the middle frame. The bottom frame then shows that the two polarized atoms attract and form a molecule in which the electronic clouds of the two atoms overlap.
When we cool a gas, the ease with which it makes a liquid has a great deal to do with how easily the atoms in the gas form a molecule. Things get even more interesting when the liquid is further cooled to make a solid. The form, i.e. atomic structure, of the solid has not only to do with the atoms wanting to share electrons, but also with a totally different aspect of energy for a large collection of atoms. This relates to the probability of arrangements, or the so-called entropy. What concerns us here is that for some species of atoms the lowest possible energy is reached when the atoms arrange themselves in a very regular and repeatable array of positions, called a crystal.
In every-day terminology the word "crystal" usually refers to a glassy and transparent object. But in our sense of the "regular network" a crystal may be totally opaque. In fact, most metals are crystalline in structure and are not at all transparent (at least in the visible range). Why should atoms form a regular array and make up a crystal? Why shouldn't they just clump together? In most substances atoms clump randomly, forming a solid. But in some special situations atoms crystallize to make very regular network structures. These networks can have different geometrical regularities. For example, in some crystals, the network structure is a simple cube that repeats in all directions. One atom sits at each corner of the cube, making up a simple cubic crystal. In a variation of this, in addition to the simple cubic structure each face of the cube has an atom at its center; this is called a face centered cubic crystal, etc. The salt crystal of NaCl has the simple cubic structure in which Cl and Na atoms occupy alternate corners of the cube, shown below:
This network structure in the crystal is called a lattice. The atoms in the lattice are not motionless. They jiggle and oscillate, but on average remain at a fixed site. The higher the temperature of the solid, the more vigorous is the jiggling oscillation of the lattice atoms. In the case of metals, which are electric conductors, the electron clouds of the lattice atoms overlap, allowing the valence electrons to jump from lattice site to site when an electric potential (or voltage) is applied to the metal. So conductors allow these electrons to move freely - they are called "free electrons" - and make up an electric current, or flow of electric charge. Insulators, on the other hand, do not share their electrons, which are not free to migrate around. These materials, such as glass, wood, rubber, most plastics, and pure water, do not support electric currents under normal circumstances.

What happens to the quantized energy levels of an atom when it is influenced by the presence of all the other lattice atoms in the crystal? It turns out that the idea of quantized energy levels still remains valid, but in a totally different picture. Instead of each atom having its own set of energy levels, the solid as a whole can be described by a set of so-called energy bands. In this picture the bands represent the quantized energy of the whole crystal, characterized by the number of the energy bands, their energy widths, the gaps between the bands, and whether each band is full, partially full, or empty. So, in a way, these bands in the crystal play a role similar to that of the shells in the case of individual atoms. Just as in the atomic shell model, we "fill" the energy bands in a solid with electrons. Depending on the atomic species that make up the crystal, these bands get filled or not. For example, in the case of the sodium atom that we examined above, the corresponding energy band picture for its solid form is shown below:
Again it should be stressed that this band picture is really a combination of the energy-level-diagram and the occupancy picture of the shell model. So, here the large gap between the 1s and 2s bands represents the relatively large amount of energy that is necessary to promote an electron in a single atom of sodium from the 1s to the 2s shell. Also the filled 1s, 2s, and 2p bands indicate Na's shell structure. At the same time, the unfilled 3s band shows that these valence electrons in solid sodium can continuously change their energy within this band. This is totally due to the huge number of atoms in the solid structure and was not allowed in the quantized single atom.
Insulators, Semiconductors, and Conductors
The electric conductivity of different materials is a consequence of their individual band structure. Metals, such as sodium, copper, and aluminum, have a crystalline band structure in which the uppermost energy band, the valence band, is partially filled (or better yet, partially empty!). Electrons in this band can gain energy from an applied external electric potential, such as a battery, without the restriction of quantization. Insulators, on the other hand, have full valence bands, so for an electron to be promoted to its next higher available energy it has to leave this band and jump into the upper empty band. This is possible, but it takes a huge electric potential to make it happen (that's why lightning can pass through walls and kill people, but typical wiring is safe with just a thin plastic insulation over it). Semiconductors are intermediate between insulators and conductors. Their band structure is the same as an insulator's, but the separating gap between their valence band and the next empty band, the conduction band, is very narrow.
Because of this narrow so-called band gap, semiconductors can use the thermal energy of their lattice (collisions between the valence electrons and the oscillating lattice atoms) to promote a few of their valence electrons to the conduction band. In this way these materials can conduct electricity, but because of the limited number of electrons that get promoted, they are not very good conductors. Examples of crystalline materials with narrow band gaps are silicon (Si) and germanium (Ge). In both of these materials the outermost shell is a partially filled p-shell with only two (out of a maximum of six) electrons in it. In these crystals the two valence p-shell electrons are shared by the other neighboring atoms in the lattice.
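The importance of the band-gap width can be made quantitative with a rough sketch: the fraction of thermally excited carriers scales roughly as $e^{-E_g/(2 k_B T)}$. The band-gap values below are standard room-temperature figures for these materials, not numbers from the text.

```python
import math

kB_eV = 8.617333262e-5   # Boltzmann constant, eV/K
T = 300.0                # room temperature, K

# Standard room-temperature band gaps (assumed reference values).
gaps_eV = {
    "Ge (semiconductor)":  0.67,
    "Si (semiconductor)":  1.12,
    "diamond (insulator)": 5.5,
}

# Relative density of thermally promoted carriers ~ exp(-Eg / (2 kB T)).
fracs = {name: math.exp(-Eg / (2 * kB_eV * T)) for name, Eg in gaps_eV.items()}

for name, frac in fracs.items():
    print(f"{name:22s} relative carrier fraction ~ {frac:.1e}")
```

The ~40 orders of magnitude between silicon and diamond show why a "narrow" gap makes the difference between a semiconductor and an insulator.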
To make a better conductor out of these semiconductors, atomic impurities are introduced during crystal formation through a process called doping. If one of the silicon atoms in the lattice is replaced by an arsenic (As) atom, which has three p-shell valence electrons, then the extra electron becomes loosely attached to its mother atom. This loose electron can take part in electric conduction and therefore increases the conductivity of the crystal. Strangely enough, a similar process can occur if one of the silicon atoms is replaced by a gallium (Ga) atom, which has only one p-shell valence electron. In this case, the Ga impurity strongly attracts one of the p-shell electrons from one of its Si neighbors. This leads to the production of a so-called "electron hole". The resulting electron vacancy (hole) can travel in the crystal just as an electron would - but in the opposite direction - and results in an increased conductivity. Doped Si crystals with extra electrons, negative carriers, are referred to as n-type; those with holes are called p-type. These impurities modify the crystal's energy band structure by introducing atomic-type energy levels between the valence and conduction bands of the crystal and thus further reduce the band gap. This is shown below:
Semiconductor Devices
Diodes are the most basic of semiconductor devices. They are simply one-way conductors. By suitably doping a Si crystal, its conductivity can be altered to make it behave as a conductor in one direction and an insulator in the other! This is commonly done in a p-n junction. When the crystal is being formed it is first introduced to a donor-type impurity (say As for a Si crystal). As the crystal grows, the concentration of the donor impurity is reduced and replaced with a growing concentration of an acceptor impurity (say Ga for a Si crystal). So the resulting crystal begins at one end as an n-type and eventually becomes a p-type at the other end. The n-type portion has a large concentration of electrons and the p-type end a large concentration of holes. This variation causes the device to act as an effective insulator for the flow of electricity from the n- to the p-side, but a very good conductor in the opposite direction. So, unlike a regular conductor that allows internal flow of electricity in both directions, this device acts like a unidirectional valve. An electronic component that allows electricity to flow one way (the forward bias) but not the opposite way (reverse bias) is called a diode.
Transistors are the most common of semiconductor devices. They are controllable switches made from either p-n-p or n-p-n arrays of doped silicon. Today's technology can put millions of transistors together into single integrated modules, or chips, to make a large variety of different types of devices such as computer memory chips or microprocessors.
Another semiconductor device is the familiar solar cell. This is also a p-n junction with variations in the hole and electron concentrations along the crystal. When light strikes this device it excites bound electrons from the valence band into the conduction band. This creates not only a mobile electron, but a hole in the valence band. Both the electron and the oppositely traveling hole produce electric current. So, when light strikes this device it behaves as a source of electricity, just like a battery. In the case of a battery chemical energy is used to produce free electrons and thereby the electric current. In solar cells photon energy is converted into the production of electron-hole pairs and the resulting electric current in the junction.
Light Emitting Diodes (LEDs) behave just the opposite of solar cells. In these devices, which also have a p-n junction, the recombination of electron-hole pairs, created by an applied source of electricity, gives off light. So light is produced when a free electron fills a hole, i.e. a vacancy, in the crystal that is created by the presence of a p-type impurity. The light generated by LEDs is very broad-band and not of a single wavelength. It is also generated in all directions, so these devices do not make laser light. But they are useful light sources in that they are very compact and consume little electric power. To make them more efficient and to reduce absorption of the generated light by the crystal's lattice, the junction is typically made near the surface. These light sources were first made in the early 1960's. Their creation soon led to the discovery of semiconductor lasers.
Semiconductor Lasers
Also known as diode lasers, these are by far the most inexpensive and commonly used lasers in the world. The first diode laser was invented in 1962 at the General Electric Research and Development Center, in Niskayuna, New York - just a few miles from the Union College campus. But it was not until the early 1980's, with the development of innovative semiconductor chip manufacturing techniques, that their mass production reached the consumer market. Similar to their LED cousins, these semiconductor devices generate light from the energy extracted when electron-hole pairs recombine. Also, as in the case of LEDs, the electron-hole pairs in semiconductor lasers are produced by the flow of electric current in the junction - this is called the injection current. The primary difference between these lasers and LEDs is that at high injection currents the electron-hole pair densities increase enough to produce a population inversion, which leads to lasing. These lasers, in fact, behave very much like LEDs until the critical injection current, called the threshold current, is reached.
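The threshold behavior described above - weak LED-like emission below the threshold current and steeply growing laser output above it - can be sketched with a toy model. All the numbers here (threshold current, slope efficiency, spontaneous-emission efficiency) are assumed for illustration, not values from the text.

```python
# Toy model of diode-laser output vs. injection current.
# Below threshold: weak spontaneous (LED-like) emission only.
# Above threshold: output rises linearly with current above threshold.
def output_power_mW(i_mA, i_th_mA=20.0, slope_mW_per_mA=0.8, led_mW_per_mA=0.005):
    spontaneous = led_mW_per_mA * i_mA                        # LED-like light
    stimulated = slope_mW_per_mA * max(0.0, i_mA - i_th_mA)   # lasing light
    return spontaneous + stimulated

for i in (5, 15, 25, 40):
    print(f"I = {i:2d} mA -> P ~ {output_power_mW(i):6.2f} mW")
```

The sharp kink at the threshold current is exactly what one sees when measuring the light-current curve of a real diode laser.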
A typical semiconductor laser, shown above, is about a few hundred microns in dimension, much smaller than a grain of table salt. (Recall that a human hair is about 100 microns thick.) It produces light from an active area which is still smaller, a few tenths of one micron, perpendicular to the flow of electric current. Therefore (recall that diffraction from a very small exit aperture leads to a large angular spread of the beam) its laser light is very divergent and requires the use of a lens to collimate it. Most commonly used diode lasers are made of several different layers of compound semiconductors, often GaAlAs. Both the doping and thickness designs of these layers are used to control the confinement of laser light in the active lasing region. For some applications reflective coatings are deposited on the front and back facets to increase the efficiency of the optical amplification by acting as mirrors. But in most cases simple cleaving of the output facets leads to sufficient light amplification for lasing action. One of the drawbacks of the diode laser's small size is that it has a short coherence length. This limits its use in applications that require large coherence lengths, such as interferometry and holography. But the portability of its size makes it far more useful in many other applications. Below are two photographs of typical diode laser modules, shown in comparison with a typical writing pen or with a quarter. Notice that the laser's packaging "can" has a glass window to protect the fragile diode laser housed inside it. This package also includes a diagnostic photodiode detector that measures the output laser light leaking from the back facet of the laser chip.
Diode lasers operate in several modes. Their wavelength depends primarily on the size of the band gap of the semiconductor, but the design of the semiconductor layers, as well as active feedback into the laser, can generate bandwidths of a few hundred kHz. Since changing the temperature of the diode expands or contracts the crystal, and in this way changes the dimensions of the laser's optical cavity, the laser's wavelength can be tuned either by adjusting the injection current or by directly changing the diode's temperature with another device. Wavelength tunability of a typical diode laser is a few nm over a temperature range of about 10 °C. (This is very small compared to tunable dye lasers, for example.) Room temperature diode lasers have been manufactured with wavelength outputs from a few microns in the IR all the way to the green in the visible. Researchers are currently producing blue diode lasers, but these are not yet mass produced.
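As a rough illustration of this temperature tuning, the sketch below estimates the shifted wavelength from a linear tuning coefficient. The 0.3 nm/°C coefficient and the 785 nm base wavelength are illustrative assumptions consistent with "a few nm over about 10 °C" above, not specifications of any particular diode.

```python
# Sketch: estimating a diode laser's tuned output wavelength from a
# temperature change, assuming a linear tuning coefficient.
# The 0.3 nm/°C value is a hypothetical example, not a datasheet number.

def tuned_wavelength_nm(base_wavelength_nm, base_temp_c, new_temp_c,
                        coeff_nm_per_c=0.3):
    """Linear estimate of output wavelength after a temperature change."""
    return base_wavelength_nm + coeff_nm_per_c * (new_temp_c - base_temp_c)

# A 785 nm diode warmed from 25 °C to 35 °C shifts by about 3 nm.
print(tuned_wavelength_nm(785.0, 25.0, 35.0))  # ~788 nm
```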
These lasers have a very high efficiency and are manufactured to produce laser light from a few mW to several tenths of a Watt. They can be made to operate in cw as well as in pulsed mode. Because of their compact size, arrays of these can be manufactured on a single chip, called laser diode arrays, which can generate output powers in the Watt range. But the single property that makes these lasers extremely useful is that their output light can be modulated at very fast rates. Because of this, and the very narrow-bandwidth light that they can produce, diode lasers have overtaken all other lasers used in the communication industry.
Measurement of the speed of light using a GHz pulsed diode laser (center). Its beam is split by the glass slide either to enter the fast photodiode detector on the right directly or to travel a distance of several meters before being steered into the second photodiode detector on the left. By measuring the total path traveled and the time between the arrival of the two pulses, the speed of light can be measured on a table top to better than 1% accuracy.
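The arithmetic behind this table-top measurement is simply the extra path length divided by the measured delay between the two photodiode pulses. The 6.0 m path difference and 20.0 ns delay below are hypothetical example numbers, not data from the setup shown.

```python
# Sketch of the table-top speed-of-light calculation described above:
# divide the extra path length by the delay between the two pulses.
# Example numbers (6.0 m, 20.0 ns) are hypothetical, chosen for round values.

def speed_of_light(extra_path_m, delay_s):
    return extra_path_m / delay_s

c_measured = speed_of_light(6.0, 20.0e-9)       # ~3.0e8 m/s
error = abs(c_measured - 2.998e8) / 2.998e8     # fractional error vs. accepted c
print(c_measured, error < 0.01)                  # within 1%
```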
Solid-state Lasers
The first laser ever made, the ruby laser, was a solid-state laser. In this laser the lasing is a result of atomic transitions of an impurity atom in a crystalline host. Even though the atomic levels of the lasing species are often modified because of the host crystal, the lasing process is atomic and very different from that in semiconductor lasers. All solid-state lasers are pumped optically. So, in these lasers the host must be transparent to the pumping radiation. (Why?) Also, the host must be a good heat conductor in order for it to effectively dissipate waste energy that is not used for lasing. Below we will review two of the most commonly used solid-state lasers: the ruby and YAG lasers.
Ruby is an aluminum oxide ($Al_2O_3$) crystal, called sapphire, with a small amount of chromium oxide ($Cr_2O_3$) added to it. Sapphire is colorless and transparent, but the chromium-doped crystal is pinkish-red in color because it strongly absorbs both in the green and in the blue. When this crystal absorbs blue and/or green light, a meta-stable energy state of the chromium ion ($Cr^{3+}$) is soon excited. After a typical lifetime of a few milliseconds this state de-excites to the ground state with the emission of a 694.3 nm photon, which is visibly red in color. So, the primary role of the aluminum oxide crystal, aside from hosting the chromium ion, is to absorb the pump energy and to excite the ion through collisions.
The above simplified energy-level diagram shows the two optically pumped broad levels that quickly decay into the upper metastable lasing levels via non-radiative transitions (shown with dashed arrows). The energy of these non-radiative transitions is lost to the crystal as heat. Finally, the chromium ion de-excites back to its ground state by emitting the laser's 694.3 nm photon. Note that ruby is a 3-level laser and as such it requires a good deal of pump energy to achieve a population inversion. It cannot produce a cw beam but produces a series of bright laser pulses.
To achieve population inversion, a strong pulse of broadband light is used to excite most chromium ions to their excited state. This is typically accomplished with the use of a helical flashlamp that surrounds a cylindrical ruby crystal. The two ends of this cylinder are coated (using evaporation techniques) to reflect the red 694.3 nm light. Because the wavelength output of the ruby laser is narrow-band and the pulse of light can be strong, this laser is one of the most preferred for holography. The following diagram shows a ruby rod surrounded with a helical flashtube. The cylindrical ends are coated for reflectivity in the red to form the optical cavity of the laser.
Typical ruby rods range in length from about 10 cm to 1/4 m and in diameter from a few to about 25 mm. Standard ruby lasers produce pulses a few ms in duration that range in energy from 10 to 100 J. But because ruby rods degrade quickly from excessive heating, these lasers are often operated at rather slow repetition rates (rep-rates) of a few pulses per second (a few Hz).
Applications that require large numbers of photons in a short period of time benefit from laser pulses that are short in duration but have a high peak. The following graph shows two pulses of light that carry equal numbers of photons (equal areas under their graphs), but have very different peak heights.
The one with the higher peak has the shorter duration. (Note that for simplicity these pulses are depicted as rectangular in shape, suggesting that lasing turns on and off at precise instants. In reality, however, laser pulses look more like hills: gradually increasing, reaching a peak, and then gradually decreasing before fully turning off.) Since the energy of each photon is fixed by its wavelength (or, equivalently, its frequency), both of these pulses carry the same amount of energy. For example, if we are told that the above pulses each have N photons emitted by a ruby laser, then we could easily calculate the pulse energies:
$\text{pulse energy} = \text{(number of photons in the pulse)} \times \text{(energy of a single photon)} \nonumber$
i.e. $\text{pulse energy} = N \times (h\nu) = N \times \left(\dfrac{hc}{\lambda}\right) = N \times \dfrac{(6.63 \times 10^{-34}) \times (3.00 \times 10^{8})}{694.3 \times 10^{-9}} \nonumber$
$= N \times (2.87 \times 10^{-19})\ \text{Joules}. \nonumber$
Alternatively, if we were told that each of the above ruby laser pulses is a 10 J pulse, then we could easily calculate the number of photons in each pulse:
number of photons = N = (pulse energy ) / (energy of a single photon)
or,
$N = \dfrac{10}{ 2.87 \times 10^{-19}} = 3.5 \times 10^{19} \nonumber$
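The two calculations above can be checked in a few lines of code; the constants are the usual values of Planck's constant and the speed of light used in the text.

```python
# The energy of one 694.3 nm ruby-laser photon, and the number of
# photons in a 10 J pulse, following the worked example above.
h = 6.63e-34           # Planck's constant, J*s
c = 3.00e8             # speed of light, m/s
wavelength = 694.3e-9  # ruby laser wavelength, m

photon_energy = h * c / wavelength   # ~2.87e-19 J per photon
n_photons = 10.0 / photon_energy     # ~3.5e19 photons in a 10 J pulse
print(photon_energy, n_photons)
```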
Devices that measure laser light energies do not count the number of photons. Instead, they measure the energy that the photons produce in the device. (For example, photodiodes, which work like solar cells, absorb the photon energy and create electron-hole pairs, which produce a voltage across the p-n junction. This voltage is proportional to the absorbed energy and is displayed by the device indicating the pulse energy.) The following graphs are more realistic depictions of pulse energy measurements:
Here again we see two graphs of roughly equal areas and therefore equal pulse energies. The top pulse delivers its energy at a lower rate over a longer time, while the lower one delivers it at a higher rate over a shorter time. Since power is defined as:
power = ( energy ) / ( time ) (with units of 1 Watt = 1 Joule / 1 second )
Then the pulse shown on the lower graph has much more power than the top one. (Why?) This is called the pulse power, or the peak power. For comparison with a cw laser it is convenient to average the energy of the pulse not just over its duration, but over the total time that the laser is on, including the time when no light is emitted from it. To do this, all we need to do is divide the total pulse energy by the time interval from when one pulse begins until the next pulse begins; i.e. the "pulse-to-pulse time":
average power = ( pulse energy ) / ( pulse-to-pulse time )
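A short sketch contrasting the two quantities just defined. The pulse energy, duration, and pulse-to-pulse time below are example values typical of the ruby lasers described earlier, not measurements.

```python
# Peak power vs. average power for a pulsed laser, following the two
# definitions above. Example numbers: a 10 J, 3 ms pulse at a 2 Hz
# repetition rate (0.5 s pulse-to-pulse time) - assumed, not measured.
pulse_energy = 10.0        # J
pulse_duration = 3.0e-3    # s (a few ms pulse)
pulse_to_pulse = 0.5       # s (2 Hz repetition rate)

peak_power = pulse_energy / pulse_duration     # ~3.3 kW during the pulse
average_power = pulse_energy / pulse_to_pulse  # 20 W averaged over time
print(peak_power, average_power)
```

Note how the peak power exceeds the average power by more than two orders of magnitude even for this modest pulse; Q-switching pushes the ratio far higher still.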
In pulsed lasers there are several ways to achieve a large peak power. One of these is called Q-switching. The Q (quality factor) of an optical cavity is a measure of how well it stores energy: a low-loss cavity is said to have a large Q. In Q-switching, instead of letting stimulated emission begin as soon as there is gain, the medium is allowed to build up more and more of a population inversion in its metastable state. This can be done rather easily by "blocking" one of the optical cavity's reflective mirrors (spoiling the cavity's Q) for long enough that the maximum number of atoms are in the upper laser level. Then the mirror is very quickly "unblocked" (restoring the Q), causing many atoms to undergo stimulated emission nearly simultaneously; the stored energy emerges as a single short pulse with a high peak power. In this way Q-switched lasers can produce very high peak powers, in the hundreds of megawatts!
YAG Lasers
Yttrium-Aluminum-Garnet ($Y_3Al_5O_{12}$), YAG, is a clear and transparent crystal that is most commonly used as the host crystal for neodymium impurity atoms in Nd:YAG lasers. Even though the lasing transition occurs in the neodymium (Nd) ions, these lasers are often called YAG lasers. Typically 1-2% of the Y is replaced by Nd in these lasers. As in the ruby laser, the crystal not only hosts the lasing ion, but its broad absorption bands (present with the Nd impurity) also effectively absorb optical radiation - mostly around 700 nm and 800 nm. This energy is then transferred to the impurity ions through non-radiative processes.
Similar to the level diagram for the ruby laser, the above diagram is a simplified representation of the transitions of the neodymium ion, $Nd^{3+}$, that take place in the YAG laser. Again, the dashed arrows indicate non-radiative transitions, whose energy is transferred as heat to the crystal. But unlike ruby, the Nd:YAG is a four-level laser and has much more efficient lasing transitions. Because of this efficiency in population inversion it can be pumped to produce a wide variety of laser output energies and can be operated in cw as well as pulsed modes. The lasing photon has a wavelength of 1064 nm, well into the IR range.
Another host used for Nd is plain glass, but its optical and thermal properties are not as desirable as YAG's. The trade-off is that growing large YAG crystals is not easy. A typical YAG rod, which is drilled out of a crystal block, is smaller than 1 cm in diameter and from a few to 10 cm in length. Still, these lasers can produce high output powers by using several rods in tandem. Pulsed YAG lasers can produce high peak powers by Q-switching and/or by using several rods as amplifiers. When used as amplifiers, the rods are not coated for reflectivity, so they do not form optical cavities of their own. Instead, the output of the first rod (the oscillator) passes through the next rod (the amplifier), stimulating further emission there, and so on.
In the cw mode YAG lasers are pumped either by an arc lamp or by a semiconductor laser. Pumping with a diode laser is more efficient because the diode's wavelength can be chosen to closely match the absorption of the YAG. Flashlamps are often used for pulsed operation of these lasers. In this mode, Q-switching lets the laser produce ns pulses with relatively high peak powers. But to reach very high powers, like those generated by the NOVA laser, many rod amplification stages are necessary.
YAG lasers are extremely versatile. They are used in applications from welding and drilling to range finding. Because their output wavelength is not visible, YAG lasers are used in many applications that require secrecy, from military to security applications. It is now possible to convert IR radiation into the visible range using so-called non-linear crystals (also known as second-harmonic generators). When high-density photons impinge on such a crystal, the non-linear properties of its index of refraction cause the crystal to absorb two long-wavelength photons and generate, in their place, one photon with twice their energy and therefore half their wavelength. Using these crystals, the YAG laser wavelength can be shortened to 532 nm (1064 nm/2), which is seen by the eye as green light. Today's green-colored laser pointers are diode-pumped YAG lasers with doubling crystals.
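The frequency-doubling arithmetic is easy to verify: halving the wavelength exactly doubles the photon energy.

```python
# Second-harmonic generation arithmetic: two photons in, one photon out
# with twice the energy and half the wavelength (1064 nm -> 532 nm for Nd:YAG).
h, c = 6.63e-34, 3.00e8   # Planck's constant (J*s), speed of light (m/s)

fundamental = 1064e-9      # m, Nd:YAG output
doubled = fundamental / 2  # m, green second harmonic

e_fund = h * c / fundamental   # energy of one 1064 nm photon
e_doubled = h * c / doubled    # energy of one 532 nm photon
print(doubled * 1e9, e_doubled / e_fund)  # ~532 nm, energy ratio ~2
```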
Although we can change the wavelength of the YAG laser from IR to green (and even further into the UV, via another second-harmonic generation), this is not continuous tuning. Other interesting solid-state lasers, called vibronic lasers, use broad effective bands as their lower lasing level. In this way these lasers can produce a tunable output, i.e. one with a continuously changeable wavelength. The most commonly used laser of this type is the Ti:sapphire laser, which uses an aluminum oxide crystal, the same as ruby's host, but doped with titanium instead of chromium atoms.
Questions on Semiconductor & Solid-state lasers
1. What does the shell-model of the atom describe?
2. How is the shell-model related to the energy-level model of the atom?
3. What is the valence electron in an atom?
4. How can two neutral atoms form a bound molecule? Give an example of this.
5. Describe what a crystal is.
6. What are energy bands? How are they similar to and different from the energy levels/shell model?
7. What is the valence band in a crystal?
8. What are the differences in the energy band configurations between conductors, insulators, and semiconductors?
9. What is a p-n junction? What is a transistor?
10. What kind of transition leads to the production of light in a semiconductor p-n junction?
11. Describe what a semiconductor laser is and how it operates.
12. Give a few examples of the uses and advantages of semiconductor lasers.
13. What are some of the output wavelengths and powers of semiconductor lasers?
14. In what ways are the transitions in solid-state lasers different from atomic ones?
15. What is ruby? What is its color and what species of atom is involved in lasing in a ruby laser?
16. How are ruby lasers made to lase? That is, explain their pump, medium, and laser cavity.
17. What is the color of ruby laser light? Is its output cw or pulsed? Why?
18. What is Q-switching and what is its function?
19. What are ruby lasers used for?
20. What are the required properties of the host crystal in solid-state lasers?
21. What is a YAG laser? What is the host and what is its lasing species in the YAG?
22. What is the typical output wavelength of a YAG laser?
23. What are some of the advantages of YAG lasers? Some of its applications?
24. What is a Ti:sapphire laser? How is it different from a ruby laser in make-up and in operation?
Mass spectrometry is an analytic method that employs ionization and mass analysis of compounds in order to determine the mass, formula and structure of the compound being analyzed. A mass analyzer is the component of the mass spectrometer that takes ionized masses and separates them based on charge to mass ratios and outputs them to the detector where they are detected and later converted to a digital output.
• Accelerator Mass Spectroscopy
Accelerator Mass Spectroscopy (AMS) is a highly sensitive technique that is useful in isotopic analysis of specific elements in small samples (1 mg or less of sample containing 10⁶ atoms or less of the isotope of interest).
• Fragmentation Patterns in Mass Spectra
This page looks at how fragmentation patterns are formed when organic molecules are fed into a mass spectrometer, and how you can get information from the mass spectrum.
• How the Mass Spectrometer Works
This page describes how a mass spectrum is produced using a mass spectrometer.
• Introductory Mass Spectrometry
• MALDI-TOF
Proteins and peptides have been characterized by high pressure liquid chromatography (HPLC) or SDS PAGE by generating peptide maps. These peptide maps have been used as fingerprints of protein or as a tool to know the purity of a known protein in a known sample. Mass spectrometry gives a peptide map when proteins are digested with amino end specific, carboxy end specific, or amino acid specific digestive enzymes.
• Mass Spec
A mass spectrometer creates charged particles (ions) from molecules. It then analyzes those ions to provide information about the molecular weight of the compound and its chemical structure. There are many types of mass spectrometers and sample introduction techniques which allow a wide range of analyses. This discussion will focus on mass spectrometry as it's used in the powerful and widely used method of coupling Gas Chromatography (GC) with Mass Spectrometry (MS).
• Mass Spectra Interpretation: ALDEHYDES
• Mass Spectrometers (Instrumentation)
• Mass Spectrometry: Isotope Effects
The ability of a mass spectrometer to distinguish different isotopes is one of the reasons why mass spectrometry is a powerful technique. The presence of isotopes gives each fragment a characteristic series of peaks with different intensities. These intensities can be predicted based on the abundance of each isotope in nature, and the relative peak heights can also be used to assist in the deduction of the empirical formula of the molecule being analyzed.
• Organic Compounds Containing Halogen Atoms
This page explains how the M+2 peak in a mass spectrum arises from the presence of chlorine or bromine atoms in an organic compound. It also deals briefly with the origin of the M+4 peak in compounds containing two chlorine atoms.
• The Mass Spectra of Elements
This page looks at the information you can get from the mass spectrum of an element. It shows how you can find out the masses and relative abundances of the various isotopes of the element and use that information to calculate the relative atomic mass of the element. It also looks at the problems thrown up by elements with diatomic molecules - like chlorine.
• The Molecular Ion (M⁺) Peak
This page explains how to find the relative formula mass (relative molecular mass) of an organic compound from its mass spectrum. It also shows how high resolution mass spectra can be used to find the molecular formula for a compound.
• The M+1 Peak
This page explains how the M+1 peak in a mass spectrum can be used to estimate the number of carbon atoms in an organic compound.
Thumbnail: SIMS mass spectrometer, model IMS 3f. (GNU Free Documentation Licenses; CAMECA Archives).
Mass Spectrometry
Accelerator Mass Spectroscopy (AMS) is a highly sensitive technique that is useful in isotopic analysis of specific elements in small samples (1 mg or less of sample containing 10⁶ atoms or less of the isotope of interest).[1]
Accelerator Mass Spectroscopy
AMS requires a particle accelerator, originally used in nuclear physics research, which limits its widespread use due to high costs and technical complexity. Fortunately, UC Davis researchers have access to the Lawrence Livermore National Laboratory Center for Accelerator Mass Spectrometry (CAMS LLNL), one of over 180 AMS research facilities in the world. AMS is distinct from conventional Mass Spectrometry (MS) because it accelerates ions to extremely high energies (millions of electron volts) compared to the thousands of electron volts in MS (1 keV = 1.6×10⁻¹⁶ J). This allows AMS to resolve ambiguities that arise in MS due to atomic and molecular ions of the same mass. AMS is most widely used for isotope studies of 14C, which has applications in a variety of fields such as radiocarbon dating, climate studies, and biomedical analysis.[2] Some of the most fascinating applications of AMS range from exposure dating of surface rocks and 14C-labeled drug tracer studies to radiocarbon dating of artifacts such as the Shroud of Turin and the Dead Sea Scrolls.[3]
Theory
In conventional atomic mass spectrometry, samples are atomized and ionized, separated by their mass-to-charge ratio, then measured and/or counted by a detector. Rare isotopes such as 14C present a challenge to conventional MS due to their low natural abundance and high background levels. Researchers were challenged by isobaric interference (interference from equal-mass isotopes of different elements, exemplified by 14N in 14C analysis), isotopic interference (interference from isotopes of different elements with equal mass-to-charge ratios), and molecular interference (interference from molecules with equal mass-to-charge ratios, such as 12CH2-, 12CD, or 13CH- in 14C analysis). Most AMS systems employ an electrostatic tandem accelerator that directly improves background rejection, resulting in a 10⁸-fold increase in the sensitivity of isotope ratio measurements. As the natural abundance of 14C in modern carbon is 10⁻¹² (isotopic ratio of 14C:12C), a sensitivity of 10⁻¹⁵ is a prerequisite for 14C analysis.
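A back-of-the-envelope calculation shows why such sensitivity is required: even a 1 mg carbon sample contains only tens of millions of 14C atoms. The numbers below follow the 10⁻¹² modern ratio quoted above, together with standard values for Avogadro's number and the molar mass of carbon.

```python
# Why ~1e-15 sensitivity matters: count the 14C atoms in 1 mg of modern carbon.
# Uses the 14C/12C ~ 1e-12 ratio from the text plus standard constants.
avogadro = 6.022e23        # atoms/mol
molar_mass_c = 12.0        # g/mol (carbon)
sample_g = 1e-3            # 1 mg of carbon

carbon_atoms = sample_g / molar_mass_c * avogadro   # ~5e19 carbon atoms
c14_atoms = carbon_atoms * 1e-12                    # ~5e7 atoms of 14C
print(carbon_atoms, c14_atoms)
```

Detecting a few times 10⁷ atoms against a background of ~10¹⁹ stable carbon atoms is exactly the 10⁻¹²-and-beyond discrimination problem the tandem accelerator solves.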
Figure 1, above, starts with a negative-ion sputter source, which commonly consists of a stream of cesium ions (Cs+) with energies of 2-3 keV focused on the surface of a solid sample in order to transfer enough energy to the target material to produce free atoms and ions of the sample material. This process, called sputtering, separates neutral as well as positive and negative ions from the sample surface. The sample is held at a negative potential, and negatively charged ions are accelerated away from the sample, resulting in a beam of negative ions (Figure 2, below). Cs+ is particularly useful in 14C studies because it does not form a negative ion from 14N, thereby eliminating isobar interference.[4] It is important to have a beam of negative ions entering the accelerator because the negative ions are attracted to the high-voltage terminal, which results in their net acceleration.
The low-energy (~5-10 keV) diverging beam that leaves the ion source is accelerated, focused and transported to the accelerator by the injector system.[2] CAMS LLNL employs a low-energy mass spectrometer that selects for the desired atomic mass by separating ions by their mass-to-charge ratio (12C, 13C, and 14C ions pass through separately).[5] Most AMS systems use sequential injection, a process that switches between stable and rare isotopes via the application of varying voltages to the electrically insulated vacuum chamber of the analyzer magnet. In sequential injection, typical injection repetition rates are 10 s⁻¹ to minimize variations in the electrical load.[2] This process allows the development of more versatile systems for the analysis of a wide range of isotopes.[1] The alternative to sequential injection is simultaneous injection, a process adopted in accelerators dedicated to 14C analysis. Simultaneous injection uses a recombinator: a sequence of magnetic analyzers and quadrupole lenses that focus the stable and rare isotopes so they recombine and enter the accelerator together.
The traditional accelerator was first developed in the early 1930s for nuclear physics research. In 1939, UC Berkeley scientists Luis Alvarez and Robert Cornog were the first to use AMS in the detection of 3He in nature, using the 88-inch Berkeley cyclotron.[5] Now, over 70 years later, cyclotrons have been replaced by an accelerator type with greater energy stability: the tandem electrostatic accelerator. An electrostatic accelerator works by accelerating particles through the electric field generated by high voltages, using a mechanical transport system that continuously transports charge from ground to the insulated high-voltage terminal. All tandem accelerators with a maximum terminal voltage above 5 MV use such a mechanical system.[2] The negative ions that enter the accelerator are attracted to the high-voltage terminal, which is what accelerates them. CAMS LLNL employs a tandem Van de Graaff accelerator, in which a second acceleration of millions of volts is applied. In all tandem accelerators, atoms are stripped at the high-voltage terminal using either a thin carbon foil or argon gas. Stripping is the process in which two or more electrons are removed. The Van de Graaff accelerator removes at least four electrons. It is preferable to remove at least three electrons because it is by this process that molecular isobars of 14C (such as 12CH2-, 12CD, or 13CH-) are destroyed, due to the high instability of their positively charged forms, and atomic C+ ions such as 12C+, 13C+, and 14C+ are separated due to their different mass-to-charge ratios.[4] The negative ions are changed to positively charged ions and are thus accelerated back to ground potential in the high-energy part of the accelerator. Transmission through a foil changes with time due to radiation damage and foil thickening, thus gas strippers are used in all modern analyzers due to their increased transmission stability.[2]
Magnetic lenses focus the high-energy particles leaving the accelerator into a magnetic dipole (the high-energy analyzing magnet). Stable isotopes can be collected at off-axis beam stops, where secondary focusing lenses and additional analyzing equipment remove unwanted ions and molecular fragments to eliminate background. At CAMS LLNL, a magnetic quadrupole lens focuses the desired isotope and charge state into a high-energy mass spectrometer, which passes 12C+ and 13C+ into Faraday cups and further focuses and stabilizes 14C in a quadrupole/electrostatic cylindrical analyzer that leads to a gas ionization detector.[5] The magnetic quadrupole and electrostatic selectors coupled together ensure high selectivity and sensitivity, respectively. Other detectors commonly found in AMS systems include surface-barrier, time-of-flight, gas-filled-magnet, and x-ray detectors.
Interpretation
Rare isotopes analyzed by AMS are always measured as a ratio to a stable, more abundant (but not too abundant) isotope. For example, the ratio in 14C studies is generally reported as 14C/13C. Less abundant reference isotopes are preferable in AMS because the decreased flux of ions reduces background and wear on the instrument, which is of particular concern due to the quick deterioration of particle detectors (performance deteriorates at count rates above a few thousand particles per second[1]).
Applications
Common radioisotope elements measured with AMS and their applications are shown in Table 1[4], below. Because 14C analysis is by far the most popular application of AMS, the methods discussed below all involve 14C.
Table 1. Radioisotope elements generally measured with AMS and their applications.
| Element (common isotope) | Radioisotope measured with AMS | Natural abundance | Half-life (yr) | Study applications |
|---|---|---|---|---|
| Hydrogen (1H) | 3H | trace | 12.33 | Biological/biomedical; nutritional tracing |
| Beryllium (9Be) | 10Be | trace | 1,510,000 | Geochronology; hydrogeological studies; exposure dating |
| Carbon (12C) | 14C | 1×10⁻¹⁰ % | 5,730 | Biological/biomedical; nutritional tracing |
| Aluminum (27Al) | 26Al | trace, synthetic | 720,000 | Biological/biomedical; exposure dating |
| Chlorine (35Cl) | 36Cl | 7×10⁻¹¹ % | 301,000 | Earth science; hydrogeological studies; exposure dating; migration of nuclear waste |
| Calcium (40Ca) | 41Ca | trace, synthetic | 116,000 | Biological/biomedical; nuclear weapons testing |
| Nickel (58Ni) | 59Ni | trace, synthetic | 112,000 | Nutritional tracing |
| Iodine (127I) | 129I | trace, synthetic | 15,700,000 | Biological/biomedical; migration of nuclear waste; environmental studies |
Radiocarbon dating is an analytical method based on the rate of decay of 14C, a radioactive carbon isotope formed in the atmosphere by the reaction between neutrons from cosmic rays and 14N (neutron + 14N = 14C + proton).[2] Resultant 14C atoms are taken up by plants in the form of 14CO2, then transferred to animals through the food chain. When animals and plants die, they cease to take up 14C, and a steady decay of 14C continues in their tissues over time. 14C atoms decay via electron emission (β radiation) to form 14N, a process with a half-life of 5,730 years.[5] Radiocarbon levels in the atmosphere change according to complex patterns affected by a variety of fluctuations, ranging from the sun's solar activity and the earth's magnetic field to ocean ventilation rate and climate. 14C analysis of tree rings, corals, lake sediments, ice cores, and other sources has led to a detailed record of 14C variations through time, allowing researchers to establish an official radiocarbon calibration curve (also referred to as a radiocarbon clock) dating back 26,000 calendar years. In the 1960s, nuclear weapons testing released large amounts of neutrons into the atmosphere, nearly doubling 14C activity.[2] Samples from after this time period can be dated against the resulting 14C bomb curve, like the peak shown below in Figure 3, yielding very precise dates (within 1 year at the steepest part of the curve).
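Ignoring the calibration-curve corrections discussed above, the idealized decay law gives a simple age estimate from the measured 14C fraction. This sketch uses the 5,730-year half-life quoted in the text; real radiocarbon dates are always calibrated against the curve.

```python
# Idealized radiocarbon age from the remaining 14C fraction, using the
# 5,730-year half-life above. Real dating also applies the calibration curve.
import math

HALF_LIFE = 5730.0  # years

def radiocarbon_age(fraction_remaining):
    """Years elapsed for the 14C/12C ratio to fall to `fraction_remaining`."""
    return -HALF_LIFE / math.log(2) * math.log(fraction_remaining)

print(radiocarbon_age(0.5))   # one half-life, ~5730 years
print(radiocarbon_age(0.25))  # two half-lives, ~11460 years
```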
14C analysis provides valuable information in the radiocarbon dating of the world’s most priceless artifacts. One such example of the monumental impact of 14C AMS is the radiocarbon dating of the Dead Sea Scrolls to dates from 300 BC to AD 61 by labs in Zurich and Arizona. AMS has also contributed greatly to environmental and atmospheric studies by providing information regarding particle composition and origin. In the biochemical field, synthesized 14C labeled compounds can be administered as a tracer dose for in-vivo human metabolic and drug studies which require AMS analysis of graphitized biological samples.
AMS is a highly sensitive method for isotopic analysis with numerous key applications that are only growing with advances in technology. High costs and the technical complexities that arise with the use of a particle accelerator are the only limits to the widespread use of AMS. Recent times have seen the emergence of commercially available compact accelerators that operate at voltages as low as 200 kV for radiocarbon dating and biomedical applications, and as particle accelerators become more commonplace, modifications to the instrument have also broadened the number of isotopes the instrument can measure.
• Laura McWade
This page looks at how fragmentation patterns are formed when organic molecules are fed into a mass spectrometer, and how you can get information from the mass spectrum.
The formation of molecular ions
When the vaporized organic sample passes into the ionization chamber of a mass spectrometer, it is bombarded by a stream of electrons. These electrons have a high enough energy to knock an electron off an organic molecule to form a positive ion. This ion is called the molecular ion - or sometimes the parent ion. The molecular ion is often given the symbol $\ce{M^{+}}$ or $\ce{M^{\cdot +} }$- the dot in this second version represents the fact that somewhere in the ion there will be a single unpaired electron. That's one half of what was originally a pair of electrons - the other half is the electron which was removed in the ionization process.
Fragmentation
The molecular ions are energetically unstable, and some of them will break up into smaller pieces. The simplest case is that a molecular ion breaks into two parts - one of which is another positive ion, and the other is an uncharged free radical.
$M^{\cdot +} \rightarrow X^+ + Y^{\cdot} \nonumber$
The uncharged free radical won't produce a line on the mass spectrum. Only charged particles will be accelerated, deflected and detected by the mass spectrometer. These uncharged particles will simply get lost in the machine - eventually, they get removed by the vacuum pump. The ion, X+, will travel through the mass spectrometer just like any other positive ion - and will produce a line on the stick diagram. All sorts of fragmentations of the original molecular ion are possible - and that means that you will get a whole host of lines in the mass spectrum. For example, the mass spectrum of pentane looks like this:
It's important to realize that the pattern of lines in the mass spectrum of an organic compound tells you something quite different from the pattern of lines in the mass spectrum of an element. With an element, each line represents a different isotope of that element. With a compound, each line represents a different fragment produced when the molecular ion breaks up.
In the stick diagram showing the mass spectrum of pentane, the line produced by the heaviest ion passing through the machine (at m/z = 72) is due to the molecular ion. The tallest line in the stick diagram (in this case at m/z = 43) is called the base peak. This is usually given an arbitrary height of 100, and the height of everything else is measured relative to this. The base peak is the tallest peak because it represents the commonest fragment ion to be formed - either because there are several ways in which it could be produced during fragmentation of the parent ion, or because it is a particularly stable ion.
Using fragmentation patterns
This section will ignore the information you can get from the molecular ion (or ions). That is covered in three other pages which you can get at via the mass spectrometry menu. You will find a link at the bottom of the page.
Example $1$: Mass Spectrum of Pentane
Let's have another look at the mass spectrum for pentane:
What causes the line at m/z = 57?
Solution
How many carbon atoms are there in this ion? There can't be 5 because 5 x 12 = 60. What about 4? 4 x 12 = 48. That leaves 9 to make up a total of 57. How about C4H9+ then?
C4H9+ would be [CH3CH2CH2CH2]+, and this would be produced by the following fragmentation:
The methyl radical produced will simply get lost in the machine.
The line at m/z = 43 can be worked out similarly. If you play around with the numbers, you will find that this corresponds to a break producing a 3-carbon ion:
The line at m/z = 29 is typical of an ethyl ion, [CH3CH2]+:
The other lines in the mass spectrum are more difficult to explain. For example, lines with m/z values 1 or 2 less than one of the easy lines are often due to loss of one or more hydrogen atoms during the fragmentation process.
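The trial-and-error arithmetic in Example 1 can be sketched as a short search over hydrocarbon formulas. This is a minimal illustrative script; the plausibility limit H ≤ 2C + 1 for a cation is an assumption added here, not something stated in the text.

```python
# Enumerate hydrocarbon fragment formulas CxHy+ matching a given
# nominal m/z, as in the worked example for the pentane spectrum.
# Uses nominal (integer) masses: 12C = 12, 1H = 1.

def hydrocarbon_fragments(mz, max_c=10):
    """Return all (carbons, hydrogens) with 12*C + 1*H == mz and a
    chemically plausible hydrogen count (assumed limit: H <= 2C + 1)."""
    matches = []
    for c in range(1, max_c + 1):
        h = mz - 12 * c
        if 0 <= h <= 2 * c + 1:
            matches.append((c, h))
    return matches

for mz in (57, 43, 29):
    print(mz, hydrocarbon_fragments(mz))  # 57 -> C4H9+, 43 -> C3H7+, 29 -> C2H5+
```

Running it reproduces the reasoning in the example: only C4H9+ fits m/z = 57, only C3H7+ fits 43, and only C2H5+ fits 29.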
Example $2$: Pentan-3-one
This time the base peak (the tallest peak - and so the commonest fragment ion) is at m/z = 57. But this peak is not produced by the same ion that gives the m/z = 57 peak in pentane.
If you remember, the m/z = 57 peak in pentane was produced by [CH3CH2CH2CH2]+. If you look at the structure of pentan-3-one, it's impossible to get that particular fragment from it.
Work along the molecule mentally chopping bits off until you come up with something that adds up to 57. With a small amount of patience, you'll eventually find [CH3CH2CO]+ - which is produced by this fragmentation:
You would get exactly the same products whichever side of the CO group you split the molecular ion. The m/z = 29 peak is produced by the ethyl ion - which once again could be formed by splitting the molecular ion either side of the CO group.
Peak heights and the stability of ions
The more stable an ion is, the more likely it is to form. The more of a particular sort of ion that's formed, the higher its peak height will be. We'll look at two common examples of this.
Examples involving carbocations (carbonium ions)
Summarizing the most important conclusion from the page on carbocations:
Order of stability of carbocations
primary < secondary < tertiary
Applying the logic of this to fragmentation patterns, it means that a split which produces a secondary carbocation is going to be more successful than one producing a primary one. A split producing a tertiary carbocation will be more successful still. Let's look at the mass spectrum of 2-methylbutane. 2-methylbutane is an isomer of pentane - isomers are molecules with the same molecular formula, but a different spatial arrangement of the atoms.
Look first at the very strong peak at m/z = 43. This is caused by a different ion than the corresponding peak in the pentane mass spectrum. This peak in 2-methylbutane is caused by:
The ion formed is a secondary carbocation - it has two alkyl groups attached to the carbon with the positive charge. As such, it is relatively stable. The peak at m/z = 57 is much taller than the corresponding line in pentane. Again a secondary carbocation is formed - this time, by:
You would get the same ion, of course, if the left-hand CH3 group broke off instead of the bottom one as we've drawn it. In these two spectra, this is probably the most dramatic example of the extra stability of a secondary carbocation.
Examples involving acylium ions, [RCO]+
Ions with the positive charge on the carbon of a carbonyl group, C=O, are also relatively stable. This is fairly clearly seen in the mass spectra of ketones like pentan-3-one.
The base peak, at m/z=57, is due to the [CH3CH2CO]+ ion. We've already discussed the fragmentation that produces this.
Using mass spectra to distinguish between compounds
Suppose you had to suggest a way of distinguishing between pentan-2-one and pentan-3-one using their mass spectra.
pentan-2-one CH3COCH2CH2CH3
pentan-3-one CH3CH2COCH2CH3
Each of these is likely to split to produce ions with a positive charge on the CO group. In the pentan-2-one case, there are two different ions like this:
• [CH3CO]+
• [COCH2CH2CH3]+
That would give you strong lines at m/z = 43 and 71.
With pentan-3-one, you would only get one ion of this kind:
• [CH3CH2CO]+
In that case, you would get a strong line at 57. You don't need to worry about the other lines in the spectra - the 43, 57 and 71 lines give you plenty of difference between the two. The 43 and 71 lines are missing from the pentan-3-one spectrum, and the 57 line is missing from the pentan-2-one spectrum.
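The fragment masses used in this comparison follow from simple nominal-mass bookkeeping. A minimal sketch (the helper name `nominal_mass` is made up for illustration):

```python
# Nominal masses of the acylium fragments used to distinguish
# pentan-2-one from pentan-3-one (integer masses: C=12, H=1, O=16).

def nominal_mass(c, h, o=0):
    return 12 * c + 1 * h + 16 * o

fragments = {
    "[CH3CO]+":       nominal_mass(2, 3, 1),  # from pentan-2-one
    "[COCH2CH2CH3]+": nominal_mass(4, 7, 1),  # from pentan-2-one
    "[CH3CH2CO]+":    nominal_mass(3, 5, 1),  # from pentan-3-one
}
for name, mz in fragments.items():
    print(name, mz)  # 43, 71 and 57 respectively
```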
The two mass spectra look like this:
Computer matching of mass spectra
As you've seen, the mass spectra of even very similar organic compounds will be quite different because of the different fragmentations that can occur. Provided you have a computer database of mass spectra, any unknown spectrum can be computer analysed and simply matched against the database.
This page describes how a mass spectrum is produced using a mass spectrometer.
How a mass spectrometer works
If something is moving and you subject it to a sideways force, instead of moving in a straight line, it will move in a curve - deflected out of its original path by the sideways force. Suppose you had a cannonball traveling past you and you wanted to deflect it as it went by you. All you've got is a jet of water from a hose-pipe that you can squirt at it. Frankly, it's not going to make a lot of difference! Because the cannonball is so heavy, it will hardly be deflected at all from its original course. But suppose instead, you tried to deflect a table tennis ball traveling at the same speed as the cannonball using the same jet of water. Because this ball is so light, you will get a huge deflection.
The amount of deflection you will get for a given sideways force depends on the mass of the ball. If you knew the speed of the ball and the size of the force, you could calculate the mass of the ball if you knew what sort of curved path it was deflected through. The less the deflection, the heavier the ball. You can apply exactly the same principle to atomic sized particles.
An outline of what happens in a mass spectrometer
Atoms can be deflected by magnetic fields - provided the atom is first turned into an ion. Electrically charged particles are affected by a magnetic field although electrically neutral ones aren't.
The sequence is:
• Stage 1: Ionization: The atom is ionized by knocking one or more electrons off to give a positive ion. This is true even for things which you would normally expect to form negative ions (chlorine, for example) or never form ions at all (argon, for example). Mass spectrometers always work with positive ions.
• Stage 2: Acceleration: The ions are accelerated so that they all have the same kinetic energy.
• Stage 3: Deflection: The ions are then deflected by a magnetic field according to their masses. The lighter they are, the more they are deflected. The amount of deflection also depends on the number of positive charges on the ion - in other words, on how many electrons were knocked off in the first stage. The more the ion is charged, the more it gets deflected.
• Stage 4: Detection: The beam of ions passing through the machine is detected electrically.
A full diagram of a mass spectrometer
Understanding what's going on
The need for a vacuum
It's important that the ions produced in the ionization chamber have a free run through the machine without hitting air molecules.
Ionization
The vaporized sample passes into the ionization chamber. The electrically heated metal coil gives off electrons which are attracted to the electron trap which is a positively charged plate.
The particles in the sample (atoms or molecules) are therefore bombarded with a stream of electrons, and some of the collisions are energetic enough to knock one or more electrons out of the sample particles to make positive ions. Most of the positive ions formed will carry a charge of +1 because it is much more difficult to remove further electrons from an already positive ion. These positive ions are persuaded out into the rest of the machine by the ion repeller which is another metal plate carrying a slight positive charge.
Acceleration
The positive ions are repelled away from the very positive ionization chamber and pass through three slits, the final one of which is at 0 volts. The middle slit carries some intermediate voltage. All the ions are accelerated into a finely focused beam.
Deflection
Different ions are deflected by the magnetic field by different amounts. The amount of deflection depends on:
• the mass of the ion. Lighter ions are deflected more than heavier ones.
• the charge on the ion. Ions with 2 (or more) positive charges are deflected more than ones with only 1 positive charge.
These two factors are combined into the mass/charge ratio. Mass/charge ratio is given the symbol m/z (or sometimes m/e). For example, if an ion had a mass of 28 and a charge of 1+, its mass/charge ratio would be 28. An ion with a mass of 56 and a charge of 2+ would also have a mass/charge ratio of 28.
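The two worked ions above can be checked with a one-line calculation:

```python
# m/z is simply the ion's (nominal) mass divided by its charge number.
# The two ions in the example are indistinguishable by m/z alone.

def mass_to_charge(mass, charge):
    return mass / charge

print(mass_to_charge(28, 1))  # 28.0
print(mass_to_charge(56, 2))  # 28.0, the same m/z as the 28 amu, 1+ ion
```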
In the last diagram, ion stream A is most deflected - it will contain ions with the smallest mass/charge ratio. Ion stream C is the least deflected - it contains ions with the greatest mass/charge ratio.
It makes it simpler to talk about this if we assume that the charge on all the ions is 1+. Most of the ions passing through the mass spectrometer will have a charge of 1+, so that the mass/charge ratio will be the same as the mass of the ion. Assuming 1+ ions, stream A has the lightest ions, stream B the next lightest and stream C the heaviest. Lighter ions are going to be more deflected than heavy ones.
Detection
Only ion stream B makes it right through the machine to the ion detector. The other ions collide with the walls where they will pick up electrons and be neutralized. Eventually, they get removed from the mass spectrometer by the vacuum pump.
When an ion hits the metal box, its charge is neutralized by an electron jumping from the metal on to the ion (right hand diagram). That leaves a space amongst the electrons in the metal, and the electrons in the wire shuffle along to fill it. A flow of electrons in the wire is detected as an electric current which can be amplified and recorded. The more ions arriving, the greater the current.
Detecting the other ions
How might the other ions be detected - those in streams A and C which have been lost in the machine?
Remember that stream A was most deflected - it has the smallest value of m/z (the lightest ions if the charge is 1+). To bring them on to the detector, you would need to deflect them less - by using a smaller magnetic field (a smaller sideways force). To bring those with a larger m/z value (the heavier ions if the charge is +1) on to the detector you would have to deflect them more by using a larger magnetic field.
If you vary the magnetic field, you can bring each ion stream in turn on to the detector to produce a current which is proportional to the number of ions arriving. The mass of each ion being detected is related to the size of the magnetic field used to bring it on to the detector. The machine can be calibrated to record current (which is a measure of the number of ions) against m/z directly. The mass is measured on the 12C scale.
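For a single-focusing magnetic sector like the one described, the standard sector relation m/z = B²r²/(2V) connects the field strength to the mass brought onto the detector. The sketch below assumes this textbook equation; the function name and the numerical values are illustrative, not taken from the text.

```python
# Magnetic-sector relation (SI units): an ion follows the detector path
# of radius r when  m/z = B^2 * r^2 / (2 * V),
# where B is the field (tesla) and V the accelerating voltage (volts).
# The example numbers below are illustrative assumptions.

def mz_on_detector(B, r, V):
    """m/z (kg per coulomb) brought onto the detector."""
    return (B ** 2) * (r ** 2) / (2 * V)

AMU = 1.660539e-27       # kg per atomic mass unit
E_CHARGE = 1.602177e-19  # coulombs per elementary charge

mz_si = mz_on_detector(0.5, 0.25, 4000)   # B = 0.5 T, r = 0.25 m, V = 4000 V
print(mz_si / (AMU / E_CHARGE))           # m/z in amu per elementary charge
```

Note the quadratic dependence on B: doubling the field brings ions of four times the m/z onto the same detector path, which is why scanning the field sweeps through the whole mass range.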
What the mass spectrometer output looks like
The output from the chart recorder is usually simplified into a "stick diagram". This shows the relative current produced by ions of varying mass/charge ratio. The stick diagram for molybdenum looks like this:
You may find diagrams in which the vertical axis is labeled as either "relative abundance" or "relative intensity". Whichever is used, it means the same thing. The vertical scale is related to the current received by the chart recorder - and so to the number of ions arriving at the detector: the greater the current, the more abundant the ion.
As you will see from the diagram, the commonest ion has a mass/charge ratio of 98. Other ions have mass/charge ratios of 92, 94, 95, 96, 97 and 100. That means that molybdenum consists of 7 different isotopes. Assuming that the ions all have a charge of 1+, that means that the masses of the 7 isotopes on the carbon-12 scale are 92, 94, 95, 96, 97, 98 and 100. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Mass_Spectrometry/How_the_Mass_Spectrometer_Works.txt |
When molecules go through a mass spectrometer, some of them arrive intact at the detector, but many of them break into pieces in a variety of different ways. To establish a charge on a molecule, an electron had to be removed; removal of that electron is effected through a collision, usually with a high-energy electron. During that collision, energy is transferred from the high-energy electron to the molecule, and that energy has to go somewhere. Part of it gets partitioned into various bond vibrations, so bonds start to vibrate quite a lot, until some of them snap completely. The molecular ion breaks apart and forms a fragment ion.
Some fragment ions are very common in mass spectrometry. These ions are seen frequently for either of two reasons:
• there is not a pathway available to break the ion down.
• the ion is relatively stable, so it forms easily.
Fragmentations occur through well-defined pathways or mechanisms. A mechanism is a step-by-step series of events that happens in a reaction. It is important to understand how reactions happen, but we will look at fragmentations when we study radical reactions.
However, it is useful to know what factors make cations stable.
Some Common Ions
There are a number of ions commonly seen in mass spectrometry that tell you a little bit about the structure. Just like with anions, there are a couple of common factors that influence cation stability:
• Electronegativity plays a role. More electronegative atoms are less likely to be cations.
• Polarizability also plays a role. More polarizable atoms are more likely to be cations.
However, in most cases we will be looking at a carbon with a positive charge, and there are additional factors that distinguish between different carbocations.
• Delocalization stabilizes a cation by spreading out the charge onto two or more different atoms.
• In Lewis structure terms, the easiest way to delocalize charge is via resonance.
• Resonance can involve other carbons, like in allyl and benzyl cations.
• Resonance can also involve other atoms, like in acylium or iminium cations.
• Delocalization can also be accomplished through inductive effects. The trend in carbocations is that the more substituents on the carbocation, the greater the stability.
• Tertiary cations, with three substituents on the carbocation, are more stable than secondary cations, with two substituents on the carbocation. Secondary cations are more stable than primary ones. Primary cations are more stable than methyl cations.
Molecular orbital calculations suggest that the cation is stabilized through interaction with neighboring C-H bonds in the alkyl groups. Specifically, a C-H sigma bonding orbital has symmetry similar to the empty p orbital on the positive carbon. The lobes on the two orbitals can overlap such that they are in phase, and that allows electrons to be donated from the C-H bond to the central, electron-deficient carbon. Formally, there is a bonding interaction and an antibonding interaction between these two orbitals. Since one of these orbitals is empty, the antibonding combination remains unoccupied. The bonding combination is populated, however, and since it is lower in energy than either the p orbital or the C-H sigma bond (all bonding combinations are lower in energy than the orbitals that combine to form them), there is a net decrease in energy.
Problem MS8.
Draw as many resonance structures as you can that help explain the stability of the following cations:
a) allyl cation b) benzyl cation c) tropylium cation d) an acylium ion e) an iminium ion
Two common categories of mass spectrometry are high resolution mass spectrometry (HRMS) and low resolution mass spectrometry (LRMS). Not all mass spectrometers simply measure molecular weights as whole numbers. High resolution mass spectrometers can measure mass so accurately that they can detect the minute differences in mass between two compounds that, on a regular low-resolution instrument, would appear to be identical.
The reason is because atomic masses are not exact multiples of the mass of a proton, as we might usually think.
• An atom of 12C weighs 12.00000 amu.
• An atom of 16O weighs 15.9949 amu.
• An atom of 14N weighs 14.0031 amu.
• An atom of 1H weighs 1.00783 amu.
As a result, on a high resolution mass spectrometer, 2-octanone, C8H16O, has a molecular weight of 128.12018 instead of 128. Naphthalene, C10H8, has a molecular weight of 128.06264. Thus a high resolution mass spectrometer can supply an exact molecular formula for a compound because of the unique combination of masses that result.
• In LRMS, the molecular weight is determined to the nearest amu. The type of instrument used here is more common because it is less expensive and easier to maintain.
• In HRMS, the molecular weight in amu is determined to several decimal places. That precision allows the molecular formula to be narrowed down to only a few possibilities.
HRMS relies on the fact that the mass of an individual atom does not correspond to an integral number of atomic mass units.
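Using the exact isotope masses listed above, the 2-octanone and naphthalene figures can be reproduced. This is a minimal sketch; the `EXACT` table contains only the four masses quoted in the text.

```python
# Exact monoisotopic masses quoted above (amu).
EXACT = {"C": 12.00000, "H": 1.00783, "O": 15.9949, "N": 14.0031}

def exact_mass(formula):
    """formula: dict of element -> count, e.g. {'C': 8, 'H': 16, 'O': 1}."""
    return sum(EXACT[el] * n for el, n in formula.items())

octanone = exact_mass({"C": 8, "H": 16, "O": 1})  # 2-octanone, C8H16O
naphthalene = exact_mass({"C": 10, "H": 8})       # naphthalene, C10H8

print(round(octanone, 5))     # 128.12018
print(round(naphthalene, 5))  # 128.06264
```

Both formulas have a nominal mass of 128, yet their exact masses differ by almost 0.06 amu, which is why HRMS can tell them apart.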
Problem MS7.
Calculate the high-resolution molecular weights for the following formulae.
1. C12H20O and C11H16O2
2. C6H13N and C5H11N2
Introduction to Mass Spectrometry
• The y-axis is usually labeled "abundance" or "relative intensity". This axis shows the relative ratios of molecules in the sample that have a particular mass.
• The x-axis is labeled "m/z" and corresponds to molecular mass. The designation m/z refers to the fact that this technique really measures the ratio of an ion's mass to its charge.
• A mass spectrum does not show the results from one molecule, but from millions of molecules. Because it is displaying results for a population of molecules, more than one mass is shown.
• Many of the molecules in the sample fall apart during the experiment.
• A mass spectrum is a bar graph showing the weights of entire molecules as well as smaller pieces of molecules. The entire molecule must have the largest mass, the one farthest to the right, because if a molecule falls into pieces the pieces would be smaller than the whole.
Problem MS1.
The following figure shows the mass spectrum of a saturated hydrocarbon (containing only carbon and hydrogen with only single bonds between carbons, not double bonds).
1. Draw five different structures that would have the molecular weight of this compound.
2. Choose four smaller m/z values from the spectrum and draw one structure for each of them. Note that these fragments will not have complete Lewis structures.
Isotopes: 13C
Isotopomers or isotopic isomers are isomers with isotopic atoms, having the same number of each isotope of each element but differing in their positions.
Isotopomers
If you look closely at the mass spectrum of an organic compound, 2-butanone, you see a line at m/z 72, which corresponds to 4 carbons, an oxygen and 8 hydrogens.
• Usually, whole numbers are used for molecular weights in mass spectrometry.
• The atomic masses in the periodic table, out to 4 decimal places, are the average masses including different possible isotopes.
• Because mass spectrometry examines individual molecules, individual atomic masses are needed, not average ones. Usually that means using a whole number.
In addition, there are a number of other lines at lower values of m/z; these correspond to the masses of smaller pieces of those 2-butanone molecules that fall apart during the experiment. We won't look too closely at how those arise until we get to radical reactions later in the course. However, we will look at some factors that make cations stable later in this chapter.
If you look closely at the mass spectrum of 2-butanone, you'll also see another little peak at m/z 73. This is referred to as the M+1 peak (one greater than the molecular ion), and it arises because of 13C. This compound is referred to as an isotopomer; that means the same compound with a different isotope.
• 12C is about 99% abundant; 99% of carbon atoms have a mass of 12 amu.
• 13C is about 1% abundant; 1% of carbon atoms have a mass of 13 amu.
• Compounds that contain a 13C atom have a mass one larger than expected.
The chance that a molecule in a sample contains a 13C atom is related to the number of carbons present. If there is just one carbon atom in the molecule, it has a 1% chance of being a 13C. That means the M+1 peak would be only 1/100th as tall as M+, the peak for the molecular ion.
• The M+1 peak from a 13C atom is very small.
• The more carbons there are in a molecule, the bigger the M+1 peak.
• If there are 10 carbon atoms in the molecule, there is a 10% chance of a 13C atom being present. The M+1 peak is about 1/10th the size of the M+ peak.
• If there are 100 carbons in the molecule, there is a very good chance that a 13C atom is present. At that point, the M+1 peak grows to be comparable in size to the M+ peak.
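The growth of the M+1 peak with carbon count can be estimated from binomial statistics, assuming each carbon is independently 13C. The 1.1% abundance used below is an assumed value (the text rounds it to 1%), and the sketch ignores contributions from other heavy isotopes such as 2H.

```python
# Estimate the M+1 : M+ height ratio from the number of carbons,
# treating each carbon as 13C with independent probability p.
# Assumed natural abundance ~1.1%; contributions from 2H etc. ignored.

P_13C = 0.011

def m_plus_1_ratio(n_carbons, p=P_13C):
    """P(exactly one 13C) / P(no 13C) = n * p / (1 - p)."""
    return n_carbons * p / (1 - p)

for n in (1, 10, 100):
    print(n, round(m_plus_1_ratio(n), 3))  # ~0.011, ~0.111, ~1.112
```

At 100 carbons the ratio passes 1, so the M+1 peak slightly exceeds M+ rather than being a small satellite.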
Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 22 August 2008)
• Chlorinated compounds show an M+2 peak that is 1/3 as large as the M+ peak.
Note also that halogens are easily lost during mass spectrometry. If you subtract the mass of the halogen from the molecular ion mass, you will often find a peak that corresponds to the remainder of the structure.
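The roughly 3:1 ratio of M+ to M+2 for a single chlorine follows directly from the isotope abundances of 35Cl and 37Cl. The abundance values below are assumed for illustration, not quoted in the text.

```python
# Why one chlorine gives an M+2 peak about 1/3 the height of M+:
# assumed natural abundances of 35Cl (~75.8%) and 37Cl (~24.2%).
ABUND_35CL = 0.758
ABUND_37CL = 0.242

ratio = ABUND_37CL / ABUND_35CL
print(round(ratio, 2))  # ~0.32, i.e. M+2 is roughly 1/3 of M+
```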
Problem MS4.
Draw one possible structure for the compound in each of the following mass spectra.
Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 25 August 2008)
Problem MS5.
In the following mass spectrum, more than one halogen atom is present.
1. What is a possible structure of the compound?
2. Show why this pattern of molecular ions is observed.
Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 25 August 2008)
Molecular Ion and Nitrogen
Molecular Weight: Even or Odd?
Organic compounds that contain only carbon, hydrogen, and oxygen always have an even molecular weight. This phenomenon is a result of the fact that the most common elements in organic compounds, carbon and oxygen, have even atomic weights (12 and 16, respectively), so any number of carbons and oxygens will have even weights. The most common isotope of hydrogen has an odd atomic weight, but because carbon and oxygen both have even valences (carbon forms four bonds and oxygen forms two), there is always an even number of hydrogen atoms in an organic compound containing those elements, so they also add up to an even numbered weight.
Nitrogen has an even atomic weight (14), so any number of nitrogen atoms will add up to an even molecular weight. Nitrogen, however, has an odd valence (it forms three bonds), and as a result there will be an odd number of hydrogens in a nitrogenous compound, and the molecular weight will be odd because of the presence of an extra hydrogen.
Of course, if there are two nitrogens in a molecule, there will be two extra hydrogens, so the molecular weight will actually be even. That means the rule about molecular weight and nitrogen should really be expressed as:
• odd numbers of nitrogen atoms in a molecule result in an odd molecular weight.
What about those other atoms that sometimes show up in organic chemistry, such as the halogens? Halogens all have odd atomic weights (19 amu for fluorine, 35 or 37 for chlorine, 79 or 81 for bromine, and 127 for iodine). However, halogens all have a valence of 1, just like hydrogen. As a result, to add a halogen to methane, we would need to erase one of the hydrogen atoms and replace it with the halogen. Since we are just substituting one odd numbered atomic weight for another, the total weight remains even.
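The parity rules above can be checked with nominal-mass arithmetic. A minimal sketch; the example formulas are illustrative choices.

```python
# Nominal molecular weight from a formula, illustrating the nitrogen
# rule: an odd number of nitrogens gives an odd nominal weight.
NOMINAL = {"C": 12, "H": 1, "O": 16, "N": 14,
           "F": 19, "Cl": 35, "Br": 79, "I": 127}

def nominal_mw(formula):
    return sum(NOMINAL[el] * n for el, n in formula.items())

print(nominal_mw({"C": 2, "H": 7, "N": 1}))  # ethylamine C2H7N -> 45 (odd)
print(nominal_mw({"C": 2, "H": 8, "N": 2}))  # C2H8N2 -> 60 (even)
```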
Problem MS6.
Calculate molecular weights for the following compounds.
Molecular Ions
Problem MS3.
Draw an equation for the formation of a molecular ion from each of the following compounds.
The Mass Spectrometry Experiment
Mass spectrometry only works with ions, not with neutral molecules. That means a neutral molecule must become charged in order to do this experiment. It is common to generate a cation from the molecule by removing one electron. The electron is knocked off the molecule in a collision. The collision can be caused in two different ways:
• The molecule can be sent through a stream of high-energy electrons. This method is called electron ionization.
• The molecule is sent through a stream of small molecules, such as ammonia or methane. This method is called chemical ionization.
• Electron ionization frequently results in the molecule falling to pieces because of the high energy of the electrons.
• Chemical ionization results in a "softer" collision because momentum can be dissipated through various bonds in both colliding molecules. Chemical ionization results in less fragmentation of the target molecule.
• However, after chemical ionization, the ionizing molecule sometimes sticks to the target molecule, leading to a greater "molecular" mass. For example, if ammonia is used for ionization, an extra mass may be observed at 17 amu higher than expected.
The reason the x-axis on a mass spectrum is labeled m/z (mass-to-charge ratio) is to acknowledge that there are really two factors contributing to the experiment.
MALDI-TOF
Proteins and peptides have been characterized by high pressure liquid chromatography (HPLC) or SDS PAGE by generating peptide maps. These peptide maps have been used as fingerprints of proteins or as a tool to assess the purity of a known protein in a sample. Mass spectrometry gives a peptide map when proteins are digested with amino-end-specific, carboxy-end-specific, or amino-acid-specific digestive enzymes. This peptide map can be used to search a sequence database to find a good match: the more accurately the peptide masses are known, the less chance there is of bad matches.
• Basir Syed
Gas Chromatography/Mass Spectrometry (GS/MS)
A mass spectrometer creates charged particles (ions) from molecules. It then analyzes those ions to provide information about the molecular weight of the compound and its chemical structure. There are many types of mass spectrometers and sample introduction techniques which allow a wide range of analyses. This discussion will focus on mass spectrometry as it's used in the powerful and widely used method of coupling Gas Chromatography (GC) with Mass Spectrometry (MS).
Pictured above is a GC/MS instrument used in the organic teaching labs.
Gas Chromatograph (GC)
A mixture of compounds to be analysed is initially injected into the GC where the mixture is vaporized in a heated chamber. The gas mixture travels through a GC column, where the compounds become separated as they interact with the column. The chromatogram on the right shows peaks which result from this separation. Those separated compounds then immediately enter the mass spectrometer.
Mass Spectrometer (MS)
Below is a general schematic of a mass spectrometer. The blue line illustrates ions of a particular mass/charge ratio which reach the detector at a certain voltage combination. All mass spectrometers consist of three distinct regions:
1. Ionizer
2. Ion Analyzer
3. Detector
Ionizer
In the GC-MS discussed in this introduction, the charged particles (ions) required for mass analysis are formed by Electron Impact (EI) Ionization. The gas molecules exiting the GC are bombarded by a high-energy electron beam (70 eV). An electron which strikes a molecule may impart enough energy to remove another electron from that molecule. Methanol, for example, would undergo the following reaction in the ionizing region:
CH3OH + 1 electron → CH3OH+. + 2 electrons
(note: the symbols +. indicate that a radical cation was formed)
EI Ionization usually produces singly charged ions containing one unpaired electron. A charged molecule which remains intact is called the molecular ion. Energy imparted by the electron impact and, more importantly, instability in a molecular ion can cause that ion to break into smaller pieces (fragments). The methanol ion may fragment in various ways, with one fragment carrying the charge and one fragment remaining uncharged. For example:
CH3OH+. (molecular ion) → CH2OH+ (fragment ion) + H.
or
CH3OH+. (molecular ion) → CH3+ (fragment ion) + .OH
Ion Analyzer
Molecular ions and fragment ions are accelerated by manipulation of the charged particles through the mass spectrometer. Uncharged molecules and fragments are pumped away. The quadrupole mass analyzer in this example uses positive (+) and negative (-) voltages to control the path of the ions. Ions travel down the path based on their mass to charge ratio (m/z). EI ionization produces singly charged particles, so the charge (z) is one. Therefore an ion's path will depend on its mass. If the (+) and (-) rods shown in the mass spectrometer schematic were ‘fixed' at a particular rf/dc voltage ratio, then one particular m/z would travel the successful path shown by the solid line to the detector. However, voltages are not fixed, but are scanned so that ever increasing masses can find a successful path through the rods to the detector.
Detector
There are many types of detectors, but most work by producing an electronic signal when struck by an ion. Timing mechanisms which integrate those signals with the scanning voltages allow the instrument to report which m/z strikes the detector. The mass analyzer sorts the ions according to m/z and the detector records the abundance of each m/z. Regular calibration of the m/z scale is necessary to maintain accuracy in the instrument. Calibration is performed by introducing a well known compound into the instrument and "tweaking" the circuits so that the compound's molecular ion and fragment ions are reported accurately.
Interpreting spectra
A simple spectrum, that of methanol, is shown below. CH3OH+. (the molecular ion) and fragment ions appear in this spectrum.
Major peaks are shown in the table next to the spectrum. The x-axis of this bar graph is the increasing m/z ratio. The y-axis is the relative abundance of each ion, which is related to the number of times an ion of that m/z ratio strikes the detector. Assignment of relative abundance begins by assigning the most abundant ion a relative abundance of 100% (CH2OH+ in this spectrum). All other ions are shown as a percentage of that most abundant ion. For example, there is approximately 64% of the ion CHO+ compared with the ion CH2OH+ in this spectrum. The y-axis may also be shown as abundance (not relative). Relative abundance is a way to directly compare spectra produced at different times or using different instruments.
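The normalization described above is simple to automate. The sketch below converts raw detector counts to relative abundance; the counts used are hypothetical, chosen only to mimic the ratios in the methanol spectrum.

```python
# Convert raw ion counts to relative abundance: the most abundant ion (the
# base peak) is assigned 100%, and every other ion is scaled against it.
def relative_abundance(counts):
    base = max(counts.values())
    return {mz: round(100 * c / base, 1) for mz, c in counts.items()}

# Hypothetical m/z -> counts, loosely mimicking methanol (base peak CH2OH+ at 31)
raw = {15: 130, 29: 640, 31: 1000, 32: 670}
print(relative_abundance(raw))  # {15: 13.0, 29: 64.0, 31: 100.0, 32: 67.0}
```

Because the values are ratios, spectra acquired at different times or on different instruments can be compared directly.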
EI ionization introduces a great deal of energy into molecules. It is known as a "hard" ionization method. This is very good for producing fragments which generate information about the structure of the compound, but quite often the molecular ion does not appear or is a smaller peak in the spectrum.
Of course, real analyses are performed on compounds far more complicated than methanol. Spectral interpretation can become complicated as initial fragments undergo further fragmentation, and as rearrangements occur. However, a wealth of information is contained in a mass spectrum and much can be determined using basic organic chemistry "common sense".
Following is some general information which will aid EI mass spectra interpretation:
• Molecular ion (M.+): If the molecular ion appears, it will be the highest mass in an EI spectrum (except for isotope peaks discussed below). This peak will represent the molecular weight of the compound. Its appearance depends on the stability of the compound. Double bonds, cyclic structures and aromatic rings stabilize the molecular ion and increase the probability of its appearance.
• Reference Spectra: Mass spectral patterns are reproducible. The mass spectra of many compounds have been published and may be used to identify unknowns. Instrument computers generally contain spectral libraries which can be searched for matches.
• Fragmentation: General rules of fragmentation exist and are helpful to predict or interpret the fragmentation pattern produced by a compound. Functional groups and overall structure determine how some portions of molecules will resist fragmenting, while other portions will fragment easily. A detailed discussion of those rules is beyond the scope of this introduction, and further information may be found in your organic textbook or in mass spectrometry reference books. A few brief examples by functional group are described (see Fragmentation Patterns).
• Isotopes: Isotopes occur in compounds analyzed by mass spectrometry in the same abundances that they occur in nature. A few of the isotopes commonly encountered in the analyses of organic compounds are below along with an example of how they can aid in peak identification.
Relative Isotope Abundance of Common Elements
Element     Isotope   Relative Abundance   Isotope   Relative Abundance   Isotope   Relative Abundance
Carbon      12C       100                  13C       1.11
Hydrogen    1H        100                  2H        0.016
Nitrogen    14N       100                  15N       0.38
Oxygen      16O       100                  17O       0.04                 18O       0.20
Sulfur      32S       100                  33S       0.78                 34S       4.40
Chlorine    35Cl      100                  37Cl      32.5
Bromine     79Br      100                  81Br      98.0
Methyl Bromide: An example of how isotopes can aid in peak identification.
The ratio of peaks containing 79Br and its isotope 81Br (100/98) confirms the presence of bromine in the compound.
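The same check works for any single chlorine or bromine atom: the expected M+2 intensity follows directly from the isotope table above. The function below is an illustrative sketch (its names are my own, and the simple n-atom scaling is only a first-order estimate).

```python
# Expected M+2 peak intensity (% of the M peak) from the isotope table above:
# 37Cl is 32.5% of 35Cl, and 81Br is 98.0% of 79Br.
M_PLUS_2 = {"Cl": 32.5, "Br": 98.0}

def m_plus_2_percent(halogen, n_atoms=1):
    # Exact for one atom; for several atoms this first-order estimate
    # ignores the binomial terms that also produce M+4, M+6, ...
    return M_PLUS_2[halogen] * n_atoms

print(m_plus_2_percent("Br"))  # 98.0 -> methyl bromide shows m/z 94 and 96 at ~100:98
print(m_plus_2_percent("Cl"))  # 32.5 -> one Cl gives the familiar ~3:1 pattern
```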
Other Methods
An array of ionization methods and mass analyzers are available to meet the needs of many types of chemical analysis. A few are listed here with a highlight of their usefulness.
Sample introduction/ionization methods
Electron Impact (EI)
  Typical analytes: relatively small, volatile
  Sample introduction: GC or liquid/solid probe
  Mass range: to 1,000 Daltons
  Method highlights: hard method; versatile; provides structure info

Chemical Ionization (CI)
  Typical analytes: relatively small, volatile
  Sample introduction: GC or liquid/solid probe
  Mass range: to 1,000 Daltons
  Method highlights: soft method; molecular ion peak [M+H]+

Electrospray (ESI)
  Typical analytes: peptides, proteins; nonvolatile
  Sample introduction: liquid chromatography or syringe
  Mass range: to 200,000 Daltons
  Method highlights: soft method; ions often multiply charged

Fast Atom Bombardment (FAB)
  Typical analytes: carbohydrates, organometallics, peptides; nonvolatile
  Sample introduction: sample mixed in viscous matrix
  Mass range: to 6,000 Daltons
  Method highlights: soft method, but harder than ESI or MALDI

Matrix Assisted Laser Desorption (MALDI)
  Typical analytes: peptides, proteins, nucleotides
  Sample introduction: sample mixed in solid matrix
  Mass range: to 500,000 Daltons
  Method highlights: soft method; very high mass
Mass Analyzers
Analyzer System Highlights
Quadrupole: unit mass resolution, fast scan, low cost
Sector (Magnetic and/or Electrostatic): high resolution, exact mass
Time-of-Flight (TOF): theoretically no limitation on maximum m/z, high throughput
Ion Cyclotron Resonance (ICR): very high resolution, exact mass, can perform ion chemistry
Linked Systems
GC/MS: Gas chromatography coupled to mass spectrometry
LC/MS: Liquid chromatography coupled to electrospray ionization mass spectrometry
Outside Links
• A library of spectra can be found in the NIST WebBook, a data collection of the National Institute of Standards and Technology.
• Useful tools such as an exact mass calculator and a spectrum generator can be found in the MS Tools section of Scientific Instrument Services webpage.
• The JEOL Mass Spectrometry website contains tutorials, reference data and links to other sites.
• More general information and tutorials can be found in Scimedia, an educational resource.
• At the University of Arizona, the Wysocki Research Group studies surface-induced dissociation (SID) tandem mass spectrometry.
• Many more interesting and useful links can be found by following the site links in the above references.
Contributors and Attributions
Dr. Linda Breci, Associate Director Arizona Proteomics Consortium University of Arizona
Mass Spec
Here are a list of steps to follow when interpreting a mass spectrum. This simplified list will help you to interpret many spectra, however there are other mechanisms of fragmentation which cannot be covered in this brief tutorial.
Steps to interpret a mass spectrum
1. Look for the molecular ion peak.
• This peak (if it appears) will be the highest mass peak in the spectrum, except for isotope peaks.
• Nominal MW (i.e., rounded off) will be an even number for compounds containing only C, H, O, S, Si.
• Nominal MW will be an odd number if the compound also contains an odd number of N (1,3,...).
2. Try to calculate the molecular formula:
• The isotope peaks can be very useful, and are best explained with an example.
• Carbon-12 has an isotope, carbon-13. Their abundances are 12C = 100%, 13C = 1.1%. This means that for every 100 12C atoms there are 1.1 13C atoms.
• If a compound contains 6 carbons, then each atom contributes a 1.1% abundance of 13C.
• Therefore, if the molecular ion peak is 100%, then the isotope peak (1 mass unit higher) would be 6 x 1.1% = 6.6%.
• If the molecular ion peak is not 100% then you can calculate the relative abundance of the isotope peak to the ion peak. For example, if the molecular ion peak were 34% and the isotope peak 2.3%: (2.3/34)x100 = 6.8%. 6.8% is the relative abundance of the isotope peak to the ion peak. Next, divide the relative abundance by the isotope abundance: 6.8/1.1=6 carbons.
• Follow this order when looking for information provided by isotopes: (A simplified table of isotopes is provided in the introduction, more detailed tables can be found in chemistry texts.)
• Look for A+2 elements: O, Si, S, Cl, Br
• Look for A+1 elements: C, N
• "A" elements: H, F, P, I
3. Calculate the total number of rings plus double bonds:
• For the molecular formula: CxHyNzOn
• rings + double bonds = x - (1/2)y + (1/2)z + 1
4. Postulate the molecular structure consistent with abundance and m/z of fragments.
• More information on specific fragmentation can be found in the quiz for each functional group.
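Steps 2 and 3 above can be sketched in code. The worked numbers reproduce the example from step 2 (an isotope peak of 2.3% against a 34% molecular ion) and the rings-plus-double-bonds formula from step 3; the helper names are my own.

```python
# Step 2: estimate the carbon count from the M+1 isotope peak (13C = 1.1%).
def carbon_count(m_abundance, m1_abundance):
    relative = 100.0 * m1_abundance / m_abundance  # isotope peak as % of M
    return round(relative / 1.1)

# Step 3: rings + double bonds for CxHyNzOn = x - y/2 + z/2 + 1
def rings_plus_double_bonds(x, y, z=0):
    return x - y / 2 + z / 2 + 1

print(carbon_count(34, 2.3))          # 6, as in the worked example above
print(rings_plus_double_bonds(6, 6))  # benzene, C6H6 -> 4.0 (one ring + three C=C)
```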
Following are examples of compounds listed by functional group, which demonstrate patterns which can be seen in mass spectra of compounds ionized by electron impact ionization. These examples do not provide information about the fragmentation mechanisms that cause these patterns. Additional information can be found in mass spectrometry reference books.
Alcohol
An alcohol's molecular ion is small or non-existent. Cleavage of the C-C bond next to the oxygen usually occurs. A loss of H2O may occur as in the spectra below.
3-Pentanol (C5H12O) with MW = 88.15
Aldehyde
Cleavage of bonds next to the carbonyl group results in the loss of hydrogen (molecular ion less 1) or the loss of CHO (molecular ion less 29).
3-Phenyl-2-propenal (C9H8O) with MW = 132.16
Alkane
Molecular ion peaks are present, possibly with low intensity. The fragmentation pattern contains clusters of peaks 14 mass units apart (which represent loss of (CH2)nCH3).
Hexane (C6H14) with MW = 86.18
Amide
Primary amides show a base peak due to the McLafferty rearrangement.
3-Methylbutyramide (C5H11NO) with MW = 101.15
Amine
Molecular ion peak is an odd number. Alpha-cleavage dominates aliphatic amines.
n-Butylamine (C4H11N) with MW = 73.13
Another example is a secondary amine shown below. Again, the molecular ion peak is an odd number. The base peak is from the C-C cleavage adjacent to the C-N bond.
n-Methylbenzylamine (C8H11N) with MW = 121.18
Aromatic
Molecular ion peaks are strong due to the stable structure.
Naphthalene (C10H8) with MW = 128.17
Carboxylic Acid
In short chain acids, peaks due to the loss of OH (molecular ion less 17) and COOH (molecular ion less 45) are prominent due to cleavage of bonds next to C=O.
2-Butenoic acid (C4H6O2) with MW = 86.09
Ester
Fragments appear due to bond cleavage next to C=O (alkoxy group loss, -OR) and hydrogen rearrangements.
Ethyl acetate (C4H8O2) with MW = 88.11
Ether
Fragmentation tends to occur alpha to the oxygen atom (C-C bond next to the oxygen).
Ethyl methyl ether (C3H8O) with MW = 60.10
Halide
The presence of chlorine or bromine atoms is usually recognizable from isotopic peaks.
1-Bromopropane (C3H7Br) with MW = 123.00
Ketone
Major fragmentation peaks result from cleavage of the C-C bonds adjacent to the carbonyl.
4-Heptanone (C7H14O) with MW = 114.19
Contributors and Attributions
Dr. Linda Breci, Associate Director Arizona Proteomics Consortium University of Arizona
Mass Spectroscopy: Quizzes
1 Identify the molecular ion in this spectrum.
a) 45
b) 46
c) 47
2 For this same spectrum, choose the compound that the spectrum represents.
Web Reference
a) formic acid
b) 1-propanol
c) ethanol
d) methanol
e) isopropyl alcohol
3 Find the alkyl ion series in the spectrum below. (Check the hint!)
Web Reference
a) 15, 29, 43, 57
b) 31, 45
c) none
4 Find the alkyl LOSS ion series in the same spectra shown below. (Check the hint!)
Web Reference
a) 15, 29, 43, 57
b) 31, 45
c) none
5 For the same spectrum shown in the previous two questions, choose the compound that the spectrum represents.
Web Reference
a) 2-methyl-2-propanol
b) 1-butanol
c) 2-butanol
d) 1-pentanol
e) 2-methyl-1-propanol
6 Identify the molecular ion in this spectrum.
Web Reference
a) 70
b) 71
c) 87
d) 88
7 For the same spectrum, choose the compound that the spectrum represents.
Web Reference
a) 1-hexanol
b) 1-pentanol
c) 2-methyl-2-butanol
d) 2-pentanol
e) 1-butanol
8 Choose the compound that this spectrum represents.
Web Reference
a) 2-methyl-2-butanol
b) 2-methyl-2-propanol
c) 2-butanol
d) 1-butanol
e) 2-methyl-1-propanol
9 Choose the compound that this spectrum represents.
Web Reference
a) 2-pentanol
b) 2-methyl-2-propanol
c) 1-butanol
d) 2-methyl-1-propanol
e) 2-butanol
Mass Spectra Interpretation: ALDEHYDES
Electron impact mass spectrometry produced the spectra below. Many peaks are labelled to aid interpretation: m/z (relative abundance). Look at the hints and follow the Web Reference link for help. You will need a periodic table, a table of common isotopes (which you can find in the Web Reference), a calculator, and scratch paper to work this quiz.
2 For this same spectrum, choose the compound that the spectrum represents.
a) butanal
b) propanal
c) 2-propenal
d) methoxy-ethene
e) ethanediol
3. Find the alkyl ion series in the spectra below. (Check the hint!)
Web Reference
a) 15, 29, 43
b) 29, 43, 57
c) none
4. Find the alkyl LOSS ion series in the same spectra shown below. (Check the hint!)
Web Reference
a) 15, 29, 43
b) 29, 43, 57
c) none
5. For this same spectrum, choose the compound that the spectrum represents.
Web Reference
a) 2-propenal
b) 2-methyl propanal
c) pentanal
d) 2-oxo-propanal
e) butanal
6. Choose the compound that this spectrum represents.
Web Reference
a) hexanal
b) 3-methyl butanal
c) pentanal
d) 2-methyl propanal
e) 2,2-dimethyl propanal
7 Choose the compound that this spectrum represents.
Web Reference
a) 2-methyl propanal
b) butanal
c) 2-oxo-propanal
d) 2-propenal
e) pentanal
8 Choose the compound that this spectrum represents.
Web Reference
a) 2,2-dimethyl propanal
b) hexanal
c) 2-oxo-propanal
d) 3-methyl-butanal
e) pentanal
Electrospray ionization is a soft ionization technique that is typically used to determine the molecular weights of proteins, peptides, and other biological macromolecules. Soft ionization is a useful technique when considering biological molecules of large molecular mass, such as those mentioned above, because the process does not fragment the macromolecules into smaller charged particles; rather, it turns the macromolecule being ionized into small droplets. These droplets are then further desolvated into even smaller droplets, which creates molecules with attached protons. These protonated and desolvated molecular ions are then passed through the mass analyzer to the detector, and the mass of the sample can be determined.
Introduction
Electrospray ionization mass spectrometry is a desorption ionization method. Desorption ionization methods can be performed on solid or liquid samples, and allow the sample to be nonvolatile or thermally unstable. This means that samples such as proteins, peptides, oligopeptides, and some inorganic molecules can be ionized. Electrospray ionization mass spectrometry is best suited to molecules of fairly large mass. The instrument detects a limited m/z range, so the mass of an unknown injected sample can be determined as long as its ions fall within that range. This quantitative analysis is done by considering the mass-to-charge ratios of the various peaks in the spectrum (Figure 1). The spectrum is shown with the mass-to-charge (m/z) ratio on the x-axis, and the relative intensity (%) of each peak shown on the y-axis. Calculations to determine the unknown mass, Mr, from the spectral data can then be performed using
\[ p = \dfrac{m}{z}\]
\[p_1 = \dfrac{M_r + z_1}{ z_1} \]
\[p_2 = \dfrac{M_r + (z_1 - 1)}{z_1 - 1}\]
where p1 and p2 are adjacent peaks. Peak p1 comes before peak p2 in the spectrum, and has a lower m/z value. The z1 value represents the charge of peak one. It should be noted that as the m/z value increases, the number of protons attached to the molecular ion decreases. Figure 1 below illustrates these concepts. Electrospray ionization mass spectrometry research was pioneered by the analytical chemistry professor John Bennett Fenn, who shared the Nobel Prize in Chemistry with Koichi Tanaka in 2002 for his work on the subject.
Calculations for m/z in spectrum
[M + 6H]6+: m/z = 15006/6 = 2501
[M + 5H]5+: m/z = 15005/5 = 3001
[M + 4H]4+: m/z = 15004/4 = 3751
[M + 3H]3+: m/z = 15003/3 = 5001
[M + 2H]2+: m/z = 15002/2 = 7501
[M + H]+:   m/z = 15001/1 = 15001
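The table above, and the inverse problem of recovering Mr from two adjacent peaks, can both be expressed in a few lines. This is a sketch assuming M = 15000 Da and that each charge comes from one attached proton (about 1 Da); the deconvolution formula follows from solving the p1 and p2 equations given earlier simultaneously.

```python
# Forward calculation: m/z of [M + zH]z+ (each proton adds ~1 Da of mass).
def esi_mz(M, z):
    return (M + z) / z

# Inverse calculation: for adjacent peaks p1 < p2, solving
#   p1 = (Mr + z1)/z1  and  p2 = (Mr + z1 - 1)/(z1 - 1)
# gives z1 = (p2 - 1)/(p2 - p1) and Mr = z1*(p1 - 1).
def deconvolute(p1, p2):
    z1 = round((p2 - 1) / (p2 - p1))
    Mr = z1 * (p1 - 1)
    return z1, Mr

print([esi_mz(15000, z) for z in range(6, 0, -1)])
# [2501.0, 3001.0, 3751.0, 5001.0, 7501.0, 15001.0] -- matches the table above
print(deconvolute(7501, 15001))  # (2, 15000)
```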
Sample Preparation
Samples for injection into the electrospray ionization mass spectrometer work best if they are first purified. Purity is important because this technique does not work well when mixtures are used as the analyte. For this reason a means of purification is often employed to inject a homogeneous sample into the capillary needle. High performance liquid chromatography, capillary electrophoresis, and liquid-solid column chromatography are methods of choice for this purpose. The chosen purification method is then attached to the capillary needle, and the sample can be introduced directly.
Advantages and Disadvantages
There are some clear advantages to using electrospray ionization mass spectrometry as an analytical method. One advantage is its ability to handle samples that have large masses. Another advantage is that this ionization method is one of the softest available, so it can analyze biological samples that are defined by non-covalent interactions. A quadrupole mass analyzer can also be used for this method, which means that a sample's structure can be determined fairly easily. The m/z range of the quadrupole instrument is fairly small, which means that the mass of the sample can be determined with a high degree of accuracy. Finally, the sensitivity of this instrument is impressive, making it useful for accurate quantitative and qualitative measurements.
Some disadvantages to electrospray ionization mass spectrometry are present as well. A major disadvantage is that this technique cannot analyze mixtures very well, and when forced to do so, the results are unreliable. The apparatus is also very difficult to clean and has a tendency to become overly contaminated with residues from previous experiments. Finally, the multiple charges that are attached to the molecular ions can make for confusing spectral data. This confusion is further fueled by use of a mixed sample, which is yet another reason why mixtures should be avoided when using an electrospray ionization mass spectrometer.
Apparatus
Capillary Needle
The capillary needle is the inlet into the apparatus for the liquid sample. Once in the capillary needle, the liquid sample is nebulized and charged. A large pressure is applied to the capillary needle, which nebulizes the liquid sample into a fine mist. The stainless steel capillary needle is also surrounded by an electrode that is held at a steady voltage of around 4000 volts. This applied voltage places a charge on the droplets. Therefore, the mist ejected from the needle is composed of charged molecular ions.
Desolvating Capillary
The molecular ions are oxidized upon entering the desolvating capillary, and a continual voltage is applied to the gas chamber in which this capillary is located. Here the desolvation process begins, through the use of a dry gas or heat, and it continues through various pumping stages as the molecular ion travels towards the mass analyzer. An example of a dry gas would be N2 gas that has been dehydrated. The gas or heat provides a means of evaporation, or desolvation, for the ionized droplets. As the droplets become smaller, their electric field densities become more concentrated. The increase in electric field density causes the like charges to repel one another, which induces an increase in surface tension. The point where the droplet can no longer support this increase in surface tension is known as the Rayleigh limit. At this point, the droplet divides into smaller droplets of either positive or negative charge. This process is referred to as a coulombic explosion, or the ions are described as exiting the droplet through the "Taylor cone". Once the molecular ions have reached the entrance to the mass analyzer, they have been effectively reduced through protonation.
Mass Analyzer
Mass Analyzers (Mass Spectrometry) are used to determine the mass-to-charge ratio (m/z), this ratio is used to differentiate between molecular ions that were formed in the desolvating capillary. In order for a mass-to-charge ratio to be determined, the mass analyzer must be able to separate even the smallest masses. The ability of the analyzer to resolve the mass peaks can be defined with the following equation;
\[R = \dfrac{m}{\Delta m} \]
This equation represents the mass of the first peak (m), divided by the difference between the neighboring peaks \(\Delta m\). The better the resolution, the more useful the data. The mass analyzer must also be able to measure the ion currents produced by the multiply charged particles that are created in this process.
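As a quick numeric illustration of the definition (the function name is my own):

```python
# R = m / delta_m: resolving m/z 500.0 from a neighbor 0.1 units away
# requires a resolution of at least 5000.
def resolution(m, delta_m):
    return m / delta_m

print(resolution(500.0, 0.1))  # 5000.0
```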
Mass analyzers use electrostatic lenses to direct the beam of molecular ions to the analyzer. A vacuum system is used to maintain a low-pressure environment in order to prevent unwanted interactions between the molecular ions and any components that may be present in the atmosphere. These atmospheric components can affect the determined mass-to-charge ratio, so it is best to keep them to a minimum. The mass-to-charge ratio is then used to determine quantitative and qualitative properties of the liquid sample.
The mass analyzer used for electrospray ionization is a quadrupole mass spectrometer. A quadrupole mass spectrometer uses four charged rods, two negatively charged and two positively charged, carrying superimposed AC and DC voltages. The rods are connected to both the positive terminal of the DC voltage and the negative terminal. Each pair of rods contains a negatively charged rod and a positively charged rod. The molecular ions are then sped through the chamber between these pairs of oppositely charged rods, making use of a potential difference to do so. To maintain their charge, and ultimately be readable by the detector, the molecular ions must travel through the quadrupole chamber without touching any of the four charged rods. If a molecular ion does run into one of the rods, it is neutralized and goes undetected.
Detector
The molecular ions pass through the mass analyzer to the detector. The detector most commonly used in conjunction with the quadrupole mass analyzer is a high energy dynode (HED), which is an electron multiplier with some slight variations. In an HED detector, the electrons are passed through the system at a high voltage and are measured at the end of the funnel-shaped apparatus, otherwise known as the anode. An HED detector differs from the ordinary electron multiplier in that it operates at a much higher sensitivity for samples with a large mass. Once the analog signal of the mass-to-charge ratio is recorded, it is converted to a digital signal and a spectrum representing the run can be analyzed.
Problems
Using the above spectrum, calculate the mass of the protein. For p1, shown in spectrum, the m/z is 7501 and for p2 the m/z is 15001. (Hint: Use the above equations, and the charge of p1 for z1)
Contributors and Attributions
• Jennifer Murphy
The principle components of a mass spectrometer are an inlet, ion source, mass analyzer, detector, and data analysis. The function of an inlet system is to introduce a small amount of sample into the ion source with minimal loss of vacuum. However, many mass spectrometers contain more than one inlet system to accommodate a variety of samples. These include batch inlets, direct probe inlets, and chromatographic and capillary electrophoretic inlet systems.
Batch Inlets
The batch inlet system is considered the most common and simplest inlet system. Normally, the inside of the system is lined with glass to avoid losses of polar analytes by adsorption. This system externally volatilizes the sample, which then leaks into the evacuated ionization region. Gaseous and liquid samples with boiling points up to 500 °C can be used on typical systems. The system's vacuum holds a sample pressure of 10⁻⁴ to 10⁻⁵ Torr. Liquids are introduced using a microliter syringe into a reservoir; gases are enclosed in a metering area that is confined between two valves before being expanded into a reservoir container. Liquids with boiling points higher than 500 °C cannot be used in the system, because the reservoir and tubing are kept at high temperatures by ovens and heating tapes to ensure that liquid samples are transformed to the gaseous phase and then leaked through a metal or glass diaphragm containing pinholes into the ionization area.
Direct Probe Inlets
A direct probe inlet is used for small quantities of sample, solids, and nonvolatile liquids. Solids and nonvolatile liquids are introduced on a probe, or sample holder. The probe is inserted through a vacuum lock, which is designed to limit the volume of air that must be pumped from the system after the probe has been inserted into the ionization section. Unlike the batch inlet, the sample may need to be cooled and/or heated on the probe. The probe is placed extremely close (a few millimeters) to the ionization source, where the slit leads to the spectrometer, and the sample is held in place on the surface of a glass or aluminum capillary tube or a small cup. This position makes it possible for thermally unstable compounds to be analyzed before decomposition, because of the low pressure in the ionization area, which is in close proximity to the sample. Because of the probe, nonvolatile samples such as carbohydrates, steroids, and metal-organic species can be studied, since the low pressures lead to increased concentrations of the nonvolatile samples. The principal sample requirement is attainment of an analyte partial pressure of at least 10⁻⁸ Torr before the onset of decomposition.1
Chromatographic and Capillary Electrophoretic Inlets
Chromatographic systems and capillary electrophoretic units are often coupled with mass spectrometers in order to allow separation and identification of the components in the sample. When these systems are linked with a mass spectrometer, specialized injection methods, electrokinetic and pressure injection, are required. Electrokinetic and pressure injection control the injected volume through the duration of the injection, which typically ranges from 5 to 50 nL.
Electrokinetic Injection
In the electrokinetic injection method, one end of the capillary and its electrode are removed from the buffer and placed in a small sample cup. A voltage applied for a recorded time drives the sample into the capillary by ionic migration and electroosmotic flow. After this voltage is applied, the capillary end and electrode are placed back into the regular buffer solution for the remainder of the separation. The electrokinetic injection technique injects larger quantities of the more mobile ions than of the slower-moving ions.
Pressure Injection
The pressure injection method is similar to electrokinetic injection in that the capillary end and electrode are removed from the buffer and placed into a small cup. However, instead of a voltage being applied, a pressure difference drives the sample into the capillary. The pressure difference is produced by pressurizing the sample, elevating the sample end, or applying a vacuum at the detector end. Pressure injection does not discriminate on the basis of ion mobility, but it cannot be used in gel-filled capillaries.1
Mass spectrometry is an analytic method that employs ionization and mass analysis of compounds to determine the mass, formula and structure of the compound being analyzed. A mass analyzer is the component of the mass spectrometer that takes ionized masses and separates them based on charge to mass ratios and outputs them to the detector where they are detected and later converted to a digital output.
Introduction
There are six general types of mass analyzers that can be used for the separation of ions in a mass spectrometry.
1. Quadrupole Mass Analyzer
2. Time of Flight Mass Analyzer
3. Magnetic Sector Mass Analyzer
4. Electrostatic Sector Mass Analyzer
5. Quadrupole Ion Trap Mass Analyzers
6. Ion Cyclotron Resonance
Quadrupole Mass Analyzer
The DC bias causes all the charged molecules to accelerate and move away from the center line, at a rate proportional to their charge-to-mass ratio. If their course strays too far, they hit the metal rods or the sides of the container and are absorbed. The DC bias thus plays a role analogous to the magnetic field B of a magnetic sector instrument and can be tuned so that only specific charge-to-mass ratios reach the detector.
The two sinusoidal electric fields, at 90° orientation and 90° phase shift, produce an electric field which rotates in a circle over time. So as the charged particles fly down toward the detector, they travel in a spiral, the diameter of the spiral being determined by the charge-to-mass ratio of the molecule and the frequency and strength of the electric field. Under the combination of the DC bias and the rotating electric field, the charged particles travel in a curved spiral. By timing the peak of the curved spiral to coincide with the position of the detector at the end of the quadrupole, a great deal of selectivity to a molecule's charge-to-mass ratio can be obtained.
TOF (Time of Flight) Mass Analyzer
TOF analyzers separate ions by time without the use of an electric or magnetic field. In a crude sense, TOF is similar to chromatography, except there is no stationary/mobile phase; instead the separation is based on the kinetic energy and velocity of the ions.
Ions of the same charges have equal kinetic energies; kinetic energy of the ion in the flight tube is equal to the kinetic energy of the ion as it leaves the ion source:
$KE = \dfrac{mv^2}{2} = zV \label{1}$
The time of flight, or time it takes for the ion to travel the length of the flight tube is:
$T_f = \dfrac{L}{v} \label{2}$
• with $L$ is the length of tube and
• $v$ is the velocity of the ion
Substituting Equation 1 for kinetic energy in Equation 2 for time of flight:
$T_f = L\sqrt{\dfrac{m}{z}}\sqrt{ \dfrac{1}{2 V}} \propto \sqrt{\dfrac{m}{z}} \label{3}$
During the analysis, $L$, length of tube, the Voltage from the ion source $V$ are held constant, which can be used to say that time of flight is directly proportional to the root of the mass to charge ratio.
Unfortunately, at higher masses, resolution is difficult because flight time is longer. Also at high masses, not all of the ions of the same m/z value reach their ideal TOF velocities. To fix this problem, a reflectron is often added to the analyzer. The reflectron consists of a series of ring electrodes of very high voltage placed at the end of the flight tube. When an ion travels into the reflectron, it is reflected in the opposite direction due to the high voltage. The reflectron increases resolution by narrowing the broad range of flight times for a single m/z value. Faster ions travel further into the reflectron, and slower ions travel less far into it. This way both slow and fast ions of the same m/z value reach the detector at the same time rather than at different times, narrowing the bandwidth of the output signal.
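Equation 3 can be evaluated directly in SI units by writing the charge explicitly as z times the elementary charge. The tube length and accelerating voltage below are illustrative values, not taken from any particular instrument.

```python
# Flight time from Equation 3 in SI units: t = L * sqrt(m / (2*z*e*V))
from math import sqrt

E = 1.602e-19       # elementary charge, C
AMU = 1.66054e-27   # 1 Da in kg

def flight_time(m_da, L=1.0, V=20000.0, z=1):
    """Seconds to traverse a flight tube of length L (m) after
    acceleration through V volts; m_da is the ion mass in Da."""
    return L * sqrt(m_da * AMU / (2 * z * E * V))

t100 = flight_time(100)   # on the order of microseconds
t400 = flight_time(400)
print(t400 / t100)        # ~2: flight time scales with sqrt(m/z)
```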
Magnetic Sector Mass Analyzer
Similar to the time-of-flight (TOF) analyzer mentioned earlier, in magnetic sector analyzers the ions are accelerated through a flight tube, where they are separated by their charge-to-mass ratios. The difference between a magnetic sector and TOF is that a magnetic field is used to separate the ions. As moving charges enter a magnetic field, they are deflected into circular motion of a unique radius in a direction perpendicular to the applied magnetic field. Ions in the magnetic field experience two equal forces: the force due to the magnetic field and the centripetal force.
$F_B= zvB =F_c= \dfrac{mv^2}{r} \label{4}$
The above equation can then be rearranged to give:
$v = \dfrac{Bzr}{m} \label{5}$
If this equation is substituted into the kinetic energy equation:
$KE= zV=\dfrac{mv^2}{2} \label{6}$
$\dfrac{m}{z}=\dfrac{B^2r^2}{2V} \label{7}$
Basically, ions of a certain $m/z$ value will have a unique path radius, which can be determined if both the magnetic field magnitude $B$ and the voltage difference $V$ for the region of acceleration are held constant. When similar ions pass through the magnetic field, they are all deflected to the same degree and follow the same trajectory. Ions not selected by the chosen $V$ and $B$ values collide with the flight tube wall or fail to pass through the slit to the detector. Magnetic sector analyzers are used for mass focusing; they focus angular dispersions.
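Equation 7 can be turned into a small calculator. In the sketch below (the function name and numerical values are illustrative), the SI result in kg/C is converted into daltons per elementary charge:

```python
DA = 1.66053906660e-27   # mass of 1 dalton in kg
E = 1.602176634e-19      # elementary charge in C

def selected_mz(B, r, V):
    """m/z (daltons per elementary charge) transmitted by a magnetic sector
    with path radius r (m), field B (T), and accelerating voltage V (V):
    m/z = B^2 r^2 / (2V) in SI units (kg/C), then converted to Da/charge."""
    return (B**2 * r**2) / (2 * V) * E / DA

# A 0.25 m sector at 0.5 T and 3000 V transmits ions near m/z 251;
# doubling the accelerating voltage halves the transmitted m/z.
print(round(selected_mz(0.5, 0.25, 3000)))
```

Scanning either $B$ or $V$ therefore sweeps the transmitted m/z across the spectrum.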
Electrostatic Sector Mass Analyzer
The electrostatic sector analyzer is similar to the time-of-flight analyzer in that it separates the ions in flight, but it does so using an electric field. It consists of two curved plates of equal and opposite potential. As an ion travels through the electric field it is deflected, and the force on the ion due to the electric field equals the centripetal force on the ion. Ions of the same kinetic energy are focused, and ions of different kinetic energies are dispersed.
$KE = zV =\dfrac{mv^2}{2} \label{8}$
$F_E= zE= F_c=\dfrac{mv^2}{R} \label{9}$
Electrostatic sector analyzers are energy focusers: the ion beam is focused for energy. Electrostatic and magnetic sector analyzers employed individually are single-focusing instruments. When both techniques are used together, the instrument is called double focusing, because both the energies and the angular dispersions are focused.
Quadrupole Ion Trap Mass Analyzers
This analyzer employs principles similar to the quadrupole analyzer mentioned above: it uses an electric field to separate ions by their mass-to-charge ratios. The analyzer is made of a ring electrode at a specific voltage and grounded end-cap electrodes. The ions enter the space between the electrodes through one of the end caps. After entry, the electric field in the cavity causes ions of certain m/z values to orbit in the space. As the radio-frequency voltage increases, the orbits of heavier ions become more stabilized while those of lighter ions become less stabilized, causing the light ions to collide with the walls and eliminating the possibility of their traveling to, and being detected by, the detector.
The quadrupole ion trap usually runs in a mass-selective ejection mode, selectively ejecting the trapped ions in order of increasing mass by gradually increasing the applied radio-frequency voltage.
Ion Cyclotron Resonance (ICR)
ICR is an ion trap that uses a magnetic field to hold ions in orbit inside it. In this analyzer no separation occurs; rather, all ions within a particular range are trapped inside, and an applied external electric field helps to generate a signal. As mentioned earlier, when a moving charge enters a magnetic field it experiences a centripetal force that puts the ion into orbit. Again, the force on the ion due to the magnetic field equals the centripetal force on the ion.
$zvB=\dfrac{mv^2}{r} \label{10}$
Substituting the angular velocity of the ion about the field axis, $\omega_c = v/r$, gives
$zB=m\omega_c \label{11}$
$\omega_c=\dfrac{zB}{m} \label{12}$
The frequency of the orbit depends on the charge and mass of the ion, not on its velocity. If the magnetic field is held constant, the charge-to-mass ratio of each ion can be determined by measuring the angular velocity $\omega_c$: a high $\omega_c$ corresponds to a low m/z value, and a low $\omega_c$ to a high m/z value. Charges of opposite sign have the same angular velocity; the only difference is that they orbit in opposite directions.
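Equation 12 gives the measured quantity directly. The sketch below (the field strength and m/z values are illustrative) converts $\omega_c$ into an ordinary frequency in hertz for a singly charged ion:

```python
import math

DA = 1.66053906660e-27   # mass of 1 dalton in kg
E = 1.602176634e-19      # elementary charge in C

def cyclotron_freq(mz, B=7.0):
    """Cyclotron frequency (Hz) of an ion with m/z in daltons per charge
    in a field of B tesla: f = omega_c / (2*pi) = z e B / (2*pi*m)."""
    return E * B / (2 * math.pi * mz * DA)

# Frequency is inversely proportional to m/z: halving m/z doubles f.
print(cyclotron_freq(500.0))   # roughly 2e5 Hz at 7 T
```

At a 7 T field an ion of m/z 500 orbits at roughly 215 kHz, so the image current is a convenient radio-frequency signal.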
To generate an electrical signal from the trapped ions, a varying electric field is applied to the ion trap:
$E=E_o \cos{(\omega_c t)} \label{13}$
When the frequency of the applied electric field matches the $\omega_c$ of a certain ion, the ion absorbs energy, increasing its velocity and orbital radius. In this high-energy orbit, as the ion oscillates between the two plates, electrons accumulate at one plate relative to the other, inducing an oscillating current, or image current. The current is directly proportional to the number of ions in the cell at that frequency.
In a Fourier transform ICR, all of the ions within the cell are excited simultaneously, so the image current is the superposition of the signals at all of the individual ion frequencies. A Fourier transform is used to separate the summed signals into their component frequencies, producing the desired mass spectrum.
• Ommaima Khan
The example above is simple, but the same methods can be applied to determine isotope peaks in more complicated molecules as well. The molecule C4Br1O2H5 has several isotope effects: 13C, 2H, 81Br, 17O, and 18O all must be taken into account. First we will look at the (M+1)+ peak in comparison with the M+ peak. Only isotopes that will increase the value of M by 1 must be taken into consideration here – since 81Br and 18O would both increase M by 2, they can be ignored (the most abundant isotopes for Br and O are 79Br and 16O). Like the previous example, there are 1.08 13C atoms for every 100 12C atoms. However, there are 4 carbon atoms in our molecule, and any one of them being a 13C atom would result in a molecule with mass (M+1). So it is necessary to multiply the probability of an atom being a 13C atom by the number of C atoms in the molecule. Therefore, we have:
4C * 1.08 = 4.32 = molecules with a 13C atom per 100 molecules
We can repeat this analysis for 2H and 17O:
5H * 0.015 = 0.075 = molecules with a 2H atom per 100 molecules
2O * 0.04 = 0.08 = molecules with a 17O atom per 100 molecules
Any of the three isotopes, 13C, 2H, or 17O occurring in our molecule would result in an (M+1)+ peak. To get the ratio of (M+1)+/M+, we need to add all three probabilities:
4.32 + 0.075 + 0.08 = 4.475 = (M+ 1)+ molecules per 100 M+ molecules
We can say then that the (M+1)+ peak is 4.475% as high as the M+ peak.
A similar analysis can be easily repeated for (M+2)+:
1Br * 98 = 98 = molecules with an 81Br atom per 100 molecules
2O * 0.2 = 0.4 = molecules with an 18O atom per 100 molecules
98 + 0.4 = 98.4 = (M+2)+ molecules per 100 M+ molecules
The (M + 2)+ peak is therefore 98.4% as tall as the M+ peak.
This method is useful because using isotopic differences, it is possible to differentiate two molecules of identical mass numbers.
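The bookkeeping above generalizes to any formula. The sketch below (the function name and abundance dictionary are ours, not from the text) repeats the (M+1)+ and (M+2)+ calculation using the same per-100-atoms abundances as the worked example:

```python
# Heavy-isotope abundances per 100 atoms of the light isotope, as in the text.
# The inner key is the mass shift the heavy isotope causes (+1 or +2).
ISO = {
    "C": {1: 1.08},            # 13C adds +1
    "H": {1: 0.015},           # 2H adds +1
    "O": {1: 0.04, 2: 0.2},    # 17O adds +1, 18O adds +2
    "Br": {2: 98},             # 81Br adds +2
}

def isotope_peaks(formula):
    """Approximate (M+1)+ and (M+2)+ heights as % of M+, assuming at most
    one heavy isotope per molecule (valid while the percentages are small)."""
    peaks = {1: 0.0, 2: 0.0}
    for element, count in formula.items():
        for shift, abundance in ISO.get(element, {}).items():
            peaks[shift] += count * abundance
    return peaks

print(isotope_peaks({"C": 4, "H": 5, "Br": 1, "O": 2}))
# M+1: 4*1.08 + 5*0.015 + 2*0.04 = 4.475; M+2: 1*98 + 2*0.2 = 98.4
```

The dictionary reproduces the text's values of 4.475% and 98.4% for C4H5BrO2.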
Contributors and Attributions
• Morgan Kelley (UCD)
Organic Compounds Containing Halogen Atoms
This page explains how the M+2 peak in a mass spectrum arises from the presence of chlorine or bromine atoms in an organic compound. It also deals briefly with the origin of the M+4 peak in compounds containing two chlorine atoms.
One chlorine atom in a compound
The molecular ion peaks (M+ and M+2) each contain one chlorine atom - but the chlorine can be either of the two chlorine isotopes, 35Cl and 37Cl.
The molecular ion containing the 35Cl isotope has a relative formula mass of 78. The one containing 37Cl has a relative formula mass of 80 - hence the two lines at m/z = 78 and m/z = 80.
Notice that the peak heights are in the ratio of 3 : 1. That reflects the fact that chlorine contains 3 times as much of the 35Cl isotope as the 37Cl one. That means that there will be 3 times more molecules containing the lighter isotope than the heavier one.
So . . . if you look at the molecular ion region, and find two peaks separated by 2 m/z units and with a ratio of 3 : 1 in the peak heights, that tells you that the molecule contains 1 chlorine atom.
You might also have noticed the same pattern at m/z = 63 and m/z = 65 in the mass spectrum above. That pattern is due to fragment ions also containing one chlorine atom - which could either be 35Cl or 37Cl. The fragmentation that produced those ions was:
Two chlorine atoms in a compound
The lines in the molecular ion region (at m/z values of 98, 100 and 102) arise because of the various combinations of chlorine isotopes that are possible. The carbons and hydrogens add up to 28 - so the various possible molecular ions could be:
28 + 35 + 35 = 98
28 + 35 + 37 = 100
28 + 37 + 37 = 102
If you have the necessary math, you could show that the chances of these arrangements occurring are in the ratio of 9:6:1 - and this is the ratio of the peak heights. If you don't know the right bit of math, just learn this ratio! So . . . if you have 3 lines in the molecular ion region (M+, M+2 and M+4) with gaps of 2 m/z units between them, and with peak heights in the ratio of 9:6:1, the compound contains 2 chlorine atoms.
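The 9:6:1 ratio is just the binomial expansion of the 3:1 isotope ratio over two atoms. A short sketch (the function name is illustrative) makes the "right bit of math" explicit:

```python
from math import comb

def halogen_pattern(n, p_light=0.75):
    """Relative probabilities of 0, 1, ..., n heavy atoms (M, M+2, M+4, ...)
    for n atoms of a halogen whose light isotope has fractional abundance
    p_light (0.75 for 35Cl, since the 35Cl:37Cl ratio is 3:1)."""
    return [comb(n, k) * p_light**(n - k) * (1 - p_light)**k
            for k in range(n + 1)]

pattern = halogen_pattern(2)                    # two chlorine atoms
ratio = [x / pattern[-1] for x in pattern]      # normalize to the M+4 peak
print(ratio)  # [9.0, 6.0, 1.0]
```

The same function with `n=1` returns the familiar 3:1 pattern for a single chlorine.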
Compounds containing bromine atoms
Bromine has two isotopes, 79Br and 81Br in an approximately 1:1 ratio (50.5 : 49.5 if you want to be fussy!). That means that a compound containing 1 bromine atom will have two peaks in the molecular ion region, depending on which bromine isotope the molecular ion contains. Unlike compounds containing chlorine, though, the two peaks will be very similar in height.
The carbons and hydrogens add up to 29. The M+ and M+2 peaks are therefore at m/z values given by:
29 + 79 = 108
29 + 81 = 110
Hence, if two lines in the molecular ion region are observed with a gap of 2 m/z units between them and with almost equal heights, this suggests the presence of a bromine atom in the molecule.
This page explains how the M+1 peak in a mass spectrum can be used to estimate the number of carbon atoms in an organic compound.
What causes the M+1 peak?
If you had a complete (rather than a simplified) mass spectrum, you would find a small line 1 m/z unit to the right of the main molecular ion peak. This small peak is called the M+1 peak.
The carbon-13 isotope
The M+1 peak is caused by the presence of the 13C isotope in the molecule. 13C is a stable isotope of carbon - don't confuse it with the 14C isotope which is radioactive. Carbon-13 makes up 1.11% of all carbon atoms. If you had a simple compound like methane, CH4, approximately 1 in every 100 of these molecules will contain carbon-13 rather than the more common carbon-12. That means that 1 in every 100 of the molecules will have a mass of 17 (13 + 4) rather than 16 (12 + 4). The mass spectrum will therefore have a line corresponding to the molecular ion [13CH4]+ as well as [12CH4]+. The line at m/z = 17 will be much smaller than the line at m/z = 16 because the carbon-13 isotope is much less common. Statistically you will have a ratio of approximately 1 of the heavier ions to every 99 of the lighter ones. That's why the M+1 peak is much smaller than the M+ peak.
Using the M+1 peak
Imagine a compound containing 2 carbon atoms. Either of them has an approximately 1 in 100 chance of being 13C.
There's therefore a 2 in 100 chance of the molecule as a whole containing one 13C atom rather than a 12C atom - which leaves a 98 in 100 chance of both atoms being 12C. That means that the ratio of the height of the M+1 peak to the M+ peak will be approximately 2 : 98. That's pretty close to having an M+1 peak approximately 2% of the height of the M+ peak.
Using the relative peak heights to predict the number of carbon atoms
If you measure the peak height of the M+1 peak as a percentage of the peak height of the M+ peak, that gives you the number of carbon atoms in the compound. We've just seen that a compound with 2 carbons will have an M+1 peak approximately 2% of the height of the M+ peak. Similarly, you could show that a compound with 3 carbons will have the M+1 peak at about 3% of the height of the M+ peak. The approximations we are making won't hold with more than 2 or 3 carbons. The proportion of carbon atoms which are 13C isn't 1% - it's 1.11%. And the approximation that a ratio of 2 : 98 is about 2% doesn't hold as the small number increases.
Consider a molecule with 5 carbons in it. You could work out that 5.55 (5 x 1.11) molecules will contain 1 13C to every 94.45 (100 - 5.55) which contain only 12C atoms. If you convert that to how tall the M+1 peak is as a percentage of the M+ peak, you get an answer of 5.9% (5.55/94.45 x 100). That's close enough to 6% that you might assume wrongly that there are 6 carbon atoms. Above 3 carbon atoms, then, you shouldn't really be making the approximation that the height of the M+1 peak as a percentage of the height of the M+ peak tells you the number of carbons - you will need to do some fiddly sums!
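The "fiddly sums" above are easy to tabulate. The sketch below (the function name is ours) computes the M+1 height as a percentage of M+ for the first few carbon counts, using the exact 1.11% abundance:

```python
def m_plus_1_percent(n_carbons, pct_13c=1.11):
    """Height of the M+1 peak as a percentage of the M+ peak when only
    the 13C contribution (pct_13c atoms per 100 carbons) is considered."""
    heavy = n_carbons * pct_13c              # molecules with one 13C, per 100
    return heavy / (100.0 - heavy) * 100.0

for n in range(1, 7):
    print(n, round(m_plus_1_percent(n), 2))
```

For five carbons this gives 5.88%, matching the 5.9% worked out in the text and showing why a naive reading would wrongly suggest six carbons.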
Contributors and Attributions
Jim Clark (Chemguide.co.uk)
This page looks at the information you can get from the mass spectrum of an element. It shows how you can find out the masses and relative abundances of the various isotopes of the element and use that information to calculate the relative atomic mass of the element. It also looks at the problems thrown up by elements with diatomic molecules - like chlorine, Cl2.
The mass spectrum of monatomic elements
Monatomic elements include all those except for things like chlorine, Cl2, with molecules containing more than one atom.
Example $1$: The Mass Spectrum of Boron
The number of isotopes: The two peaks in the mass spectrum show that there are 2 isotopes of boron - with relative isotopic masses of 10 and 11 on the 12C scale.
The abundance of the isotopes: The relative sizes of the peaks gives you a direct measure of the relative abundances of the isotopes. The tallest peak is often given an arbitrary height of 100 - but you may find all sorts of other scales used. It doesn't matter in the least. You can find the relative abundances by measuring the lines on the stick diagram. In this case, the two isotopes (with their relative abundances) are:
boron-10 23
boron-11 100
Working out the relative atomic mass
The relative atomic mass (RAM) of an element is given the symbol Ar and is defined as:
The relative atomic mass of an element is the weighted average of the masses of its isotopes on a scale on which a carbon-12 atom has a mass of exactly 12 units. A "weighted average" allows for the fact that there won't be equal amounts of the various isotopes. The example coming up should make that clear.
Suppose you had 123 typical atoms of boron. 23 of these would be 10B and 100 would be 11B.
The total mass of these would be (23 x 10) + (100 x 11) = 1330
The average mass of these 123 atoms would be 1330 / 123 = 10.8 (to 3 significant figures).
10.8 is the relative atomic mass of boron.
Notice the effect of the "weighted" average. A simple average of 10 and 11 is, of course, 10.5. Our answer of 10.8 allows for the fact that there are a lot more of the heavier isotope of boron - and so the "weighted" average ought to be closer to that.
Example $2$: The Mass Spectrum of Zirconium
The number of isotopes: The 5 peaks in the mass spectrum show that there are 5 isotopes of zirconium - with relative isotopic masses of 90, 91, 92, 94 and 96 on the 12C scale.
The abundance of the isotopes: This time, the relative abundances are given as percentages. Again you can find these relative abundances by measuring the lines on the stick diagram. In this case, the 5 isotopes (with their relative percentage abundances) are:
zirconium-90 51.5
zirconium-91 11.2
zirconium-92 17.1
zirconium-94 17.4
zirconium-96 2.8
Working out the relative atomic mass
Suppose you had 100 typical atoms of zirconium. 51.5 of these would be 90Zr, 11.2 would be 91Zr and so on.
The total mass of these 100 typical atoms would be
(51.5 x 90) + (11.2 x 91) + (17.1 x 92) + (17.4 x 94) + (2.8 x 96) = 9131.8
The average mass of these 100 atoms would be 9131.8 / 100 = 91.3 (to 3 significant figures). 91.3 is the relative atomic mass of zirconium.
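Both the boron and zirconium examples are instances of the same weighted-average computation, sketched here (the function name is ours) with the abundances read off the stick diagrams:

```python
def relative_atomic_mass(isotopes):
    """Weighted average of (mass, relative abundance) pairs. The abundances
    need not sum to 100 because only their ratios matter."""
    total = sum(ab for _, ab in isotopes)
    return sum(m * ab for m, ab in isotopes) / total

boron = [(10, 23), (11, 100)]
zirconium = [(90, 51.5), (91, 11.2), (92, 17.1), (94, 17.4), (96, 2.8)]
print(round(relative_atomic_mass(boron), 1))      # 10.8
print(round(relative_atomic_mass(zirconium), 1))  # 91.3
```

Note that the boron abundances sum to 123 while zirconium's sum to 100; the function handles both because it divides by the total.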
Example $3$: The Mass Spectrum of Chlorine
Chlorine is taken as typical of elements with more than one atom per molecule. We'll look at its mass spectrum to show the sort of problems involved. Chlorine has two isotopes, 35Cl and 37Cl, in the approximate ratio of 3 atoms of 35Cl to 1 atom of 37Cl. You might suppose that the mass spectrum would look like this:
You would be wrong! The problem is that chlorine consists of molecules, not individual atoms. When chlorine is passed into the ionization chamber, an electron is knocked off the molecule to give a molecular ion, Cl2+. These ions won't be particularly stable, and some will fall apart to give a chlorine atom and a Cl+ ion. The term for this is fragmentation.
$\ce{ Cl_2^{+} \rightarrow Cl + Cl^{+}} \nonumber$
If the Cl atom formed is not then ionized in the ionization chamber, it simply gets lost in the machine - neither accelerated nor deflected.
The Cl+ ions will pass through the machine and will give lines at 35 and 37, depending on the isotope and you would get exactly the pattern in the last diagram. The problem is that you will also record lines for the unfragmented Cl2+ ions.
Think about the possible combinations of chlorine-35 and chlorine-37 atoms in a Cl2+ ion. Both atoms could be 35Cl, both atoms could be 37Cl, or you could have one of each sort. That would give you total masses of the Cl2+ ion of:
35 + 35 = 70
35 + 37 = 72
37 + 37 = 74
That means that you would get a set of lines in the m/z = 70 region looking like this:
These lines would be in addition to the lines at 35 and 37.
The relative heights of the 70, 72 and 74 lines are in the ratio 9:6:1. If you know the right bit of math, it's very easy to show this. If not, don't worry. Just remember that the ratio is 9:6:1.
What you can't do is make any predictions about the relative heights of the lines at 35/37 compared with those at 70/72/74. That depends on what proportion of the molecular ions break up into fragments. That's why you've got the chlorine mass spectrum in two separate bits so far. You must realise that the vertical scale in the diagrams of the two parts of the spectrum isn't the same.
The overall mass spectrum looks like this:
This page explains how to find the relative formula mass (relative molecular mass) of an organic compound from its mass spectrum. It also shows how high resolution mass spectra can be used to find the molecular formula for a compound.
When the vaporised organic sample passes into the ionization chamber of a mass spectrometer, it is bombarded by a stream of electrons. These electrons have a high enough energy to knock an electron off an organic molecule to form a positive ion. This ion is called the molecular ion. The molecular ion is often given the symbol $\ce{M^{+}}$ or $\ce{M^{\cdot +}}$ - the dot in this second version represents the fact that somewhere in the ion there will be a single unpaired electron. That's one half of what was originally a pair of electrons - the other half is the electron which was removed in the ionization process. The molecular ions tend to be unstable and some of them break into smaller fragments. These fragments produce the familiar stick diagram. Fragmentation is irrelevant to what we are talking about on this page - all we're interested in is the molecular ion.
In the mass spectrum, the heaviest ion (the one with the greatest m/z value) is likely to be the molecular ion. (A few compounds have mass spectra which don't contain a molecular ion peak, because all of the molecular ions break into fragments.) In the mass spectrum of pentane, for example, the heaviest ion has an m/z value of 72.
Because the largest m/z value is 72, that represents the largest ion going through the mass spectrometer - and you can reasonably assume that this is the molecular ion. The relative formula mass of the compound is therefore 72.
Finding the relative formula mass (relative molecular mass) from a mass spectrum is therefore trivial. Look for the peak with the highest value for m/z, and that value is the relative formula mass of the compound. There are, however, complications which arise because of the possibility of different isotopes (either of carbon or of chlorine or bromine) in the molecular ion. These cases are dealt with on separate pages.
Using a mass spectrum to find a molecular formula
So far we've been looking at m/z values in a mass spectrum as whole numbers, but it's possible to get far more accurate results using a high resolution mass spectrometer. You can use that more accurate information about the mass of the molecular ion to work out the molecular formula of the compound.
For normal calculation purposes, you tend to use rounded-off relative isotopic masses. For example, you are familiar with the integer mass numbers ($A$). To 4 decimal places, however, the relative isotopic masses are:
Isotope $A$ Mass
1H 1 1.0078
12C 12 12.0000
14N 14 14.0031
16O 16 15.9949
The carbon value is 12.0000, of course, because all the other masses are measured on the carbon-12 scale which is based on the carbon-12 isotope having a mass of exactly 12.
Using these accurate values to find a molecular formula
Two simple organic compounds have a relative formula mass of 44 - propane, C3H8, and ethanal, CH3CHO. Using a high resolution mass spectrometer, you could easily decide which of these you had. On a high resolution mass spectrometer, the molecular ion peaks for the two compounds give the following m/z values:
C3H8 44.0624
CH3CHO 44.0261
You can easily check that by adding up numbers from the table of accurate relative isotopic masses above.
Example $1$:
A gas was known to contain only elements from the following list:
1H 1.0078
12C 12.0000
14N 14.0031
16O 15.9949
The gas had a molecular ion peak at m/z = 28.0312 in a high resolution mass spectrometer. What was the gas?
Solution
After a bit of playing around, you might reasonably come up with 3 gases which had relative formula masses of approximately 28 and which contained the elements from the list. They are N2, CO and C2H4. Working out their accurate relative formula masses gives:
N2 28.0062
CO 27.9949
C2H4 28.0312
The gas is obviously C2H4.
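The elimination in this example can be automated. The sketch below (the names are ours) sums the 4-decimal isotopic masses for each candidate formula and picks the one closest to the measured m/z:

```python
# Accurate relative isotopic masses from the table above
MASSES = {"H": 1.0078, "C": 12.0000, "N": 14.0031, "O": 15.9949}

def exact_mass(formula):
    """Accurate relative formula mass from element counts."""
    return sum(MASSES[el] * n for el, n in formula.items())

candidates = {"N2": {"N": 2}, "CO": {"C": 1, "O": 1},
              "C2H4": {"C": 2, "H": 4}}
measured = 28.0312   # the high-resolution molecular ion peak

best = min(candidates,
           key=lambda name: abs(exact_mass(candidates[name]) - measured))
print(best)  # C2H4
```

All three candidates round to 28, yet at 4 decimal places only C2H4 matches, which is the whole point of high-resolution mass spectrometry.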
Strictly speaking, a spectrometer is any instrument used to view and analyze a range (or a spectrum) of a given characteristic for a substance (for example, a range of mass-to-charge values as in mass spectrometry, or a range of wavelengths as in absorption techniques like nuclear magnetic resonance spectroscopy or infrared spectroscopy). A spectrophotometer is a spectrometer that only measures the intensity of electromagnetic radiation (light) and is distinct from other spectrometers such as mass spectrometers.
A spectrometer is typically used to measure wavelengths of electromagnetic radiation (light) that has interacted with a sample. Incident light can be reflected off, absorbed by, or transmitted through a sample; the way the incident light changes during the interaction with the sample is characteristic of the sample. A spectrometer measures this change over a range of incident wavelengths (or at a specific wavelength).
There are three main components in all spectrometers; these components can vary widely between instruments for specific applications and levels of resolution. Very generally, these components produce the electromagnetic radiation, narrow it to a specified wavelength range, and then detect the radiation after it has interacted with the sample.
Sources of Radiation
There are two classes of radiation sources used in spectrometry: continuum sources and line sources. The former are usually lamps or heated solid materials that emit a wide range of wavelengths that must be narrowed greatly using a wavelength selection element to isolate the wavelength of interest. The latter sources include lasers and specialized lamps that are designed to emit discrete wavelengths specific to the lamp's material.
Electrode lamps are constructed of a sealed, gas-filled chamber that has one or more electrodes inside. Electrical current is passed through the electrode, which causes excitation of the gas. This excitation produces radiation at a wavelength or a range of wavelengths specific to the gas. Examples include argon, xenon, hydrogen or deuterium, and tungsten lamps, each of which emits radiation over a characteristic range of wavelengths.
There are also non-electrode lamps used as line sources that contain a gas and a piece of metal that will emit narrow radiation at the desired wavelength. Ionization of the gas occurs from radiation (usually in the radio or microwave frequencies). The metal atoms are then excited by a transfer of energy from the gas, thereby producing radiation at a very specific wavelength.
Laser (an acronym for light amplification by stimulated emission of radiation) sources work by externally activating a lasing material so that photons of a specific energy are produced and aimed at the material. This triggers photon production within the material, with more and more photons being produced as they reflect inside the material. Because all the photons are of equal energy they are all in phase with each other so that energy (and wavelength) is isolated and enhanced. The photons are eventually focused into a narrow beam and then directed at the sample.
Wavelength Selection
There are two approaches to narrowing the incident electromagnetic radiation down to the desired wavelength: non-dispersive and dispersive elements.
Non-dispersive Elements
Wavelength selection elements are non-dispersive materials that filter out the unwanted ranges of wavelengths from the incident light source, thereby allowing only a certain range of wavelengths to pass through. For example, UV filters (as used on cameras) work by absorbing the UV radiation (100-400nm) but allowing other wavelengths to be transmitted. This type of filter is not common in modern spectrometers now that there are more precise elements available for narrowing the radiation.
There are also interference filters that select wavelengths by causing interference effects between the incident and reflected radiation waves at each of the material boundaries in the filter. The filter has layers of a dielectric material, semitransparent metallic films, and glass; the incident light is partitioned according to the properties of each material as it passes through the layers (Ingle). If the light is of the proper wavelength when it encounters the second metallic film, then the reflected portion remains in phase with some of the incident light still entering that layer. This effectively isolates and enhances this particular wavelength while all others are removed via destructive interference.
One filter can be adjusted to transmit various wavelengths by changing the angle of incidence ($\theta$) of the radiation:
$2d \sqrt{\epsilon^2 - \sin^2 \theta} = m \lambda \nonumber$
Where $d$ is the thickness of the dielectric material (on the order of the wavelength of interest), $\epsilon$ is the refractive index of the material, $m$ is the order of interference, and $\lambda$ is the transmitted wavelength. This shows that for a given material (constant $d$, $\epsilon$, and $m$), changing $\theta$ selects a different $\lambda$. Note that when the incident radiation is normal (perpendicular) to the filter surface, $\sin \theta = 0$ and the transmitted wavelength no longer depends on the angle:
$\lambda = \dfrac{2d\epsilon}{m} \nonumber$
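The interference-filter condition $2d\sqrt{\epsilon^2 - \sin^2\theta} = m\lambda$ can be solved for the transmitted wavelength at any tilt angle. A sketch (the function name, thickness, and refractive index values are illustrative assumptions):

```python
import math

def passband_wavelength(d, eps, m, theta_deg=0.0):
    """Wavelength (same units as d) transmitted by an interference filter
    of dielectric thickness d, refractive index eps, and interference
    order m, for radiation incident theta_deg from the surface normal."""
    s = math.sin(math.radians(theta_deg))
    return 2 * d * math.sqrt(eps**2 - s**2) / m

# A hypothetical first-order filter, d = 150 nm and eps = 1.7, passes
# 510 nm at normal incidence; tilting it shifts the passband blue-ward.
print(round(passband_wavelength(150, 1.7, 1, 0), 1))   # 510.0
print(round(passband_wavelength(150, 1.7, 1, 30), 1))  # shorter than 510
```

This angle-tuning behavior is exactly why a single filter can be "adjusted" to pass different wavelengths.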
Interferometers are also non-dispersive systems that use reflectors (usually mirrors) to direct the incident radiation along a specified path before being recombined and/or focused. Some systems also include a beam splitter that divides the incident beam and directs each portion along a different path before being recombined and directed to the detector. When the beams are recombined, only the radiation that is in phase when the beams recombine will be detected. All other radiation suffers destructive interference and is therefore removed from the spectrum. An interferogram is a photographic record produced by an interferometer.
A Fabry-Perot Interferometer allows the incident radiation to be reflected back and forth between a pair of reflective plates that are separated by an air gap (Ingle). Diffuse, multi-beam incident radiation passes through a lens and is directed to the plates. Some of the radiation reflects out of the plates back towards the incident source. The remaining radiation reflects back and forth between the plates and is eventually transmitted through the pair of plates towards a focusing lens. Here all constructively interfering radiation is focused onto a screen where it creates a dark or bright spot.
Constructive interference occurs when
$2d \cos \theta' = m \lambda \nonumber$
Where $\theta'$ is the angle of refraction in the air gap. This air gap can be changed to isolate particular wavelengths.
The mathematical relationships for the Fabry-Perot interferometer relate the optical path length difference, $\Delta(OPL)$, and the resulting phase difference, $\delta$, to the gap geometry, and the sharpness of the transmitted fringes to the reflectance of the plate coatings:
$\Delta(OPL) = 2d \cos \theta' \nonumber$
$\delta = \dfrac{2\pi (2d \cos \theta')}{\lambda} \nonumber$
$F \approx \dfrac{4\rho}{(1-\rho)^2} \nonumber$
Where $\rho$ is the reflectance of the plate coating and $F$ is the coefficient of finesse.
A Michelson Interferometer uses a beam splitter plate to divide the incident radiation into two beams of equal intensity. A pair of perpendicular mirrors then reflects the beams back to the splitter plate where they recombine and are directed towards the detector. One mirror is movable and the other is stationary. By moving one mirror, the path length of each beam is different, creating interference at the detector that can be measured as a function of the position of the movable mirror. At a certain distance from the splitter plate, the movable mirror causes constructive interference of the radiation at the detector such that a bright spot is detected. By varying the distance from this location, the adjustable mirror causes the radiation to fluctuate sinusoidally between being “in phase” or “out of phase” at the detector (Ingle).
The sample material to be tested is placed in the path of one of the interferometer’s beams, which changes the path length difference between the two beams. It is the change in the interference pattern at the detector between the two beams that is measured.
Other interferometers work in a similar manner but change the angle of the mirrors rather than the position. These variations are found in the Sagnac interferometer and the Mach-Zehnder interferometer.
Dispersive Elements
These work by dispersing the incident radiation out spatially, creating a spectrum of wavelengths (Ingle). In a prism the diffuse radiation beam is separated by refraction. For example, when white light is shone onto a prism, a rainbow of colors is observed coming out the other side; this is a result of the wavelength dependence of the refractive index of the prism material.
Gratings are also used to disperse incident light into component wavelengths. They work by reflecting the light off the angled grating surface, causing the wavelengths to be dispersed through constructive interference at wavelength-dependent diffraction angles (Ingle).
The condition for constructive interference (and therefore wavelength selection) on a grating surface is:
$d (\sin \theta_i + \sin \theta_r) = m \lambda \nonumber$
This relationship shows that the wavelength selection is not based on the grating material, but on the angle of incidence ($\theta_i$) and the diffraction angle ($\theta_r$), both measured from the grating normal.
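To see how the grating equation selects wavelength, the sketch below solves for the diffraction angle of a given wavelength. The groove density, order, wavelength, and incidence angle are illustrative assumptions.

```python
import math

# Hedged sketch of the grating equation d(sin(theta_i) + sin(theta_r)) = m*lambda,
# solved for the diffraction angle theta_r. All numeric values are assumptions.
grooves_per_mm = 1200
d = 1e-3 / grooves_per_mm     # groove spacing in metres
theta_i = math.radians(30.0)  # angle of incidence from the grating normal
wavelength = 500e-9           # 500 nm
m = 1                         # diffraction order

sin_theta_r = m * wavelength / d - math.sin(theta_i)
theta_r = math.degrees(math.asin(sin_theta_r))
print(round(theta_r, 2))
```

Changing the incidence angle (i.e., rotating the grating) changes which wavelength emerges at a fixed detector angle, which is exactly how a monochromator scans.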
Detector
Detectors are transducers that transform the analog output of the spectrometer into an electrical signal that can be viewed and analyzed using a computer. There are two types: photon detectors and thermal detectors.
• Photon detectors work generally by causing electrons to be emitted or a current to develop when photons strike the detector surface. Examples include photovoltaic cells, phototubes, and charge-transfer transducers. Photovoltaic cells generate electrical current in a semiconductor material when photons in the visible region are absorbed. Phototubes, however, emit electrons from a photon-sensitive material. Charge-transfer transducers (such as photodiodes) develop a charge from visible region photon-induced electron transfer reactions within a silicon material. In each case, it is the current, the number of electrons, or the charge that is actually detected (not the photons themselves) and is then related to the energy/quantity of photons that caused the change in the material.
• Thermal detectors detect a temperature change in a material due to photon absorption. Thermocouples work by measuring the difference in temperature between a pair of junctions (usually the reference against the sample) and are generally used for the infrared wavelengths. The temperature difference is related to a potential difference, which is the output signal. Pyroelectric transducers are used in infrared region and utilize a dielectric material that produces a current when its temperature is changed by radiation absorption.
Spectrometer
This module covers attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR): the basic physics that describe and dictate the analytical technique, and an example of how ATR-FTIR is used in biological studies, presented with real data and data analysis methods.
Introduction
ATR-FTIR is a tool that has been proven useful in various applications1. This technique is able to probe in situ single or multiple layers of adsorbed/deposited species at a solid/liquid interface1,2. It is for this reason that ATR-FTIR has been implemented in various biological studies in order to probe chemical reactions/structure at the solid/liquid interface2. The reader should be aware that this module will not cover in depth the physics that govern ATR-FTIR’s ability to probe chemical species, but rather give the reader an introduction to the technique. This module will cover the basic experimental set-up that is currently being utilized in research, briefly go over essential equations that will help the reader better understand methods of experimentation, and finally provide an example of how ATR-FTIR can be utilized.
Experimental Setup
The figure shows the internal components of a typical ATR setup. Infrared radiation enters from the bottom of the ATR base and is redirected by five different mirrors before it enters and then exits the internal reflection element (IRE) on its way to the detector. This setup allows the infrared radiation to probe molecules at the interface of the IRE. The element shown is a multiple reflection IRE, so called because the radiation makes more than one reflection within the element prior to exiting towards the detector. There are advantages and disadvantages to using a multiple reflection element. One advantage is that the user observes stronger absorbance by the sample, due to the multiple points at which the radiation interacts with it; the disadvantage is that the absorbance is hard to quantify because of the scattering that occurs with each reflection. One way to overcome this disadvantage is to use a single reflection IRE rather than a multiple reflection IRE.
ATR Basics
The infrared radiation that interacts with the adsorbed species is limited to a given depth, and is dependent on the wavelength and angle of the incoming radiation, refractive index of the IRE, and the refractive indices of the interacting layers at the IRE interface. This depth is known as the depth of penetration and the governing equation is shown below.
$d_p = \dfrac{\lambda}{2 \pi n_1 \sqrt{\sin^2 \theta - \left( \dfrac{n_2}{n_1} \right)^2}} \tag{1}$
It is good to keep in mind that if working with systems that have more than one monolayer of adsorbed species, the depth of penetration is dependent on the wavelength of the incoming radiation. In this case the depth of penetration will be greater at longer wavelengths, and one will be probing deeper into the medium rather than at the interface of the IRE. Below is a spectrum of a monolayer of DPPC lipid vesicles adsorbed onto the surface of a ZnSe IRE. The inset plot shows the depth of penetration of the infrared radiation with respect to wavenumber.
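Equation (1) is easy to evaluate numerically. The sketch below uses the form $d_p = \lambda / \left(2\pi n_1 \sqrt{\sin^2\theta - (n_2/n_1)^2}\right)$ with typical literature values that are assumptions here: a ZnSe IRE ($n_1 \approx 2.4$), an aqueous layer ($n_2 \approx 1.33$), a 45° internal angle, and radiation at 2900 cm⁻¹ (the -CH stretching region).

```python
import math

# Hedged sketch of the depth of penetration,
# dp = lambda / (2*pi*n1*sqrt(sin(theta)^2 - (n2/n1)^2)).
# All numeric inputs are typical assumed values, not data from the text.
n1, n2 = 2.4, 1.33
theta = math.radians(45.0)
wavenumber = 2900.0              # cm^-1
wavelength = 1.0 / wavenumber    # cm

dp = wavelength / (2 * math.pi * n1 *
                   math.sqrt(math.sin(theta) ** 2 - (n2 / n1) ** 2))
print(round(dp * 1e4, 3))        # depth of penetration in micrometres
```

The result is roughly half a micrometre, consistent with ATR's surface sensitivity: only species within a fraction of a wavelength of the IRE surface are probed.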
Vibrational Spectroscopy at the Solid/Liquid Interface
ATR-FTIR has been used to monitor and characterize organic thin film systems such as well formed self-assembled monolayers (SAM). This can be achieved by forming a thin film of gold on the IRE surface and exposing the gold surface to a thiol solution for a given amount of time. The figure below shows an example of an organic thin film formed on the surface of gold that was deposited on an IRE.
Lipid vesicles have received some attention over the years due to their ability to spontaneously form lipid aggregate spherical structures. These spherical structures have a wall made of lipids that separates the intracellular aqueous medium from the extracellular aqueous medium. They have been implemented in current forms of drug delivery systems and biological contrast agents, as well as serving as model membranes. ATR-FTIR can be used to study these systems in a number of different ways. One such way is to monitor the adsorption rate of these lipid vesicles from a bulk solution to a solid interface. Below is an example of one such experiment where 100 nm lipid vesicles were used to study the adsorption rate onto a ZnSe single reflection IRE.
Below is the -C-H stretching region of DPPC, found within 2800 - 3000 cm-1. The interval consists of the overlap of four different bands attributed to different -CH2 and -CH3 vibrational modes found within the acyl chain of DPPC. The bands found at 2850 and 2920 cm-1 are assigned to the -CH2 symmetric and asymmetric vibrational modes, while the bands found at 2880 and 2960 cm-1 are assigned to the -CH3 symmetric and asymmetric vibrational modes. Since the absorbance is proportional to the relative concentration of adsorbed lipid, it has been shown that monitoring the absorbance under the -CH2 bands yields precise adsorption rates of the lipid vesicles at the liquid/solid interface. Caution must be taken when finding the absorbance of any absorption band, especially when dealing with overlapping absorption bands like the ones shown below. The proper way to analyze the data is to deconvolute the region of interest: identify how many absorption bands are present by performing the proper 3rd and 4th derivative analysis on the spectrum, then fit the data to the correct number of Gaussian/Lorentzian-like bands.
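The derivative step described above can be sketched numerically. The toy example below is not the authors' analysis code: it builds two overlapping Gaussian bands at the assumed -CH2 positions (2850 and 2920 cm⁻¹, width 12 cm⁻¹) and locates the band centers as the maxima of a numerical 4th derivative, which sharpens overlapped peaks before any fitting.

```python
import math

# Hedged toy sketch: locate overlapped band centers via a 4th derivative.
# Band positions and widths are illustrative assumptions.
def band(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

h = 0.5
xs = [2800.0 + h * i for i in range(401)]                 # 2800-3000 cm^-1 grid
ys = [band(x, 2850.0, 12.0) + band(x, 2920.0, 12.0) for x in xs]

# central-difference 4th derivative
d4 = [(ys[i - 2] - 4 * ys[i - 1] + 6 * ys[i] - 4 * ys[i + 1] + ys[i + 2]) / h ** 4
      for i in range(2, len(ys) - 2)]

# prominent local maxima of the 4th derivative mark the band centers
peak = max(d4)
centers = [xs[i + 2] for i in range(1, len(d4) - 1)
           if d4[i] > d4[i - 1] and d4[i] > d4[i + 1] and d4[i] > 0.5 * peak]
print(centers)
```

In practice the recovered centers would seed the Gaussian/Lorentzian fit mentioned above; with real, noisy spectra the derivative must also be smoothed.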
Following the deconvolution and integration of the correct absorption bands, one may create a time-dependent absorbance plot similar to the one below, in which the top and bottom curves represent the time-dependent absorbance of the asymmetric and symmetric -CH2 bands.
Fourier Transform Infrared (FTIR) spectroscopy utilizes an interferogram produced from a broad spectrum of light to take a full absorption spectrum in moments. This article will discuss the computational techniques that make this possible.
Introduction
FTIR spectrometers are the most widely used infrared spectrometers in scientific and industrial applications. Their current dominance stems from a number of relative advantages including low cost and high speed. This article will discuss the computational techniques that allow the large and convoluted data sets to be processed into absorption spectra. This is not a comprehensive overview of all the details that go into the design and operation of an FTIR spectrometer (mirror alignment, beam divergence, sampling frequency, and many more topics are not discussed), but covers those facets that are fundamental to understanding the spectrometer.
This section is designed to give a conceptual introduction to Fourier Transform Infrared (FTIR) spectrometers. The purpose of an FTIR spectrometer is to quickly measure the infrared (IR) absorption spectrum of a sample placed in the interferometer (the Michelson interferometer is described above). To get a feel for how this works, consider the simplified case of an FTIR spectrometer using only two wavelengths.
A Dichromatic FTIR Spectrometer
Imagine a light source that emits only two discrete frequencies of light $\bar{\nu}_1$ and $\bar{\nu}_2$ of equal intensity (Figure 1).
Now imagine we place an absorbing sample in our interferometer, and that it absorbs light at either $\bar{\nu}_1$ or $\bar{\nu}_2$, although we do not know which one yet. The result on the measured interferogram is given in blue in Figure 3 (with the reference interferogram from Figure 2 given in green).
These two interferograms are all we need from the instrument! Now comes the FT of the FTIR. Fourier transforms will be covered in more detail in the next section. For now just think of it as a black box function in which you plug in an interferogram and get out a spectrum. Figure 1, therefore, is the Fourier transform of the red interferogram in Figure 2. Performing a Fourier transform on the measured blue interferogram in Figure 3 gives the spectrum in Figure 4.
By comparing the spectrum in Figure 4 to Figure 1, we can immediately deduce that our absorbing sample absorbs light at frequency $\bar{\nu}_1$, and the FTIR spectrometer has done its job. Fundamentally that is all there is to it. The FTIR spectrometer takes light of various wavelengths, and measures a reference interferogram. You then place your absorbing sample into the interferometer and measure the interferogram again. Finally you Fourier transform both interferograms, and compare the spectra to determine what frequencies of light the sample is absorbing.
The next sections work on developing a more complete description of the measured spectra, and the instrument nuances that factor into your data manipulation. To do this, however, we will need to open up the black box we put the Fourier transform in, and get an idea of how it works.
Fourier Transforms
From the previous section we saw that Fourier Transforms (in the context of FTIR) take an interferogram and produce a spectrum. The last section also implicitly presented the idea that the interferogram measured by an FTIR interferometer is just the sum of the interferograms of all the constituent frequencies present in the source. The Fourier transform, then, must break down the interferogram to reveal these constituent frequencies telling us which are present and in what amount.
To make this as easy on ourselves as possible, the first step we can take is to shift a maximum value of the interferogram to the origin. This ensures that it will be inherently cosine in nature (cos(0) = 1 vs sin(0)=0). As we will see in the phase error section below, this also assumes a symmetric interferogram, which you will not generally have.
Now that our interferogram is symmetric about the origin, all we need to do (all the Fourier transform does) is take its product with a large variety of cosines to see which fit. The idea behind this is quite simple. If we luck out and hit our interferogram with a cosine that exactly matches one of our constituent frequencies, we will get a $\cos^2(x)$, which is positive for all values of x. If we add up our $\cos^2(x)$ over all values of x we will get a large positive value.
All other cosines not equal to a cosine present in the interferogram will be positive for some x and negative for the rest. Adding the positive and the negative together for all x will give zero. Let's look at the simple case of a $\cos(x)$ interferogram tried against three different cosines, ending with one cosine that exactly matches (Figure 5):
As you can see in 5b every cosine but the one that exactly matches the interferogram spends an equal amount of time above and below 0; such that if we were to add up the area under these curves, only the top (teal) one would be non-zero. We can also look at various test cosines against the interferogram from Figure 2, again ending in a cosine that matches one of the two frequencies present in that interferogram (Figure 6):
Again in Figure 6 we can see that only when we get close to a cosine present in the interferogram (teal is equal to $\bar{\nu}_2$) does the area become large and positive. With this in mind we are now able to write the general form of our cosine Fourier transform; namely, all we were doing above is taking the product of our interferogram, $I(z)$, with a test cosine, $\cos(2 \pi \bar{\nu} z)$, and determining the area:
$B(\bar{\nu}) = \int_{-\infty}^{\infty} I(z) \cos(2 \pi \bar{\nu} z) \,d z \label{3-1}$
Figures 5 and 6 show the product $I(z) \cos(2 \pi \bar{\nu} z)$ in a small window of delay, and for only four test function wavelengths ($\bar{\nu}$ in inverse centimeters, $cm^{-1}$). Equation $\ref{3-1}$, however, gives the total area for any test function wavelength that you ask for. To recover the spectra we need from the FTIR spectrometer, we must evaluate Equation $\ref{3-1}$ for a range of wavelengths that includes those of the source. As you can imagine this would be a very resource intensive process. In practice a Fast Fourier Transform (FFT) is used, a computing trick that avoids performing a large number of integrals over all space. For more on FFTs go here. Also, when asymmetries are present in the interferogram you will need to perform a complex Fourier transform in order to ultimately correct those asymmetries. This will be covered in the phase error section below. Aside from these details, this discussion gives a decent introduction to how Fourier transforms are used in FTIR.
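The "test cosine" picture of Equation $\ref{3-1}$ can be checked numerically. In this Python sketch the two source frequencies are arbitrary illustrative values: test cosines matching a constituent frequency give a large positive area, while a non-matching test cosine integrates to nearly zero.

```python
import math

# Hedged sketch: discrete version of the cosine Fourier transform area test.
# The two source frequencies nu1, nu2 are arbitrary (dimensionless units).
nu1, nu2 = 3.0, 5.0

def interferogram(z):
    # dichromatic interferogram: sum of the two constituent cosines
    return math.cos(2 * math.pi * nu1 * z) + math.cos(2 * math.pi * nu2 * z)

dz = 0.001
zs = [i * dz for i in range(-5000, 5001)]   # delay window z = -5 .. 5

def area(nu_test):
    # discrete analogue of integrating I(z)*cos(2*pi*nu_test*z) over z
    return sum(interferogram(z) * math.cos(2 * math.pi * nu_test * z)
               for z in zs) * dz

print(round(area(3.0), 1), round(area(5.0), 1), round(area(4.0), 1))
```

The matched test frequencies (3 and 5) return a large positive area, and the unmatched one (4) returns essentially zero, exactly as Figures 5 and 6 suggest.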
Reconstructing a Measured Interferogram: Major Contributing Factors
Although the concepts developed so far are fundamental and useful, real interferograms used in FTIR spectroscopy don't look anything like Figure 2. This section will take a few steps towards developing an interferogram similar to one you would actually see measured in an FTIR spectrometer, and explain their sources. In the process we will develop all the tools we need to successfully analyze and deconstruct real interferograms produced from FTIR spectrometers.
Polychromatic Light Sources
So far all we have considered are monochromatic and dichromatic FTIR spectrometers that have allowed us to use very simple interferograms. It might occur to you, however, that such instruments would not be terribly useful. We would need our samples to just happen to absorb at whatever frequency of light we selected for our interferometer. Real FTIR spectrometers don't use just two frequencies, but a whole continuum of light typically generated from a black body source (more on instrumentation here).
To get an idea of what this looks like for an interferogram let's see what happens when 5 and 50 frequencies of light interfere with equal intensities (Figure 7):
Figure 7b already has the well defined centerburst of an actual interferogram. We have, however, left out a very important feature of blackbody sources. A blackbody source emits radiation in a continuous envelope described by Planck's law, and this modifies our interferogram in a significant way (Figure 8).
At this point it is worth noting that the interferograms of each wavelength add in phase at zero delay (Figure 7a). This might immediately seem strange considering FTIR interferometers use incoherent light sources. Because we are making use of a Michelson Interferometer, however, zero delay means the path lengths to the movable and fixed mirrors are equal. The electric fields at every wavelength are therefore in phase with themselves at zero delay. Because the detector only measures Intensity (proportional to electric field squared) it will therefore read a maximum for zero delay, consistent with what is depicted in Figure 7a and 7b.
In Figure 8a you can clearly see the effect of the Planck distribution on the amplitudes of each individual frequency, and in 8b the result of a large number of these adjusted frequencies interfering. These interferograms are generated by the product
$\dfrac{\alpha \bar{\nu}^3}{\exp(\beta \bar{\nu})-1} \cos(2 \pi z \bar{\nu}) \nonumber$
shown over a small range of z about the origin. $\alpha$ and $\beta$ are constants given physically by the Planck distribution. To reveal the full interferogram intensity distribution, you must integrate over all possible frequencies $\bar{\nu}$:
$I(z) = \int_{-\infty}^{\infty}\dfrac{\alpha \bar{\nu}^3}{\exp(\beta \bar{\nu})-1} \cos(2 \pi z \bar{\nu})\,d \bar{\nu} \nonumber$
which is a Fourier transform of the general form:
$I(z) = \int_{-\infty}^{\infty}B(\bar{\nu}) \cos(2 \pi z \bar{\nu}) \,d \bar{\nu} \label{4.1-1}$
Equation $\ref{4.1-1}$ is known as the Fourier transform pair of Equation $\ref{3-1}$. Remember, in this section we are essentially working backwards to generate the interferogram you would measure from an FTIR spectrometer. The general problem is to be given $I(z)$ from the instrument and use it to produce $B(\bar{\nu})$.
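The Planck-weighted build-up described above can be sketched numerically. In this Python sketch $\alpha$ and $\beta$ are simply set to 1 for illustration (an assumption), and the integral is replaced by a finite sum of cosines; the centerburst emerges at zero delay because every cosine adds in phase there.

```python
import math

# Hedged sketch: a finite Planck-weighted sum of cosines as a stand-in for
# the continuous interferogram integral. alpha = beta = 1 are assumptions.
nus = [0.1 * k for k in range(1, 101)]   # skip nu = 0 to avoid 0/0 in the weight

def interferogram(z):
    return sum(nu ** 3 / (math.exp(nu) - 1.0) * math.cos(2 * math.pi * z * nu)
               for nu in nus)

# sample the interferogram around zero delay; the maximum sits at z = 0
values = [interferogram(0.01 * n) for n in range(-50, 51)]
print(round(interferogram(0.0), 2), round(max(values), 2))
```

This reproduces, in miniature, the behavior of Figures 7 and 8: individual amplitudes follow the Planck envelope, and the summed signal peaks sharply at zero delay.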
Instrument Line Shape
Equation $\ref{4.1-1}$ shows us that ideally we could move our movable mirror from $- \infty$ to $\infty$ to measure an intensity interferogram. Unfortunately our mirror only has a finite range of motion, and this puts a limit on the resolution obtainable with the spectrometer. The finite range of motion in the mirror coordinate, $z$, is equivalent to taking an entire intensity interferogram $I(z)$ from $- \infty$ to $\infty$ and truncating it so that it only retains its values in the range accessible by the mirror (call it $2 \Delta$).
Mathematically this can be accomplished by using a step function $D(z)$ that is one over the range of the mirror and zero otherwise.
$D(z) = 1$ for $-\Delta \leq z \leq \Delta$
$D(z) = 0$ for $|z| > \Delta$
This truncation function $D(z)$ is acting on the intensity interferogram $I(z)$, such that recovering the spectrum $B(\bar{\nu})$ is now a more complicated affair. Namely, Equation $\ref{3-1}$ now reads:
$B(\bar{\nu}) = \int_{-\infty}^{\infty} I(z) D(z) \cos(2 \pi \bar{\nu} z) \,d z\label{4.2-1}$
Fortunately, the Fourier transform of the product of two functions is the convolution of the Fourier transforms of the individual functions. To evaluate Equation $\ref{4.2-1}$, then, we need the Fourier transform of $D(z)$, which is done in most detailed discussions of Fourier transforms:
$FT[D(z)] = 2 \Delta \, \mathrm{sinc}(2 \pi \bar{\nu} \Delta) \nonumber$
Figure 9: (Left) The truncation function $D(z)$ is shown. (Right) $D(z)$'s Fourier transform, which will be convolved with the spectrum $B(\bar{\nu})$
The consequence of this truncation is evident in the final spectrum $B_{final}(\bar{\nu})$. Namely we must now convolve the ideal spectrum from Equation $\ref{3-1}$ with the sinc function to yield our final spectrum. Mathematically:
$B_{final}(\bar{\nu}) = \int_{-\infty}^{\infty} B(\bar{\nu}') \, 2 \Delta \, \mathrm{sinc}(2 \pi \Delta (\bar{\nu}-\bar{\nu}')) \,d \bar{\nu}' \label{4.2-2}$
We have now seen that having a movable mirror with finite travel introduces the convolution of a sinc function to our final spectrum. So what does this do to the resolution of our spectrometer? From Equation $\ref{4.2-2}$ you can see that a convolution is analogous to overlapping the two functions (the sinc and $B(\bar{\nu})$) at $\bar{\nu}=\bar{\nu}'$ and then sliding $B(\bar{\nu})$ back and forth taking the product at every point.
The absolute cleanest spectrum we can have is perfectly monochromatic light. In this case $B(\bar{\nu})$ is a delta function spiked at $\bar{\nu}_1$. Convolving this with $2 \Delta \, \mathrm{sinc}(2 \pi \bar{\nu} \Delta)$ gives the shifted sinc function pictured on the far right of Figure 10.
The far right of Figure 10 shows that even with perfectly monochromatic light, the fact that we have limited travel with the movable mirror results in a broadened peak in the final spectrum, thus putting an upper limit on the resolution (Figure 11). All is not lost, however. We can artificially manipulate this instrument line shape, $D(z)$, using a new function that removes the feet to the left and right of the sinc maximum (in Greek, "a podos" means "without feet").
Apodization
Previously we used the instrument line shape $D(z)$ as essentially a window into the spectrum, where the widest we can open the window is determined by the distance achievable with the movable mirror. Once we have that information, however, we are free to crop the window in any manner we see fit, thus artificially manufacturing an instrument line shape that we might find more favorable. As one example of this consider the triangular window $T(z)$:
$T(z) = 1 - \left|\dfrac{z}{\Delta} \right| \nonumber$
for $-\Delta \leq z \leq \Delta$
$T(z) = 0 \nonumber$
for $|z| > \Delta$
As it turns out the Fourier transform of $T(z)$ is:
$FT[T(z)] = \Delta \, \mathrm{sinc}^2 (\pi \bar{\nu} \Delta) \nonumber$
The benefit of choosing this instrument line shape over the unmodified $D(z)$ can be clearly seen in Figure 11.
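The benefit can also be seen numerically. This sketch compares first-sidelobe heights of a boxcar line shape of the form $2\Delta\,\mathrm{sinc}(2\pi\bar{\nu}\Delta)$ and a triangular line shape taken here as $\Delta\,\mathrm{sinc}^2(\pi\bar{\nu}\Delta)$, with $\mathrm{sinc}(x)=\sin x / x$; the choice $\Delta = 1$ and the scan ranges are arbitrary assumptions.

```python
import math

def sinc(x):
    # unnormalized sinc, sin(x)/x
    return 1.0 if x == 0.0 else math.sin(x) / x

delta = 1.0
nus = [0.001 * i for i in range(1, 3000)]

boxcar = [abs(2 * delta * sinc(2 * math.pi * nu * delta)) for nu in nus]
triangle = [delta * sinc(math.pi * nu * delta) ** 2 for nu in nus]

# first sidelobe height relative to the peak value at nu = 0
box_side = max(boxcar[600:1200]) / (2 * delta)   # past the boxcar's first zero (nu = 0.5)
tri_side = max(triangle[1200:2400]) / delta      # past the triangle's first zero (nu = 1.0)
print(round(box_side, 3), round(tri_side, 3))
```

The boxcar's first sidelobe is roughly 22% of the peak, while the apodized (triangular) line shape suppresses it to under 5%, at the cost of a main lobe twice as wide. That width penalty is the resolution trade-off apodization always carries.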
Instrument Resolution
Let's return for a moment to our dichromatic interferometer to get an idea of what the maximum possible resolution obtainable is. In the case of two frequencies of light, the recurrence of constructive interference depends on the difference between the two frequencies, namely:
$z_{constructive} =\dfrac{1}{\bar{\nu}_2 - \bar{\nu}_1} =\dfrac{1}{\Delta\bar{\nu}} \nonumber$
So what is the smallest separation between two frequencies we can detect? Looking at the above equation it's apparent that a small $\Delta\bar{\nu}$ requires a large z. So the maximum resolution depends on the amount of travel in the movable mirror, z. In fact, the very smallest unit of frequency on the frequency axis after Fourier transformation of $I(z)$ will be
$\Delta \bar{\nu}_{min} = \dfrac{1}{\Delta z_{max}} \nonumber$
This intuitive approach actually stands up to more rigorous derivations, and shows us that our maximum frequency resolution depends on total travel of the movable mirror. But how is this usually measured?
To continue, we are going to need to know a little something about the instrumentation. In order to measure z we can make use of a single frequency source of light (a laser) by sending it through our interferometer and watching its interference. Every time we see completely constructive interference at the wavelength of our laser, we can count that as a mark on our z axis. The separation between any two marks on the z axis is therefore:
$\Delta z = \lambda \nonumber$
Where $\lambda$ is the wavelength of the laser. The maximum travel of the movable mirror, $\Delta z_{max}$, can therefore be measured as
$\Delta z_{max} = N \lambda \nonumber$
where $N$ is the total number of points measured for the z axis.
We are now ready to rewrite our frequency and distance axis completely in terms of our laser wavelength. Every point along our z axis is just some integer multiple of the smallest measurable step along z, namely:
$z = n \lambda \label{4.2.2-1}$
where $n\: =\: 0,\: 1,\: 2,\: ... N-1$
and
\begin{align} \bar{\nu} &= k \Delta \bar{\nu}_{min} \\[4pt] &= \dfrac{k}{\Delta z_{max}} \label{4.2.2-2} \\[4pt] &= \dfrac{k}{N \lambda} \end{align}
where $k\: =\: 0,\: 1,\: 2,\: ... N-1$
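Equations $\ref{4.2.2-1}$ and $\ref{4.2.2-2}$ translate directly into code. This sketch builds the delay and wavenumber axes from a HeNe reference wavelength; the point count N is an arbitrary illustrative assumption.

```python
# Hedged sketch: construct the z and wavenumber axes from a reference laser.
lam = 6.33e-5   # HeNe wavelength in cm (approximately 632.8 nm)
N = 4096        # number of sampled points (illustrative assumption)

z = [n * lam for n in range(N)]          # z = n * lambda
dz_max = N * lam                         # maximum mirror travel
dnu_min = 1.0 / dz_max                   # smallest frequency step, in cm^-1
nu = [k * dnu_min for k in range(N)]     # nu = k / (N * lambda)

print(round(dnu_min, 3), round(nu[-1], 1))
```

With these assumed numbers the frequency step comes out near 3.9 cm⁻¹; doubling the mirror travel (doubling N) halves the step, which is the resolution argument of the previous paragraph in concrete form.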
Phase Errors
So far it has been assumed that the interferograms are a result of perfectly symmetric interference as in Figure 8. In real FTIR spectrometers, however, it is generally the case that each wavelength is pulled slightly out of phase with respect to its neighbors. These phase errors can be introduced in a number of ways, but are often the result of electronic filters. Mathematically this can be represented by simply adding a phase term into Equation $\ref{4.1-1}$.
$I(z) = \int_{-\infty}^{\infty} B(\bar{\nu}) \cos(2 \pi \bar{\nu} z-\phi_{\bar{\nu}}) \,d \bar{\nu} \label{4.3-1}$
The asymmetries introduced by these phase errors can be accounted for using a series of sine waves in a similar way to the Fourier Transform section above. The only difference is instead of using cosine waves we would now use sine waves to take account of the asymmetric elements. Such sine waves, however, do not represent anything physically useful in our spectrometer, so we need to get rid of them by shifting all of our cosine interferograms back to constructive interference at z = 0.
To do this we need a way to independently evaluate the symmetric and asymmetric components of our interferogram using cosines and sines respectively. As it turns out we can use a nicely compact notation to keep track of our two components independently. By use of Euler's formula
$\cos{\theta} - i \sin{\theta} = e^{-i \theta} \nonumber$
Equation $\ref{4.3-1}$ can be rewritten as
$I(z) = \int_{-\infty}^{\infty} B(\bar{\nu}) e^{-2 \pi i \bar{\nu}z} \,d \bar{\nu} \label{4.3-2}$
Equation $\ref{4.3-2}$ is known as a complex Fourier transform, its Fourier transform pair being
$B(\bar{\nu}) = \int_{-\infty}^{\infty} I(z) e^{-2 \pi i \bar{\nu}z} \,d z \label{4.3-3}$
Given an intensity spectrum $I(z)$, then, we can perform the complex Fourier transform according to Equation $\ref{4.3-3}$, which gives us a complex spectrum, $B'(\bar{\nu})$
$B'(\bar{\nu}) = Re(\bar{\nu}) +i \,Im(\bar{\nu}) \nonumber$
From here it is useful to draw out $B'(\bar{\nu})$ in the complex plane (Figure 13). By changing coordinates from $B'(\bar{\nu}) = Re(\bar{\nu}) +i \,Im(\bar{\nu})$ to $B'(\bar{\nu}) = r\cos(\phi_{\bar{\nu}})+i r\sin(\phi_{\bar{\nu}})$ we can apply Euler's formula and obtain
$B' (\bar{\nu}) = r e^{i \phi_{\bar{\nu}}} \label{4.3-4}$
Figure 13: Showing the vector $B'(\bar{\nu})$ in the complex plane along with its polar angle $\phi_{\bar{\nu}}$ and radius r.
r is also known as the magnitude spectrum, $| B(\bar{\nu}) |$, and is obtained by simply applying the Pythagorean theorem in Figure 13:
$r^2 \equiv | B(\bar{\nu}) |^2 = Re^2(\bar{\nu})+Im^2(\bar{\nu}) \nonumber$
And so by Equation $\ref{4.3-4}$, the real and imaginary spectra are simply related by our frequency dependent phase angle $\phi_{\bar{\nu}}$ according to
$B'(\bar{\nu}) = | B(\bar{\nu}) |e^{i \phi_{\bar{\nu}}} \label{4.3-5}$
Rearranging Equation $\ref{4.3-5}$ and keeping only real terms (our final spectrum is real) we can find a phase corrected equation for $B(\bar{\nu})$
$B(\bar{\nu}) = Re(\bar{\nu})\cos{\phi_{\bar{\nu}}} + Im(\bar{\nu})\sin{\phi_{\bar{\nu}}} \label{4.3-6}$
Almost there! Equation $\ref{4.3-6}$ gives us a phase corrected spectrum. All that is left to do is determine $\phi_{\bar{\nu}}$. Again referring to Figure 13, all we need is a little trigonometry
$\tan{\phi_{\bar{\nu}}} = \dfrac{Im(\bar{\nu})}{Re(\bar{\nu})} \nonumber$
or
$\phi_{\bar{\nu}} = \arctan{ \dfrac{Im(\bar{\nu})} {Re(\bar{\nu})} } \label{4.3-7}$
And so Equations $\ref{4.3-6}$ and $\ref{4.3-7}$ will phase correct our spectrum.
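The phase-correction equations can be checked on a single complex spectral point. In this Python sketch the values of Re and Im are arbitrary; the corrected result from Equation $\ref{4.3-6}$ equals the magnitude spectrum, as Equation $\ref{4.3-5}$ requires.

```python
import math

# Hedged sketch: phase-correct one complex point of B'(nu) = Re + i*Im.
# The sample values are arbitrary illustrative numbers.
Re, Im = 3.0, 4.0

# phase angle, using atan2 to cover all four quadrants
# (the same choice made in the MATLAB solution below)
phi = math.atan2(Im, Re)

# phase-corrected spectrum: Re*cos(phi) + Im*sin(phi)
B = Re * math.cos(phi) + Im * math.sin(phi)

print(round(B, 6))   # equals the magnitude sqrt(Re^2 + Im^2)
```

For Re = 3 and Im = 4 the corrected value is 5, the magnitude of the complex point: the phase rotation has folded all of the signal back into the real part.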
Deconstructing a Measured Interferogram to Recover the Absorption Spectrum
From the previous sections we now have the majority of the tools we need to understand how an interferogram is processed into an absorption spectrum. The Mertz method is one of the more popular techniques for doing this. To illustrate it, let's process a real spectrum using only the tools we have developed so far. The interferogram measured (Figure A) is of air that was enhanced by breathing into it (literally). Our processed absorption spectrum, therefore, should include things like $CO_2$ when compared to a normal air background.
Mertz Method
OK, let's process the spectrum in Figure A using the Mertz method.
Symmetrize
In order to phase correct our spectrum, we are going to need a small symmetric region about the centerburst, say 512 data points. We will later interpolate our extracted phase curve to full resolution. Figure B shows our small region about the centerburst.
Apodize
Next we modify our interferogram using an apodization function (see section 4.2.1). In this case we will use a triangular apodization function (Figure C).
Zero the Centerburst
Now we can rotate our spectrum to appear as the red curve in Figure 2, which helps the Fourier transform we are about to take be more meaningful (Figure D).
Fourier Transform
Notice that Figure B resembles the top of Figure 12, namely there are asymmetries (chirping) indicative of phase variations. We therefore need to take the complex Fourier transform according to Equation $\ref{4.3-3}$, from which we get real (Figure E) and imaginary (Figure F) parts. The frequency axis is given by Equation $\ref{4.2.2-2}$ using a HeNe laser wavelength for $\lambda$.
Extract the Phase Curves
Using Equation $\ref{4.3-7}$ we can now compute the phase curve by taking the arctangent of the ratio of Figure F to Figure E (Figure G). The phase curves, like the transforms in Figures E and F, are symmetric, so only half needs to be shown.
We are going to need the cosine (Figure H) and sine (Figure I) of the total phase curve in Figure G in order to phase correct our interferogram according to Equation $\ref{4.3-6}$.
To use these curves the final step is to interpolate from 512 to the full 30064 data points of the original data set.
Switch Back to Full Spectrum: Zero the Centerburst, Rotate, and Fourier Transform as Above
Following the same procedure as in Figures C, D, (E-F) for the full spectrum gives Figure J. Again only half of both the real and imaginary components is shown because they are symmetric about 8000 $cm^{-1}$.
Phase Correct
We can now phase correct. Multiplying the left of Figure J with Figure H yields the left of Figure K (below), and multiplying the right of Figure J with Figure I yields the right of Figure K (below).
Figure K: Phase corrected real (left) and imaginary (right) parts of the Fourier transform of the full interferogram. The left is given by $Re(\bar{\nu})\cos{\phi_{\bar{\nu}}}$ and the right is given by $Im(\bar{\nu})\sin{\phi_{\bar{\nu}}}$.
Adding these together according to Equation $\ref{4.3-6}$ gives the phase corrected spectrum in Figure L.
Ratio to the Reference Spectrum
All that is left to do is repeat everything for the background spectrum, and then ratio the result to Figure L. This produces the absorption spectrum in Figure M.
A few things have not been covered (zero filling) but as is apparent in Figure M, this works quite well to explain how the Mertz method, and FTIR spectrometers in general, work.
Example $1$
Reproduce the top of Figure 12 using matlab.
Solution
Possible solution:
x = [-5:.01:5];
v = [0:.1:5];
phase=(.01).*v.^2;
Ix = ((.001).^3./(exp(.001)-1))*cos(2*pi*x*(.001));
for i=2:(length(v))
Ix = [Ix + ((v(i)).^3./(exp(v(i))-1))*cos(2*pi*x*(v(i))-phase(i)*2*pi)];
end
plot(x,Ix)
Example $2$
Perform the Mertz method on the interferogram file "75_AIR.txt" with its background "75_BG.txt" (in the files tab below). Use the HeNe wavelength of 0.633 microns (632.8 nm), and the built-in MATLAB FFT for the Fourier transform.
Solution
Possible Solution
%Import the Data
data = importdata('75_BG.txt');
x = data(:,1);
Ix = -1*data(:,2);
%Extract 512 points about the centerburst
[maxval centerburst] = max(abs(Ix));
Ixshort = Ix(centerburst-255:centerburst+256);
xshort = x(centerburst-255:centerburst+256);
%Make Apodization function
apod = [1-abs(xshort - xshort(256))./((.5).*length(xshort))];
negapod = find(apod < 0);
apod(negapod) = [0];
%apodize the low res spectrum
Ixshortapod = Ixshort.*apod;
%Rotate it so the centerburst is at zero
Ixshortapodrot = [Ixshortapod(256:end); Ixshortapod(1:255)];
%FT
bprime = fft(Ixshortapodrot);
%Create frequency axis
freqfactor = 1/(length(xshort)*6.33*10^(-5));
v = [1:length(xshort)].*freqfactor;
v = v';
%Break down FT into Real and Imaginary parts
bprimereal = real(bprime);
bprimeimag = imag(bprime);
%determine thetav Note: atan() only uses two quadrants, we need all four
thetav = atan2(bprimeimag, bprimereal);
%interpolate thetav to full resolution
fullthetav = interp1(xshort,thetav,linspace(xshort(1), xshort(end),length(x)));
fullthetav = fullthetav';
%Get phase correction terms from thetav
fullcosterm = cos(fullthetav);
fullsinterm = sin(fullthetav);
%Now switch to full Interferogram
%Full apodization function
apod1 = [1-abs(x - x(centerburst))./((.5).*length(x(centerburst:end)))];
negapod1 = find(apod1 < 0);
apod1(negapod1) = [0];
Ixapod = Ix.*apod1;
%Rotate
Ixapodrot = [Ixapod(centerburst:end); Ixapod(1:(centerburst-1))];
%FT
bprimefull = fft(Ixapodrot);
bprimefullreal = real(bprimefull);
bprimefullimag = imag(bprimefull);
%full freq axis
fullfreqfactor = 1/(length(x)*6.33*10^(-5));
fullv = [1:length(x)].*fullfreqfactor;
fullv = fullv';
%Phase correct
fullbetadjreal = bprimefullreal.*fullcosterm;
fullbetadjimag = bprimefullimag.*fullsinterm;
fullbetadjBG = fullbetadjreal + fullbetadjimag;
%fullbetadjBG is the phase corrected background spectrum.
How an FTIR instrument works
Below you will find a detailed review of the physical components of a Fourier Transform Infrared (FTIR) spectrometer. This module focuses on the physical components that make up the instrument, not the mathematical aspects of analyzing the resulting data. For the mathematical treatment of FTIR data please see FTIR: Computational.
Introduction
The history of the FTIR is a twisted and somewhat confusing tale, involving the development of technology, math, and materials. The beginnings of the first commercial FTIR spectrometer have been attributed to the work of M.J. Block and his research team at the small company Digilab. Block's personal memoirs of the experience are both interesting and entertaining, involving highly classified information, money laundering, and fraud charges (follow the link if you wish to discover for yourself s-a-s.org/epstein/block/index.htm ). Otherwise let it be enough to say that once the FTIR spectrometer was developed, its impact on the scientific community was profound. Suddenly it was possible to acquire extremely accurate data in a much shorter amount of time than with traditional IR, and to analyze exceedingly dilute samples. The device itself is surprisingly simple, with only one moving part. It is no surprise that the instrument has been growing in popularity ever since its introduction, finding applications in chemistry, biology, materials science, process engineering, pharmaceutical science, and many other professions. FTIR instruments are relatively inexpensive, sturdy, stable, flexible, and fast. Through the years this instrument has steadily evolved, and new applications are continually being developed. Expanded computer power, the trend toward miniaturization, and more sophisticated imaging have all inspired important new innovations.
FTIR measurements are conducted in the time domain. This is accomplished by directing the radiation from a broadband IR source to a beam splitter, which divides the light into two optical paths. Mirrors in the paths reflect the light back to the beam splitter, where the two beams recombine, and this modulated beam passes through the sample and hits the detector. In a typical interferometer, one mirror remains fixed, and the other retreats from the beam splitter at a constant speed. As the mirror moves, the beams go in and out of phase with each other, which generates a repeating interference pattern—a plot of intensity versus optical path difference—called an interferogram. The interferogram can be converted into the frequency domain via a Fourier transform, which yields the familiar single beam spectrum. The resolution of this spectrum is determined by the distance that the moving mirror traveled. Analyses generally fall into three categories, determined by the wavelength of the radiation. Midrange IR covers the wavelengths 1400 nm–3000 nm, where strong absorptions from fundamental molecular vibrations are measured. Near-IR (NIR) ranges from 700 nm–1400 nm. Far IR ranges from 3000 nm–1 mm.
Sources of Infrared Radiation
1. Theory
Infrared radiation is relatively low energy light. All physical objects give off infrared radiation, the wavelength of which depends on the temperature of the object; this phenomenon is known as blackbody radiation. The ideal IR source would emit radiation across the entire IR spectrum. As this is very difficult to achieve, a good compromise is a source that emits continuous mid-infrared radiation, which can be achieved by most high temperature blackbodies. Blackbody radiation was studied in depth by Max Planck, and it is through his equations that the spectral energy density at a given wavenumber can be calculated for a blackbody source of a given temperature. Planck, who also discovered the properties of energy quanta, received the 1918 Nobel Prize in Physics in recognition of the services he rendered to the advancement of science. Now take a moment to examine the plot of energy density vs. wavelength below.
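Planck's equation mentioned above can be evaluated directly. The following Python sketch uses standard CODATA constants; the 1300 K (Globar-like) and 300 K (room-temperature) values are illustrative choices, not from this module:

```python
import math

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance B(lambda, T), in W sr^-1 m^-3."""
    a = 2.0 * h * c**2 / wavelength_m**5
    b = math.exp(h * c / (wavelength_m * kB * T)) - 1.0
    return a / b

# At 5 um (mid-IR), a 1300 K source far outshines a 300 K object:
ratio = planck_radiance(5e-6, 1300) / planck_radiance(5e-6, 300)
print(ratio)
```

This is why a resistively heated rod at roughly 1300 K makes a practical continuous mid-IR source even though it is far cooler than, say, a carbon arc.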
At first glance it would seem that the source temperature should be as high as possible, but this is rarely the case. For example, consider a typical incandescent light bulb. The tungsten filament glows at a temperature of 3000 K and emits massive amounts of IR. It is the bulb itself that prevents its use as an IR source: the glass envelope seals the tungsten filament in a vacuum, which is necessary to keep the tungsten from oxidizing at such high temperatures, but the glass absorbs IR and blocks its path to the sample. Any source we choose must be in direct contact with the atmosphere, and because of this there are drastic limits on the temperature at which we may operate an IR source.
There are several other limiting factors that require consideration when choosing an IR source. The material should be thermodynamically stable; otherwise it would quickly break down and need replacing, an expensive and undesirable outcome. There is also the possibility that the source may produce an excess of IR radiation, which would saturate the detector and possibly overload the analog-to-digital converter.
2. Silicon Carbide Rod (Globar)
The most ubiquitous IR source used in FTIR is a resistively heated silicon carbide rod (see image below). This device is commonly and somewhat simply referred to as a Globar.
An electric current is passed through the rod, which becomes very hot, producing large amounts of IR radiation.
A Globar can reach temperatures of 1300 K, and in the past required water cooling to keep from damaging the electrical components. Advances in ceramic metal alloys have led to the production of Globars that no longer require water cooling; however, these newer Globars are typically not operated at temperatures as high as 1300 K.
3. Nichrome and Kanthal Wire Coils
Nichrome and Kanthal wire coils were also once popular IR sources. They did not require water cooling, but they ran at lower temperatures than a Globar and possessed lower emissivity.
4. Nernst Glowers
Nernst Glowers are an IR source capable of higher temperatures than a Globar. They are fabricated from a mixture of refractory oxides. Despite being capable of higher temperatures, the Nernst Glower cannot produce IR radiation above 2000 cm-1. As long as the frequencies to be examined lie below 2000 cm-1, the Nernst Glower is an exceptional IR source, but if the entire mid-IR range is needed, using a Nernst Glower will result in low signal-to-noise ratios.
5. Carbon Arcs (an unsuitable IR source)
It should be noted that the carbon IR sources used in many spectrometers today, similar to the Globar discussed above, are different from the carbon arcs that you may be familiar with. A carbon arc occurs when an electrical discharge passes between two carbon electrodes. These arcs are incredibly bright, reaching temperatures as hot as 6000 K. IR sources capitalizing on the large IR output of these arcs have ultimately shown more drawbacks than advantages. Because the carbon electrodes are consumed in the arcing process, it would be necessary to continuously feed new rod forward to maintain the arc. The rods would also require an inert atmosphere to avoid combustion of the carbon. These limiting features and added complications make carbon arcs unfit as IR sources.
Michelson Interferometer
History
The creation of today's FTIR would not have been possible without the Michelson interferometer. This essential piece of optical equipment was invented by Albert Abraham Michelson, who received the Nobel Prize in 1907 for his accurate measurements of the wavelengths of light; his prize-winning experiments were made possible by his invention of the interferometer. Michelson was in fact the first American to receive the Nobel Prize, solidifying the U.S. as a world leader in science. Michelson did not invent the interferometer to perform infrared spectroscopy; in fact his experiments had nothing to do with any kind of spectroscopy. His goal was to find evidence for the luminiferous aether, the material once believed to permeate the universe and allow the propagation of light waves. Of course it is now known that no such aether exists and that light is capable of propagating in vacuum. For more information on Michelson's extraordinary achievement and his invention of the interferometer go to http://en.Wikipedia.org/wiki/Michels...ley_experiment.
Above is a diagram of the basic concepts and components of a Michelson interferometer.
Each portion of the above diagram will be discussed in turn and in further detail.
1. Beam Splitter
The beam splitter is made of a special material that transmits half of the radiation striking it and reflects the other half. Radiation from the source strikes the beam splitter and separates into two beams. One beam is transmitted through the beam splitter to the fixed mirror and the second is reflected off the beam splitter to the moving mirror. The fixed and moving mirrors reflect the radiation back to the beamsplitter. Again, half of this reflected radiation is transmitted and half is reflected at the beam splitter, resulting in one beam passing to the detector and the second back to the source.
2. Stationary Mirror
The stationary mirror in an FTIR interferometer is nothing more than a flat highly reflective surface.
3. Moving Mirror
The beauty of the FTIR spectrometer's design lies in its simplicity: there is only one moving part, the oscillating mirror. Air bearings are used in FTIR spectrometers because of the high speed at which the oscillating mirror must move. The air bearings eliminate the friction that would inevitably cause the moving parts of the mirror to break down, as was the case for mechanical bearings. The air bearing has nearly replaced the mechanical bearing in all modern FTIR spectrometers; the older mechanical bearings required expensive ruby ball bearings, as they were the only material strong enough to endure the physical demands of oscillating once every millisecond.
Detectors
Infrared detectors are classified into two categories: thermal and quantum. A thermal detector uses the energy of the infrared beam as heat, while a quantum detector uses the IR beam as light and provides greater sensitivity.
Thermal Detectors
A thermal detector operates by detecting changes in the temperature of an absorbing material. Its output may take the form of an electromotive force (thermocouples), a change in resistance of a conductor (bolometer) or semiconductor (thermistor bolometer), or the movement of a diaphragm caused by the expansion of a gas (pneumatic detector). There are major limitations to these forms of IR detector: their response time (several milliseconds) is much slower than the oscillation of the moving mirror in FTIR. The mirror moves with a frequency of approximately 1.25 kHz, so a detector employed in FTIR must have a response time of less than 1 ms. Response times below one millisecond are obtainable with cryogenically cooled thermal detectors, but these are commonly too expensive to be preferred over other forms of detector.
Pyroelectric Bolometer
There is one kind of thermal detector that is both inexpensive and fast enough to be appropriate, with the additional benefit of operating at room temperature: the pyroelectric bolometer. These detectors incorporate as their heat-sensing element ferroelectric materials that exhibit a large spontaneous electrical polarization at temperatures below their Curie point. If the temperature of the ferroelectric material changes, the degree of polarization also changes, producing an electric current. A pyroelectric bolometer is based on a pyroelectric crystal (usually LiTaO3 or PZT) covered by an absorbing layer (silver, or silver blackened with carbon).
Quantum Well Detector
Because of their higher sensitivity and faster response times, quantum well detectors are much more common in FTIR. The detection mechanism of the Quantum Well Infrared Photodetector (QWIP) involves photoexcitation of electrons between the ground and first excited states of a single or multi-quantum-well structure. The parameters are designed so that these photoexcited carriers can escape from the well and be collected as photocurrent. The quantum wells are realized by alternating thin layers of two different high-bandgap semiconductor materials, where the bandgap discontinuity creates potential wells associated with the conduction and valence bands. When IR photons strike these materials they induce a current that is then transformed into a digital signal via an analog-to-digital converter.
These detectors work more effectively (with increased sensitivity) at lower temperatures, in part because of the higher instrumental noise associated with a higher thermal background. Today a wide range of photodetecting diodes that do not require cooling is available. The finer details of the detector are numerous and dependent on the parameters of the equipment, and are therefore beyond the scope of this module.
Problems
Theoretical Problems
1. Why is it necessary for there to be a moving mirror in an interferometer?
2. What other sources of IR radiation can you imagine might work for FTIR? Explain your reasoning.
3. Explain the purpose of the reference laser in FTIR. Why use a laser instead of any other form of light?
4. To what region of the IR spectrum does light with a wavelength of: a) 900 nm b) 1500 nm c) 3000 nm belong?
Calculations
1. Determine the wavenumber for each of the wavelengths given in problem 4 of the theoretical questions.
2. Calculate the photon energy of laser light with a wavelength of 632.8 nm. Why might this be the chosen wavelength for an FTIR reference laser?
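One way to check answers to these calculation problems: the wavenumber is the reciprocal of the wavelength expressed in cm, and the photon energy is E = hc/λ. A short Python sketch using standard constants (633 nm approximates the HeNe reference line):

```python
# Wavenumber (cm^-1) and photon energy (eV) from wavelength.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per eV

def wavenumber_cm(wavelength_nm):
    # 1/lambda, with lambda converted from nm to cm (1 nm = 1e-7 cm)
    return 1.0 / (wavelength_nm * 1e-7)

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

for lam in (900, 1500, 3000):
    print(lam, "nm ->", round(wavenumber_cm(lam)), "cm^-1")

print("HeNe ~633 nm ->", photon_energy_eV(633), "eV")
```

The wavenumber convention (cm⁻¹) is standard in IR spectroscopy because it is directly proportional to photon energy.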
Atomic force microscopy utilizes a microscale probe to produce three dimensional images of surfaces at sub-nanometer scales. The atomic force microscope obtains images by measuring the attractive and repulsive forces acting on a microscale probe interacting with the surface of a sample. Ideally the interaction occurs at an atomically fine probe tip being attracted and repulsed by the atoms of the surface, giving atomically resolved surface images.
Introduction
The atomic force microscope (AFM) probe is mounted onto a flexible cantilever that is manipulated by a vertical piezoelectric into interacting with the sample. The piezoelectric expands and exerts a force on the cantilever proportional to the applied voltage. This force is balanced by the forces acting on the probe through its interaction with the surface and by the strain on the cantilever.
The flexible cantilever will bend proportionally to the force acting upon it, in accordance with Hooke's law. By measuring the reflection of a laser source off the cantilever it is possible to determine the degree of bending and, through a feedback loop, control the force exerted by the cantilever. By using the strain as a restoring force, a piezoelectric element can drive the probe as a mechanical oscillator with a calculable resonant frequency, allowing for tapping mode microscopy. The laser acts on a photodiode array to give a measurement of the deflection both horizontally and vertically. Using the deflection it is possible to calculate the force acting on the probe in both the horizontal and vertical directions. From the force applied by the vertical piezoelectric and the force acting on the probe it is possible to obtain a measure of the relative height of the probe.
As the probe encounters a feature, it rises with the feature, causing a deflection measured by the photodiode and a change in the force on the vertical piezoelectric. The potential may be adjusted, by feedback from the photodiode array, to minimize the deflection; knowing the expansion rate of the vertical piezoelectric then allows a direct computation of the height (this is the z-sensor). From the total deflection, and thus the strain on the cantilever, it is also possible to compute the relative height of the feature given a known spring constant of the cantilever.
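The Hooke's-law bookkeeping described above is simple enough to state in code. This is an illustrative sketch; the spring constant and deflection values are assumptions, not taken from any particular instrument:

```python
# Hooke's law: the cantilever's restoring force is proportional to its bend.
def force_from_deflection(k_N_per_m, deflection_m):
    return k_N_per_m * deflection_m

# The relative probe height combines the piezo extension (z-sensor)
# with the measured cantilever bend.
def probe_height(z_piezo_m, deflection_m):
    return z_piezo_m + deflection_m

k = 0.1       # N/m, a typical soft contact-mode cantilever (assumed)
dz = 2e-9     # m, a 2 nm deflection
print(force_from_deflection(k, dz))   # a sub-nanonewton force
```

With a calibrated spring constant, the same deflection signal that drives the feedback loop can be read directly as a force, which is what makes force-distance measurements possible on the same hardware.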
The probe is scanned across the surface, with either the probe or the sample being moved by piezoelectric elements. This allows for the measure of the interaction of the forces across the entire sample, allowing the surface to be rendered as a three dimensional image. The force exerted by the probe has the potential to alter the surface by etching or simply moving loosely bound surface features. As such, this microscopy technique can be potentially used to write (etch) as well as read, atomic scale surface features. The strain on the probe tip may cause deformation that leads to loss in resolution due to flattening or the generation of artifacts due to secondary tips.
The probe tip is idealized to be an atomically perfect spherical surface with a nanoscopic radius of curvature, leading to a single point of contact between the probe tip and the surface. Tips however may have multiple points of contact, leading to image artifacts such as doubled images or shadowing. Alternatively tips may be flattened or event indented, causing a lower resolution as smaller surface features are passed over. Significant error may arise from the expansion of the piezoelectric materials as they become heated. This problem is typically mitigated by attempting to maintain an isothermal environment. Drift may be measured and accounted for by repeat measurement of a known point and normalizing the data from that known height.
Modes of Operation
There are multiple methods of imaging a surface using an atomic force microscope; these imaging modes each utilize the probe and its interaction with the surface in a distinct way to obtain data.
Contact
If a constant potential is maintained for the vertical piezoelectric, the probe will maintain continuous contact with the surface. It is then possible to use deflection and z-sensor to yield accurate height information of surface features.
Friction
When the probe is in contact with the sample, the resistance acting on the probe's horizontal motion causes it to be strained horizontally. This strain is proportional to the resistance (friction), allowing a direct measurement of the friction between the sample and probe surfaces.
Adhesion
Also known as non-contact mode, this method utilizes the attractive forces acting on the probe. To measure this, the probe is pulled from the sample by the vertical piezoelectric with a set force, as a feature is encountered the attractive force acting on the probe increases causing it to deflect downward. The downward deflection is counteracted by the vertical piezoelectric in a similar manner to contact mode, reaching the same equilibrium of forces acting on the cantilever.
Tapping
By using a piezoelectric device it is possible to drive the cantilever as a harmonic oscillator with a resonance frequency determined by the known spring constant. The probe then oscillates with an amplitude proportional to the driving force, which is controlled, and a frequency set by the spring constant. When the probe tip contacts the surface the effective restoring force increases, increasing the frequency.
The total change in frequency is proportional to the feature's height, and the vertical piezoelectric can then be used to raise the cantilever and restore the original frequency, giving additional data from the z-sensor regarding the feature's height.
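The frequency behavior in tapping mode follows from the harmonic-oscillator relation f = (1/2π)√(k/m). A hedged Python sketch; the spring constant, effective mass, and added contact stiffness below are illustrative assumptions:

```python
import math

def resonant_frequency(k_N_per_m, m_eff_kg):
    """f = sqrt(k/m) / (2*pi) for a cantilever modeled as a
    simple harmonic oscillator."""
    return math.sqrt(k_N_per_m / m_eff_kg) / (2.0 * math.pi)

# Illustrative tapping-mode cantilever: k ~ 40 N/m, m_eff ~ 3e-12 kg
f_free = resonant_frequency(40.0, 3e-12)

# Tip-surface contact stiffens the system (k -> k + k_contact),
# raising the resonant frequency, as described above.
f_contact = resonant_frequency(40.0 + 5.0, 3e-12)
print(f_free, f_contact)
```

The feedback loop works on exactly this shift: it retracts the vertical piezoelectric until the free resonance is restored, and the retraction distance reports the feature height.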
Additionally, as the probe contacts the surface it acts as a driving force, deforming the surface, which is restored by the internal stress (that acts on the probe to repel it). This phenomenon is related to the Young's modulus of the sample and will cause a phase shift between the oscillation of the piezoelectric driver of the probe and the probe itself.
Other uses of AFM
Use of specialized probes allows a further expansion of atomic force microscopy's role in nanoscience. By alterations in probe design it is possible to: directly obtain data about the surface interaction with other functional groups, alter the surface by etching or causing chemical change, or deposit substrates onto a surface; all at the nanoscopic scale.
Chemical Force Microscopy
Functionalizing a probe can be accomplished by binding a protein or functional group to the probe tip surface. The probe then takes a direct measure of the interaction between the surface and the functional group. This technique is particularly useful for biological applications, e.g. the affinity of a protein for the binding site of a membrane.
Etching
The strain exerted by the tip on the surface has the potential to manipulate or alter the surface features, allowing for the mechanical etching of the system. By using specialized thermal tips, it becomes possible to heat the surface. Either technique can be used to precisely carve into or otherwise alter a surface at the scales necessary for many nanotechnologic advances.
Depositing
Using a special tip similar to a fountain pen head, it is possible to deposit units of a substrate onto a surface. Precise deposition allows for building very precise surface structures, e.g. protein binding sites, onto an atomically flat surface.
Data recording
AFM has been demonstrated as a potential means of data storage by the IBM corporation. By using a heated tip it is possible to alter a polymer surface through a reversible polymerization reaction. The indentation created may be read in contact or tapping mode, allowing written data to be read. Data may be erased by using the thermal tip to cause polymerization on the surface, sealing the indentation.
Miscellaneous Microscopy
The word microscopy comes from the Greek words for small and to view. On April 13, 1625, Giovanni Faber coined the term microscope. A microscope is an instrument that enables us to view small objects that are otherwise invisible to our naked eye. One way that microscopes allow us to see smaller objects is through the process of magnification, i.e. enlarging the image of the object. When a microscope enlarges an image of a 1 mm object to 10 mm, this is a 10 x magnification.
Introduction
Lens: The lens is the part of a microscope that bends a beam of light and focuses this on the object or sample.
Spatial Resolution
The resolution of a microscope is the smallest distance between two objects that results in two images that are distinguishable from each other. For example, the resolution of our eyes ranges from 0.1 to 0.2 mm. This means that our eyes can distinguish between two objects that are separated by 0.1 to 0.2 mm.
Light Microscopy
Early Light Microscopes
• There is evidence of people using different kinds of materials (beads, crystals, water droplets) as lenses and using light as medium to see smaller objects. However, it is not very clear who invented the first microscope and when this happened.
• In 1595, Hans Jansen or his son Zacharias of Holland invented the first compound microscope. A compound microscope is a microscope with two or more lenses.
• A person by the name of Robert Hooke (1635-1703) also liked to make and use compound microscopes. His microscopes enabled him to see small objects such as eyes of insects and point of needles.
• Around 1668, Antony van Leeuwenhoek (1632-1723), a Dutch draper, started making simple microscopes (microscopes with single lenses). He made over 500 single lens microscopes and some of these could magnify up to 300 times.
Polarizing Microscope
Light has both particle and wave properties. A beam of light can be polarized by aligning its vibrations with each other. The polarizing microscope uses such polarized light to form its images, and it can determine optical properties of materials that transmit light, whether crystalline or noncrystalline.
The optical features of transparent material were recognized when William Henry Fox Talbot added two Nicol prisms (prisms that can polarize light) to a microscope. However, it was Henry Clifton Sorby (1826-1908) who used polarized light microscopy to study thinned sections of transparent rocks. He showed that through their optical properties, these thinned sections of minerals could be analyzed.
The polarizing microscope can be divided into three major component sets:
1. Stand, which holds the body tube and the stage
2. Optical system, which consists of the source of light, usually a lamp
3. System for the production of plane polarized light, which consists of a polarizer and an analyzer; these determine the resolving power, or resolution
The quality of magnification depends on the objective lens: the smaller the diameter of the outermost lens, the higher the magnification.
Reflected Light Microscopy
In 1740, Dr. Johann N. Lieberkuhn demonstrated an instrument for illuminating opaque materials that had a cup-shaped mirror encircling the objective lens of a microscope. This mirror is called a reflector; it has a concave reflecting surface and a lens in its center. It evenly illuminates the specimen when the specimen is held up to the light and the light rays are reflected from the mirror onto the specimen.
Henry Clifton Sorby used a small reflector and attached this over the objective lens of his microscope. When he used this to study steel, he was able to see residues and distinguish these from the hard components of the steel. From then on, several scientists that study minerals also used reflected light microscopes and this technology improved throughout time.
Near-Field Scanning Light Microscope
Professor Michael Isaacson of Cornell University invented this type of microscope. It also uses light, but no lenses. In order to focus the light on a sample, Isaacson passed the light through a very tiny hole. The hole and the sample are so close together that the light beam does not spread out. This type of microscope enabled Isaacson's team to resolve features down to 40 nm when they used yellow-green light. In this type of microscope the resolution is not limited by the wavelength of the light but by the size of the aperture, which is very small.
Electron Microscopy
Because they only achieve resolutions in the micrometer range using visible light, light microscopes cannot be used to see in the nanometer range. In order to see in the nanometer range, we would need something with a shorter wavelength than visible light. The physicist Louis de Broglie derived an equation showing that the shorter the wavelength of a wave, the higher its energy. From wave-particle duality, we know that matter, like light, can have both wave and particle properties. This means that we can also use matter, such as electrons, instead of light. Electrons can have much shorter wavelengths than light, and thus higher energy and better resolution.
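The de Broglie relation λ = h/p makes the electron-microscope advantage quantitative. A Python sketch, non-relativistic, so it is only a rough guide at high accelerating voltages (the 10 kV value is an illustrative choice):

```python
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C

def electron_wavelength_nm(V_accel):
    """Non-relativistic de Broglie wavelength of an electron
    accelerated through V_accel volts: lambda = h / sqrt(2 m e V)."""
    p = math.sqrt(2.0 * m_e * e * V_accel)
    return h / p * 1e9

# At 10 kV the wavelength is about 0.01 nm, vastly shorter than the
# ~400-700 nm of visible light:
print(electron_wavelength_nm(10e3))
```

Even though lens aberrations keep real electron microscopes far from this wavelength limit, the margin over visible light is so large that nanometer resolution is routine.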
Electron microscopes use electrons to focus on a sample. In 1926-1927, Busch demonstrated that an appropriately shaped magnetic field could be used as a lens. This discovery made it possible to use magnetic fields to focus the electron beam for electron microscopes.
Transmission Electron Microscope (TEM)
After Busch’s discovery and development of electron microscopes, companies in different parts of the world developed and produced a prototype of an electron microscope called Transmission Electron Microscopes (TEM). In TEM, the beam of electrons goes through the sample and their interactions are seen on the side of the sample where the beam exits. Then, the image is gathered on a screen. TEMs consist of three major parts:
1. Electron source (electron gun)
2. System of image production
3. System of image recording
TEM has a typical resolution of approximately 2 nm. However, the sample has to be thin enough to transmit electrons so it cannot be used to look at living cells.
Scanning Electron Microscope (SEM)
In 1942, Zworykin, Hillier, and R.L. Snyder developed another type of electron microscope, the Scanning Electron Microscope (SEM), arguably the most widely used electron beam instrument. In SEM, the electron beam excites the sample and its radiation is detected and photographed. SEM is a mapping device: a beam of electrons scanning across the surface of the sample creates the overall image. SEM also consists of major parts:
1. Electron source (electron gun)
2. System of lenses
3. Collector of electrons
4. System of image production
SEM’s resolution is about 20 nm and its magnification is about 200,000x. SEM also cannot be used to study living cells, since the sample for this process must be very dry.
Scanning Probe Microscopy (SPM)
Scanning probe microscopes are also capable of magnifying or creating images of samples in the nanometer range. Some of them can even give details up to the atomic level.
Examples of Scanning Probe Microscopes
Scanning Tunneling Microscopy (STM)
Brian Josephson shared the 1973 Nobel Prize in Physics for his explanation of tunneling. This phenomenon eventually led to the development of Scanning Tunneling Microscopes by Heinrich Rohrer and Gerd Binnig around 1979. Rohrer and Binnig received the Nobel Prize in Physics in 1986. The STM uses an electron conductor needle, composed of either platinum-rhodium or tungsten, as a probe to scan across the surface of a solid that conducts electricity as well. The tip of the needle is usually very fine; it may even be a single atom that is 0.2 nm wide. Electrons tunnel across the space between the tip of the needle and the specimen surface when the tip and the surface are very close to each other. The tunneling current is very sensitive to the distance of the tip from the surface. As a result, the needle moves up and down depending on the surface of the solid—a piezoelectric cylinder monitors this movement. The three-dimensional image of the surface is then projected on a computer screen.
The STM has a resolution of about 0.1 nm. However, the fact that the needle-tip and the sample must be electrical conductors limits the amount of materials that can be studied using this technology.
Atomic Force Microscope (AFM)
In 1986, Binnig, Gerber, and Calvin Quate invented the first derivative of the STM—the Atomic Force Microscope. The AFM is another type of scanning microscope that scans the surface of the sample. It is different from the STM because it does not measure the current between the tip of the needle and the sample. The AFM has a stylus with a sharp tip that is attached to the end of a cantilever. As the stylus scans the sample, the force of the surface pushes or pulls it. The cantilever deflects as a result, and a laser beam is used to measure this deflection. This deflection is then turned into a three-dimensional topographic image by a computer.
With AFM, a much higher resolution is attained with less sample damage. The AFM can be used on non-conducting samples as well as on liquid samples because there is no current applied on the sample. Thus the AFM can be used to study biological molecules such as cells and proteins.
This module provides an introduction to Scanning Probe Microscopy (SPM). SPM is a family of microscopy techniques where a sharp probe (2-10 nm) is scanned across a surface and probe-sample interactions are monitored. SPM is an extremely useful tool that is utilized in numerous research settings ranging from chemistry and materials to biological sciences. In addition to imaging surfaces with nanometer resolution, SPM can also be used to determine a variety of properties including: surface roughness, friction, surface forces, binding energies, and local elasticity. This module is aimed at presenting the basic theory and applications of SPM. It is aimed towards undergraduates and anyone who wants an introduction into SPM. There are two primary forms of SPM: Scanning Tunneling Microscopy (STM) and Atomic Force Microscopy (AFM). The basic theory of both of these techniques is presented here along with an introduction into some additional SPM characterization methods.
This work is partially supported through NSF grant DMR-0526686. The authors would also like to acknowledge the participants at the ASDL Curriculum Development Workshop held at the University of California - Riverside, July 10-14, 2006.
Scanning Probe Microscopy
Goals
The basic theory and applications of scanning probe microscopy (SPM) will be presented. Emphasis will be placed on SPM characterization methods including:
• scanning tunneling microscopy
• atomic force microscopy
• lateral force microscopy
• chemical force microscopy
• magnetic force microscopy
• phase imaging
Objectives
Upon completion of this module you will understand the basic principles, operation, and applications of SPM.
02 History
Overview
Scanning Tunneling Microscope (STM)
Developed in 1982 by Binnig, Rohrer, Gerber, and Weibel at IBM in Zurich, Switzerland
• Binnig and Rohrer won the Nobel Prize in Physics for this invention in 1986
Atomic Force Microscope (AFM)
Developed in 1986 by Binnig, Quate, and Gerber as a collaboration between IBM and Stanford University.
During the 20th century the world of atomic and subatomic particles opened new avenues. In order to study and manipulate material on the atomic scale, new instrumentation had to be developed. Physicist Richard Feynman said in his now famous lecture in 1959: “if you want to make atomic-level manipulations, first you must be able to see what’s going on.”1 Until the 1980s researchers lacked any method for studying surfaces on the atomic scale. It was known that the arrangement of atoms on the surface differed from the bulk, but investigators had no way to determine how they were different. The scanning tunneling microscope (STM) was developed in the early 1980s by Binnig, Rohrer, and co-workers.2
How does the STM work?
The STM provides a 3D profile of the surface on a nanoscale, by tunneling electrons between a sharp conductive probe (etched Tungsten wire) and a conductive surface. The flow of electrons is very sensitive to the probe-sample distance (1-2 nm). As the probe moves across surface features the probe position is adjusted to keep the current flow constant. From this a topographic image of the surface can be obtained on an atomic scale.
Note: A more detailed explanation is found in SPM Basic Theory
Interesting STM Facts
Binnig and Rohrer received the Nobel Prize in Physics (1986) for their work on the STM. They shared this award with German scientist Ernst Ruska, designer of the first electron microscope.
The STM that Binnig and Rohrer had built was actually based upon the field ion microscope invented by Erwin Wilhelm Müller.3
A precursor instrument, the topografiner, was invented by Russell Young and colleagues between 1965 and 1971 at the National Bureau of Standards (NBS).4
This instrument was the fundamental tool in the development of nanotechnology. It opened the door for the ability to control, see, measure, and manipulate matter on the atomic scale.
Drawbacks: Although the STM was considered a fundamental advancement for scientific research it has limited applications, as it only works for conducting or semi-conducting samples (needed for tunneling of electrons). In 1986, Binnig, Quate, and Gerber extended the field of application to non-conducting (biological, insulators etc.) by developing an atomic force microscope (AFM).5
How does the AFM work?
The AFM provides a 3D profile of the surface on a nanoscale, by measuring forces between a sharp probe (<10 nm) and surface at very short distance (0.2-10 nm probe-sample separation). The probe is supported on a flexible cantilever.
Note: A more detailed explanation is found in AFM Basic Theory
The STM and AFM may be applied to samples in very different environments: these microscopes work under vacuum, in air, and in liquids (with specific modifications).
HOW DOES THE STM WORK?
The STM doesn’t work the way a conventional microscope does, using optics to magnify a sample. Instead a sharp (1-10 nm) probe that is electrically conductive is scanned just above the surface of an electrically conductive sample. The principle of STM is based on tunneling of electrons between this conductive sharp probe and sample.
What is Tunneling?
Tunneling is a phenomenon that describes how electrons flow (or tunnel) across two objects of differing electric potentials when they are brought into close proximity to each other. When a voltage is applied between probe and surface electrons will flow across the gap (probe-sample distance) generating a measurable current.
The tunneling phenomena can be explained by quantum mechanics. Tunneling originates from the wavelike properties of electrons. When two conductors are close enough there is an overlapping of the electron wavefunctions. Electrons can then diffuse across the barrier between the probe (tip-terminating ideally in a single atom) and the sample when a small voltage is applied. The resulting diffusion of electrons is called tunneling. A more detailed quantum mechanics explanation can be found.1
An important characteristic of tunneling is that the amplitude of the current exhibits an exponential decay with the distance, d. One way to describe this relationship is by the equation:
$\mathrm{I \sim Ve^{-cd}}\nonumber$
• I = Tunneling current
• V = voltage between probe and sample
• c = constant
• d = probe-sample separation distance
Key factor for STM: Very small changes in the probe-sample separation induce large changes in the tunneling current! (i.e. at a separation of a few Å the current rapidly decreases)
This dependence of the tunneling current on the probe-sample distance allows for precise control of the probe-sample separation, resulting in high vertical resolution (<1 Å). Furthermore, tunneling is carried out only by the outermost single atom of the probe. This allows for high lateral resolution (<1 Å).
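The practical consequence of I ~ Ve^(-cd) can be made concrete with a quick sketch. The decay constant c = 2 Å⁻¹ used below is an assumed, typical-order value (the real value depends on the work functions of the tip and sample):

```python
# Sensitivity of the tunneling current I ~ V*exp(-c*d) to the probe-sample
# gap d.  The decay constant c is an assumed illustrative value.
import math

C = 2.0  # decay constant, 1/angstrom (assumption)

def tunneling_current(voltage, gap_angstrom, c=C):
    """Relative tunneling current (arbitrary units) for a given bias and gap."""
    return voltage * math.exp(-c * gap_angstrom)

i_4 = tunneling_current(1.0, 4.0)
i_5 = tunneling_current(1.0, 5.0)
print(f"current drops by {i_4 / i_5:.1f}x when the gap grows from 4 to 5 angstroms")
```

A single-angstrom change in separation alters the current by roughly an order of magnitude, which is exactly why the feedback loop can hold the gap so precisely.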
How are the sharp STM probes made?
Commercial probes are available but often users make their own probes. A common method is to electrochemically etch tungsten, W, wire in NaOH to create a sharp probe. A problem with W probes is that they oxidize over time. Platinum iridium (Pt-Ir) is preferred for use in air because platinum does not easily oxidize. The tiny fraction of Iridium in the alloy makes it much harder. The Pt-Ir tips are usually shaped by cutting Pt-Ir wire with a wire cutter.
It should be noted that a tip does not necessarily have to be one perfect point.
STM Probe
How does the probe move across the surface?
A simple analogy to describe SPM is to think of a stylus of a turntable scanning across a record, Figure 4. However unlike the stylus in a turntable, the probe in SPM does not make direct contact with the surface.
In STM a voltage is applied between the metallic probe and the sample (typically 0-3 V). When the probe is close to the surface (2-4 Å) the voltage will result in a current, due to tunneling between the probe and sample. When the probe is far away from the surface, the current is zero. The tunneling current produced is low (pA-nA) but can be monitored using amplifiers. A 3D scanner with an electronic feedback loop is used to raster the probe across the sample to obtain a topographical image and monitor the tunneling current.
Piezoelectric 3D Scanner
The probe is attached to a 3-D piezoelectric scanner. By adjusting the voltage applied to the scanner the position of the probe can be controlled. This is due to the unique properties of piezoelectric materials that are incorporated into the scanner.
Piezoelectric materials have a permanent dipole moment across the unit cell (example: PbZrTiO3 (PZT)). If the dipoles are oriented, the material changes length in an applied electric field. Each scanner responds differently to applied voltage because of the differences in the material properties and dimensions of each piezoelectric element. Sensitivity is a measure of this response, a ratio of how far the piezo extends or contracts per applied volt. A 10^-4 to 10^-7 % length change per volt allows <1 Å positioning.
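As a rough illustration of what such sensitivities mean for positioning, consider the sketch below; the element length and the sensitivity are assumed values chosen from within the quoted range:

```python
# How piezo "sensitivity" (fractional length change per volt) translates into
# probe positioning.  The element length and sensitivity are illustrative
# assumptions inside the 1e-4 .. 1e-7 %/V range quoted in the text.
ELEMENT_LENGTH_M = 0.01          # 1 cm piezo element (assumption)
SENSITIVITY_PER_V = 1e-5 / 100   # 1e-5 % length change per volt -> fraction/V

displacement_per_volt = ELEMENT_LENGTH_M * SENSITIVITY_PER_V   # meters per volt
volts_per_angstrom = 1e-10 / displacement_per_volt

print(f"displacement: {displacement_per_volt * 1e9:.3f} nm/V")
print(f"a 1 angstrom step needs {volts_per_angstrom:.2f} V")
```

With these assumed numbers the scanner moves about a nanometer per volt, so sub-angstrom positioning only requires controlling the drive voltage to a fraction of a volt, which is routine for modern electronics.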
Most SPM instruments use a piezoelectric scan tube technology which combines independent piezos to control directions in the x, y, and z. AC voltages applied to the different electrodes produce a scanning (raster) motion in x and y. This motion is controlled by a computer.
Notes:
1. Rastering involves rendering the surface image, pixel by pixel, by sweeping in a vertical and horizontal motion, similar to drawing lines in the dirt with a rake.
2. The set up described here can also be reversed and the sample can be rastered with piezoelectric scanners underneath the SPM probe.
There are some factors to consider with piezoelectric scanners:
Hysteresis: piezo scanners are more sensitive at the end of travel. Therefore opposite scans will behave differently and display some hysteresis.
Creep: Drift of piezo displacement may occur with large changes in x, y offsets
Aging: Piezoelectric materials' sensitivity to voltage decreases over time, so scanners need to be calibrated on a regular basis
Bow: This is due to the scanner swinging in an arc motion to measure x,y displacement. This is often compensated with the z piezo or by using flattening algorithms.
Imaging Methods
There are two methods of imaging in STM:
1) Constant Current
A constant tunneling current is maintained during scanning (typically 1 nA). This is done by vertically (z) moving the probe at each (x,y) data point until a “setpoint” current is reached. The vertical position of the probe at each (x,y) data point is stored by the computer to form the topographic image of the sample surface. This method is most common in STM.
2) Constant Height
In this approach the probe-sample distance is fixed. A variation in tunneling current forms the image. This approach allows for faster imaging, but only works for flat samples.
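Constant-current imaging can be sketched as a toy feedback loop: at each pixel the probe height is nudged until the model current matches the setpoint, and the recorded heights trace the topography offset by a constant gap. The decay constant, setpoint, gain, and surface profile below are made-up illustrative numbers, not a real controller design:

```python
# Toy model of constant-current STM imaging.  At each pixel the feedback loop
# adjusts the probe height z until the model tunneling current exp(-c*gap)
# reaches the setpoint; the recorded z values reproduce the topography
# shifted up by a constant working gap.
import math

C = 2.0          # tunneling decay constant, 1/angstrom (assumed)
SETPOINT = 0.1   # target current, arbitrary units (assumed)

def current(gap_angstrom):
    return math.exp(-C * gap_angstrom)

def scan(surface_heights, gain=0.4, steps=200):
    """Return the probe height z at each pixel after the feedback loop settles."""
    z = surface_heights[0] + 5.0           # start well above the first pixel
    recorded = []
    for h in surface_heights:
        for _ in range(steps):
            error = current(z - h) - SETPOINT
            z += gain * error              # too much current -> retract probe
        recorded.append(z)
    return recorded

surface = [0.0, 0.5, 1.2, 1.2, 0.3]        # heights in angstroms (made up)
trace = scan(surface)
offsets = [z - h for z, h in zip(trace, surface)]
print(offsets)   # all close to ln(1/SETPOINT)/C, the constant working gap
```

Real instruments use calibrated piezo response and proper control electronics; the point of the sketch is only that holding the current constant makes the recorded z track the surface.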
Online Interactive Tools
To view an interactive STM model and see how the STM probe moves across a surface you can go to the following website: http://www.iap.tuwien.ac.at/www/_media/surface/stm_gallery/stm_animated.gif
To see how you could possibly build your own STM go here: http://www.e-basteln.de/index_m.htm
WHAT ARE THE LIMITATIONS OF STM?
Although the STM itself does not need vacuum to operate (it works in air as well as under liquids), ultrahigh vacuum (UHV) is required to avoid contamination of the samples from the surrounding medium.
• Complex and expensive instrumentation - especially (UHV) version
• Subject to noise (electrical, vibration)
• Must fabricate probes - dull probes or multiple tips at the end of probe can create serious artifacts
• Only works for conductive samples: metals, semiconductors
• samples can be “altered” to be conductive by coating with Au, but this coating can mask/hide certain features or degrade imaging resolution
HOW DOES THE AFM WORK?
AFM provides a 3D profile of the surface on a nanoscale, by measuring forces between a sharp probe (<10 nm) and surface at very short distance (0.2-10 nm probe-sample separation). The probe is supported on a flexible cantilever. The AFM tip “gently” touches the surface and records the small force between the probe and the surface.
How are Forces Measured?
The probe is placed on the end of a cantilever (which one can think of as a spring). The amount of force between the probe and sample is dependent on the spring constant (stiffness) of the cantilever and the distance between the probe and the sample surface. This force can be described using Hooke’s Law:
$\mathrm{F=-k·x}\nonumber$
F = Force
k = spring constant
x = cantilever deflection
If the spring constant of the cantilever (typically ~0.1-1 N/m) is less than that of the surface, the cantilever bends and the deflection is monitored.
This typically results in forces ranging from nN (10^-9 N) to µN (10^-6 N) in the open air.
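The Hooke's-law arithmetic is worth making explicit; the spring constant and deflection below are assumed values within the ranges quoted above:

```python
# Force on the cantilever from Hooke's law, |F| = k*x.  The spring constant
# and deflection are illustrative assumptions within the quoted ranges.
def cantilever_force_newtons(spring_constant, deflection_m):
    """Magnitude of the restoring force for a cantilever deflection x (meters)."""
    return spring_constant * deflection_m

k = 0.5      # N/m, within the typical 0.1-1 N/m range (assumed)
x = 10e-9    # 10 nm deflection (assumed)

force = cantilever_force_newtons(k, x)
print(f"F = {force:.1e} N")   # a few nN, consistent with the range above
```

A nanometer-scale deflection of a soft cantilever thus corresponds to nanonewton-scale forces, squarely in the nN-to-µN window stated in the text.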
What are probes made of?
Probes are typically made from Si3N4, or Si. Different cantilever lengths, materials, and shapes allow for varied spring constants and resonant frequencies. A description of the variety of different probes can be found at various vendor sites.2 Probes may be coated with other materials for additional SPM applications such as chemical force microscopy (CFM) and magnetic force microscopy (MFM).
Instrumentation
The motion of the probe across the surface is controlled similarly to the STM using a feedback loop and piezoelectric scanners. (See STM basic theory) The primary difference in instrumentation design is how the forces between the probe and sample surface are monitored. The deflection of the probe is typically measured by a “beam bounce” method. A semiconductor diode laser is bounced off the back of the cantilever onto a position-sensitive photodiode detector. This detector measures the bending of the cantilever while the tip is scanned over the sample. The measured cantilever deflections are used to generate a map of the surface topography.
For a visual depiction of the “beam bounce” method of detection in AFM you can refer to the following web resource which utilizes Legos ®, magnetics, and a laser pointer to demonstrate this concept.
Imaging Methods
What types of forces are measured?
The dominant interactions at short probe-sample distances in the AFM are Van der Waals (VdW) interactions. However long-range interactions (i.e. capillary, electrostatic, magnetic) are significant further away from the surface. These are important in other SPM methods of analysis.
During contact with the sample, the probe predominately experiences repulsive Van der Waals forces (contact mode). This leads to the tip deflection described previously. As the tip moves further away from the surface attractive Van der Waals forces are dominant (non-contact mode).
Modes of Operation
There are 3 primary imaging modes in AFM:
1. Contact mode: < 0.5 nm probe-surface separation
2. Intermittent (tapping) mode: 0.5-2 nm probe-surface separation
3. Non-contact mode: 0.1-10 nm probe-surface separation
Primary Modes of Imaging:
1. Contact Mode AFM: (repulsive VdW) When the spring constant of the cantilever is less than that of the surface, the cantilever bends. The force on the tip is repulsive. By maintaining a constant cantilever deflection (using the feedback loops) the force between the probe and the sample remains constant and an image of the surface is obtained.
Advantages: fast scanning, good for rough samples, used in friction analysis
Disadvantages: at times the forces can damage/deform soft samples (however imaging in liquids often resolves this issue)
2. Intermittent Mode (Tapping): The imaging is similar to contact. However, in this mode the cantilever is oscillated at its resonant frequency, Figure 4. The probe lightly “taps” on the sample surface during scanning, contacting the surface at the bottom of its swing. By maintaining a constant oscillation amplitude a constant tip-sample interaction is maintained and an image of the surface is obtained.
Oscillation Amplitude: 20-100 nm
Advantages: allows high resolution of samples that are easily damaged and/or loosely held to a surface; Good for biological samples
Disadvantages: more challenging to image in liquids, slower scan speeds needed
3. Non-contact Mode: (attractive VdW) The probe does not contact the sample surface, but oscillates above the adsorbed fluid layer on the surface during scanning. (Note: all samples unless in a controlled UHV or environmental chamber have some liquid adsorbed on the surface). Using a feedback loop to monitor changes in the amplitude due to attractive VdW forces the surface topography can be measured.
Advantages: VERY low force exerted on the sample (10^-12 N), extended probe lifetime
Disadvantages: generally lower resolution; contaminant layer on surface can interfere with oscillation; usually need ultra high vacuum (UHV) to have best imaging
What are Force Curves?
Force curves measure the amount of force felt by the cantilever as the probe tip is brought close to - and even indented into - a sample surface and then pulled away. In a force curve analysis the probe is repeatedly brought towards the surface and then retracted, Figure 5. Force curve analyses can be used to determine chemical and mechanical properties such as adhesion, elasticity, hardness and rupture bond lengths.
The slope of the deflection (C) provides information on the hardness of a sample. The adhesion (D) provides information on the interaction between the probe and sample surface as the probe is trying to break free. Direct measurements of the interactions between molecules and molecular assemblies can be achieved by functionalizing probes with molecules of interest (see Chemical Force Microscopy).
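One way to turn the contact-region slope into a stiffness estimate is a simple two-springs-in-series model (cantilever plus sample). This is a common first approximation, not the only analysis, and the numbers below are assumed:

```python
# Estimating sample stiffness from the contact-region slope of a force curve
# with a two-springs-in-series model (cantilever k_c, sample k_s).  If the
# cantilever deflects d for every unit of piezo displacement z, the slope
# s = d/z satisfies k_c*d = k_s*(z - d), so k_s = k_c * s / (1 - s).
def sample_stiffness(k_cantilever, slope):
    """Effective sample stiffness from the measured deflection/displacement slope."""
    return k_cantilever * slope / (1.0 - slope)

k_c = 1.0   # cantilever spring constant, N/m (assumed)
print(sample_stiffness(k_c, 0.9))   # steep slope -> stiff (hard) sample
print(sample_stiffness(k_c, 0.3))   # shallow slope -> compliant (soft) sample
```

A slope near 1 means the cantilever does nearly all the bending (a hard sample), while a shallow slope means the sample itself is deforming, which is why slope (C) reports on hardness.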
Interactive Tools:
An interactive force curve analysis can be found here: https://www.ntmdt-si.com/resources/spm-principles/afm-spectroscopies/force-distance-curves
WHAT ARE THE LIMITATIONS OF AFM?
The AFM can be used to study a wide variety of samples (i.e. plastic, metals, glasses, semiconductors, and biological samples such as the walls of cells and bacteria). Unlike STM or scanning electron microscopy it does not require a conductive sample. However there are limitations in achieving atomic resolution. The physical probe used in AFM imaging is not ideally sharp. As a consequence, an AFM image does not reflect the true sample topography, but rather represents the interaction of the probe with the sample surface. This is called tip convolution, Figure 6.
Commercially available probes are becoming more widely available that have very high aspect ratios. These are made with materials such as carbon nanotubes or tungsten spikes.3 However these probes are still very expensive to use for every day image analysis.
Online Resources
Another useful source on the principles of SPM:
http://ip.physics.leidenuniv.nl/index.php/component/content/article/15-cat-info/46-stmprinciples
Another useful source on how AFM works
http://stm2.nrl.navy.mil/how-afm/how-afm.html#General%20concept
• Lateral Force Microscopy
• Chemical Force Microscopy
• Magnetic Force Microscopy
• Phase Imaging
04 Additional SPM Methods
Lateral Force Microscopy (LFM) is conducted when imaging in the contact mode. During scanning in contact mode the cantilever not only bends vertically (normal to the surface) as a result of repulsive Van der Waals interactions, but also undergoes torsional (lateral) deformation. LFM measures the torsional bending (or twisting) of the cantilever, which depends on the frictional force acting on the tip. As a result, this method is also known as friction force microscopy (FFM).
LFM is sensitive to chemical composition or structure of the surface. This imaging mode offers nanometer-scale resolution with sensitivity to variations in surface composition, molecular organization, mechanical properties, and acid-base characteristics.1-4
Note:
For LFM imaging, the direction of scanning should be perpendicular to the long axis of the cantilever. Furthermore, the roughness of the surface makes interpretation of LFM mapping difficult, as height topography in addition to friction will cause lateral twisting of the cantilever. Therefore, LFM analysis is typically completed on smooth surfaces.
Online Images:
Excellent source for images using phase, CFM and friction contrast. NIST Building and Fire Research Laboratory Image Gallery http://www.bfrl.nist.gov/nanoscience/
02 Chemical Force Microscopy
Chemical force microscopy (CFM) is a technique, which combines the force sensitivity of the AFM with chemical discrimination. This is achieved by modifying probes with covalently linked molecules that terminate in well-defined functional groups or biological molecules. By using a suitable tip modification, chemically specific probing of the surface based on a defined tip-surface interaction can be achieved.1, 2 For example, CFM experiments have been used to probe fundamental adhesion and friction forces at the solid-liquid interface and biological interactions such as biotin and streptavidin.
CFM Applications
• Mapping of surfaces with chemical contrast
• Specific imaging of biological surfaces
• Imaging of hydrophilic/hydrophobic contrasts
• Direct determination of intermolecular forces
• Determination of adhesion forces on local scale
• Induction of chemical reactions on local scale
• pKa-value determination
Online Images:
Interesting application of CFM to evaluate surface chemistry of skeletal tissue http://www.mnp.leeds.ac.uk/dasmith/CFM.html
03 Magnetic Force Microscopy
Magnetic force microscopy (MFM) is a mode that maps the spatial distribution of magnetic materials on a surface by measuring the magnetic interaction between a sample and a tip. MFM uses a conducting cantilever equipped with a tip that has a magnetic coating, such as Co-Cr. The interactions between the probe and surface can be detected via the deflection of the cantilever (contact mode). More commonly the magnetic interaction is monitored through the oscillation of the probe (tapping mode). Changes in the magnetic field shift the resonant frequency of the cantilever. This shift can be monitored as shown in Figure 1. A more detailed description of the various ways this frequency shift can be monitored can be found in the references.1-3
A key aspect of this measurement is that a “background” is taken prior to the MFM measurement to ensure the MFM signal is due to magnetic domains and not topography. First the topography of the sample is measured; then the probe is “lifted” a set distance from the sample and the response of the tip to the magnetic domains is measured (taking into account the cantilever response associated with topography).
04 Phase Imaging
Phase Imaging is a powerful extension of tapping mode AFM that provides nanometer-scale information about surface structure.1 Phase imaging detects variations in composition, adhesion, friction, viscoelasticity and other properties by mapping the phase of the cantilever oscillation during tapping mode, Figure 1. Some applications include:
• identification of contaminants
• mapping of different components in composite materials
• differentiating regions of high and low surface adhesion or hardness
An excellent interactive demonstration of phase imaging can be found here.
Online Images:
Excellent source for images using phase, CFM and friction contrast. NIST Building and Fire Research Laboratory Image Gallery http://www.bfrl.nist.gov/nanoscience/BFRL_AFM.html
05 Applications
Application Notes:
At VEECO detailed information on different SPM applications can be found.
Image Galleries:
Below are links to some image galleries which demonstrate the nanoscale imaging capability and numerous applications of SPM.
Thumbnail: Lead(II) iodide precipitates when solutions of potassium iodide and lead(II) nitrate are combined. (CC BY-SA 3.0; PRHaney).
Characteristic Reactions of Select Metal Ions
• Most common oxidation states: +3, +5
• M.P. 630º
• B.P. 1380º
• Density 6.69 g/cm3
• Characteristics: Antimony is brittle and silvery. Not very active, but reacts with oxygen, sulfur and chlorine at high temperatures.
• Characteristic reactions of $\ce{Sb^{3+}}$: (Sb(III) is the more stable oxidation state.)
Chloride Ion:
No reaction observable, but antimony will be present as $\ce{SbCl4^{-}}$.
Aqueous Ammonia:
Sb(III) reacts with aqueous ammonia to precipitate white $\ce{Sb(OH)3}$.
$\ce{Sb^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Sb(OH)3(s) + 3NH4^{+}(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide also precipitates $\ce{Sb(OH)3}$, which is amphoteric and dissolves in an excess of hydroxide and in acids.
$\ce{Sb^{3+}(aq) + 3OH^{-}(aq) <=> Sb(OH)3(s)} \nonumber$
$\ce{Sb(OH)3(s) + OH^{-}(aq) <=> Sb(OH)4^{-}(aq)} \nonumber$
$\ce{Sb(OH)3(s) + 3H^{+}(aq) <=> Sb^{3+}(aq) + 3H2O(l) } \nonumber$
Hydrogen Sulfide
Under moderately acidic conditions, $\ce{H2S}$ precipitates red $\ce{Sb2S3}$.
$\ce{2SbCl4^{-}(aq) + 3H2S(aq) <=> Sb2S3(s) + 6H^{+}(aq) + 8Cl^{-}(aq)} \nonumber$
This sulfide is soluble in solutions of hot $\ce{NaOH}$ which contain excess sulfide ion and in hot, concentrated (12 M) $\ce{HCl}$.
$\ce{Sb2S3(s) + 3S^{2-}(aq) <=> 2SbS3^{3-}(aq)} \nonumber$
$\ce{Sb2S3(s) + 6H^{+}(aq) + 8Cl^{-}(aq) <=> 2SbCl4^{-}(aq) + 3H2S(aq)} \nonumber$
Water
Solutions of antimony(III) chloride in $\ce{HCl}$ react when added to excess water to form the basic, white, insoluble salt $\ce{SbOCl}$.
$\ce{SbCl4^{-}(aq) + H2O(l) <=> SbOCl(s) + 2H^{+}(aq) + 3Cl^{-}(aq)} \nonumber$
Reducing Agents
In the presence of $\ce{HCl}$, either aluminum or iron will reduce $\ce{Sb^{3+}}$ to $\ce{Sb}$ metal, which will be deposited as black particles.
$\ce{SbCl4^{-} (aq) + Al(s) <=> Sb(s) + Al^{3+}(aq) + 4Cl^{-}(aq)} \nonumber$
No Reaction
$\ce{SO4^{2-}}$
Characteristic Reactions of Aluminum Ions (Al)
• Most common oxidation state: +3
• M.P. 648º
• B.P. 1800º
• Density 2.70 g/cm3
• Characteristics: Silvery, rather soft. Very active, but protected by an oxide coating.
• Characteristic reactions of $\ce{Al^{3+}}$:
Aqueous Ammonia:
Aluminum ion reacts with aqueous ammonia to produce a white gelatinous precipitate of Al(OH)3:
$\ce{Al^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Al(OH)3(s) + 3NH4^{+}(aq)} \nonumber$
Sodium Hydroxide
A strong base, such as $\ce{NaOH}$, precipitates $\ce{Al(OH)3}$, which is amphoteric and dissolves in an excess of hydroxide or in acids.
$\ce{Al^{3+}(aq) + 3OH^-(aq) <=> Al(OH)3(s)} \nonumber$
$\ce{Al(OH)3(s) + OH-(aq) <=> Al(OH)4-(aq)} \nonumber$
$\ce{Al(OH)3(s) + 3H+(aq) <=> Al3+(aq) + 3H2O(l)} \nonumber$
Aluminon
The dye aluminon is adsorbed by the gelatinous $\ce{Al(OH)3}$ precipitate to form a red "lake" and a colorless solution. Although this reaction is not suitable for separation of aluminum ion, it can be used as a confirmatory test for $\ce{Al^{3+}}$ after precipitation of $\ce{Al(OH)3}$ with aqueous ammonia.
No Reaction
$\ce{Cl^-}$, $\ce{SO_4^{2-}}$
Characteristic Reactions of Ammonium Ion (NH)
General Description and Properties
Ammonium ion is formed by the reaction between acids and aqueous ammonia:
$\ce{NH3(aq) + H^{+}(aq) <=> NH4^{+}(aq)} \nonumber$
The ammonium ion behaves chemically like the ions of the alkali metals, particularly potassium ion, which is almost the same size. All ammonium salts are white and soluble.
Characteristic reactions of NH₄⁺
Sodium Hydroxide
Addition of concentrated hydroxide ion solutions evolves NH3 gas:
$\ce{NH4^{+}(aq) + OH^{-}(aq) <=> NH3(aq) + H2O(l)} \nonumber$
The ammonia gas turns moistened red litmus paper blue. It can also be detected by its characteristic odor. This reaction serves as a confirmatory test for NH4+.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$, $\ce{NH3(aq)}$
Characteristic Reactions of Arsenic Ions (As)
• Most common oxidation states: +3, +5
• Characteristics: Arsenic is a gray, very brittle substance; sublimes at 615º. Combines readily with sulfur and oxygen at high temperatures.
• Characteristic reactions of $\ce{As^{3+}}$:
Hydrogen Sulfide
In slightly acid solution, yellow $\ce{As2S3}$ forms on addition of $\ce{H_2S}$:
$\ce{2As3+(aq) + 3H2S(aq) <=> As2S3(s) + 6H+(aq)} \nonumber$
Hydrogen sulfide causes no precipitation from neutral or alkaline solutions. The precipitate is soluble in concentrated $\ce{HNO3}$ or in ammoniacal $\ce{H2O2}$:
$\ce{As2S3(s) + 8H^{+}(aq) + 2NO3^{-}(aq) <=> 2As^{3+}(aq) + 3S(s) + 2NO(g) + 4H2O(l)} \nonumber$
$\ce{As2S3(s) + 14H2O2(aq) + 12NH3(aq) <=> 2AsO4^{3-}(aq) + 3SO4^{2-}(aq) + 8H2O(l) + 12NH4^{+}(aq)} \nonumber$
The precipitate is insoluble in dilute, nonoxidizing acids such as HCl.
Silver Ion
Silver ion will precipitate yellow silver arsenite from neutral or slightly basic solution:
$\ce{3Ag^{+}(aq) + AsO3^{3-}(aq) <=> Ag3AsO3(s)} \nonumber$
It is insoluble in water, but soluble in aqueous ammonia and in acids.
Oxidizing Agents
Oxidizing agents readily oxidize arsenites (+3) to arsenates (+5) in alkaline or neutral solutions:
$\ce{2Cu(OH)2(s) + AsO3^{3-}(aq) <=> Cu2O(s) (red) + AsO4^{3-}(aq) + 2H2O(l) } \nonumber$
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$, $\ce{NH3(aq)}$, $\ce{OH^{-}}$
• Barium, $\ce{Ba^{2+}}$
• Most common oxidation state: +2
• M.P. 725º
• B.P. 1640º
• Density 3.51 g/cm3
• Characteristics: Barium is a silvery metal. Extremely active, reacts quickly with oxygen in air, and with most non-metals.
Sulfate Ion
Addition of a sulfate source, such as sulfuric acid, produces a white, finely divided precipitate of barium sulfate:
$\ce{Ba^{2+}(aq) + HSO4^{-}(aq) <=> BaSO4(s) + H^{+}(aq)} \nonumber$
$\ce{Ba^{2+}(aq) + SO4^{2-}(aq) <=> BaSO4(s)} \nonumber$
$\ce{BaSO4}$ is extremely insoluble in water, alkalies, or acids, but is slightly soluble in hot, concentrated sulfuric acid.
Ammonium Carbonate
A soluble carbonate such as ammonium carbonate reacts with $\ce{Ba^{2+}}$ to precipitate white barium carbonate:
$\ce{Ba^{2+}(aq) + CO3^{2-}(aq) <=> BaCO3(s)} \nonumber$
Aqueous ammonia should also be added to ensure complete precipitation. The aqueous ammonia assures that the concentration of carbonate ion will be high enough by preventing the hydrolysis of carbonate ion to form hydrogen carbonate ion:
$\ce{NH3(aq) + H2O(l) <=> NH4^{+}(aq) + OH^{-}(aq)} \nonumber$
$\ce{CO3^{2-}(aq) + H2O(l) <=> HCO3^{-}(aq) + OH^{-}(aq)} \nonumber$
Barium carbonate is soluble in acid, including dilute acetic acid, in strong bases, and in aqueous ammonia.
Potassium Chromate
Soluble chromates react with barium ion to form a finely divided yellow precipitate of barium chromate:
$\ce{Ba^{2+}(aq) + CrO4^{2-}(aq) <=> BaCrO4(s)} \nonumber$
Barium chromate is soluble in mineral acids, but only slightly soluble in acetic acid. In strong acids, an orange solution of barium dichromate is formed:
$\ce{2BaCrO4(s) + 2H^{+}(aq) <=> 2Ba^{2+}(aq) + Cr2O7^{2-}(aq) + H2O(l)} \nonumber$
Barium chromate is insoluble in bases.
Sodium Oxalate
Soluble oxalates react with barium ion to produce white barium oxalate. This precipitate is soluble in strong acids, and in hot dilute acetic acid.
$\ce{Ba^{2+}(aq) + C2O4^{2-}(aq) + H2O(l) <=> BaC2O4 \cdot H2O(s)} \nonumber$
Flame Test
Solutions of barium salts give a yellow-green color to a Bunsen burner flame.
No Reaction
$\ce{Cl^{-}}$, $\ce{NH3(aq)}$ in dilute solutions (< 0.2 M), $\ce{NaOH}$ in dilute solutions (< 0.2 M)
Characteristic Reactions of Bismuth (Bi)
• Most common oxidation states: +3, +5
• M.P. 271º
• B.P. 1560º
• Density 9.75 g/cm3
• Characteristics: Bismuth is hard and brittle, with a reddish cast. Rather inactive, but will dissolve in nitric acid or hot sulfuric acid.
Characteristic reactions of Bi³⁺
The +3 oxidation state is the more stable one.
Aqueous Ammonia
Aqueous ammonia reacts with bismuth(III) ion to precipitate white bismuth hydroxide:
$\ce{Bi^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Bi(OH)3(s) + 3NH4^{+}(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide reacts with bismuth(III) ion to produce a precipitate of Bi(OH)3.
$\ce{Bi^{3+}(aq) + 3OH^{-}(aq) <=> Bi(OH)3(s)} \nonumber$
Bi(OH)3 does not dissolve in excess ammonia or sodium hydroxide, but does dissolve in acids:
$\ce{Bi(OH)3(s) + 3H^{+}(aq) <=> Bi^{3+}(aq) + 3H2O(l)} \nonumber$
Water
Compounds of $\ce{Bi^{3+}}$ hydrolyze readily in dilute solutions, especially when chloride ion is present, to form a white precipitate of $\ce{BiOCl}$:
$\ce{Bi^{3+}(aq) + Cl^{-}(aq) + H2O(l) <=> BiOCl(s) + 2H^{+}(aq)} \nonumber$
An acid should be added to aqueous solutions of bismuth(III) salts to prevent this precipitation.
Stannite Ion
Stannite ion reduces bismuth hydroxide to small black particles of metallic bismuth:
$\ce{2Bi(OH)3(s) + 3Sn(OH)4^{2-}(aq) <=> 2Bi(s) + 3Sn(OH)6^{2-}(aq)} \nonumber$
The solution of stannite ion must be prepared just prior to use, by treating a solution of tin(II) chloride with excess sodium hydroxide:
$\ce{Sn^{2+}(aq) + 2OH^{-}(aq) <=> Sn(OH)2(s) (white)} \nonumber$
$\ce{Sn(OH)2(s) + 2OH^{-}(aq) <=> Sn(OH)4^{2-}(aq) } \nonumber$
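The 2:3 stoichiometry of the bismuth reduction above can be checked by electron bookkeeping in basic solution:

```latex
% Reduction: each Bi(III) gains three electrons
\ce{Bi(OH)3(s) + 3e^- -> Bi(s) + 3OH^-(aq)}
% Oxidation: each Sn(II) loses two electrons
\ce{Sn(OH)4^{2-}(aq) + 2OH^-(aq) -> Sn(OH)6^{2-}(aq) + 2e^-}
```

Multiplying the first half-reaction by 2 and the second by 3 transfers six electrons and cancels the hydroxide, reproducing the overall equation.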
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
• Most common oxidation state: +2
• M.P. 321º
• B.P. 767º
• Density 8.65 g/cm3
• Characteristics: Cadmium is a silvery, crystalline metal, resembling zinc. Moderately active. $\ce{Cd^{2+}}$ is colorless in solution and forms complex ions readily.
Characteristic reactions of Cd²⁺
Aqueous Ammonia
Aqueous ammonia reacts with cadmium ion to precipitate white cadmium hydroxide, which dissolves in excess ammonia:
$\ce{Cd^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Cd(OH)2(s) + 2NH4^{+}(aq)} \nonumber$
$\ce{Cd(OH)2(s) + 4NH3(aq) <=> [Cd(NH3)4]^{2+}(aq) + 2OH^{-}(aq)} \nonumber$
Addition of 6 M $\ce{NaOH}$ to a solution of $\ce{Cd(NH3)4^{2+}}$ precipitates a white basic salt of unknown formula. This salt is not soluble in ammonia.
Sodium Hydroxide
Sodium hydroxide produces a precipitate of $\ce{Cd(OH)2}$, but the precipitate does not dissolve in excess hydroxide:
$\ce{Cd^{2+}(aq) + 2OH^{-}(aq) <=> Cd(OH)2(s)} \nonumber$
Hydrogen Sulfide
Hydrogen sulfide reacts with cadmium ion to precipitate yellow-orange cadmium sulfide from basic, neutral, or weakly acidic solutions:
$\ce{Cd^{2+}(aq) + H2S(aq) <=> CdS(s) + 2H^{+}(aq)} \nonumber$
$\ce{Cd^{2+}(aq) + HS^{-}(aq) <=> CdS(s) + H^{+}(aq)} \nonumber$
$\ce{Cd^{2+}(aq) + S2^{-}(aq) <=> CdS(s)} \nonumber$
Cadmium sulfide is soluble in hot dilute nitric acid:
$\ce{3CdS(s) + 8H^{+}(aq) + 2NO3^{-}(aq) <=> 3Cd^{2+}(aq) + 2NO(g) + 4H2O(l) + 3S(s)} \nonumber$
Cadmium sulfide is also soluble in 3 M HCl and in hot, dilute $\ce{H2SO4}$.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
Characteristic Reactions of Calcium Ions (Ca)
• Most common oxidation state: +2
• M.P. 845º
• B.P. 1420º
• Density 1.55 g/cm3
• Characteristics: Calcium is a rather soft, very active metal. Very similar to barium in its chemical properties.
Characteristic reactions of Ca²⁺
Sulfate Ion
Soluble sulfates, such as sulfuric acid, do not precipitate $\ce{Ca^{2+}}$ as calcium sulfate, unless the calcium ion is present in very high concentrations.
Sodium Hydroxide
Calcium hydroxide can be precipitated by addition of sodium hydroxide if $\ce{Ca^{2+}}$ is present in moderate concentration (>~0.02 M).
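The ~0.02 M threshold is consistent with a simple solubility-product estimate. The value $K_{sp}[\ce{Ca(OH)2}] \approx 5\times10^{-6}$ used here is an assumed, commonly tabulated figure (tables vary):

```latex
% Precipitation requires Q > Ksp:
[\ce{Ca^{2+}}][\ce{OH^-}]^2 > K_{sp} \approx 5\times10^{-6}
% so the minimum calcium concentration depends on the hydroxide concentration:
[\ce{Ca^{2+}}]_{min} = \frac{K_{sp}}{[\ce{OH^-}]^2}
  \approx \frac{5\times10^{-6}}{(1.6\times10^{-2})^2} \approx 0.02\ \text{M}
```

With hydroxide ion in the $10^{-2}$ M range, only moderately concentrated calcium solutions exceed $K_{sp}$ and give a precipitate.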
Ammonium Carbonate
This forms a precipitate similar to that formed with $\ce{Ba^{2+}}$.
Sodium Oxalate
The behavior is similar to that of $\ce{Ba^{2+}}$, but the precipitate is much less soluble in water and is insoluble in acetic acid. $\ce{CaC2O4 \cdot H2O}$ is soluble in mineral acids.
Flame Test
Solutions of calcium salts give a yellow-red color to a Bunsen burner flame, sometimes with a sparkly appearance.
No Reaction
$\ce{Cl^{-}}$, $\ce{NH3(aq)}$
Characteristic Reactions of Chromium Ions (Cr)
• Most common oxidation state: +3; +2 and +6 also exist. The +3 oxidation state is the most stable.
• M.P. 1857º
• B.P. 2672º
• Density 8.94 g/cm3
• Characteristics: Chromium is a silvery, rather brittle metal. Similar to aluminum, but exhibits several oxidation states.
Aqueous Ammonia
Ammonia reacts with chromium(III) ion to precipitate gray-green chromium(III) hydroxide:
$\ce{Cr^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Cr(OH)3(s) + 3NH4^{+}(aq)} \nonumber$
$\ce{Cr(OH)3}$ dissolves only to a slight extent in excess ammonia. Boiling the solution causes the chromium(III) hydroxide to reprecipitate.
Sodium Hydroxide
Strong bases such as $\ce{NaOH}$ also precipitate $\ce{Cr(OH)3}$, but the precipitate dissolves in excess hydroxide.
$\ce{Cr^{3+}(aq) + 3OH^{-}(aq) <=> Cr(OH)3(s)} \nonumber$
$\ce{Cr(OH)3(s) + OH^{-}(aq) <=> Cr(OH)4^{-}(aq) (green) } \nonumber$
Hydrogen Peroxide
In basic solution, hydrogen peroxide oxidizes $\ce{Cr(III)}$ to $\ce{Cr(VI)}$:
$\ce{2Cr(OH)4^{-}(aq) + 3H2O2(aq) + 2OH^{-}(aq) -> 2CrO4^{2-}(aq) + 8H2O(l)} \nonumber$
To confirm the oxidation, addition of a $\ce{Ba^{2+}}$ solution precipitates the chromate ion, $\ce{CrO4^{2-}}$, as yellow barium chromate.
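The peroxide oxidation can be verified from half-reactions in basic solution (three electrons lost per chromium, two gained per peroxide):

```latex
% Oxidation (x2): Cr(III) -> Cr(VI)
\ce{Cr(OH)4^-(aq) + 4OH^-(aq) -> CrO4^{2-}(aq) + 4H2O(l) + 3e^-}
% Reduction (x3):
\ce{H2O2(aq) + 2e^- -> 2OH^-(aq)}
```

Adding twice the first to three times the second and cancelling six hydroxide ions gives the overall equation.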
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
• Most common oxidation states: +2, +3
• M.P. 1495º
• B.P. 2870º
• Density 8.92 g/cm3
• Characteristics: Cobalt is a steel-gray, hard, tough metal. Dissolves easily in $\ce{HNO3}$ and also in dilute $\ce{HCl}$ and $\ce{H2SO4}$.
Characteristic reactions of Co²⁺
Aqueous Ammonia
Excess concentrated ammonia reacts with cobalt(II) ion to form a metal complex ion, hexaamminecobalt(II) ion:
$\ce{Co^{2+}(aq) + 6NH3(aq) <=> [Co(NH3)6]^{2+}} \nonumber$
If an insufficient amount of ammonia is present, the reaction results instead in a precipitate of a basic salt, $\ce{Co(OH)NO3}$ (blue) or $\ce{Co(OH)2}$ (rose-red), which can co-precipitate with other metal ions that form hydroxide precipitates, causing complications when trying to separate metal ions:
$\ce{Co^{2+}(aq) + OH^{-}(aq) + NO3^{-}(aq) <=> Co(OH)NO3(s) } \nonumber$
Sodium Hydroxide
Sodium hydroxide first precipitates the basic salt just described. This basic salt is insoluble in excess sodium hydroxide, but is soluble in acids. When heated, the basic salt hydrolyzes to form $\ce{Co(OH)2}$. Cobalt(II) hydroxide is slowly oxidized by atmospheric oxygen to form brown $\ce{Co(OH)3}$.
Ammonium Thiocyanate
Addition of a concentrated solution of ammonium thiocyanate to solutions containing cobalt(II) ion results in a blue color, due to formation of a complex ion, tetraisothiocyanatocobaltate(II) ion:
$\ce{Co^{2+}(aq) + 4NCS^{-}(aq) <=> [Co(NCS)4]^{2-}(aq)} \nonumber$
This complex ion is more stable in the presence of acetone; the blue color can be enhanced by addition of about an equal volume of acetone.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
Characteristic Reactions of Copper Ions (Cu)
• Most common oxidation states: +1, +2
• M.P. 1083º
• B.P. 2582º
• Density 8.92 g/cm3
• Characteristics: Copper is a reddish-yellow, rather inactive metal. Dissolves readily in $\ce{HNO3}$ and in hot, concentrated $\ce{H2SO4}$.
Characteristic reactions of Cu²⁺
The +2 oxidation state is more common than the +1. Copper(II) is commonly found as the blue hydrated ion, $\ce{[Cu(H2O)4]^{2+}}$.
Aqueous Ammonia
Copper(II) ion reacts with stoichiometric quantities of aqueous ammonia to precipitate light blue Cu(OH)2. Some basic salts may also form.
$\ce{Cu2+(aq) + 2NH3(aq) + 2H2O(l) <=> Cu(OH)2(s) + 2NH4+(aq)} \nonumber$
The precipitate dissolves in excess ammonia to form a dark blue complex ion:
$\ce{Cu(OH)2(s) + 4NH3(aq) <=> [Cu(NH3)4]2+(aq) + 2OH-(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide precipitates copper(II) hydroxide:
$\ce{Cu2+(aq) + 2OH-(aq) <=> Cu(OH)2(s)} \nonumber$
The precipitate does not dissolve in excess sodium hydroxide unless the NaOH solution is very concentrated. However, the precipitate will dissolve upon addition of concentrated ammonia solution.
Potassium Ferrocyanide
Potassium ferrocyanide precipitates red-brown copper(II) ferrocyanide from Cu2+ solutions:
$\ce{2Cu2+(aq) + [Fe(CN)6]4-(aq) <=> Cu2[Fe(CN)6](s)} \nonumber$
This test is very sensitive. The precipitate is soluble in aqueous ammonia.
Note: Many metal ions form ferrocyanide precipitates, so potassium ferrocyanide is not a good reagent for separating metal ions. It is used more commonly as a confirmatory test.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
Characteristic Reactions of Iron (Fe)
• Most common oxidation states: +2, +3
• M.P. 1535º
• B.P. 2750º
• Density 7.87 g/cm3
• Characteristics: Iron is a gray, moderately active metal.
Characteristic reactions of Fe²⁺ and Fe³⁺
The $\ce{[Fe(H2O)6]^{3+}}$ ion is colorless (or pale pink), but many solutions containing this ion are yellow or amber-colored because of hydrolysis. Iron in both oxidation states forms many complex ions.
Aqueous Ammonia
Aqueous ammonia reacts with Fe(II) ions to produce white gelatinous $\ce{Fe(OH)2}$, which oxidizes to form red-brown $\ce{Fe(OH)3}$:
$\ce{Fe^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Fe(OH)2(s) + 2NH4^{+}(aq)} \nonumber$
Aqueous ammonia reacts with $\ce{Fe(III)}$ ions to produce red-brown $\ce{Fe(OH)3}$:
$\ce{Fe^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Fe(OH)3(s) + 3NH4^{+}(aq)} \nonumber$
Both precipitates are insoluble in excess aqueous ammonia. Iron(II) hydroxide quickly oxidizes to $\ce{Fe(OH)3}$ in the presence of air or other oxidizing agents.
Sodium Hydroxide
Sodium hydroxide also produces $\ce{Fe(OH)2}$ and $\ce{Fe(OH)3}$ from the corresponding oxidation states of iron in aqueous solution.
$\ce{Fe^{2+}(aq) + 2OH^{-}(aq) <=> Fe(OH)2(s)} \nonumber$
$\ce{Fe^{3+}(aq) + 3OH^{-}(aq) <=> Fe(OH)3(s)} \nonumber$
Neither hydroxide precipitate dissolves in excess sodium hydroxide.
Potassium Ferrocyanide
Potassium ferrocyanide will react with $\ce{Fe^{3+}}$ solution to produce a dark blue precipitate called Prussian blue:
$\ce{K^{+}(aq) + Fe^{3+}(aq) + [Fe(CN)6]^{4-}(aq) <=> KFe[Fe(CN)6](s)} \label{Prussian}$
With $\ce{Fe^{2+}}$ solution, a white precipitate will be formed that will be converted to blue due to the oxidation by oxygen in air:
$\ce{2Fe^{2+}(aq) + [Fe(CN)6]^{4-}(aq) <=> Fe2[Fe(CN)6](s) } \nonumber$
Many metal ions form ferrocyanide precipitates, so potassium ferrocyanide is not a good reagent for separating metal ions. It is used more commonly as a confirmatory test.
Potassium Ferricyanide
Potassium ferricyanide will give a brown coloration but no precipitate with $\ce{Fe^{3+}}$. With $\ce{Fe^{2+}}$, a dark blue precipitate is formed. Although this precipitate is known as Turnbull's blue, it is identical with Prussian blue (from Equation \ref{Prussian}).
$\ce{K+(aq) + Fe2+(aq) + [Fe(CN)6]^{3-}(aq) <=> KFe[Fe(CN)6](s)} \nonumber$
Potassium Thiocyanate
$\ce{KSCN}$ will give a deep red coloration to solutions containing $\ce{Fe^{3+}}$:
$\ce{Fe3+(aq) + NCS^{-}(aq) <=> [FeNCS]2+(aq)} \nonumber$
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
• Most common oxidation states: +2, +4
• M.P. 328º
• B.P. 1750º
• Density 11.35 g/cm3
• Characteristics: Lead is a soft metal having little tensile strength, and it is the densest of the common metals excepting gold and mercury. It has a metallic luster when freshly cut but quickly acquires a dull color when exposed to moist air.
• Characteristic reactions of $\ce{Pb^{2+}}$: The +2 oxidation state is the more stable state.
Chloride Ion
Soluble chlorides, such as hydrochloric acid, precipitate white lead chloride from $\ce{Pb^{2+}}$ solutions, when the solutions are not too dilute:
$\ce{Pb^{2+}(aq) + 2Cl^{-}(aq)<=> PbCl2(s) } \nonumber$
Lead chloride is a slightly soluble salt, with a solubility of 10 g/L at 20º. The solubility of $\ce{PbCl2}$ increases very rapidly as the temperature rises. At 100º it has a solubility of 33.5 g/L. However, $\ce{PbCl2}$ precipitates very slowly, particularly when other ions that form insoluble chlorides are not present. The precipitation can be sped up by vigorously rubbing the inside of the test tube with a stirring rod. Even then the precipitate may not form until 3 to 5 minutes after mixing the solutions. $\ce{PbCl2}$ dissolves in excess chloride ion as a result of the formation of the tetrachloroplumbate(II) complex ion:
$\ce{PbCl2(s) + 2Cl^{-}(aq)<=> [PbCl4]^{2-}(aq) } \nonumber$
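The quoted solubility implies an order-of-magnitude solubility product. This is only a rough estimate: it ignores the chloro-complex equilibrium above, and the 10 g/L figure is for 20º while $K_{sp}$ tables are usually quoted at 25º:

```latex
% Molar mass of PbCl2: 207.2 + 2(35.45) = 278.1 g/mol
s = \frac{10\ \text{g/L}}{278.1\ \text{g/mol}} \approx 3.6\times10^{-2}\ \text{M}
% PbCl2(s) <=> Pb^2+ + 2Cl^-, so Ksp = [Pb^2+][Cl^-]^2 = s(2s)^2
K_{sp} \approx 4s^3 = 4\,(3.6\times10^{-2})^3 \approx 2\times10^{-4}
```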
Sulfate Ion
Soluble sulfates, including dilute sulfuric acid, precipitate white lead sulfate, which is much less soluble than lead chloride:
$\ce{Pb^{2+}(aq) + SO4^{2-}(aq)<=> PbSO4(s) } \nonumber$
$\ce{PbSO4}$ dissolves in concentrated solutions of hydroxide or acetate ions.
$\ce{PbSO4(s) + 4OH^{-}(aq)<=> [Pb(OH)4]^{2-}(aq) + SO4^{2-}(aq) } \nonumber$
$\ce{PbSO4(s) + 2CH3CO2^{-}(aq)<=> Pb(CH3CO2)2(aq) + SO4^{2-}(aq) } \nonumber$
The lead acetate, though only slightly dissociated, is soluble.
Aqueous Ammonia
Lead(II) ion reacts with aqueous ammonia to precipitate a white basic salt, $\ce{Pb2O(NO3)2}$, rather than the expected lead(II) hydroxide:
$\ce{2Pb^{2+}(aq) + 2NH3(aq) + H2O(l) + 2NO3^{-}(aq)<=> Pb2O(NO3)2(s) + 2NH4^{+}(aq) } \nonumber$
The basic salt is insoluble in excess ammonia.
Sodium Hydroxide
Sodium hydroxide precipitates lead(II) hydroxide, which dissolves in excess hydroxide:
$\ce{Pb^{2+}(aq) + 2OH^{-}(aq)<=> Pb(OH)2(s) } \nonumber$
$\ce{Pb(OH)2(s) + 2OH^{-}(aq)<=> [Pb(OH)4]^{2-}(aq) } \nonumber$
Characteristic Reactions of Magnesium Ions (Mg)
• Most common oxidation state: +2
• M.P. 650º
• B.P. 1120º
• Density 1.74 g/cm3
• Characteristics: Magnesium is a silvery metal that is quite active, reacting slowly with boiling (but not cold) water to give hydrogen and the rather insoluble magnesium hydroxide, $\ce{Mg(OH)2}$. It combines easily with oxygen and at high temperatures reacts with such nonmetals as the halogens, sulfur, and even nitrogen.
Characteristic Reactions of Mg²⁺
Magnesium ion rarely forms complex ions. All salts are white; most are soluble in water.
Aqueous Ammonia
Aqueous ammonia precipitates white gelatinous $\ce{Mg(OH)2}$:
$\ce{Mg^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Mg(OH)2(s) + 2NH4^{+}(aq)} \label{eq1}$
Ammonium salts dissolve $\ce{Mg(OH)2}$ or prevent its precipitation, when added to aqueous ammonia. This is a buffer effect and shifts the pH to a lower value, causing a shift of the precipitation equilibrium in Equation \ref{eq1} to the left.
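A numerical comparison illustrates this buffer effect. The equilibrium constants used are assumed textbook values ($K_b[\ce{NH3}] = 1.8\times10^{-5}$, $K_{sp}[\ce{Mg(OH)2}] \approx 5.6\times10^{-12}$), taking $[\ce{Mg^{2+}}] = 0.01$ M:

```latex
% In 0.1 M NH3 alone: [OH-] ~ sqrt(Kb x 0.1) ~ 1.3e-3 M
Q = [\ce{Mg^{2+}}][\ce{OH^-}]^2 = (0.01)(1.3\times10^{-3})^2
  \approx 2\times10^{-8} \gg K_{sp} \quad\Rightarrow\ \text{precipitate}
% In an NH3/NH4+ buffer with [NH3] = [NH4+]: [OH-] = Kb = 1.8e-5 M
Q = (0.01)(1.8\times10^{-5})^2
  \approx 3\times10^{-12} < K_{sp} \quad\Rightarrow\ \text{no precipitate}
```

Lowering the pH by roughly two units lowers $Q$ by about four orders of magnitude, which is enough to keep $\ce{Mg(OH)2}$ in solution.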
Sodium Hydroxide
Sodium hydroxide gives the same precipitate as aqueous ammonia:
$\ce{Mg^{2+}(aq) + 2OH^{-}(aq) <=> Mg(OH)2(s) } \nonumber$
Sodium Monohydrogen Phosphate
$\ce{Na2HPO4}$ gives a characteristic crystalline precipitate in an ammonia-ammonium chloride buffer.
$\ce{ Mg^{2+}(aq) + NH3(aq) + HPO4^{2-}(aq) <=> MgNH4PO4(s) } \nonumber$
Magnesium Reagent
Solid magnesium hydroxide forms a blue "lake" with a dilute solution of 4-(p-nitrophenylazo)resorcinol (magnesium reagent).
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
Characteristic Reactions of Manganese Ions (Mn)
• Most common oxidation states: +2, +7; +3, +4, and +6 also exist.
• M.P. 1244º
• B.P. 1962º
• Density 7.43 g/cm3
• Characteristics: Manganese is a gray or reddish-white metal. Very hard and brittle. Very similar to iron in activity. Dissolves readily in dilute acids.
Characteristic Reactions of Mn²⁺
Aqueous Ammonia
Addition of aqueous ammonia precipitates white $\ce{Mn(OH)2}$:
$\ce{Mn^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Mn(OH)2(s) + 2NH4^{+}(aq)} \nonumber$
The precipitate does not dissolve in excess ammonia, but does dissolve in solutions containing ammonium salts. The precipitate is easily oxidized by atmospheric oxygen to form Mn(III) or Mn(IV), which turns the precipitate a brownish color.
Sodium Hydroxide
Sodium hydroxide precipitates manganese(II) hydroxide:
$\ce{Mn^{2+}(aq) + 2OH^{-}(aq) <=> Mn(OH)2(s) } \nonumber$
Hydrogen Peroxide
In basic solutions, $\ce{H2O2}$ oxidizes Mn(II) to Mn(IV), giving a brown precipitate:
$\ce{Mn(OH)2(s) + H2O2(aq) -> MnO2(s) + 2H2O(l)} \nonumber$
$\ce{MnO2}$ is generally insoluble in acids, but does react with hot concentrated hydrochloric acid to release chlorine gas. In acid solution, $\ce{H2O2}$ becomes a reducing agent, and the $\ce{MnO2}$ will dissolve:
$\ce{MnO2(s) + H2O2(aq) + 2H^{+}(aq) -> Mn^{2+}(aq) + O2(g) + 2H2O(l)} \nonumber$
Sodium Bismuthate
Solid sodium bismuthate oxidizes $\ce{Mn^{2+}}$ to purple $\ce{MnO4^{-}}$ without heating. With heating, the product is $\ce{MnO2}$.
$\ce{2Mn^{2+}(aq) + 5BiO3^{-}(aq) + 14H^{+}(aq) -> 2MnO4^{-}(aq) + 5Bi^{3+}(aq) + 7H2O(l)} \nonumber$
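The coefficients follow from half-reactions in acid solution (five electrons lost per manganese, two gained per bismuthate), writing the bismuth product as $\ce{Bi^{3+}}$:

```latex
% Oxidation (x2):
\ce{Mn^{2+}(aq) + 4H2O(l) -> MnO4^-(aq) + 8H^+(aq) + 5e^-}
% Reduction (x5):
\ce{BiO3^-(aq) + 6H^+(aq) + 2e^- -> Bi^{3+}(aq) + 3H2O(l)}
```

Combining twice the first with five times the second and cancelling the common $\ce{H^+}$ and $\ce{H2O}$ leaves 14 $\ce{H^+}$ and 7 $\ce{H2O}$ in the overall equation.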
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
• Most common oxidation states: +1, +2
• M.P. -38.87º
• B.P. 356.57º
• Density 13.546 g/cm3
• Characteristics: Mercury is one of the few liquid elements. It dissolves in oxidizing acids, producing either $\ce{Hg^{2+}}$ or $\ce{Hg_2^{2+}}$, depending on which reagent is in excess. The metal is also soluble in aqua regia (a mixture of hydrochloric and nitric acids) to form $\ce{HgCl4^{2-}}$.
Mercury(I) Ion: Hg₂²⁺
Mercury(I) compounds often undergo disproportionation, producing black metallic mercury and mercury(II) compounds.
Chloride Ion
Soluble chlorides, including hydrochloric acid, precipitate white mercury(I) chloride, also known as calomel:
$\ce{Hg_2^{2+}(aq) + 2Cl^{-}(aq) <=> Hg2Cl2(s)} \nonumber$
Aqueous ammonia reacts with $\ce{Hg2Cl2}$ to produce metallic mercury (black) and mercury(II) amidochloride (white), a disproportionation reaction:
$\ce{Hg2Cl2(s) + 2NH3(aq) -> Hg(l) + HgNH2Cl(s) + NH4^{+}(aq) + Cl^{-}(aq)} \nonumber$
Aqueous Ammonia
Aqueous ammonia produces a mixture of a white basic amido salt and metallic mercury:
$\ce{2Hg_2^{2+}(aq) + 4NH3(aq) + NO3^{-}(aq) + H2O(l) -> 2Hg(l) + Hg2ONH2NO3(s) + 3NH4^{+}(aq)} \nonumber$
The precipitate is not soluble in excess aqueous ammonia.
Sodium Hydroxide
Black finely divided mercury metal and yellow mercury(II) oxide ($\ce{HgO}$) are precipitated by $\ce{NaOH}$:
$\ce{Hg_2^{2+}(aq) + 2OH^{-}(aq) -> Hg(l) + HgO(s) + H2O(l)} \nonumber$
Reducing Agents
Reducing agents, such as $\ce{Sn^{2+}}$ and $\ce{Fe^{2+}}$, reduce mercury(I) to the metal:
$\ce{Hg_2^{2+}(aq) + 2Fe^{2+}(aq) -> 2Hg(l) + 2Fe^{3+}(aq) } \nonumber$
Consult an activity series or a table of reduction potentials for other possible reducing agents.
No Reaction
$\ce{SO4^{2-}}$ (unless solutions are concentrated; solubility of mercury(I) sulfate is 0.06 g per 100 mL of water at 25oC)
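The quoted solubility corresponds to a rough solubility product (ignoring hydrolysis and activity effects; tabulated values differ somewhat):

```latex
% Molar mass of Hg2SO4: 2(200.6) + 32.1 + 4(16.0) = 497.3 g/mol
s = \frac{0.6\ \text{g/L}}{497.3\ \text{g/mol}} \approx 1.2\times10^{-3}\ \text{M}
% Hg2SO4(s) <=> Hg2^2+ + SO4^2-, so
K_{sp} = [\ce{Hg2^{2+}}][\ce{SO4^{2-}}] = s^2 \approx 1.5\times10^{-6}
```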
Mercury(II) Ion: Hg²⁺
Characteristic reactions of $\ce{Hg^{2+}}$
Chloride Ion
No reaction is visible, but Hg(II) will be present as $\ce{[HgCl4]^{2-}}$.
Aqueous Ammonia
Aqueous ammonia produces white amido salts whose composition depends on the mercury(II) salt present in the solution:
$\ce{HgCl2(aq) + 2NH3(aq) <=> HgNH2Cl(s) + NH4^{+}(aq) + Cl^{-}(aq)} \nonumber$
These salts are not soluble in excess aqueous ammonia, but do dissolve in acids:
$\ce{HgNH2Cl(s) + 2H^{+}(aq) + Cl^{-}(aq) <=> HgCl2(aq) + NH4^{+}(aq) } \nonumber$
Sodium Hydroxide
A yellow precipitate of $\ce{HgO}$ is produced by $\ce{NaOH}$:
$\ce{Hg^{2+}(aq) + 2OH^{-}(aq) -> HgO(s) + H2O(l)} \nonumber$
$\ce{HgCl2(aq) + 2OH^{-}(aq) -> HgO(s) + H2O(l) + 2Cl^{-}(aq)} \nonumber$
The mercury(II) oxide precipitate is insoluble in excess hydroxide but is soluble in acids:
$\ce{HgO(s) + 2H^{+}(aq) <=> Hg^{2+}(aq) + H2O(l)} \nonumber$
Hydrogen Sulfide
Hydrogen sulfide precipitates black mercury(II) sulfide, the least soluble of all sulfide salts.
$\ce{Hg^{2+}(aq) + H2S(aq) <=> HgS(s) + 2H^{+}(aq)} \nonumber$
$\ce{[HgCl4]^{2-}(aq) + H2S(aq) <=> HgS(s) + 2H^{+}(aq) + 4Cl^{-}(aq)} \nonumber$
Mercury(II) sulfide is insoluble in 6 M $\ce{HNO3}$ or 12 M $\ce{HCl}$, even if heated. However, it is soluble in aqua regia (3:1 HCl:HNO3) and in hot dilute $\ce{NaOH}$ containing excess sulfide.
$\ce{3HgS(s) + 12Cl^{-}(aq) + 2NO3^{-}(aq) + 8H^{+}(aq) -> 3[HgCl4]^{2-}(aq) + 2NO(g) + 3S(s) + 4H2O(l)} \nonumber$
$\ce{HgS(s) + S2^{-}(aq) <=> [HgS2]^{2-}(aq)} \nonumber$
Tin(II) Chloride
Tin(II) chloride reduces $\ce{Hg(II)}$ to $\ce{Hg(I)}$ or to metallic $\ce{Hg}$, giving a white or gray precipitate:
$\ce{2[HgCl4]^{2-}(aq) + [SnCl4]^{2-}(aq) <=> Hg2Cl2(s) + [SnCl6]^{2-}(aq) + 4Cl^{-}(aq)} \nonumber$
No Reaction
$\ce{SO4^{2-}}$ (may precipitate as a mixed sulfate-oxide - a basic sulfate - $\ce{HgSO4 \cdot 2HgO}$)
Characteristic Reactions of Nickel Ions (Ni)
• Most common oxidation state: +2
• M.P. 1453º
• B.P. 2732º
• Density 9.91 g/cm3
• Characteristics: Nickel is a silvery-gray metal. Not oxidized by air under ordinary conditions. Easily dissolved in dilute nitric acid.
Characteristic Reactions of Ni²⁺
Nickel(II) ion forms a large variety of complex ions, such as the green hydrated ion, $\ce{[Ni(H2O)6]^{2+}}$.
Aqueous Ammonia
Aqueous ammonia precipitates green gelatinous Ni(OH)2:
$\ce{Ni^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Ni(OH)2(s) + 2NH4^{+}(aq)} \nonumber$
The nickel(II) hydroxide precipitate dissolves in excess ammonia to form a blue complex ion:
$\ce{Ni(OH)2(s) + 6NH3(aq) <=> [Ni(NH3)6]^{2+}(aq) + 2OH^{-}(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide also precipitates nickel(II) hydroxide:
$\ce{Ni^{2+}(aq) + 2OH^{-}(aq) <=> Ni(OH)2(s)} \nonumber$
Nickel(II) hydroxide does not dissolve in excess $\ce{NaOH}$.
Dimethylglyoxime
Addition of an alcoholic solution of dimethylglyoxime to an ammoniacal solution of Ni(II) gives a rose-red precipitate, abbreviated $\ce{Ni(dmg)2}$:
$\ce{[Ni(NH3)6]^{2+}(aq) + 2(CH3CNOH)2(alc) <=> Ni[ONC(CH3)C(CH3)NOH]2(s) + 2NH4^{+}(aq) + 4NH3(aq)} \nonumber$
Sulfide
Black $\ce{NiS}$ is precipitated by basic solutions containing sulfide ion:
$\ce{Ni^{2+}(aq) + S2^{-}(aq) <=> NiS(s)} \nonumber$
Nickel(II) sulfide is not precipitated by adding $\ce{H2S}$ in an acidic solution. In spite of this, $\ce{NiS}$ is only slightly soluble in $\ce{HCl}$ and has to be dissolved in hot nitric acid or aqua regia, because $\ce{NiS}$ changes to a different crystalline form with different properties.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
• Most common oxidation state: +1
• M.P. 961º
• B.P. 2210º
• Density 10.49 g/cm3
• Characteristics: Silver is an inactive metal. It will react with hot concentrated $\ce{H2SO4}$, with $\ce{HNO3}$, and with aqua regia.
• Characteristic reactions of $\ce{Ag^{+}}$:
Chloride Ion
Soluble chlorides, such as hydrochloric acid, precipitate silver ion as white silver(I) chloride.
$\ce{Ag^{+}(aq) + Cl^{-}(aq) <=> AgCl(s)} \nonumber$
Silver(I) chloride is insoluble in acids, including $\ce{HNO3}$. The precipitate does dissolve in aqueous ammonia:
$\ce{AgCl(s) + 2NH3(aq) <=> [Ag(NH3)2]^{+}(aq) + Cl-(aq)} \nonumber$
Addition of an acid to this solution, such as $\ce{HNO3}$, destroys the complex ion and re-precipitates silver(I) chloride:
$\ce{[Ag(NH3)2]+(aq) + Cl^{-}(aq) + 2H^{+}(aq) <=> AgCl(s) + 2NH4^{+}(aq) } \nonumber$
Sulfate Ion
No reaction occurs on addition of sulfate ion unless the concentration of $\ce{Ag^{+}}$ is high, in which case silver(I) sulfate precipitates.
Aqueous Ammonia
Aqueous ammonia precipitates brown $\ce{Ag2O}$:
$\ce{2Ag^{+}(aq) + 2NH3(aq) + H2O(l) <=> Ag2O(s) + 2NH4^{+}(aq)} \nonumber$
The silver(I) oxide precipitate dissolves in excess ammonia to form a colorless complex ion:
$\ce{Ag2O(s) + 4NH3(aq) + H2O(l) <=> 2[Ag(NH3)2]^{+}(aq) + 2OH^{-}(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide precipitates silver(I) oxide:
$\ce{2Ag^{+}(aq) + 2OH^{-}(aq) <=> Ag2O(s) + H2O(l)} \nonumber$
Silver(I) oxide does not dissolve in excess $\ce{NaOH}$.
Characteristic Reactions of Strontium Ions (Sr)
• Most common oxidation state: +2
• M.P. 770º
• B.P. 1380º
• Density 2.60 g/cm3
• Characteristics: Strontium is an active metal, very similar to barium and calcium.
Characteristic reactions of Sr²⁺
Sulfate Ion
Soluble sulfates, including sulfuric acid, precipitate white $\ce{SrSO4}$:
$\ce{Sr^{2+}(aq) + SO4^{2-}(aq) <=> SrSO4(s) } \nonumber$
Chromate Ion
Although chromate ion does not give a precipitate in neutral or acid solutions of $\ce{Sr^{2+}}$, yellow $\ce{SrCrO4}$ precipitates from slightly basic solution:
$\ce{Sr^{2+}(aq) + CrO4^{2-}(aq) <=> SrCrO4(s)} \nonumber$
Strontium chromate dissolves readily in acids, even in acetic acid. It is only slightly soluble in alcohol or alcohol-water mixtures.
Oxalate Ion
Oxalate ion precipitates white $\ce{SrC2O4}$:
$\ce{Sr^{2+}(aq) + C2O4^{2-}(aq) <=> SrC2O4(s)} \nonumber$
Strontium oxalate is insoluble in acetic acid, but soluble in mineral acids such as $\ce{HCl}$:
$\ce{SrC2O4(s) + 2H^{+}(aq) <=> Sr^{2+}(aq) + H2C2O4(aq) } \nonumber$
Ammonium Carbonate
Ammonium carbonate precipitates white $\ce{SrCO3}$.
$\ce{Sr^{2+}(aq) + CO3^{2-}(aq) <=> SrCO3(s)} \nonumber$
Aqueous ammonia should also be added to ensure complete precipitation. The aqueous ammonia assures that the concentration of carbonate ion will be high enough by preventing the hydrolysis of carbonate ion to form hydrogen carbonate ion:
$\ce{NH3(aq) + H2O(l) <=> NH4^{+}(aq) + OH^{-}(aq)} \nonumber$
$\ce{CO3^{2-}(aq) + H2O(l) <=> HCO3^{-}(aq) + OH^{-}(aq)} \nonumber$
Strontium carbonate is soluble in acid, including dilute acetic acid, in strong bases, and in aqueous ammonia.
No Reaction
$\ce{Cl^{-}}$, $\ce{NH3(aq)}$, $\ce{OH^{-}}$
Characteristic Reactions of Tin Ions (Sn²⁺ and Sn⁴⁺)
• Most common oxidation states: +2, +4
• M.P. 232º
• B.P. 2270º
• Density 7.30 g/cm3
• Characteristics: Metallic tin is soft and malleable. It slowly dissolves in dilute nonoxidizing acids or more readily in hot concentrated $\ce{HCl}$. It reacts with $\ce{HNO3}$ to form metastannic acid, $\ce{H2SnO3}$, a white substance insoluble in alkalies or acids. In neutral or only slightly acidic solutions, zinc displaces tin from its compounds, forming the metal.
Characteristic reactions of Sn²⁺ and Sn⁴⁺
In aqueous solutions, both tin(II) and tin(IV) exist as complex ions. Both tin(II) chloride and tin(IV) chloride tend to hydrolyze, and aged solutions of these salts become measurably acidic. Acid should be added to aqueous solutions of these compounds to prevent hydrolysis.
Tin(IV) chloride exists as a colorless liquid. It is soluble in organic solvents, and is a nonconductor of electricity, indicating that it is a molecular compound. Tin(II) chloride is a strong reducing agent and is easily oxidized by atmospheric oxygen. Metallic tin is often added to solutions of $\ce{SnCl2}$ to prevent this oxidation.
Chloride Ion
Although there is no visible reaction, tin(II) exists as the complex ion $\ce{[SnCl4]^{2-}}$ and tin(IV) as the complex ion $\ce{[SnCl6]^{2-}}$.
Aqueous Ammonia
Aqueous ammonia precipitates white $\ce{Sn(OH)2}$ and white $\ce{Sn(OH)4}$ with tin(II) and tin(IV), respectively.
$\ce{[SnCl4]^{2-}(aq) + 2NH3(aq) + 2H2O(l) <=> Sn(OH)2(s) + 2NH4^{+}(aq) + 4Cl^{-}(aq)} \nonumber$
$\ce{[SnCl6]^{2-}(aq) + 4NH3(aq) + 4H2O(l) <=> Sn(OH)4(s) + 4NH4^{+}(aq) + 6Cl^{-}(aq)} \nonumber$
Both precipitates, tin(II) hydroxide and tin(IV) hydroxide, dissolve in excess aqueous ammonia.
Sodium Hydroxide
Sodium hydroxide also precipitates the hydroxides:
$\ce{[SnCl4]^{2-}(aq) + 2OH^{-}(aq) <=> Sn(OH)2(s) + 4Cl^{-}(aq)} \nonumber$
$\ce{[SnCl6]^{2-}(aq) + 4OH^{-}(aq) <=> Sn(OH)4(s) + 6Cl^{-}(aq)} \nonumber$
These precipitates dissolve in excess hydroxide:
$\ce{Sn(OH)2(s) + 2OH^{-}(aq) <=> [Sn(OH)4]^{2-}(aq)} \nonumber$
$\ce{Sn(OH)4(s) + 2OH^{-}(aq) <=> [Sn(OH)6]^{2-}(aq) } \nonumber$
Hydrogen Sulfide
In mildly acidic solution, sulfide precipitates $\ce{SnS}$ (brown) and $\ce{SnS2}$ (light yellow):
$\ce{[SnCl4]^{2-}(aq) + H2S(aq) <=> SnS(s) + 2H^{+}(aq) + 4Cl^{-}(aq)} \nonumber$
$\ce{[SnCl6]^{2-}(aq) + 2H2S(aq) <=> SnS2(s) + 4H^{+}(aq) + 6Cl^{-}(aq)} \nonumber$
$\ce{SnS2}$ is soluble in basic solutions containing excess $\ce{S2^{-}}$, even in the presence of ammonia. It is also soluble in 6 M $\ce{HCl}$:
$\ce{SnS2(s) + S2^{-}(aq) <=> [SnS3]^{2-}(aq)} \nonumber$
$\ce{SnS2(s) + 4H^{+}(aq) + 6Cl^{-}(aq) <=> [SnCl6]^{2-}(aq) + 2H2S(aq)} \nonumber$
$\ce{SnS}$ is soluble in 12 M $\ce{HCl}$:
$\ce{SnS(s) + 2H^{+}(aq) + 4Cl^{-}(aq) <=> [SnCl4]^{2-}(aq) + H2S(aq)} \nonumber$
Reducing and Oxidizing Agents
In $\ce{HCl}$ solution, either metallic $\ce{Fe}$ or metallic $\ce{Al}$ will reduce $\ce{Sn(IV)}$ to $\ce{Sn(II)}$:
$\ce{Fe(s) + [SnCl6]^{2-}(aq) -> Fe^{2+}(aq) + [SnCl4]^{2-}(aq) + 2Cl^{-}(aq)} \nonumber$
$\ce{2Al(s) + 3[SnCl6]^{2-}(aq) --> 2Al^{3+}(aq) + 3[SnCl4]^{2-}(aq) + 6Cl^{-}(aq)} \nonumber$
$\ce{Sn(II)}$ reduces $\ce{HgCl2}$ to $\ce{Hg2Cl2}$ (white), to metallic mercury (black), or to a mixture of both. These reactions are described in more detail in the mercury section.
In basic solution, $\ce{Sn(II)}$ reduces $\ce{Bi(III)}$ to metallic $\ce{Bi}$. This reaction is described in more detail in the bismuth section.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$
Characteristic Reactions of Zinc Ions (Zn)
• Most common oxidation states: +2
• M.P. 420 °C
• B.P. 907 °C
• Density 7.13 g/cm³
• Characteristics: Zinc is a bluish-gray metal. Quite active; burns readily in air to form white $\ce{ZnO}$ and combines with many nonmetals.
Characteristic reactions of Zn²⁺
Zinc(II) ion forms complex ions readily.
Aqueous Ammonia
Zinc(II) ion reacts with aqueous ammonia to precipitate white, gelatinous $\ce{Zn(OH)2}$:
$\ce{Zn^{2+}(aq) + 2NH3(aq) + 2H2O(l) <=> Zn(OH)2(s) + 2NH4^{+}(aq)} \nonumber$
The zinc(II) hydroxide precipitate dissolves in excess ammonia:
$\ce{Zn(OH)2(s) + 4NH3(aq) <=> [Zn(NH3)4]^{2+}(aq) + 2OH^{-}(aq) } \nonumber$
Sodium Hydroxide
Sodium hydroxide also precipitates zinc(II) hydroxide:
$\ce{Zn^{2+}(aq) + 2OH^{-}(aq) <=> Zn(OH)2(s)} \nonumber$
The zinc(II) hydroxide precipitate also dissolves in excess hydroxide:
$\ce{Zn(OH)2(s) + 2OH^{-}(aq) <=> [Zn(OH)4]^{2-}(aq)} \nonumber$
Potassium Ferrocyanide
A gray-white precipitate is formed with ferrocyanide ion. The precipitate may be blue-green if traces of iron ions are present:
$\ce{3Zn^{2+}(aq) + 2K^{+}(aq) + 2[Fe(CN)6]^{4-}(aq) <=> K2Zn3[Fe(CN)6]2(s)} \nonumber$
Note: Many metal ions form ferrocyanide precipitates, so potassium ferrocyanide is not a good reagent for separating metal ions. It is used more commonly as a confirmatory test.
No Reaction
$\ce{Cl^{-}}$, $\ce{SO4^{2-}}$ | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Characteristic_Reactions_of_Select_Metal_Ions/Characteristic_Reactions_of_Silver_Ions_%28Ag%29.txt |
Confirmatory tests should be performed on separate solutions of some of your ions, in order to see what these tests look like before using them on an unknown. Generally a confirmatory test is used only after other reactions have been used to isolate the ion. When working with stock solutions of an ion, dilute 1 drop with 9 drops of water to simulate the concentration that would exist in an unknown. In a mixture or a solution obtained from an unknown or known mixture, the dilution is not necessary since the ions have already been diluted compared to the stock solutions. In the tests described here, it is assumed that 10 drops of solution will be used. If you change the amount of solution of the ion being tested, you must also adjust the amounts of the reagents to be added.
A. Tests Based on Hydrogen Sulfide
Even if hydrogen sulfide was not used for separation of ions, it may be useful for confirmatory tests. The most convenient and safe source of $\ce{H2S}$ is thioacetamide. When heated, aqueous solutions of thioacetamide hydrolyze to produce $\ce{H2S}$:
In acid solution:
$\ce{CH3CSNH2 + 2H2O + H^{+} -> CH3COOH + NH4^{+} + H2S} \nonumber$
In basic solution using ammonia:
$\ce{CH3CSNH2 + 2H2O + 2NH3 -> CH3COO^{-} + 3NH4^{+} + S2^{-}} \nonumber$
In basic solution using strong base:
$\ce{CH3CSNH2 + 2OH^{-} -> CH3COO^{-} + NH3 + S2^{-} + H2O } \nonumber$
Antimony (Sb³⁺)
To 10 drops of solution, add 6 M $\ce{NH3(aq)}$ until neutral. Make the solution acidic by adding one or more drops of 6 M $\ce{HCl}$. Add 1 mL of thioacetamide and stir well. Heat the test tube in the boiling water bath for 5 minutes. If antimony is present, a red-orange precipitate of antimony sulfide should form. This same test will also work with arsenic(III), tin(II), and tin(IV); these precipitates are yellow, brown, and yellow, respectively.
Cadmium (Cd²⁺)
Three related procedures can be used.
a. Follow the procedure described for antimony(III). The precipitate should be yellow.
b. Follow the procedure described for antimony(III), but first make the solution basic with aqueous ammonia. If a precipitate forms on addition of ammonia, continue to add ammonia until the precipitate dissolves, before adding the thioacetamide.
c. Add ammonia, as described in procedure b. Then add 10 drops of water and 10 drops of 6 M $\ce{NaOH}$. A white precipitate should form. If it does not form, either increase the amount of $\ce{NaOH}$, or do not add the 10 drops of water. Centrifuge and discard the solution. Wash the precipitate twice with a mixture of 1 mL of water and 1 mL of 6 M $\ce{NH3(aq)}$. Dissolve the precipitate by adding 6 M $\ce{HCl}$ drop by drop until no precipitate remains. Add 6 M $\ce{NaOH}$ until the solution is just basic. A white precipitate of $\ce{Cd(OH)2}$ will form. Then add 1 mL of 1 M thioacetamide and heat the mixture in a boiling water bath for 5 minutes. A yellow precipitate of $\ce{CdS}$ will form.
Mercury(II) (Hg²⁺)
Follow the procedure described for antimony(III). The precipitate should be black.
Try to dissolve the precipitate in 1 mL of 12 M $\ce{HCl}$ with heating. If it does not dissolve in $\ce{HCl}$, try the same procedure with 1 mL of 6 M (dilute) $\ce{HNO3}$. If it still does not dissolve, then try to dissolve it in a mixture of 1 mL of 6 M $\ce{HCl}$ and 1 mL of 6 M $\ce{HNO3}$, heating for 2 minutes in a water bath. Most of the black precipitate should dissolve. Mercury(II) sulfide is the least soluble of the metal sulfides.
Tests Based on Other Reagents
Aluminum (Al³⁺)
To 10 drops of solution, add 2 drops of aluminon. Add 6 M $\ce{NH3(aq)}$ dropwise until the solution is basic to litmus. White $\ce{Al(OH)3}$ should form and adsorb the aluminon, which colors it red. The solution should become colorless.
Ammonium (NH₄⁺)
Place 10 drops of solution in a 30 mL beaker. Moisten a piece of red litmus paper and place it on the underside of a small watch glass. Add 1 mL of 6 M $\ce{NaOH}$ to the sample in the beaker. Cover the beaker with the watch glass. Gently heat the solution to near the boiling point. Do not allow the solution to splatter onto the litmus paper. The paper should turn blue from ammonia fumes.
Bismuth (Bi³⁺)
Two procedures can be used.
a. To 10 drops of solution, add a freshly prepared sodium stannite solution dropwise. A black precipitate should form.
Sodium Stannite: The sodium stannite solution is prepared by diluting 4 drops of 0.25 M $\ce{SnCl2}$ with 2 mL of water and adding 6 M $\ce{NaOH}$ dropwise, stirring well after each drop, until a permanent precipitate forms. Then add excess $\ce{NaOH}$ to dissolve this precipitate.
b. To 10 drops of solution, add 2 drops of 6 M $\ce{HCl}$. Then add water dropwise until a white precipitate forms. The precipitate may not be very pronounced, but may instead show up as a turbidity of the solution.
Calcium (Ca²⁺)
Two procedures may be used.
a. To 10 drops of solution, add aqueous ammonia to make the solution basic. Then add $\ce{(NH4)2C2O4}$ (ammonium oxalate) solution dropwise. A white precipitate should form.
b. Perform a flame test. The flame should turn orange-red.
Chromium (Cr³⁺)
To 10 drops of solution, add 1 mL of 3% $\ce{H2O2}$. Then add 6 M $\ce{NaOH}$ dropwise until the solution is basic. Heat in a boiling water bath for a few minutes. A yellow solution of $\ce{CrO4^{2-}}$ should form.
Cobalt (Co²⁺)
To 10 drops of solution, add 5 drops of 0.5 M $\ce{KNCS}$. To this mixture, add an equal volume of acetone and mix. A blue color indicates the formation of $\ce{[Co(NCS)4]^{2-}}$.
Copper (Cu²⁺)
To 10 drops of solution, add 0.5 M $\ce{K4Fe(CN)6}$ dropwise until a red-brown precipitate forms.
Iron (Fe³⁺)
Two procedures can be used.
a. To 10 drops of solution, add 0.5 M $\ce{K4Fe(CN)6}$ dropwise until a dark blue precipitate forms.
b. To 10 drops of solution, add 1 or 2 drops of 0.5 M $\ce{KNCS}$. The solution should become deep red due to formation of $\ce{FeNCS^{2+}}$.
Magnesium (Mg²⁺)
To 10 drops of solution, add 2 drops of magnesium reagent, 4-(p-nitrophenylazo)resorcinol. Then add 6 M $\ce{NaOH}$ dropwise. A "blue lake" -- a precipitate of $\ce{Mg(OH)2}$ with adsorbed magnesium reagent -- forms.
Manganese (Mn²⁺)
To 10 drops of solution, add 1 mL of 6 M $\ce{HNO3}$. Add a spatula-tip quantity of solid sodium bismuthate ($\ce{NaBiO3}$) and stir well. There should be a slight excess of solid bismuthate. Wait about 1 minute and then centrifuge the mixture. The solution should be purple due to the presence of $\ce{MnO4^{-}}$.
Mercury(I) (Hg₂²⁺)
To 10 drops of solution, add 6 M $\ce{HCl}$ to form a white precipitate. Centrifuge and discard the centrifugate. Add 6 M $\ce{NH3(aq)}$ to the precipitate. The color of the precipitate should change to gray or black due to formation of mercury metal.
Mercury(II)(Hg²⁺)
To 10 drops of solution, add 1 or more drops of 0.25 M $\ce{SnCl2}$. A grayish precipitate should form. The precipitate might be white or black instead of gray.
Nickel (Ni²⁺)
To 10 drops of solution, add 6 M $\ce{NH3(aq)}$ until the solution is basic. Add 2 or 3 drops of dimethylglyoxime reagent (DMG). A rose-red precipitate of $\ce{Ni(DMG)2}$ should form.
Silver (Ag⁺)
To 10 drops of solution, add 6 M $\ce{HCl}$ dropwise, with shaking, until precipitation is complete. Centrifuge and decant. Discard the centrifugate. Suspend the silver chloride precipitate in 1 mL of water and add 6 M $\ce{NH3(aq)}$ dropwise until the precipitate dissolves. Acidify the solution with 6 M $\ce{HNO3}$ and the white precipitate should reappear.
Strontium (Sr²⁺)
To 10 drops of the solution, add 5 drops of ethanol ($\ce{C2H5OH}$). Then add 3 M (6 N) $\ce{H2SO4}$ dropwise until precipitation is complete. Heat the sample in a water bath for a few minutes. Test again for complete precipitation by adding 1 more drop of sulfuric acid. Centrifuge and decant the supernatant. If any further tests will be carried out on the supernatant liquid, heat it in a boiling water bath to expel the $\ce{C2H5OH}$, which could interfere with further tests. The white precipitate of $\ce{SrSO4}$ confirms the presence of strontium.
Tin (Sn²⁺)
To 10 drops of solution, add 1 or 2 drops of 0.2 M $\ce{HgCl2}$. A white precipitate should form, but it could be gray or black instead.
Zinc (Zn²⁺)
To 10 drops of solution, add 6 M $\ce{NH3(aq)}$ to give a neutral pH. Then make the solution slightly acidic to litmus paper with 6 M $\ce{HCl}$. Add 1 or 2 drops of 0.5 M $\ce{K4Fe(CN)6}$ and stir. A gray-white precipitate of $\ce{K2Zn3[Fe(CN)6]2}$ is formed.
• Carbonate Ion (CO₃²⁻)
Carbonate ion, a moderately strong base, undergoes considerable hydrolysis in aqueous solution. In strongly acidic solution, CO2 gas is evolved.
• Halide Ions (Cl⁻, Br⁻, I⁻)
These ions are all very weak bases since they are the conjugate bases of very strong acids. Hence, they undergo negligible hydrolysis.
• Phosphate Ion (PO₄³⁻)
Phosphate ion is a reasonably strong base. It hydrolyzes in water to form a basic solution.
• Sulfate Ion (SO₄²⁻)
Sulfate ion is a very weak base. Because it is such a weak base, sulfate ion undergoes negligible hydrolysis in aqueous solution.
• Sulfide Ion (S²⁻)
Sulfide is a strong base, so solutions of sulfide in water are basic, due to hydrolysis. Sulfide solutions develop the characteristic rotten-egg odor of H2S as a result of this hydrolysis.
• Sulfite Ion (SO₃²⁻)
Sulfite ion is a weak base, but does undergo some hydrolysis to produce basic solutions. In acidic solution, the equilibria are shifted to form sulfurous acid, resulting in the evolution of SO2 gas. Sulfur dioxide is a colorless gas with a characteristic choking odor.
Properties of Select Nonmetal Ions
Carbonate Ion (CO₃²⁻)
Acid Equilibria
$\ce{CO3^{2-}(aq) + H2O(l) <=> HCO3^{-}(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 2.0 \times 10^{-4}$
$\ce{HCO3^{-}(aq) + H2O(l) <=> H2CO3(aq) + OH^{-}(aq) } \nonumber$
with $K_b = 2.5 \times 10^{-8}$
$\ce{H2CO3(aq) <=> H2O(l) + CO2(g)} \nonumber$
Carbonate ion, a moderately strong base, undergoes considerable hydrolysis in aqueous solution. In strongly acidic solution, $\ce{CO2}$ gas is evolved.
Solubility
Carbonate ion can be precipitated from solution as white barium or calcium salts that have low solubilities:
$\ce{BaCO3(s) <=> Ba^{2+}(aq) + CO3^{2-}(aq)} \nonumber$
with $K_{sp} = 5.0 \times 10^{-9}$
$\ce{CaCO3(s) <=> Ca^{2+}(aq) + CO3^{2-}(aq)} \nonumber$
with $K_{sp} = 7.5 \times 10^{-9}$
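For a 1:1 salt such as $\ce{BaCO3}$ or $\ce{CaCO3}$, the molar solubility follows directly from $K_{sp} = s^2$, so $s = \sqrt{K_{sp}}$. A quick sketch using the constants above:

```python
import math

def molar_solubility_1to1(ksp):
    # For MX(s) <=> M^2+ + X^2-, Ksp = s * s, so s = sqrt(Ksp)
    return math.sqrt(ksp)

s_BaCO3 = molar_solubility_1to1(5.0e-9)  # mol/L
s_CaCO3 = molar_solubility_1to1(7.5e-9)  # mol/L
print(f"BaCO3: {s_BaCO3:.1e} M, CaCO3: {s_CaCO3:.1e} M")
```

Both salts dissolve to the extent of only about 10⁻⁴ mol/L, which is why they precipitate so readily.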
Although many carbonate salts are insoluble, those of $\ce{Na^{+}}$, $\ce{K^{+}}$, and $\ce{NH4^{+}}$ are quite soluble. All bicarbonate ($\ce{HCO3^{-}}$) salts are soluble. Because of this, even insoluble carbonate salts dissolve in acid.
Oxidation-Reduction
None.
Halide Ions (Cl⁻, Br⁻, I⁻)
Acid Equilibria
These ions are all very weak bases since they are the conjugate bases of very strong acids. Hence, they undergo negligible hydrolysis.
Solubility
Most halide salts are soluble. Exceptions are the halide salts of silver, lead(II), and mercury(I). For example, the solubility of the silver salts is indeed very low, as shown by their solubility product constants:
$\ce{AgCl(s) <=> Ag^{+}(aq) + Cl^{-}(aq)} \nonumber$
with $K_{sp} = 1.2 \times 10^{-10}$
$\ce{AgBr(s) <=> Ag^{+}(aq) + Br^{-}(aq)} \nonumber$
with $K_{sp} = 4.8 \times 10^{-13}$
$\ce{AgI(s) <=> Ag^{+}(aq) + I^{-}(aq)} \nonumber$
with $K_{sp} = 1.4 \times 10^{-16}$
The silver halide solubility can be increased by addition of ammonia in appropriate concentrations, due to complex ion formation:
$\ce{Ag^{+}(aq) + 2NH3(aq) <=> [Ag(NH3)2]^{+}(aq)} \nonumber$
with $K_f = 1.5 \times 10^7$.
The less soluble the silver halide, the greater the concentration of ammonia needed to dissolve the silver halide. $\ce{AgCl}$ dissolves in 6 M $\ce{NH3(aq)}$, while $\ce{AgBr}$ dissolves in 15 M $\ce{NH3(aq)}$ (the concentrated reagent). $\ce{AgI}$ will not dissolve even in 15 M $\ce{NH3(aq)}$. Thus, adding an appropriate concentration of aqueous ammonia can be used to separate the silver halides from one another.
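These observations can be checked numerically. Combining dissolution and complexation into the overall reaction $\ce{AgX(s) + 2NH3 <=> [Ag(NH3)2]^{+} + X^{-}}$ gives $K = K_{sp} K_f$, and with $[\ce{Ag(NH3)2+}] = [\ce{X^{-}}] = s$ the solubility is $s = \sqrt{K}\,[\ce{NH3}]$. The sketch below assumes the ammonia concentration stays near its initial value (i.e., little is consumed):

```python
import math

K_f = 1.5e7  # Ag+ + 2NH3 <=> [Ag(NH3)2]+, from the text
ksp = {"AgCl": 1.2e-10, "AgBr": 4.8e-13, "AgI": 1.4e-16}

def solubility_in_ammonia(k_sp, nh3):
    # AgX(s) + 2NH3 <=> [Ag(NH3)2]+ + X-, with K = Ksp * Kf
    # K = s^2 / [NH3]^2  =>  s = sqrt(K) * [NH3]
    return math.sqrt(k_sp * K_f) * nh3

for salt, k in ksp.items():
    print(salt,
          f"{solubility_in_ammonia(k, 6.0):.1e} M in 6 M NH3,",
          f"{solubility_in_ammonia(k, 15.0):.1e} M in 15 M NH3")
```

The numbers reproduce the qualitative pattern: $\ce{AgCl}$ is appreciably soluble in 6 M ammonia, $\ce{AgBr}$ only in the concentrated reagent, and $\ce{AgI}$ in neither.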
Oxidation-Reduction
As reducing agents, the halide ions follow the trend in reducing strength:
$\ce{I^{-} > Br^{-} > Cl^{-}}. \nonumber$
Conversely, the halogens follow the opposite order of oxidizing strength:
$\ce{Cl2 > Br2 > I2} \nonumber$
Thus pale green $\ce{Cl2}$ oxidizes $\ce{Br^{-}}$ to red $\ce{Br2}$ and $\ce{I^{-}}$ to violet $\ce{I2}$. These colors are better observed if the halogens are extracted into a small amount of hexane. CAUTION: hexane is flammable.
Phosphate Ion (PO₄³⁻)
Acid Equilibria
Phosphate ion is a reasonably strong base. It hydrolyzes in water to form a basic solution.
$\ce{PO4^{3-}(aq) + H2O(l) <=> HPO4^{2-}(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 1.0 \times 10^{-2}$
$\ce{HPO4^{2-}(aq) + H2O(l) <=> H2PO4^{-}(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 1.6 \times 10^{-7}$
$\ce{H2PO4^{-}(aq) + H2O(l) <=> H3PO4(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 1.3 \times 10^{-12}$
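Because the first $K_b$ is several orders of magnitude larger than the later ones, the pH of a phosphate solution can be estimated from the first hydrolysis step alone. A sketch for an assumed 0.10 M $\ce{Na3PO4}$ solution:

```python
import math

def ph_of_weak_base(c0, kb, pkw=14.0):
    """pH from a single hydrolysis step, solving Kb = x^2/(c0 - x)
    exactly (quadratic) for x = [OH-]."""
    x = (-kb + math.sqrt(kb * kb + 4.0 * kb * c0)) / 2.0
    return pkw + math.log10(x)  # pH = pKw - pOH

# 0.10 M Na3PO4 with Kb = 1.0e-2 (first step, from the text);
# the second and third steps contribute negligibly to [OH-]
print(f"pH ~ {ph_of_weak_base(0.10, 1.0e-2):.2f}")
```

A pH above 12 for a 0.10 M solution confirms that phosphate is a reasonably strong base.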
Solubility
Phosphates of the alkali metals are soluble. Most other phosphates, such as $\ce{FePO4}$, $\ce{CrPO4}$, $\ce{BiPO4}$, $\ce{Ca3(PO4)2}$, and $\ce{Ag3PO4}$ are only sparingly soluble. Phosphate ion also forms a bright yellow precipitate with ammonium molybdate:
$\ce{PO4^{3-} + 3NH4^{+} + 12MoO4^{2-} + 24H^+ -> (NH4)3PO4 \cdot 12MoO3 + 12H2O} \nonumber$
Oxidation-Reduction
Phosphate is a very weak oxidizing agent. Since the phosphorus is in its highest oxidation state in phosphate ion, this ion cannot act as a reducing agent. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Properties_of_Select_Nonmetal_Ions/Carbonate_Ion_%28CO%29.txt |
Sulfate Ion (SO₄²⁻)
Acid Equilibria
$\ce{SO42-(aq)+ H2O(l) <=> HSO4-(aq) + OH-(aq)} \nonumber$
with $K_b = 1 \times 10^{-12}$
$\ce{HSO4-(aq) + H2O(l) <=> H2SO4(aq) + OH-(aq)} \nonumber$
with $K_b = 1 \times 10^{-15}$
Sulfate ion is a very weak base, while $\ce{HSO4^{-}}$ is a fairly strong acid, with $K_a = 0.01$. On the other hand, $\ce{H2SO4}$ is a very strong acid. Because it is such a weak base, sulfate ion undergoes negligible hydrolysis in aqueous solution.
Solubility
Most sulfates, including those of $\ce{Na^{+}}$, $\ce{K^{+}}$, and $\ce{NH4^{+}}$, are soluble in water. Exceptions that are insoluble are white lead(II) sulfate and white barium sulfate:
$\ce{BaSO4(s) <=> Ba2+(aq) + SO42-(aq)} \nonumber$
with $K_{sp} = 1.4 \times 10^{-8}$
$\ce{PbSO4(s) <=> Pb2+(aq) + SO42-(aq)} \nonumber$
with $K_{sp} = 1.1 \times 10^{-10}$
Formation of white $\ce{BaSO4}$ upon addition of $\ce{Ba^{2+}}$ to a solution of $\ce{SO4^{2-}}$, even if it is acidic, is a reliable test for sulfate. Other insoluble sulfates are those of calcium, strontium, and mercury(I).
Oxidation-Reduction:
Sulfate is a very weak oxidizing agent. Since sulfur is in its maximum oxidation number in sulfate ion, this ion cannot act as a reducing agent.
Sulfide Ion (S²⁻)
Acid Equilibria
Sulfide is a strong base, so solutions of sulfide in water are basic, due to hydrolysis. Sulfide solutions develop the characteristic rotten-egg odor of $\ce{H2S}$ as a result of this hydrolysis.
$\ce{S^{2-}(aq) + H2O(l) <=> HS^{-}(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 8.3$ and
$\ce{HS^{-}(aq) + H2O(l) <=> H2S(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 1 \times 10^{-7}$.
Solubility
Many sulfide salts are insoluble in acidic or basic solution:
• Acidic: $\ce{PbS}$, $\ce{Bi2S3}$, $\ce{CuS}$, $\ce{CdS}$, $\ce{HgS}$, $\ce{As2S3}$, $\ce{Sb2S3}$, $\ce{SnS2}$
• Basic: $\ce{CoS}$, $\ce{FeS}$, $\ce{MnS}$, $\ce{NiS}$, $\ce{ZnS}$
Those salts that are insoluble in acidic solution are also insoluble in basic solution.
A common test for aqueous sulfide ion involves acidification to form $\ce{H2S}$, then exposure to moistened lead acetate paper to form black $\ce{PbS}$ on the paper:
$\ce{Pb(OAc)2 + H2S -> PbS + 2HOAc} \nonumber$
Oxidation-Reduction:
$\ce{S^{2-}}$ or $\ce{H2S}$ can be oxidized to yellow elemental sulfur in a colloidal form with fairly mild oxidizing agents, including nitric acid.
Sulfite Ion (SO₃²⁻)
Acid Equilibria
Sulfite ion is a weak base, but does undergo some hydrolysis to produce basic solutions. In acidic solution, the equilibria are shifted to form sulfurous acid, resulting in the evolution of SO2 gas. Sulfur dioxide is a colorless gas with a characteristic choking odor.
$\ce{SO3^{2-}(aq) + H2O(l) <=> HSO3^{-}(aq) + OH^{-}(aq)} \nonumber$
with $K_b = 1.8 \times 10^{-7}$
$\ce{HSO3^{-}(aq) + H2O(l) <=> H2SO3(aq) + OH^{-}(aq) } \nonumber$
with $K_b = 1 \times 10^{-12}$
$\ce{H2SO3(aq) <=> H2O(l) + SO2(g)} \nonumber$
Solubility
The sulfites of $\ce{Na^{+}}$, $\ce{K^{+}}$, and $\ce{NH4^{+}}$ are soluble in water. Most other sulfites are insoluble in water. However, due to the basic nature of $\ce{SO3^{2-}}$, all sulfites dissolve in acidic solution.
Oxidation-Reduction
Sulfite ion is readily oxidized to sulfate. On prolonged exposure to air, this oxidation occurs with atmospheric oxygen:
$\ce{2SO3^{2-}(aq) + O2(g) -> 2SO4^{2-}(aq)} \nonumber$
Sulfite or sulfur dioxide will decolorize permanganate. This decolorization serves as a convenient test for sulfur dioxide:
$\ce{2MnO4^{-}(aq) + 5SO2(g) + 2H2O(l) -> 5SO4^{2-}(aq) + 2Mn^{2+}(aq) + 4H^{+}(aq)} \nonumber$ | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Properties_of_Select_Nonmetal_Ions/Sulfate_Ion_%28SO%29.txt |
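The 2:5 mole ratio in the permanganate equation fixes how much oxidant a given amount of sulfur dioxide can decolorize. A quick stoichiometry sketch (the 1.0 mmol figure is just an illustrative amount):

```python
def mno4_consumed(mol_so2):
    # 2 MnO4- + 5 SO2 + 2 H2O -> 5 SO4^2- + 2 Mn^2+ + 4 H+
    # so each mole of SO2 consumes 2/5 mole of permanganate
    return 2.0 * mol_so2 / 5.0

# 1.0 mmol of SO2 decolorizes 0.40 mmol of MnO4-
print(f"{mno4_consumed(1.0e-3) * 1000:.2f} mmol MnO4- consumed")
```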
Performing qualitative analysis properly requires a knowledge of certain basic laboratory techniques. In order to speed up procedures, all techniques will be on a semimicro scale. This scale involves volumes of 1–2 mL of solution, with reagents added dropwise with eye droppers. Containers will generally be standard 75 mm test tubes, which hold about 3 mL. Techniques for working with volumes of this magnitude are outlined below.
Water
Whenever it is necessary to use water in a procedure, use distilled water. Ordinary tap water is not completely pure; it may introduce substances for which you are trying to test, or other contaminants that interfere with the analysis.
Dispensing Reagent Solutions
When obtaining reagents from the reagent bottles, always dispense the reagent with the dropper contained in the reagent bottle, whether dispensing the reagent directly into your sample, or obtaining a quantity of reagent in another container. Do not touch the dropper to the solution to which you are adding the reagent or to your sample container. Do not set the dropper on the reagent bench or lab bench. Return the stopper promptly to the reagent bottle from which it originated. Do not place anything into a reagent bottle other than the dropper which is contained in it. If you need a volume greater than 2 mL, use a graduated cylinder. For lesser volumes, you may want to calibrate one of your eye droppers by counting how many drops of water it takes to deliver 1 mL into a graduated cylinder.
Stirring Rods
When reagents are added to a solution, it is essential that the solution be stirred thoroughly. Stirring rods can be prepared by cutting short lengths of thin glass rod and fire-polishing the ends. The stirring rods get wet with each use and, if not properly cleaned, will contaminate the next solution. A simple way to keep stirring rods clean is to place them in a beaker of clean distilled water and swirl them about after each use. The contamination will be highly diluted and can remain in the water. However, it is advisable to change the water periodically to minimize contamination.
Adjusting pH
At times you will want to make a solution acidic or basic. Add the proper reagent dropwise, stirring well with a stirring rod after each addition, and test the pH at appropriate intervals by touching the tip of the stirring rod to litmus or other pH indicating paper. Continue this procedure until the paper turns the proper color. If litmus paper is not sufficiently sensitive, obtain some pH indicator paper, which is available for various ranges of the pH scale.
Precipitation
In order to detect the formation of a precipitate, both the solution being used and the reagent must be clear (transparent, but not necessarily colorless). Precipitation is accomplished by adding the specified amount of reagent to the solution and stirring well. Stir both in a circular direction and up and down. When precipitation appears to be complete, centrifuge to separate the solid. Before removing the supernatant liquid with a dropper or by decanting (pouring off), add a few drops more of the reagent to check for complete precipitation. If more precipitation occurs, add a few more drops of reagent, centrifuge, and test again.
Centrifuging
A centrifuge is used to separate a precipitate from a liquid. Put the test tube containing the precipitate into one of the locations in the centrifuge. Place another test tube containing an equal volume of water in the centrifuge location directly opposite your first test tube. This procedure is extremely important; it must be followed to maintain proper balance in the centrifuge. Otherwise, the centrifuge will not function properly and may be damaged.
Turn on the centrifuge and let it run for at least 30 seconds. Turn the centrifuge off and let it come to a complete stop without touching it. Stopping the centrifuge with your hand is not only dangerous, but is likely to stir up your precipitate. The precipitate should settle to a compact mass at the bottom of the test tube. The liquid above the precipitate (the supernatant) should not have any precipitate suspended in it. If it does, centrifuge again. The supernatant can then be poured off (decanted) into another test tube without disturbing the precipitate. All of the liquid should be decanted in a single pouring motion to avoid resuspending the precipitate. An eye dropper or a dropper with an elongated tip may also be used to draw off the supernatant.
Washing a Precipitate
After a precipitate has been centrifuged and the supernatant liquid decanted or drawn off, there is still a little liquid present in the precipitate. To remove any ions which might interfere with further testing, this liquid should be removed with a wash liquid, usually distilled water. The wash liquid must be a substance which will not interfere with the analysis, cause further precipitation, or dissolve the precipitate. Add the wash liquid to the precipitate, stir well, centrifuge, and decant the wash liquid. The wash liquid is usually discarded. Precipitates should be washed twice for best results.
Transferring a Precipitate
Sometimes you will want to divide a separated and washed precipitate into two portions, in order to carry out two additional tests. To transfer part of the precipitate to another test tube, add a small amount of distilled water to the precipitate, stir the mixture to form a slurry, and quickly pour half of the slurry into another container. Do not use a spatula. This could contaminate your sample.
Heating Solutions
Test tubes containing reaction mixtures are never to be heated directly over an open flame. If a solution is to be heated, it should be placed in a test tube and suspended in a beaker of boiling (or in some cases only hot) water. It will be convenient to keep a beaker of water hot throughout the laboratory period. If hot water is required in a procedure, it should be distilled water heated in a test tube suspended in the beaker of boiling water. Do not use water directly from the beaker; it may be contaminated.
Evaporating a Solution
Sometimes it is necessary to boil a solution to reduce the volume and concentrate a species or drive off a volatile species. To boil a liquid, place it in a small porcelain casserole or evaporating dish and heat it on a wire gauze with a small flame. Watch it carefully and do not overheat it. Generally, you do not want to heat to dryness as this might decompose the sample. Stir the solution during the evaporation. Do not try to evaporate a solution in a small test tube. It will take much longer and the contents of the tube may be ejected if the tube is heated too strongly.
Spatulas
Never place a metal spatula in a solution. It may dissolve and cause contamination. If you need to manipulate a solid, use a rubber policeman on a stirring rod.
Cleaning Glassware
Cleanliness is essential for a successful procedure. All apparatus must be cleaned well with soap and a brush, rinsed with tap water, and finally rinsed with distilled water.
Sulfide Ion
In any procedures involving sulfide ion, thioacetamide ($\ce{CH3CSNH2}$) should be used as the source of sulfide ion. Upon heating in water (or acidic or basic solution), thioacetamide decomposes to $\ce{CH3CO2^{-}}$, $\ce{NH4^{+}}$, and $\ce{H2S}$ (or $\ce{S^{2-}}$ in basic solution):
$\ce{CH3CSNH2 + 2H2O -> CH3CO2^{-} + NH4^{+} + H2S} \nonumber$
Thioacetamide is an organosulfur compound with the formula $\ce{C2H5NS}$. This white crystalline solid is soluble in water and serves as a source of sulfide ions in the synthesis of organic and inorganic compounds.
Behavior of Ions with Sulfide
Ions not listed here either do not react with sulfide ($\ce{S^{2-}}$), or they should have been removed by precipitation as chlorides or sulfates before sulfide is added to the metal ion mixture. Ions that will precipitate in acidic solutions of sulfide:
$\ce{As^{3+}}$, $\ce{Bi^{3+}}$, $\ce{Hg^{2+}}$, $\ce{Cd^{2+}}$, $\ce{Sn^{2+}}$, $\ce{Sn^{4+}}$, $\ce{Sb^{3+}}$, $\ce{Pb^{2+}}$, $\ce{Cu^{2+}}$
Of these, $\ce{Hg^{2+}}$, $\ce{Sn^{2+}}$, and $\ce{Sb^{3+}}$ dissolve in basic solutions containing excess $\ce{S^{2-}}$ due to complex ion formation. The others remain insoluble in basic solution.
Ions that will precipitate in moderately basic (pH=9) solutions of sulfide:
$\ce{Fe^{2+}}$, $\ce{Fe^{3+}}$, $\ce{Al^{3+}}$, $\ce{Zn^{2+}}$, $\ce{Mn^{2+}}$, $\ce{Cr^{3+}}$, $\ce{Co^{2+}}$, $\ce{Ni^{2+}}$
Of these ions, $\ce{Al^{3+}}$, $\ce{Fe^{3+}}$, and $\ce{Cr^{3+}}$ precipitate as the hydroxide rather than the sulfide. Of those that form insoluble sulfides, all except $\ce{CoS}$ and $\ce{NiS}$ are soluble in dilute aqueous hydrochloric acid.
The concentration of $\ce{S^{2-}}$ is the controlling factor in determining whether an ion will precipitate. Consider the dissociation equilibrium for hydrosulfuric acid:
$\ce{H2S(aq) <=> 2H^{+}(aq) + S^{2-}(aq)} \nonumber$
The more strongly acidic the solution, the lower the concentration of $\ce{S^{2-}}$. The sulfides that precipitate in base will not precipitate in acid, because the concentration of $\ce{S^{2-}}$ is too low. Those sulfides that precipitate in acid will also precipitate in base, because the concentration of $\ce{S^{2-}}$ is higher than necessary for precipitation.
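This pH dependence can be made quantitative. Splitting the overall dissociation into its two steps gives $[\ce{S^{2-}}] = K_{a1}K_{a2}[\ce{H2S}]/[\ce{H^{+}}]^2$. In the sketch below the acid constants are derived from the hydrolysis constants quoted in this text ($K_a = K_w/K_b$), and the 0.1 M saturated $\ce{H2S}$ concentration is an assumed round figure:

```python
def sulfide_conc(ph, c_h2s=0.10):
    """[S2-] in a solution saturated with H2S (~0.1 M, assumed),
    as a function of pH."""
    kw = 1.0e-14
    ka1 = kw / 1.0e-7  # H2S <=> H+ + HS-   (from Kb of HS-)
    ka2 = kw / 8.3     # HS- <=> H+ + S2-   (from Kb of S2-)
    h = 10.0 ** (-ph)
    return ka1 * ka2 * c_h2s / (h * h)

# [S2-] rises 100-fold for every unit increase in pH:
print(f"pH 1: {sulfide_conc(1):.1e} M,  pH 9: {sulfide_conc(9):.1e} M")
```

The sixteen-order-of-magnitude difference between pH 1 and pH 9 is why adjusting acidity cleanly separates the acidic and basic sulfide groups.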
Procedure for Sulfide Separations
To separate the acidic sulfide group ions from the basic sulfide group ions, follow this procedure:
To 5 mL of solution containing ions from both sulfide groups, add 10 drops (0.5 mL) of 6 M $\ce{HNO3}$ and 20 drops (1 mL) of 6 M $\ce{HCl}$. Evaporate the mixture to dryness slowly in an evaporating dish in a hood. The last few drops should be evaporated with steam by placing the evaporating dish on top of a beaker of boiling water. Heating to complete dryness with a flame could evaporate the chloride salts $\ce{PbCl2}$, $\ce{HgCl2}$, or $\ce{SnCl4}$, if they are present. It is necessary to use steam to get the salt mixture completely dry so there will be no excess acid present after evaporation.
Add 2 mL of $\ce{H2O}$ to the cool sulfide salt mixture. Swirl and stir to dissolve as much salt as possible. Transfer the solution and the residue to a test tube for precipitation. Rinse the evaporating dish with 1 mL of $\ce{H2O}$ and 4 drops of 6 M $\ce{HCl}$ and add to the same test tube. The test tube should now contain all your salts in 3 mL of solution. If a precipitate is still present, it is probably some oxychloride salts that may not be completely dissolved in the 0.38 M $\ce{H^{+}(aq)}$ solution.
Precipitate the acidic sulfide group ions by adding 1 mL of 1 M thioacetamide. Stir and heat the mixture in a boiling water bath for 7 minutes. Then add 1.5 mL $\ce{H2O}$ and 0.5 mL thioacetamide and heat for another 5 minutes. Prepare the following wash solution while heating your sample if you need your precipitate for additional separations or tests.
Wash solution: Add 2 drops of 1 M thioacetamide and 1 mL of 1 M $\ce{NH4Cl}$ to 1 mL of $\ce{H2O}$ and heat in a water bath. Remove any pale yellow elemental sulfur present from decomposition of thioacetamide by centrifugation and decanting.
The solution should contain any basic sulfide group ions, so centrifuge and save the solution for analysis, if it might contain any basic sulfide group ions. Wash the precipitate, which contains the acidic sulfide group ions as sulfide (or hydroxide) salts, with 1 mL of the wash solution. Centrifuge and add the decanted wash liquid to the basic sulfide group solution. Wash the precipitate again with the remaining 1 mL of wash solution. Centrifuge and discard the decanted wash solution.
Sulfide precipitates can be dissolved by adding 2–5 mL of 6 M $\ce{HNO3}$. If necessary to dissolve all the solid, add more nitric acid. Heat the mixture in a boiling water bath for a few minutes. Centrifuge and remove the solution. Nitric acid will dissolve some precipitates by shifting the solubility equilibrium; for example:
$\ce{CuS + 2H^{+} -> Cu^{2+} + H2S} \nonumber$
Hot nitric acid will also oxidize sulfide ion to sulfur:
$\ce{3S^{2-} + 2NO3^{-}(aq) + 8H^{+} \rightarrow 3S + 2NO (g) + 4H2O (l)} \nonumber$
$\ce{HgS}$ does not dissolve unless heated for a long time with more concentrated $\ce{HNO3}$ because it is so insoluble. Be cautious however, since prolonged heating might oxidize sulfur to sulfate ion, which could precipitate $\ce{PbSO4}$ if lead ion is present.
To get rid of excess sulfide ion in a solution, acidify the solution with $\ce{HNO3}$ and heat. Centrifuge off any sulfur formed. The solution can be tested for sulfide ion with lead acetate paper, which will turn black due to formation of lead sulfide if sulfide ion is present in the solution.
CAUTION
Hydrogen sulfide is an extremely toxic gas. Work only under a hood.
Quantitative analysis is the determination of the absolute or relative abundance (often expressed as a concentration) of one, several, or all of the substances present in a sample.
• Accuracy of Spectrophotometer Readings
The needle deflection or the number shown on the digital display of a spectrophotometer is proportional to the transmittance of the solution. How do errors in transmittance readings affect the accuracy of solution concentration values?
• Density and Percent Compositions
Density and percent composition are important concepts in chemistry. Each have basic components as well as broad applications. Components of density are: mass and volume, both of which can be more confusing than at first glance. An application of the concept of density is determining the volume of an irregular shape using a known mass and density. Determining Percent Composition requires knowing the mass of entire object or molecule and the mass of its components.
• Dynamic Light Scattering
Dynamic Light Scattering (DLS), also called Photon Correlation Spectroscopy, is a spectroscopic technique used in Chemistry, Biochemistry, and Physics primarily to characterize the hydrodynamic radius of polymers, proteins, and colloids in solution. DLS is a useful technique for determining the size distribution of nanoparticles in a suspension and detecting small amounts of high mass species in protein samples.
• Significant Digits
Significant Digits - Number of digits in a figure that express the precision of a measurement instead of its magnitude. The easiest method to determine significant digits is done by first determining whether or not a number has a decimal point. This rule is known as the Atlantic-Pacific Rule. The rule states that if a decimal point is Absent, then the zeroes on the Atlantic/right side are insignificant. If a decimal point is Present, then the zeroes on the Pacific/left side are insignificant.
• Temperature Basics
The concept of temperature may seem familiar to you, but many people confuse temperature with heat. Temperature is a measure of how hot or cold an object is relative to another object (its thermal energy content), whereas heat is the flow of thermal energy between objects with different temperatures. Three different scales are commonly used to measure temperature: Fahrenheit (expressed as °F), Celsius (°C), and Kelvin (K).
• The Scientific Method
The Scientific Method is simply a framework for the systematic exploration of patterns in our world. It just so happens that this framework is extremely useful for the examination of chemistry and its many questions. The scientific process, an iterative process, uses the repeated acquisition and testing of data through experimental procedures to disprove hypotheses.
• Units of Measure
Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. Measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
Thumbnail: The seven SI base units and their interdependency. Clockwise from top left: second (time), metre (distance), Ampere (electric current), mole (amount of substance), kilogram (mass), Kelvin (temperature) and candela (luminous intensity). (CC BY-SA 3.0; Dono)
Quantifying Nature
The needle deflection or the number shown on the digital display of a spectrophotometer is proportional to the transmittance of the solution. How do errors in transmittance readings affect the accuracy of solution concentration values? The concentration as a function of the transmittance is given by the equation
$c(T) = - \dfrac{\log T}{ \epsilon \,b} \nonumber$
Let $c_o$ be the true concentration and $T_o$ the corresponding transmittance, i.e. $c_o = c(T_o)$. Suppose that the actual transmittance measured is $T_o + \Delta T$, corresponding to the concentration
$c_o + \Delta c = c(T_o + \Delta T). \nonumber$
The error in the transmittance is $\Delta T$ and that of the concentration is $\Delta c$.
By using a Taylor series expansion, and discarding all terms higher than $\Delta T$ to the first power, it is possible to show that:
$\Delta c = - \dfrac{\Delta T}{ 2.303 \epsilon \,b\,T} \nonumber$
Dividing the second equation by the first gives us:
$\dfrac{\Delta c}{c} = \dfrac{\Delta T}{ 2.303\, T \log T} = \dfrac{\Delta T}{ T \ln T} \nonumber$
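Both error expressions can be sanity-checked numerically: for a small $\Delta T$, the finite difference $c(T+\Delta T) - c(T)$ should agree with the first-order formulas. A sketch, with illustrative (assumed) values for $\epsilon$ and $b$:

```python
import math

EPS, B = 1.0e4, 1.0   # illustrative molar absorptivity (L/mol/cm) and path length (cm)

def conc(T):
    """Beer-Lambert concentration from transmittance: c = -log10(T) / (eps*b)."""
    return -math.log10(T) / (EPS * B)

T0, dT = 0.368, 1e-6
delta_c = conc(T0 + dT) - conc(T0)            # actual concentration error

first_order = -dT / (2.303 * EPS * B * T0)    # Taylor-series formula above
relative = dT / (T0 * math.log(T0))           # relative-error formula above

print(delta_c, first_order)                   # agree to within ~0.02%
print(delta_c / conc(T0), relative)           # agree to within ~0.02%
```

The small residual discrepancy comes from rounding $\ln 10$ to 2.303 and from the higher-order Taylor terms that were discarded.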
Values of $-(T \ln T)^{-1}$ as a function of $T$ or $A$ ($A = -\log T$) are tabulated below. Below the tabulation one finds a plot of $-(T \ln T)^{-1}$ versus $T$.
The relative error in the concentration, for a given T, has its smallest value, when T = 1/e = 0.368 or when A = 0.434. The minimum is not sharp and good results can be expected in a transmittance range from 0.2 to 0.6 or an absorbance range from 0.7 to 0.2. An inspection of the graph below indicates that transmittance values of 0.1 and 0.8 are the outside limits between which one can expect to obtain reasonably accurate results. These transmittance values correspond to an absorbance range of 0.1 to 1.0 absorbance units. This is the rationale for limiting your calibration curve to that absorbance range.
Table 1: Values of -(T In T)-1 as a Function of T and A
T -(T ln T)-1 A
0.010 21.71 2.00
0.050 6.68 1.30
0.100 4.34 1.00
0.150 3.51 0.824
0.200 3.11 0.699
0.250 2.89 0.602
0.300 2.77 0.523
0.350 2.72 0.456
0.368 2.718 0.434
0.400 2.73 0.398
0.450 2.78 0.347
0.500 2.89 0.301
0.550 3.04 0.260
0.600 3.26 0.222
0.650 3.57 0.187
0.700 4.01 0.155
0.750 4.63 0.125
0.800 5.60 0.097
0.850 7.24 0.071
0.900 10.55 0.046
0.950 20.52 0.022
0.990 100.50 0.004
Graph of -(T ln T)-1 vs. T
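The tabulated error factor and the location of its minimum can be regenerated programmatically; a quick sketch:

```python
import math

def err_factor(T):
    """Relative-error multiplier -(T ln T)^-1 from Table 1 above."""
    return -1.0 / (T * math.log(T))

# Reproduce a few table rows: T, -(T ln T)^-1, A = -log10(T)
for T in (0.100, 0.368, 0.800):
    print(T, round(err_factor(T), 2), round(-math.log10(T), 3))

# The minimum sits at T = 1/e, i.e. A = log10(e) = 0.434
T_min = min((T / 1000 for T in range(10, 1000)), key=err_factor)
print(T_min)  # ~0.368
```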
Contributors and Attributions
• Ulrich de la Camp and Oliver Seely (California State University, Dominguez Hills).
Which one weighs more, a kilogram of feathers or a kilogram of bricks? Though many people will say that a kilogram of bricks is heavier, they actually weigh the same! However, many people are caught up by the concept of density, which causes them to answer the question incorrectly. A kilogram of feathers clearly takes up more space, but this is because it is less "dense." But what is density, and how can we determine it?
Introduction
Density ($\rho$) is a physical property found by dividing the mass of an object by its volume. Regardless of the sample size, density is always constant. For example, the density of a pure sample of tungsten is always 19.25 grams per cubic centimeter. This means that whether you have one gram or one kilogram of the sample, the density will never vary. The equation is as follows:
$Density = \dfrac{Mass}{Volume} \nonumber$
or just
$\rho = \dfrac{m}{v} \label{dens}$
Based on Equation $\ref{dens}$, it's clear that density can, and does, vary from element to element and substance to substance due to differences in the relation of mass and volume. Let us break it down one step further. What are mass and volume? We cannot understand density until we know its parts: mass and volume. The following two sections will teach you all the information you need to know about volume and mass to properly solve and manipulate the density equation.
Mass
Mass concerns the quantity of matter in an object. The SI unit for mass is the kilogram (kg), although grams (g) are commonly used in the laboratory to measure smaller quantities. Often, people mistake weight for mass. Weight concerns the force exerted on an object as a function of mass and gravity. This can be written as
$\text{Weight} = \text{mass} \times \text{gravity} \nonumber$

$W = mg \nonumber$
Hence, weight changes due to variations in gravity and acceleration. For example, the mass of a 1 kg cube will continue to be 1 kg whether it is on the top of a mountain, the bottom of the sea, or on the moon, but its weight will differ. Another important difference between mass and weight is how they are measured. Weight is measured with a scale, while mass must be measured with a balance. Just as people confuse mass and weight, they also confuse scales and balances. A balance counteracts the effects of gravity while a scale incorporates it. There are two types of balances found in the laboratory: electronic and manual. With a manual balance, you find the unknown mass of an object by adjusting or comparing known masses until equilibrium is reached.
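The mass/weight distinction is easy to see numerically; a minimal sketch comparing the same 1 kg mass under Earth and Moon gravity (the g values are standard approximations):

```python
def weight(mass_kg, g):
    """W = m * g, in newtons."""
    return mass_kg * g

m = 1.0                      # kg; the mass itself never changes
print(weight(m, 9.81))       # ~9.81 N on Earth
print(weight(m, 1.62))       # ~1.62 N on the Moon
```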
Volume
Volume describes the quantity of three dimensional space that an object occupies. The SI unit for volume is meters cubed (m3), but milliliters (mL), centimeters cubed (cm3), and liters (L) are more common in the laboratory. There are many equations to find volume. Here are just a few of the easy ones:
Volume = (length)3 for a cube, or (length)(width)(height) for a rectangular prism, or (base area)(height)
Density: A Further Investigation
We know all of density's components, so let's take a closer look at density itself. The unit most widely used to express density is g/cm3 or g/mL, though the SI unit for density is technically kg/m3. Grams per centimeter cubed is equivalent to grams per milliliter (g/cm3 = g/mL). To solve for density, simply follow the equation d = m/v. For example, if you had a metal cube with mass 7.0 g and volume 5.0 cm3, the density would be
$\rho = \dfrac{7\,g}{5\,cm^3}= 1.4\, g/cm^3 \nonumber$
Sometimes, you have to convert units to get the correct units for density, such as mg to g or in3 to cm3.
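The division above, including the kind of unit conversion just mentioned, can be sketched:

```python
def density(mass_g, volume_cm3):
    """rho = m / V in g/cm^3."""
    return mass_g / volume_cm3

# The metal-cube example: 7.0 g in 5.0 cm^3
print(density(7.0, 5.0))          # 1.4 g/cm^3

# Converting units first gives the same density: 7000 mg and 5.0 mL
print(density(7000 / 1000, 5.0))  # mg -> g; 1 mL == 1 cm^3
```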
Density can be used to help identify an unknown element. Of course, you have to know the density of an element with respect to other elements. Below is a table listing the density of a few elements from the Periodic Table at standard conditions for temperature and pressure, or STP corresponding to a temperature of 273 K (0° Celsius) and 1 atmosphere of pressure.
Element Name and Symbol Density (g/cm3) Atomic Number
Table $1$: Density of elements
Hydrogen (H) 0.000089 $(8.9 \times 10^{-5})$ at 0 °C and 1 Atm. pressure 1
Helium (He) 0.000164 $(1.64 \times 10^{-4})$ at 0 °C and 1 Atm. pressure 2
Aluminum (Al) 2.7 13
Zinc (Zn) 7.13 30
Tin (Sn) 7.31 50
Iron (Fe) 7.87 26
Nickel (Ni) 8.9 28
Cobalt (Co) 8.9 27
Copper (Cu) 8.96 29
Silver (Ag) 10.5 47
Lead (Pb) 11.35 82
Mercury (Hg) 13.55 80
Gold (Au) 19.32 79
Platinum (Pt) 21.45 78
Osmium (Os) 22.6 76
As can be seen from the table, the most dense element is osmium (Os) with a density of 22.6 g/cm3. The least dense element is hydrogen (H) with a density of 0.000089 g/cm3.
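Identifying an unknown from its measured density, as described above, amounts to a nearest-match lookup; a sketch using a few of the tabulated values (the function name and data subset are illustrative):

```python
# Subset of the density table above (g/cm^3)
DENSITIES = {
    "Aluminum": 2.70, "Zinc": 7.13, "Iron": 7.87, "Copper": 8.96,
    "Silver": 10.5, "Lead": 11.35, "Gold": 19.32, "Osmium": 22.6,
}

def identify(measured):
    """Return the element whose tabulated density is closest to `measured`."""
    return min(DENSITIES, key=lambda el: abs(DENSITIES[el] - measured))

print(identify(10.45))  # Silver
print(identify(19.3))   # Gold
```

In practice one would also need the measurement uncertainty: a measured 8.9 g/cm³, for instance, cannot distinguish nickel from cobalt.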
Density and Temperature
Density generally decreases with increasing temperature and likewise increases with decreasing temperatures. This is because volume differs according to temperature. Volume increases with increasing temperature. If you are curious as to why the density of a pure substance could vary with temperature, check out the ChemWiki page on Van Der Waal interactions. Below is a table showing the density of pure water with differing temperatures.
Temperature (C) Density (g/cm3)
Table $2$: Density of water as a function of temperature
100 0.9584
80 0.9718
60 0.9832
40 0.9922
30 0.9957
25 0.997
22 0.9978
20 0.9982
15 0.9991
10 0.9997
4 1.000
0 (liquid) 0.9998
0 (solid) 0.9150
As can be seen from Table $2$, the density of water decreases with increasing temperature. Liquid water also shows an exception to this rule from 0 degrees Celsius to 4 degrees Celsius, where it increases in density instead of decreasing as expected. Looking at the table, you can also see that ice is less dense than water. This is unusual as solids are generally denser than their liquid counterparts. Ice is less dense than water due to hydrogen bonding. In the water molecule, the hydrogen bonds are strong and compact. As the water freezes into the hexagonal crystals of ice, these hydrogen bonds are forced farther apart and the volume increases. With this volume increase comes a decrease in density. This explains why ice floats to the top of a cup of water: the ice is less dense.
Even though the rule of density and temperature has its exceptions, it is still useful. For example, it explains how hot air balloons work.
Density and Pressure
Density increases with increasing pressure because volume decreases as pressure increases. And since density=mass/volume , the lower the volume, the higher the density. This is why all density values in the Periodic Table are recorded at STP, as mentioned in the section "Density and the Periodic Table." The decrease in volume as related to pressure is explained in Boyle's Law: $P_1V_1 = P_2V_2$ where P = pressure and V = volume. This idea is explained in the figure below. More about Boyle's Law, as well as the other gas laws, can be found here.
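Boyle's law rearranges directly to give the new volume at a new pressure; a minimal sketch (function name is illustrative):

```python
def boyle_v2(p1, v1, p2):
    """Boyle's law: P1*V1 = P2*V2  ->  V2 = P1*V1 / P2 (fixed T and n)."""
    return p1 * v1 / p2

# Doubling the pressure on 10.0 L of gas halves its volume,
# which doubles its density (same mass, half the volume).
print(boyle_v2(1.0, 10.0, 2.0))  # 5.0 L
```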
Archimedes' Principle
The Greek scientist Archimedes made a significant discovery in 212 B.C. The story goes that Archimedes was asked to find out for the King if his goldsmith was cheating him by replacing his gold for the crown with silver, a cheaper metal. Archimedes did not know how to find the volume of an irregularly shaped object such as the crown, even though he knew he could distinguish between elements by their density. While meditating on this puzzle in a bath, Archimedes recognized that when he entered the bath, the water rose. He then realized that he could use a similar process to determine the density of the crown! He then supposedly ran through the streets naked shouting "Eureka," which means "I found it!" in Greek.
Archimedes then tested the king's crown by taking a genuine gold crown of equal mass and comparing the densities of the two. The king's crown displaced more water than the gold crown of the same mass, meaning that the king's crown had a greater volume and thus had a smaller density than the real gold crown. The king's "gold" crown, therefore, was not made of pure gold. Of course, this tale is disputed today because Archimedes was not precise in all his measurements, which would make it hard to determine accurately the differences between the two crowns.
Archimedes' Principle states that if an object has a greater density than the liquid that it is placed into, it will sink and displace a volume of liquid equal to its own. If it has a smaller density, it will float and displace a mass of liquid equal to its own. If the density is equal, it will not sink or float. This principle also explains why balloons filled with helium float. Balloons, as we learned in the section concerning density and temperature, float because they are less dense than the surrounding air. Helium is less dense than the atmospheric air, so it rises. Archimedes' Principle can also be used to explain why boats float. Boats, including all the air space, within their hulls, are far less dense than water. Boats made of steel can float because they displace their mass in water without submerging all the way.
Table $3$ below gives the densities of a few liquids to put things into perspective.
Liquid
Density in kg/m3
Density in g/cm3
Table $3$: Density of select liquids
2-Methoxyethanol
964.60
0.9646
Acetic Acid
1049.10
1.049
Acetone
789.86
0.7898
Alcohol, ethyl
785.06
0.7851
Alcohol, methyl
786.51
0.7865
Ammonia
823.35
0.8234
Benzene
873.81
0.8738
Water, pure
1000.00
1.000
Percent Composition
Percent composition is very simple. Percent composition tells you by mass what percent of each element is present in a compound. A chemical compound is the combination of two or more elements. If you are studying a chemical compound, you may want to find the percent composition of a certain element within that chemical compound. The equation for percent composition is (mass of element/molecular mass) x 100.
Steps to calculating the percent composition of the elements in an compound
1. Find the molar mass of all the elements in the compound in grams per mole.
2. Find the molecular mass of the entire compound.
3. Divide the component's molar mass by the entire molecular mass.
4. You will now have a number between 0 and 1. Multiply it by 100% to get percent composition.
Tips for solving:
1. The percent composition of all elements in a compound must add up to 100%. In a binary compound, you can find the % of the first element, then do 100% − (% first element) to get the % of the second element.
2. If using a calculator, you can store the overall molar mass to a variable such as "A". This will speed up calculations, and reduce errors.
Example $1$: Phosphorus Pentachloride
What is the percent composition of phosphorus and chlorine in $PCl_5$?
Solution
Find the molar mass of all the elements in the compound in grams per mole.
• $P$: $1 \times 30.974 \,g/mol = 30.974\, g/mol$
• $Cl$: $5 \times 35.453 \, g/mol = 177.265\, g/mol$
Find the molecular mass of the entire compound.
• $PCl_5$: $1 \times 30.974 \,g/mol + 5 \times 35.453 \, g/mol = 208.239 \, g/mol$
Divide the component's molar mass by the entire molecular mass.
• $P$: $\dfrac{30.974 \, g/mol}{208.239\, g/mol} \times 100\% = 14.87\%$
• $Cl$: $\dfrac{177.265 \, g/mol}{208.239\, g/mol} \times 100\% = 85.13 \%$
Therefore, in $PCl_5$ is 14.87% phosphorus and 85.13% chlorine by mass.
Example $2$: HCl
What is the percent composition of each element in hydrochloric acid (HCl).
Solution
First find the molar mass of hydrogen.
$H = 1.00794 \,g \nonumber$
Now find the molecular mass of the HCl molecule:
$1.00794\,g + 35.4527\,g = 36.46064\,g \nonumber$
Follow steps 3 and 4:
$\left(\dfrac{1.00794\,g}{36.46064\,g}\right) \times 100\% = 2.76\% \nonumber$
Now just subtract to find the percent by mass of chlorine in the compound:
$100\%-2.76\% = 97.24\% \nonumber$
Therefore, $HCl$ is 2.76% hydrogen and 97.24% chlorine by mass.
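Steps 1-4 can be rolled into a short function; rerunning the two worked examples with the atomic masses used above (the function name and input format are illustrative choices):

```python
def percent_composition(formula):
    """formula: dict mapping element -> (count, atomic mass in g/mol).
    Returns a dict mapping element -> percent by mass."""
    total = sum(n * m for n, m in formula.values())
    return {el: 100.0 * n * m / total for el, (n, m) in formula.items()}

pcl5 = percent_composition({"P": (1, 30.974), "Cl": (5, 35.453)})
print({el: round(p, 2) for el, p in pcl5.items()})  # P 14.87%, Cl 85.13%

hcl = percent_composition({"H": (1, 1.00794), "Cl": (1, 35.4527)})
print({el: round(p, 2) for el, p in hcl.items()})   # H 2.76%, Cl 97.24%
```

Note that the percentages always sum to 100%, which is a quick sanity check on the arithmetic (tip 1 above).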
Percent Composition in Everyday Life
Percent composition plays an important role in everyday life. It is more than just the amount of chlorine in your swimming pool because it concerns everything from the money in your pocket to your health and how you live. The next two sections describe percent composition as it relates to you.
Nutrition Labels
The nutrition label found on the container of every bit of processed food sold by the local grocery store employs the idea of percent composition. On all nutrition labels, a known serving size is broken down into five categories: Total Fat, Cholesterol, Sodium, Total Carbohydrate, and Protein. These categories are broken down into further subcategories, including Saturated Fat and Dietary Fiber. The mass for each category, except Protein, is then converted to percent of Daily Value. Only two subcategories, Saturated Fat and Dietary Fiber, are converted to percent of Daily Value. The Daily Value is based on the mass of each category recommended per day per person for a 2000 calorie diet. The mass of protein is not converted to percent because there is no recommended daily value for protein. Following is a picture outlining these ideas.
For example, if you wanted to know the percent by mass of the daily value for sodium you are eating when you eat one serving of the food with this nutrition label, then go to the category marked sodium. Look across the same row and read the percent written. If you eat one serving of this food, then you will have consumed about 9% of your daily recommended value for sodium. To find the percent mass of fat in the whole food, you could divide 3.5 grams by 15 grams, and see that this snack is 23.33% fat.
Penny: The Lucky Copper Coin
The penny should be called "the lucky copper coated coin." The penny has not been made of solid copper since part of 1857. After 1857, the US government started adding other cheaper metals to the mix. The penny, being only one cent, is literally not worth its weight in copper. People could melt copper pennies and sell the copper for more than the pennies were worth. After 1857, nickel was mixed with the more expensive copper. After 1864, the penny was made of bronze. Bronze is 95% copper and 5% zinc and tin. For one year, 1943, the penny had no copper in it due to the expense of World War II; it was just zinc-coated steel. After 1943 until 1982, the penny went through periods where it was brass or bronze.
Today, the penny in America is 2.5% copper with 97.5% zinc. The copper coats the outside of the penny while the inner portion is zinc. For comparison's sake, the penny in Canada is 94% steel, 1.5% nickel, and 4.5% copper.
The percent composition of a penny may actually affect health, particularly the health of small children and pets. Since the newer pennies are made mainly of zinc instead of copper, they are a danger to a child's health if ingested. Zinc is very susceptible to acid. If the thin copper coating is scratched and the hydrochloric acid present in the stomach comes into contact with the zinc core it could cause ulcers, anemia, kidney and liver damage, or even death in severe cases. Three important factors in penny ingestion are time, pH of the stomach, and amount of pennies ingested. Of course, the more pennies swallowed, the more danger of an overdose of zinc. The more acidic the environment, the more zinc will be released in less time. This zinc is then absorbed and sent to the liver where it begins to cause damage. In this kind of situation, time is of the essence. The faster the penny is removed, the less zinc is absorbed. If the penny or pennies are not removed, organ failure and death can occur.
Below is a picture of a scratched penny before and after it had been submerged in lemon juice. Lemon juice has a similar pH of 1.5-2.5 when compared to the normal human stomach after food has been consumed. Time elapsed: 36 hours.
As you can see, the copper is vastly unharmed by the lemon juice. That's why pennies made before 1982 with mainly copper (except the 1943 penny) are relatively safe to swallow. Chances are they would pass through the digestive system naturally before any damage could be done. Yet, it is clear that the zinc was partially dissolved even though it was in the lemon juice for only a limited amount of time. Therefore, the percent composition of post 1982 pennies is hazardous to your health and the health of your pets if ingested.
Summary
Density and percent composition are important concepts in chemistry. Each have basic components as well as broad applications. Components of density are: mass and volume, both of which can be more confusing than at first glance. An application of the concept of density is determining the volume of an irregular shape using a known mass and density. Determining Percent Composition requires knowing the mass of entire object or molecule and the mass of its components. In the laboratory, density can be used to identify an element, while percent composition is used to determine the amount, by mass, of each element present in a chemical compound. In daily life, density explains everything from why boats float to why air bubbles will try to escape from soda. It even affects your health because bone density is very important. Similarly, percent composition is commonly used to make animal feed and compounds such as the baking soda found in your kitchen.
Density Problems
These problems are meant to be easy in the beginning and then gradually become more challenging. Unless otherwise stated, answers should be in g/mL or the equivalent g/cm3.
1. If you have a 2.130 mL sample of acetic acid with mass 0.002234 kg, what is the density?
2. Calculate the density of a 0.03020 L sample of ethyl alcohol with a mass of 23.71002 g.
3. Find the density of a sample that has a volume of 36.5 L and a mass of 10.0 kg.
4. Find the volume in mL of an object that has a density of 10.2 g/L and a mass of 30.0 kg.
5. Calculate the mass in grams of an object with a volume of 23.5 mL and density of 10.0 g/L.
6. Calculate the density of a rectangular prism made of metal. The dimensions of the prism are: 5cm by 4cm by 5cm. The metal has a mass of 50 grams.
7. Find the density of an unknown liquid in a beaker. The beaker's mass is 165 g when there is no liquid present. With the unknown liquid, the total mass is 309 g. The volume of the unknown is 125 mL.
8. Determine the density in g/L of an unknown with the following information. A 55 gallon tub weighs 137.5 lb when empty and 500.0 lb when filled with the unknown.
9. A ring has a mass of 5.00g and a volume of 0.476 mL. Is it pure silver?
10. What is the density of the solid in the image if the mass is 40 g? Make your answer have 3 significant figures.
11. Below is a model of a pyramid found at an archeological dig made of an unknown substance. It is too large to find the volume by submerging it in water. Also, the scientists refuse to remove a piece to test because this pyramid is a part of history. Its height is 150.0 m. The length of its base is 75.0 m and the width is 50.0 m. The mass of this pyramid is $5.50 \times 10^5$ kg. What is the density?
Density Problem Solutions
1. 1.049 g/mL
2. 0.7851 g/mL
3. 0.274 g/mL
4. $2.94 \times 10^6$ mL
5. 0.235 g
6. 0.5 g/cm3
7. 1.15 g/mL
8. 790 g/L
9. Yes
10. 0.195 g/cm3
11. 2.93 kg/m3 ($2.93 \times 10^{-3}$ g/cm3)
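Most of the answers above follow from ρ = m/V plus unit conversion; a spot-check sketch for problems 1, 3, 4, and 7:

```python
# 1. 2.234 g of acetic acid in 2.130 mL
print(round(0.002234 * 1000 / 2.130, 3))   # 1.049 g/mL

# 3. 10.0 kg in 36.5 L
print(round(10000 / 36500, 3))             # 0.274 g/mL

# 4. volume (mL) of 30.0 kg at 10.2 g/L
print(30000 / 10.2 * 1000)                 # ~2.94e6 mL

# 7. unknown liquid: (309 g - 165 g) in 125 mL
print(round((309 - 165) / 125, 2))         # 1.15 g/mL
```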
Percent Composition Problems
These problems will follow the same pattern of difficulty as those of density.
1. Calculate the percent by mass of each element in Cesium Fluoride (CsF).
2. Calculate the percent by mass of each element present in carbon tetrachloride (CCl4)
3. A solution of salt and water is 33.0% salt by mass and has a density of 1.50 g/mL. What mass of the salt in grams is in 5.00L of this solution?
4. A solution of water and HCl contains 25% HCl by mass. The density of the solution is 1.05 g/mL. If you need 1.7g of HCl for a reaction, what volume of this solution will you use?
5. A solution containing 42% NaOH by mass has a density of 1.30 g/mL. What mass, in kilograms, of NaOH is in 6.00 L of this solution?
Percent Composition Problem Solutions
1. CsF is 87.5% Cs and 12.5% F by mass
2. CCl4 is 92.2% Cl and 7.8% C by mass
3. 2480 g
4. 6.5 mL
5. 3.28 kg
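The solution-based answers combine percent by mass with density; a spot-check sketch for problems 1, 3, and 4 (atomic masses for CsF are assumed standard values):

```python
# 1. CsF percent composition (Cs 132.905 g/mol, F 18.998 g/mol)
total = 132.905 + 18.998
print(round(100 * 132.905 / total, 1))   # 87.5 % Cs

# 3. grams of salt in 5.00 L of 33.0% (by mass) solution, d = 1.50 g/mL
print(round(5000 * 1.50 * 0.330))        # 2475 g, ~2480 g to 3 sig figs

# 4. mL of 25% HCl solution (d = 1.05 g/mL) supplying 1.7 g of HCl
print(round(1.7 / 0.25 / 1.05, 1))       # 6.5 mL
```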
Dynamic Light Scattering (DLS), also called Photon Correlation Spectroscopy, is a spectroscopic technique used in Chemistry, Biochemistry, and Physics primarily to characterize the hydrodynamic radius of polymers, proteins, and colloids in solution. DLS is a useful technique for determining the size distribution of nanoparticles in a suspension and detecting small amounts of high mass species in protein samples.
Introduction
In a typical DLS experiment, a solution/suspension of analyte is irradiated with monochromatic laser light and fluctuations in the intensity of the diffracted light are measured as a function of time. Intensity data is then collected using an autocorrelator to determine the size distribution of particles or molecules in a sample.
In general, when a sample of particles with diameter much smaller than the wavelength of light is irradiated with light, each particle will diffract the incident light in all directions. This is called Rayleigh Scattering. If that diffracted light is projected as an image onto a screen it will generate a “speckle" pattern like the one seen here.3
Figure 1. A Typical Speckle Pattern
The dark areas in the speckle pattern represent regions where the diffracted light from the particles arrives out of phase interfering destructively and the bright areas represent regions where the diffracted light arrives in phase interfering constructively.
In practice, particle samples are typically not stationary because they are suspended in a solution and as a result they are moving randomly due to collisions with solvent molecules. This type of motion is called Brownian Motion and it is vital for DLS analysis because it allows the use of the Stokes-Einstein equation to relate the velocity of a particle in solution to its hydrodynamic radius.1
$D=\frac{kT}{6\pi \eta a} \nonumber$
In the Stokes-Einstein equation, D is the diffusion coefficient of the particle, k is the Boltzmann constant, T is the temperature, η is the viscosity of the solution, and a is the hydrodynamic radius of the particle. The diffusion coefficient (D) in the Stokes-Einstein relation is inversely proportional to the radius of the particle (a), which shows that for a system undergoing Brownian motion, small particles should diffuse faster than large ones. This is a key concept in DLS analysis.
In a sample of particles experiencing Brownian Motion, the distance between particles is constantly changing and this results in a Doppler Shift between the frequency of incoming light and the frequency of scattered light. Since the distance between particles affects the phase overlap of the diffracted light, the brightness of the spots on the speckle pattern will fluctuate in intensity as the particles change position with respect to each other. The rate of these intensity fluctuations depends on how fast the particles are moving and will therefore be slower for large particles and faster for small particles. This means that the fluctuating scattered light data contains information about the size of the particles.2
Collecting Data
In a typical DLS experiment, a suspension of analyte such as nanoparticles or polymer molecules is irradiated with monochromatic light from a laser while intensity of the diffracted light is measured. The detector is typically a photomultiplier positioned at 90° to the light source and it is used to collect light diffracted from the sample. Collimating lenses are used to focus laser light to the center of the sample holder and to prevent saturation of the photomultiplier tube.1
Ideally, the sample itself should be free of unwanted particles that could contribute to light scattering. For this reason dispersions are often filtered or purified before being measured. Samples are also diluted to low concentrations in order to prevent the particles from interacting with each other and disrupting Brownian Motion.
Processing Data
Since the fluctuating intensity data contains a wide spectrum of Doppler-shifted frequencies, it is not usually measured directly; instead it is compiled for processing using a device called a digital correlator. The function of the correlator in a DLS system is essentially to compare the intensity of two signals over a short period of time $\tau$ (nanoseconds to microseconds) and to calculate the extent of similarity between the two signals using a correlation function. The measured intensity correlation function is defined mathematically as
$G_{2}(\tau)=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}I(t)\,I(t+\tau )\,dt \nonumber$
and is related to the electric field correlation function ($G_{1}$) by the Siegert relationship
$G_{2}(\tau )=1+\beta \left |G_{1}(\tau ) \right |^{2} \nonumber$
where β is an experimental factor that is related to the angle of scattering in the DLS setup being used.
Consider the fluctuating intensities of the speckle pattern mentioned earlier. If the intensity signal at a location on a speckle pattern is compared to itself with no change in time (t) then the correlator will measure a perfect correlation and it will assign a value of 1. However, if the same intensity signal is compared with another signal a short time later (t+Δt) then the correlation has now diminished and the correlator will assign a value less than 1. With most speckle patterns the signal correlation drops to zero after 1-10 milliseconds so Δt, the time scale of measurements, must be on a faster time scale of nanoseconds to microseconds.1
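The correlator's job, comparing the intensity trace with a delayed copy of itself, can be sketched in a few lines of code. The trace below is synthetic (a slow oscillation plus noise), not real DLS data; it simply illustrates that the normalized autocorrelation is largest at zero lag and falls off as the delay grows.

```python
import math
import random

def autocorrelation(signal, lag):
    """Normalized intensity autocorrelation <I(t)I(t+lag)> / <I(t)>^2."""
    n = len(signal) - lag
    num = sum(signal[i] * signal[i + lag] for i in range(n)) / n
    mean = sum(signal) / len(signal)
    return num / (mean * mean)

# Hypothetical intensity trace: a slowly varying component plus noise.
random.seed(0)
trace = [10 + 5 * math.sin(0.01 * i) + random.gauss(0, 1) for i in range(5000)]

# Correlation is highest at zero delay and decays as the delay grows.
g2_0 = autocorrelation(trace, 0)
g2_400 = autocorrelation(trace, 400)
print(g2_0 > g2_400)  # True: the signal decorrelates with increasing delay
```

A real correlator evaluates this at many logarithmically spaced lags in hardware; the sketch above only shows the underlying arithmetic.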
Since these intensity fluctuations that are being measured are directly related to the movement of particles in solution, it is useful to recall the Stokes-Einstein relation above which shows that smaller particles move more quickly through solution than larger particles. This means that the intensity signal for smaller particles should fluctuate more rapidly than for larger particles and as a result the correlation decreases at a faster rate as seen in the figure below.
Figure 2. Exponential decay of the correlation function.
The correlation function for a system experiencing Brownian motion $G(t)$ decays exponentially with decay constant $\Gamma$.
$G(t)=e^{-\Gamma t} \nonumber$
$\Gamma$ is related to the diffusivity of the particle by
$\Gamma =Dq^{2} \nonumber$
where
$q=\frac{4\pi n }{\lambda}\sin\left(\dfrac{\Theta }{2}\right) \nonumber$
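The chain from measured decay constant to particle size can be sketched as follows. All numerical values (wavelength, scattering angle, temperature, viscosity, refractive index) are illustrative assumptions for a water dispersion probed with a red laser, not values from the text; the relations used are $q = (4\pi n/\lambda)\sin(\Theta/2)$, $\Gamma = Dq^2$, and the Stokes-Einstein relation $D = k_BT/(6\pi\eta R)$.

```python
import math

# Hypothetical experimental conditions (assumed, not from the text above).
k_B   = 1.380649e-23     # Boltzmann constant, J/K
T     = 298.15           # temperature, K
eta   = 0.89e-3          # viscosity of water at 25 C, Pa*s
n     = 1.33             # refractive index of water
lam   = 633e-9           # laser wavelength, m
theta = math.radians(90) # scattering angle

# Magnitude of the scattering vector, q = (4 pi n / lambda) sin(theta / 2)
q = 4 * math.pi * n / lam * math.sin(theta / 2)

def radius_from_decay(gamma):
    """Hydrodynamic radius from Gamma = D q^2 and Stokes-Einstein D = kT/(6 pi eta R)."""
    D = gamma / q ** 2                        # diffusion coefficient, m^2/s
    return k_B * T / (6 * math.pi * eta * D)  # radius, m

# Round-trip check: a 50 nm particle implies a particular decay constant ...
R = 50e-9
D = k_B * T / (6 * math.pi * eta * R)
gamma = D * q ** 2
# ... and that decay constant maps back to the same radius.
print(radius_from_decay(gamma))  # approximately 5e-08 m
```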
Accuracy and precision are very important in chemistry. However, the equipment and instruments used in labs can only determine a limited amount of data. For example, a balance can only report mass to a certain decimal place, because no instrument can measure to an infinite number of digits. The digits an instrument can determine reliably are called significant digits. Thus, a balance that reads masses up to 99.999 mg can measure to 5 figures of accuracy (5 significant digits). Furthermore, in order for a calculation to be reported accurately, the result should not have more significant digits than the original data.
Introduction
Significant Digits - the digits in a figure that express the precision of a measurement rather than its magnitude. The easiest way to determine significant digits is to first check whether or not the number has a decimal point. This is known as the Atlantic-Pacific Rule: if a decimal point is Absent, start counting from the Atlantic (right) side at the first non-zero digit, so trailing zeroes are insignificant; if a decimal point is Present, start counting from the Pacific (left) side at the first non-zero digit, so leading zeroes are insignificant.
General Rules for Determining Number of Significant Figures
1. All nonzero digits are significant.
2. Zeros are also significant with two exceptions:
1. zeros preceding the decimal point.
2. zeros following the decimal point and preceding the first nonzero digit.
3. Terminal zeros preceding the decimal point in amounts greater than one are an ambiguous case (e.g., 200 may have one, two, or three significant figures).
Rules for Numbers WITHOUT a Decimal Point
1. START counting for sig. figs. On the FIRST non-zero digit.
2. STOP counting for sig. figs. On the LAST non-zero digit.
3. Non-zero digits are ALWAYS significant
4. Zeroes in between two non-zero digits are significant. All other zeroes are insignificant.
Example $1$:
The first two zeroes in 200500 (four significant digits) are significant because they are between two non-zero digits, and the last two zeroes are insignificant because they are after the last non-zero digit.
It should be noted that both constants and quantities of real world objects have an infinite number of significant figures. For example if you were to count three oranges, a real world object, the value three would be considered to have an infinite number of significant figures in this context.
Example $2$
How many significant digits are in 5010?
Solution
1. Start counting for significant digits On the first non-zero digit (5).
2. Stop counting for significant digits On the last non-zero digit (1).
5 0 1 0 (the zero between the 5 and the 1 is significant; the trailing zero is insignificant)
3 significant digits.
Rules for Numbers WITH a Decimal Point
1. START counting for sig. figs. On the FIRST non-zero digit.
2. STOP counting for sig. figs. On the VERY LAST digit (regardless whether or not the last digit is a zero or non-zero number).
3. Non-zero digits are ALWAYS significant.
4. Any zero AFTER the first non-zero digit is STILL significant. The zeroes BEFORE the first non-zero digit are insignificant.
Example $3$
The first two zeroes in 0.058000 (five significant digits) are insignificant because they are before the first non-zero digit, and the last three zeroes are significant because they are after the first non-zero digit.
Example $4$
How many significant digits are in 0.70620?
Solution
1. Start counting for significant digits On the first non-zero digit (7).
2. Stop counting for significant digits On the last digit (0).
0 . 7 0 6 2 0 (the leading zero is insignificant; every zero after the first non-zero digit is significant)
5 significant digits.
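The two rule sets above can be combined into one small function. This is a sketch for plain decimal strings only (no scientific notation); for numbers without a decimal point it treats trailing zeroes as insignificant, matching the 5010 example.

```python
def count_sig_figs(s):
    """Count significant digits in a plain decimal string like '5010' or '0.70620'."""
    s = s.lstrip('+-')
    if '.' in s:
        digits = s.replace('.', '')
        # Leading zeroes are insignificant; everything from the first
        # non-zero digit onward (including trailing zeroes) is significant.
        stripped = digits.lstrip('0')
        return len(stripped)
    else:
        # No decimal point: leading AND trailing zeroes are insignificant.
        stripped = s.strip('0')
        return len(stripped) if stripped else 0

print(count_sig_figs('5010'))     # 3  (middle zero significant, trailing zero not)
print(count_sig_figs('0.70620'))  # 5
print(count_sig_figs('200500'))   # 4
```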
Scientific Notation
Scientific notation form: $a \times 10^{b}$, where $b$ is an integer and $a$ is a number greater than or equal to 1 and less than 10.
Example $5$
The scientific notation for 4548 is $4.548 \times 10^{3}$.
Solution
• Disregard the $10^{b}$ part, and determine the significant digits in $a$.
• $4.548 \times 10^{3}$ has 4 significant digits.
Example $6$
How many significant digits are in $1.52 \times 10^{6}$?
NOTE: Only determine the amount of significant digits in the "1.52" part of the scientific notation form.
Answer
3 significant digits.
Rounding Significant Digits
When rounding a number to a given number of significant digits, keep that many significant digits and replace the remaining digits with insignificant zeroes. The reason for rounding to a particular number of significant digits is that, in a calculation, some values have fewer significant digits than others, and the answer is only as accurate as the value with the fewest. NOTE: be careful when rounding numbers with a decimal point; any zero added after the first non-zero digit is considered a significant zero. TIP: when doing calculations for quizzes/tests/midterms/finals, do not round in the middle of your calculations; round to the correct number of significant digits only at the end.
Example $7$
Round 32445.34 to 2 significant digits.
Answer
32000 (NOT 32000.00, which has 7 significant digits. Due to the decimal point, the zeroes after the first non-zero digit become significant).
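Rounding to a chosen number of significant digits can also be done programmatically by locating the decimal exponent of the leading digit. A minimal sketch:

```python
from math import floor, log10

def round_sig(x, n):
    """Round x to n significant digits (a sketch using the decimal exponent)."""
    if x == 0:
        return 0.0
    exponent = floor(log10(abs(x)))  # position of the leading digit
    return round(x, n - 1 - exponent)

print(round_sig(32445.34, 2))     # 32000.0
print(round_sig(4279852.243, 3))  # 4280000.0
print(round_sig(0.0573000, 1))    # 0.06
```

Note that floats cannot distinguish 32000 from 32000.00, so the significance of trailing zeroes must still be tracked separately (for example, in how the result is printed).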
Rules for Addition and Subtraction
When adding or subtracting numbers, the end result should have the same amount of decimal places as the number with the least amount of decimal places.
Example $8$
Y = 232.234 + 0.27 Find Y.
Answer
Y = 232.50
NOTE: 232.234 has 3 decimal places and 0.27 has 2 decimal places. The least amount of decimal places is 2. Thus, the answer must be rounded to the 2nd decimal place (the hundredths place).
Rules for Multiplication and Division
When multiplying or dividing numbers, the end result should have the same amount of significant digits as the number with the least amount of significant digits.
Example $9$
Y = 28 x 47.3 Find Y
Answer
Y = 1300
NOTE: 28 has 2 significant digits and 47.3 has 3 significant digits. The least amount of significant digits is 2. Thus, the answer must be rounded to 2 significant digits (which is done by keeping 2 significant digits and replacing the rest of the digits with insignificant zeroes).
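The two calculation rules can be sketched as helper functions that take each value together with its decimal places (for addition/subtraction) or its significant digits (for multiplication/division). These helpers are illustrative, not a standard library API:

```python
from math import floor, log10

def add_result(*terms_with_decimals):
    """Add (value, decimal_places) pairs; round the sum to the fewest decimal places."""
    total = sum(v for v, _ in terms_with_decimals)
    places = min(d for _, d in terms_with_decimals)
    return round(total, places)

def mul_result(*factors_with_sigfigs):
    """Multiply (value, sig_figs) pairs; round the product to the fewest sig figs."""
    product = 1.0
    for v, _ in factors_with_sigfigs:
        product *= v
    n = min(s for _, s in factors_with_sigfigs)
    exponent = floor(log10(abs(product)))
    return round(product, n - 1 - exponent)

# Example 8: 232.234 (3 decimal places) + 0.27 (2 decimal places)
print(add_result((232.234, 3), (0.27, 2)))  # 232.5 (reported as 232.50)

# Example 9: 28 (2 sig figs) x 47.3 (3 sig figs)
print(mul_result((28, 2), (47.3, 3)))       # 1300.0
```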
Exact Numbers
Exact numbers can be considered to have an unlimited number of significant figures, as such calculations are not subject to errors in measurement. This may occur:
1. By definition (1 minute = 60 seconds, 1 inch = 2.54 cm, 12 inches = 1 foot, etc.)
2. As a result of counting (6 faces on a cube or dice, two hydrogen atoms in a water molecule, 3 peas in a pod, etc.)
References
1. Brown, Theodore E., H. Eugene LeMay, and Bruce E. Bursten. Chemistry: The Central Science, Tenth Edition. Pearson Education Inc. Upper Saddle River, New Jersey: 2005.
2. Petrucci, Ralph H., William S. Harwood, F. Geoffrey Herring, and Jeffry D. Madura. General Chemistry: Principles and Modern Applications, Ninth Edition. Pearson Education Inc. Upper Saddle River, New Jersey: 2007.
3. Petrucci, Ralph H., William S. Harwood, F. Geoffrey Herring, and Jeffry D. Madura. General Chemistry: Principles and Modern Applications, Tenth Edition. Pearson Education Inc. Upper Saddle River, New Jersey: 2011. Custom Edition for Chem 2, University of California, Davis
Additional Problems
1. a) How many significant digits are in 50?
b) How many significant digits are in 50.0?
2. How many significant digits are in $3.670 \times 10^{35}$?
3. Round 4279852.243 to 3 significant digits.
4. Round 0.0573000 to 1 significant digit.
5. Y = 45.2 + 16.730 Find Y.
6. Y = 23 – 26.2 Find Y.
7. Y = 16.7 x 33.2 x 16.72 Find Y.
8. Y = 346 ÷ 22 Find Y.
9. Y = (23.2 + 16.723) x 28 Find Y.
10. Y = (16.7 x 23) – (23.2 ÷ 2.13) Find Y.
Solutions
1. a) 1 significant digit.
b) 2 significant digits.
2. 4 significant digits.
3. 4280000
4. 0.06
5. Y = 61.9
6. Y = -3
7. Y = 9270
8. Y = 16
9. Y = (23.2 + 16.723) x 28
Y = 39.923 x 28 (TIP: Do not round until the end of calculations.)
Y = 1100 (NOTE: 28 has the least amount of significant digits (2 sig. figs.) Thus, answer must be rounded to 2 sig. figs.)
10. Y = (16.7 x 23) – (23.2 ÷ 2.13)
Y = 384.1 – 10.89201878 (TIP: Do not round until the end of calculations.)
Y = 373.2 (NOTE: 384.1 has the fewest decimal places (tenths). Thus, the answer must be rounded to the tenths place.)
Contributors and Attributions
• Jeffrey Susila (UCD), Neema Shah (UCD)
Propagation of Error (or Propagation of Uncertainty) is defined as the effects on a function by a variable's uncertainty. It is a calculus derived statistical calculation designed to combine uncertainties from multiple variables to provide an accurate measurement of uncertainty.
Introduction
Every measurement has an air of uncertainty about it, and not all uncertainties are equal. Therefore, the ability to properly combine uncertainties from different measurements is crucial. Uncertainty in measurement comes about in a variety of ways: instrument variability, different observers, sample differences, time of day, etc. Typically, error is given by the standard deviation ($\sigma_x$) of a measurement.
Anytime a calculation requires more than one variable to solve, propagation of error is necessary to properly determine the uncertainty. For example, lets say we are using a UV-Vis Spectrophotometer to determine the molar absorptivity of a molecule via Beer's Law: A = ε l c. Since at least two of the variables have an uncertainty based on the equipment used, a propagation of error formula must be applied to measure a more exact uncertainty of the molar absorptivity. This example will be continued below, after the derivation.
Derivation of Exact Formula
Suppose a certain experiment requires multiple instruments to carry out. These instruments each have different variability in their measurements. The results of each instrument are given as: a, b, c, d... (For simplification purposes, only the variables a, b, and c will be used throughout this derivation). The end result desired is $x$, so that $x$ is dependent on a, b, and c. It can be written that $x$ is a function of these variables:
$x=f(a,b,c) \label{1}$
Because each measurement has an uncertainty about its mean, it can be written that the uncertainty of $dx_i$ of the ith measurement of $x$ depends on the uncertainty of the ith measurements of a, b, and c:
$dx_i=f(da_i,db_i,dc_i)\label{2}$
The total deviation of $x$ is then derived from the partial derivative of x with respect to each of the variables:
$dx=\left(\dfrac{\delta{x}}{\delta{a}}\right)_{b,c}da + \left(\dfrac{\delta{x}}{\delta{b}}\right)_{a,c}db + \left(\dfrac{\delta{x}}{\delta{c}}\right)_{a,b}dc \label{3}$
A relationship between the standard deviations of x and a, b, c, etc... is formed in two steps:
1. by squaring Equation \ref{3}, and
2. taking the total sum from $i = 1$ to $i = N$, where $N$ is the total number of measurements.
In the first step, two unique terms appear on the right hand side of the equation: square terms and cross terms.
Square Terms
$\left(\dfrac{\delta{x}}{\delta{a}}\right)^2(da)^2,\; \left(\dfrac{\delta{x}}{\delta{b}}\right)^2(db)^2, \; \left(\dfrac{\delta{x}}{\delta{c}}\right)^2(dc)^2\label{4}$
Cross Terms
$2\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{b}}\right)da\;db,\;2\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)da\;dc,\;2\left(\dfrac{\delta{x}}{\delta{b}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)db\;dc\label{5}$
Square terms, due to the nature of squaring, are always positive, and therefore never cancel each other out. By contrast, cross terms may cancel each other out, due to the possibility that each term may be positive or negative. If $da$, $db$, and $dc$ represent random and independent uncertainties, about half of the cross terms will be negative and half positive (this is primarily due to the fact that the variables represent uncertainty about a mean). In effect, the sum of the cross terms should approach zero, especially as $N$ increases. However, if the variables are correlated rather than independent, the cross term may not cancel out.
Assuming the cross terms do cancel out, then the second step - summing from $i = 1$ to $i = N$ - would be:
$\sum{(dx_i)^2}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sum(db_i)^2 + \left(\dfrac{\delta{x}}{\delta{c}}\right)^2\sum(dc_i)^2\label{6}$
Dividing both sides by $N - 1$:
$\dfrac{\sum{(dx_i)^2}}{N-1}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\dfrac{\sum(da_i)^2}{N-1} + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\dfrac{\sum(db_i)^2}{N-1} + \left(\dfrac{\delta{x}}{\delta{c}}\right)^2\dfrac{\sum(dc_i)^2}{N-1}\label{7}$
The previous step created a situation where Equation \ref{7} could mimic the standard deviation equation. This is desired, because it creates a statistical relationship between the variable $x$, and the other variables $a$, $b$, $c$, etc... as follows:
The standard deviation equation can be rewritten as the variance ($\sigma_x^2$) of $x$:
$\dfrac{\sum{(dx_i)^2}}{N-1}=\dfrac{\sum{(x_i-\bar{x})^2}}{N-1}=\sigma^2_x\label{8}$
Rewriting Equation \ref{7} using the statistical relationship created yields the Exact Formula for Propagation of Error:
$\sigma^2_x=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sigma^2_a+\left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sigma^2_b+\left(\dfrac{\delta{x}}{\delta{c}}\right)^2\sigma^2_c\label{9}$
Thus, the end result is achieved. Equation \ref{9} shows a direct statistical relationship between multiple variables and their standard deviations. In the next section, derivations for common calculations are given, with an example of how the derivation was obtained.
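Equation \ref{9} can be checked numerically. The sketch below assumes hypothetical means and standard deviations for a, b, and c, computes the predicted standard deviation of x = ab/c from the partial derivatives, and compares it with the spread of a Monte Carlo sample; the two agree to within sampling noise.

```python
import math
import random

random.seed(1)

# Hypothetical means and standard deviations (assumed for illustration).
a0, b0, c0 = 5.0, 3.0, 2.0
sa, sb, sc = 0.05, 0.03, 0.02

def f(a, b, c):
    return a * b / c

# Prediction from the exact formula, using the partial derivatives of ab/c.
var_x = ((b0 / c0) ** 2 * sa ** 2
         + (a0 / c0) ** 2 * sb ** 2
         + (a0 * b0 / c0 ** 2) ** 2 * sc ** 2)
sigma_pred = math.sqrt(var_x)

# Monte Carlo: sample a, b, c independently and measure the spread of x.
N = 50_000
xs = [f(random.gauss(a0, sa), random.gauss(b0, sb), random.gauss(c0, sc))
      for _ in range(N)]
mean_x = sum(xs) / N
sigma_mc = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / (N - 1))

print(abs(sigma_mc - sigma_pred) / sigma_pred < 0.05)  # True: close agreement
```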
Arithmetic Calculations of Error Propagation
In the following calculations $a$, $b$, and $c$ are measured variables from an experiment and $\sigma_a$, $\sigma_b$, and $\sigma_c$ are the standard deviations of those variables.
Addition or Subtraction
If $x = a + b - c$ then
$\sigma_x= \sqrt{ {\sigma_a}^2+{\sigma_b}^2+{\sigma_c}^2} \label{10}$
Multiplication or Division
If $x = \dfrac{ a \times b}{c}$ then
$\dfrac{\sigma_x}{x}=\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}\label{11}$
Exponential
If $x = a^y$ then
$\dfrac{\sigma_x}{x}=y \left(\dfrac{\sigma_a}{a}\right) \label{12}$
Logarithmic
If $x = \log(a)$ then
$\sigma_x=0.434 \left(\dfrac{\sigma_a}{a}\right) \label{13}$
Anti-logarithmic
If $x = \text{antilog}(a)$ then
$\dfrac{\sigma_x}{x}=2.303({\sigma_a}) \label{14}$
Note
Addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations.
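A few of these rules written out as code, keyed to the equation numbers above; the example values are made up for illustration:

```python
import math

def sigma_add(*sigmas):
    """Absolute standard deviation for x = a + b - c ... (Equation 10)."""
    return math.sqrt(sum(s * s for s in sigmas))

def rel_sigma_mul(*rel_sigmas):
    """Relative standard deviation for products and quotients (Equation 11)."""
    return math.sqrt(sum(r * r for r in rel_sigmas))

def sigma_log10(a, sigma_a):
    """Absolute standard deviation for x = log10(a) (Equation 13)."""
    return 0.434 * sigma_a / a

# Made-up example: x = a + b with sigma_a = 0.3 and sigma_b = 0.4.
print(sigma_add(0.3, 0.4))        # approximately 0.5
print(rel_sigma_mul(0.03, 0.04))  # approximately 0.05
```

Note that the uncertainties add in quadrature, so the result is smaller than the simple sum 0.3 + 0.4 = 0.7.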
Derivation of Arithmetic Example
The Exact Formula for Propagation of Error in Equation $\ref{9}$ can be used to derive the arithmetic examples noted above. Starting with a simple equation:
$x = a \times \dfrac{b}{c} \label{15}$
where $x$ is the desired result with a given standard deviation, and $a$, $b$, and $c$ are experimental variables, each with a different standard deviation. Taking the partial derivative of each experimental variable, $a$, $b$, and $c$:
$\left(\dfrac{\delta{x}}{\delta{a}}\right)=\dfrac{b}{c} \label{16a}$
$\left(\dfrac{\delta{x}}{\delta{b}}\right)=\dfrac{a}{c} \label{16b}$
and
$\left(\dfrac{\delta{x}}{\delta{c}}\right)=-\dfrac{ab}{c^2}\label{16c}$
Plugging these partial derivatives into Equation $\ref{9}$ gives:
$\sigma^2_x=\left(\dfrac{b}{c}\right)^2\sigma^2_a+\left(\dfrac{a}{c}\right)^2\sigma^2_b+\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c\label{17}$
Dividing Equation $\ref{17}$ by Equation $\ref{15}$ squared yields:
$\dfrac{\sigma^2_x}{x^2}=\dfrac{\left(\dfrac{b}{c}\right)^2\sigma^2_a}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(\dfrac{a}{c}\right)^2\sigma^2_b}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c}{\left(\dfrac{ab}{c}\right)^2}\label{18}$
Canceling out terms and square-rooting both sides yields Equation \ref{11}:
$\dfrac{\sigma_x}{x}={\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}} \nonumber$
Example $1$
Continuing the example from the introduction (where we are calculating the molar absorptivity of a molecule), suppose we have a concentration of 13.7 (±0.3) moles/L, a path length of 1.0 (±0.1) cm, and an absorption of 0.172807 (±0.000008). The equation for molar absorptivity is dictated by Beer's law:
$ε = \dfrac{A}{lc}. \nonumber$
Solution
Since Beer's Law deals with multiplication/division, we'll use Equation \ref{11}:
\begin{align*} \dfrac{\sigma_{\epsilon}}{\epsilon} &={\sqrt{\left(\dfrac{0.000008}{0.172807}\right)^2+\left(\dfrac{0.1}{1.0}\right)^2+\left(\dfrac{0.3}{13.7}\right)^2}} \\[4pt] &=0.10237 \end{align*}
As stated in the note above, Equation \ref{11} yields a relative standard deviation, or a percentage of the ε variable. Using Beer's Law, ε = 0.012614 L mol$^{-1}$ cm$^{-1}$. Therefore, the $\sigma_{\epsilon}$ for this example would be 10.237% of ε, which is 0.001291.
Accounting for significant figures, the final answer would be:
ε = 0.013 ± 0.001 L mol$^{-1}$ cm$^{-1}$
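The arithmetic of this example can be reproduced directly from the numbers given above:

```python
import math

# Values from the worked Beer's law example above.
A, sA = 0.172807, 0.000008  # absorbance
l, sl = 1.0, 0.1            # path length, cm
c, sc = 13.7, 0.3           # concentration, mol/L

eps = A / (l * c)                                              # molar absorptivity
rel = math.sqrt((sA / A) ** 2 + (sl / l) ** 2 + (sc / c) ** 2)  # Equation 11
sigma_eps = rel * eps                                          # absolute uncertainty

print(round(eps, 6))        # 0.012614
print(round(rel, 5))        # 0.10237
print(round(sigma_eps, 6))  # 0.001291
```

Notice that the path-length term (0.1/1.0)² dominates the sum, so improving the absorbance measurement alone would barely change the final uncertainty.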
If you are given an equation that relates two different variables and given the relative uncertainties of one of the variables, it is possible to determine the relative uncertainty of the other variable by using calculus. In problems, the uncertainty is usually given as a percent. Let's say we measure the radius of a very small object. The problem might state that there is a 5% uncertainty when measuring this radius.
To actually use this percentage to calculate unknown uncertainties of other variables, we must first define what uncertainty is. Uncertainty, in calculus, is defined as:
$\left(\dfrac{dx}{x}\right) = \left(\dfrac{∆x}{x}\right) = \text{uncertainty} \nonumber$
Example $2$
Let's look at the example of the radius of an object again. If we know the uncertainty of the radius to be 5%, the uncertainty is defined as
$\left(\dfrac{dx}{x}\right)=\left(\dfrac{∆x}{x}\right)= 5\% = 0.05.\nonumber$
Now we are ready to use calculus to obtain an unknown uncertainty of another variable. Let's say we measure the radius of an artery and find that the uncertainty is 5%. What is the uncertainty of the measurement of the volume of blood pass through the artery? Let's say the equation relating radius and volume is:
$V(r) = c(r^2) \nonumber$
where $c$ is a constant, $r$ is the radius and $V(r)$ is the volume.
Solution
The first step to finding the uncertainty of the volume is to understand our given information. Since we are given the radius has a 5% uncertainty, we know that (∆r/r) = 0.05. We are looking for (∆V/V).
Now that we have done this, the next step is to take the derivative of this equation to obtain:
$\dfrac{dV}{dr} = \dfrac{∆V}{∆r}= 2cr \nonumber$
We can now multiply both sides of the equation to obtain:
$∆V = 2cr(∆r) \nonumber$
Since we are looking for (∆V/V), we divide both sides by V to get:
$\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{V} \nonumber$
We are given the equation of the volume to be $V = c(r)^2$, so we can plug this back into our previous equation for $V$ to get:
$\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{c(r)^2} \nonumber$
Now we can cancel variables that are in both the numerator and denominator to get:
$\dfrac{∆V}{V} = \dfrac{2∆r}{r} = 2 \left(\dfrac{∆r}{r}\right) \nonumber$
We have now narrowed down the equation so that ∆r/r is left. We know the value of uncertainty for ∆r/r to be 5%, or 0.05. Plugging this value in for ∆r/r we get:
$\dfrac{∆V}{V} = 2 (0.05) = 0.1 = 10\% \nonumber$
The uncertainty of the volume is 10%. This method can be used in chemistry as well, not just the biological example shown above.
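A quick numerical check of the result ΔV/V = 2(Δr/r), using made-up values for the constant and the radius: a 5% change in r produces close to a 10% change in V (1.05² = 1.1025, so the finite change is 10.25%; the calculus result of 10% is the small-Δr limit).

```python
# Numerical check of dV/V = 2 (dr/r) for V = c * r**2, with hypothetical values.
c = 7.5          # arbitrary constant
r = 2.0          # radius
dr = 0.05 * r    # 5% uncertainty in the radius

V = c * r ** 2
dV = c * (r + dr) ** 2 - V   # finite-difference change in the volume

print(round(dV / V, 4))  # 0.1025, close to the calculus result of 0.10
```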
Caution
• Error propagation assumes that the relative uncertainty in each quantity is small.3
• Error propagation is not advised if the uncertainty can be measured directly (as variation among repeated experiments).
• Uncertainty never decreases with calculations, only with better measurements.
Disadvantages of Propagation of Error Approach
In an ideal case, the propagation of error estimate above will not differ from the estimate made directly from the measurements. However, in complicated scenarios, they may differ because of:
• unsuspected covariances
• errors in which the reported value of a measurement is altered, rather than the measurements themselves (usually a result of mis-specification of the model)
• mistakes in propagating the error through the defining formulas (calculation error)
Treatment of Covariance Terms
Covariance terms can be difficult to estimate if measurements are not made in pairs. Sometimes, these terms are omitted from the formula. Guidance on when this is acceptable practice is given below:
1. If the measurements of a and b are independent, the associated covariance term is zero.
2. Generally, reported values of test items from calibration designs have non-zero covariances that must be taken into account if b is a summation such as the mass of two weights, or the length of two gage blocks end-to-end, etc.
3. Practically speaking, covariance terms should be included in the computation only if they have been estimated from sufficient data. See Ku (1966) for guidance on what constitutes sufficient data2. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Significant_Digits/Propagation_of_Error.txt |
Significant figures are used to keep track of the quality (variability) of measurements. This includes propagating that information during calculations using the measurements. The purpose of this page is to help you organize the information about significant figures -- to help you set priorities. Sometimes students are overwhelmed by too many rules, and lack guidance about how to sort through them. What is the purpose? Which rules are most important?
The following points as being most important:
• Significant Digits relate to measurements. When you think about how many Significant Digits a number has, think about where the number came from.
• The most important rule for handling Significant Digits when doing calculations is the rule for multiplication.
I will de-emphasize the following:
• Whether zeroes are significant.
• The rules for handling Significant Digits in other types of calculations.
What if the advice given here disagrees with what your book or instructor says?
Let's break that into two parts. One is about the information per se, and the other is about priorities, about the approach to thinking about Significant Digits. The information here should agree, for the most part. However, what may be different is the order of presenting things, with a different perspective in the approach -- the steps -- to learning Significant Digits. We will all end up in the same place.
If you were completely happy with how the Significant Digits topic is presented in your own course, you probably wouldn't be reading this page. Think of it as another approach -- to the same thing. Sometimes, looking at things differently can help. Trying two approaches can be better than trying only one. There is no claim that one approach is "right" or even "better". If there is a discrepancy between any information here and your own course, please let me know -- or check with your own instructor. Some details are a matter of preference.

In the lab

When you take a measurement, you record not only the value of the measurement, but also some information about its quality. Using Significant Digits is one simple way to record the quality of the information.
Note
A simple and useful statement is that the significant figures (Significant Digits) are the digits that are certain in the measurement plus one uncertain digit.
Significant Digits is not a set of arbitrary rules. Almost everything about Significant Digits follows from how you make the measurements, and then from understanding how numbers work when you do calculations. Unfortunately, there are "special cases" that can come up with Significant Digits. If all the rules are presented together, it is easy to get lost in the rules. Better -- and what we will do here -- is to emphasize the logic of using Significant Digits. This involves a few basic ideas, which can be stated as rules. We will leave special cases for a while, so they do not confuse the big picture. The number of high priority rules about Significant Digits is small.
The best way to start with Significant Digits is in the lab, taking measurements. An alternative is to use an activity that simulates taking measurements -- of various accuracy. We will do that here, using drawings of measurement scales. A bad way to start with Significant Digits is to learn a list of rules.
How many significant figures does a measurement have?
When you take a measurement, you write down the correct number of digits. You write down the significant digits. That is, the way you write a number conveys some information about how accurate it is. It is up to you to determine how many digits are worth writing down. It is important that you do so, since what you write conveys not only the measurement but something about its quality. For many common lab instruments, the proper procedure is to estimate one digit beyond those shown directly by the measurement scale. If that one estimated digit seems meaningful, then it is indeed a significant digit.
Example 1: Reading a typical scale
The scale shown here is a "typical" measurement scale. The specific scale is from a 10 mL graduated cylinder -- shown horizontally here for convenience. The arrow marks the position of a measurement.
Our goal is to read the scale at the position of the arrow. Let's go through this in detail.
• The numbered lines are 1 mL apart.
• The little lines (between the numbered lines) are 0.1 mL apart.
• The arrow is clearly between 4.7 and 4.8 mL.
• We will estimate the position of the arrow to 1/10 the distance between the little lines, that is, to the nearest 0.01 mL. (It is a common rule of thumb to estimate the last digit to 1/10 the distance between the lines. This corresponds, of course, to writing one more digit "as best we can".)
• A reasonable estimate is 4.78 mL. Some people might say 4.77 mL or 4.79 mL. No one should say 4.72 mL! That is, the estimate is 4.78 mL to about +/- 0.01 mL. 4.78 mL has 3 Significant Digits; the last significant digit is not certain, but is "close".
How meaningful is a drawing of a measurement scale, such as the one in the example above? It illustrates one particular issue very well: how to read a scale per se, figure out what the marks and labels mean, and how to estimate the final digit. Real measuring instruments, such as graduated cylinders, have those issues. Depending on the situation, there may be other issues that affect the ease of reading. In the drawing above, the goal is to read a well-defined arrow. With a real graduated cylinder, you may need to deal with a meniscus (curved surface) and parallax. Those issues are beyond our topic here.
A final zero? In estimating that last digit, be sure to write down the zero if your best estimate is indeed zero. For example, if the last digit reflects hundredths of a mL, you might estimate in one case that there are 6 hundredths; thus you would write 6 as the last digit (e.g., 8.16 mL -- 3 Significant Digits). But you might (in another case) estimate that there are 0 hundredths; it is important that you write that zero (e.g., 8.10 mL -- 3 Significant Digits). That final zero says you looked for hundredths and found none. If you wrote only 8.1 mL (2 Significant Digits), it would imply that you did not look for hundredths.
Example 2: When the measurement seems to be "right on" a line.
The arrow below appears to be "right on" the "4.7" line. (Let's assume that. The point here is to deal with the case where you think the arrow is "on" the line.) Thus we estimate that the hundredths place is 0. The proper reading, then, is 4.70 mL (3 Significant Digits). That final zero means that we looked for hundredths, and found none. If we wrote 4.7 mL (2 Significant Digits), it would imply that we didn't look for hundredths.
The scale shown in Example 2 is the same scale as in Example 1. In Example 1 our proper reading had 3 Significant Digits. That is also true in Example 2. That final 0 in Example 2 is an estimate; it is entirely equivalent to the final 8 estimated in Example 1.
When I look at a measurement that someone else has given me, how can I tell how many significant figures it has?
There are a couple of ways to approach this:
1. You can look at the number and analyze the digits, using your rules for Significant Digits.
2. You can think about the measurement scale that resulted in this measurement. Think about how the scale was read, with one digit being estimated.
Both approaches will work. They reflect the same principles. Often, simply looking at the number will be sufficient. However, when you are not sure, it helps to go back to basics: think about the underlying measurement. We will illustrate this in the next section, on zeroes -- the situation most likely to cause confusion.
What about the zeroes? Are they significant or not?
We tend to spend more time on this issue than it really is worth. Only one tenth of all digits are zeroes, yet the bulk of a list of Significant Digits rules may be about how to treat the zeroes. Many zeroes are clear enough, but indeed it can take a bit of thought to decide whether some zeroes are or are not significant.
If you understand where Significant Digits come from, then whether a zero is significant should be clear -- at least most of the time. If you are learning Significant Digits by memorizing rules, then you are doing it the hard way -- not understanding the meaning. If, for whatever reason, you are struggling with Significant Digits, the problem of the zeroes is a low priority problem.
Here is what I usually suggest to students. Don't worry too much about the rules for zeroes, especially when you are just starting. As you go on, ask about specific cases where you are not sure about the zeroes. That way, you will gradually learn how to deal with the zeroes, but not get bogged down with what can seem to be a bunch of picky rules.
The key point in deciding whether a zero is significant is to decide if it is part of the measurement, or simply a digit that is there to "fill space". The next section will help with much of the "zeroes problem".
Why is scientific notation helpful?
When a number is written in standard scientific (exponential) notation format, there should be no problem with zeroes. In this format, with one digit before the decimal point and only Significant Digits after the decimal point, all digits shown are significant.
Example 3: Scientific Notation
How many Significant Digits are in the measurement 0.00023456 m?
Solution
In scientific notation that is 2.3456x10-4 m. 5 Significant Digits. Scientific notation makes clear that all the zeroes to the left are not significant. The first zero is just decorative and could be omitted; the others are place-holders, so you can show that the 2 is the fourth decimal place.
The "rule" that covers this case may be stated: zeroes on the left end of a number are not significant -- regardless of where the decimal point is. Hopefully, the example, showing how this plays out in scientific notation, makes this rule clearer.
Example 4: Scientific Notation
How many Significant Digits are in the measurement 0.00023450 m?
Solution
In scientific notation that is 2.3450x10-4 m. 5 Significant Digits. That final zero is part of the measurement. If it weren't, why would it be there?
The "rule" that covers this case may be stated: zeroes on the right end of a number are significant -- if they are to the right of the decimal point. This rule may seem confusing in words, but showing the case in scientific notation should make it clearer.
Example 4: Scientific Notation
How many Significant Digits are in the measurement 234000 m?
Solution
In scientific notation that is ... Hm, what is it? It's not really clear. Let's suggest that it is 2.34x105 m. That is clearly 3 Significant Digits.
Why did I choose to not consider the zeroes significant? Maybe they are significant. Or maybe one of them is significant. The problem is that there is no way to tell from the number 234000 whether those zeroes are significant or are merely place holders, telling us (for example) that the 4 is in the thousands place. So why choose to make them not significant? First, that is the conservative position. I don't know whether they are significant, and to claim that they are is an unwarranted claim of quality. Second, 3 Significant Digits is reasonable -- a common way to measure distances; 6 Significant Digits is not likely. What if the person making the measurement knows that the measurement is good to 4 Significant Digits, with the first zero being significant? Then, somehow, they need to say so. One good way is to put the measurement in proper scientific notation in the first place: 2.340x105 m, 4 Significant Digits.
How do I handle significant figures in calculations?
It depends on the type of calculation. Each math operation has its own rules for handling Significant Digits. More precisely, there is one rule each for:
• multiplication and division (which are basically the same thing, so they share a rule);
• addition and subtraction (which are basically the same thing, so they share a rule);
• logs and antilogs (which are basically the same thing, so they share a rule).
Those three rules are distinct; you must be careful to use the right rule for the right operation. But there is good news: The multiplication rule is by far the most important in basic chemistry -- and it is perhaps also the simplest. So, as a matter of priority, emphasize the multiplication rule. When you have mastered it, you can go on and learn the addition rule. It is useful, though much less important. Whether you need the rule for logs will depend on your course; some courses manage to avoid this rule completely.
In summary ... there are three rules, but there is a clear set of priorities with them. Emphasize the multiplication rule. It is the most important rule, and the easiest one.
Multiplication Rule
If you multiply two numbers with the same number of Significant Digits, then the answer should have that same number of Significant Digits. If you multiply together two numbers that each have 4 Significant Digits, then the answer should have 4 Significant Digits.
Example 6
Multiply 12.3 cm by 2.34 cm.
Solution
Doing the arithmetic on the calculator gives 28.782. In this case, each number has 3 Significant Digits. Thus we report the result to 3 Significant Digits. Proper rounding of 28.782 to 3 Significant Digits gives 28.8. With the units, the final answer is 28.8 cm2.
If you multiply together two numbers with different numbers of Significant Digits, then the answer should have the same number of Significant Digits as the "weaker" number. Hm, that is a lot of words. An example should help. Multiply a number with 3 Significant Digits and a number with 4 Significant Digits. Keep 3 Significant Digits in the answer.
Example 7
Multiply 24 cm by 268 cm.
Solution
Doing the arithmetic on the calculator gives 6432. One measurement has 2 Significant Digits and one has 3 Significant Digits. The 2 Significant Digits number is "weaker": it has less information; it has only two digits of information in it. That is, the 2 Significant Digits number limits the calculation. Thus we report the result to 2 Significant Digits. Proper rounding of 6432 to 2 Significant Digits gives 6400. That is clearer in scientific notation, as 6.4x103. With the units, the final answer is 6.4x103 cm2. [Recall section Why is scientific notation helpful?, especially Example 5.]
The following two examples serve as reminders that it is important to understand the context of the particular problem. In Example 7, we reported the product of 24 & 268 to 2 Significant Digits. But in Example 8, which follows, we report the product of those same two numbers to 3 Significant Digits. Both are correct -- because the contexts are different. Example 9 reminds us of another issue in carefully recording measurements.
Example 8
You have an object that is 268 cm long. What would be the total length of 24 such objects?
Solution
The calculator gives 6432, as in Example 7. Now we look at the Significant Digits; we must carefully think about what each number means. "268 cm" is an ordinary measurement; it has 3 Significant Digits. But the "24" is a count, and is taken as exact (with no uncertainty). That is, the "24" does not limit the calculation, and we report 3 Significant Digits. With the units, the final answer is 6.43x103 cm.
Example
You measure the sides of a rectangle. The sides are 28.2 cm and 25 cm. What is the area? But before you calculate the area... There is probably something wrong with the statement of this question. What?
What's wrong? Well, we have an object, approximately square. Someone has measured two sides. One would think they used the same measuring instrument -- the same ruler. But the two reported measurements are inconsistent. One is reported to the nearest cm, and one is reported to the nearest tenth. That is suspicious. Why were they not reported the same way?
The purpose of this example is to remind you of the importance of reading the measuring instrument carefully and consistently, and recording the final zero if indeed that is your estimate. There is no need to carry out the calculation in this case.
Notes...
• The position of the decimal point is irrelevant in determining Significant Digits for multiplication. Just count how many Significant Digits there are.
• We discussed the multiplication rule for the case of multiplying two numbers. If there are more than two numbers, the rule is the same. You can think of this as multiplying two numbers at a time.
• Multiplication and division are basically the same operation. Dividing by "x" is equivalent to multiplying by "1/x". The rule for Significant Digits is the same for multiplication and division, and for operations involving any combination of them.
• Ordinary calculators have no idea about Significant Digits at all. They may give you too many digits or too few digits. Use the calculator to do the arithmetic, but then you take responsibility for the Significant Digits.
Addition Rule
For students who are just starting chemistry, the addition rule for Significant Digits is not as important as the multiplication rule. The intent of that statement is to help you set priorities. Learn one thing at a time -- especially if you are finding the topic difficult. The multiplication rule is more important; learn it first and get comfortable with it.
Note
Most instructors will want you to learn the addition rule. I am not suggesting otherwise. Again, the emphasis here is to guide you to learn one thing at a time.
Here is an example of a basic chem situation that would seem to involve the addition rule, yet where using that rule is not really needed. Consider calculating the molar mass (formula weight) of a compound, say KOH. Using the atomic masses shown on the periodic table, the molar mass of KOH is 39.10 + 16.00 + 1.008 = 56.108 (in g/mol).
So, how many Significant Digits do we keep?
One answer might be to use the Significant Digits rule for addition and note that the result is only good to the hundredths place. Therefore, we round it to 56.11 g/mol.
However, that may be unnecessary -- and even undesirable. The reason for calculating a molar mass is to use it in a real calculation. In real cases, it is usually fine to calculate molar mass by using the atomic masses shown on your periodic table. No rounding, at least now. When you use the molar mass for a calculation, you round the final result. At this step, you should -- in principle -- consider the quality of the molar mass number. However, in practice, it is likely to not matter. It is most likely -- especially in beginning chemistry -- that the Significant Digits of the final result will be limited by other parts of the calculation, not the molar mass.
Therefore, I encourage beginning students to use the procedure above... Use all the digits of the atomic weights shown on their periodic table. Just add them up, and use the molar mass you get. Don't round the molar mass. Round the final result for the overall calculation, assuming that the molar mass Significant Digits is not a concern. This is usually fine, and lets you worry about the addition rule a bit later.
Now, it is easy enough for the textbook to make up problems where the above method would not be satisfactory. My point is that such cases are uncommon in real problems, especially in introductory chemistry. In fact, a simple example of a question is "Calculate the molar mass of ... [some chemical]." How many Significant Digits do you report? Well, you'll need to use the addition rule for Significant Digits. But that is an artificial question; in the real world one almost always wants to know a molar mass in the context of a specific calculation involving some measurement, and it is quite likely that the measurement will limit the quality of the result.
Logarithm Rule
The logarithm of 74 is 1.87. (We will use base 10 logs here, but the Significant Digits rule is the same in any case.) 74 has 2 Significant Digits, and the log shown, 1.87, has 2 Significant Digits. Why? Because the 1 in the log (the part before the decimal point -- the "characteristic") relates to the exponent, and is an "exact" number.
Whoa! What exponent? Well, it will help to put the number in standard scientific notation. 74 is 7.4x101. Now consider the log of each part: the log of 101 is 1, an exact number; the log of 7.4 is 0.87 -- with a proper 2 Significant Digits. Add those together, and you get log 74 = 1.87 -- with 2 Significant Digits.
Log of 740,000? That is log of 7.4x105. 5.87. In scientific notation only the exponent is different from the previous number; therefore in the logarithm, only the leading integer is different.
This log rule is often skipped in an intro chem course for a couple of reasons. First, logs may come up only once, with pH. Second, students in an intro chem course often are weak with using exponents -- and may not have learned about logs at all. So, sometimes one just suggests that pH be reported to two decimal places -- a usable if rough approximation.
Should I round off to the proper number of significant figures at each step?
The short answer is "no".
It is common now that most calculations are done on a calculator. Just do all the steps with the calculator, letting the machine keep track of the intermediate results. There is no need to even write down intermediates, much less round them. Why avoid rounding at each step? Each time you round, you are throwing away some information. If you do it over and over, it gets worse and worse; you accumulate rounding errors -- and that is not so good.
Example
Imagine that we want to calculate 1.00 * (1.127)10. For our purposes here, the numbers are measurements, and we are to give the answer with proper Significant Digits. Proper Significant Digits in this case is 3 (because 1.00 is 3 Significant Digits). (For a clarification, see * note at end of this example box.)
We might consider two ways to do this:
1. Do the indicated calculation; then, at the end, round to 3 Significant Digits. This gives 3.31.
2. First round the 1.127 to 3 Significant Digits: 1.13. Now do the calculation and round the answer to 3 Significant Digits. This gives 3.39.
Well, those two calculations give answers that are quite different! How can we judge them? Here is one approach... The original number 1.127, by convention, means 1.127 +/- 0.001. That is, this measurement might be 1.126 to 1.128. If we do the calculation with 1.126, we get 3.28. If we do the calculation with 1.128, we get 3.34. Thus it seems that the result should be in the range of those two numbers, 3.28-3.34. In fact, method 1 (calculate with the original number and round only at the end) gives 3.31 -- which is in the middle of that range. However, method 2 (round first), gives 3.39 -- which is outside the range, by quite a bit. The reason should be clear enough in this example: we have rounded "up" ten times, and thus biased the result upwards. This is an example of how rounding errors can accumulate. It is better to round only at the end.
At the start of this example we said that the proper number of Significant Digits in this case was 3. As we went on, we found that the range of possible answers was 3.28-3.34, or 3.31 +/- 0.03. Obviously, this means that stating the answer as 3.31, to 3 Significant Digits with an implication of +/- 0.01, is not so good. This illustrates a limitation of Significant Digits; it is not so good when there are many error terms to keep track of (10, in this case). The main point of this example was to show the effect of compounding rounding errors -- hence the desirability of not rounding off at intermediate stages. (For more about such limitations of Significant Digits, see the section below: Limitations and complications of Significant Digits.)
The discussion of Significant Digits when adding up atomic weights to calculate a molecular weight, in the section Significant figures in addition, is consistent with this point. The question of how to round when the final digit is a 5 -- or at least appears to be a 5 -- is discussed below in the Special cases section on Rounding: What to do with a final 5.
Conversion factors
How many Significant Digits do conversion factors have? Well, it depends. Conversion factors within the metric system, i.e., involving only metric prefixes, are exact. Similarly, conversion factors between large and small units within the American system (e.g., 12 inches per foot, are exact). Conversion factors between metric and American systems are typically not exact, and it is your responsibility to try to make sure you use a conversion factor that has enough Significant Digits for your case. It is generally not good to allow a conversion factor to limit the quality of a calculation.
Exception
The conversion factor between centimeters and inches, 2.54 cm = 1 inch, is exact -- because it has been defined to be exact. If you convert 14.626 cm to inches, at 2.54 cm/inch, you can properly report the result as 5.7583 inches -- 5 Significant Digits, like the original measurement -- because the conversion factor is exact.
Many conversion factors we use in chemistry relate one property to another. Examples are density (mass per volume, g/mL) and molar mass (mass per mole, g/mol). These conversion factors are based on measurements, and their Significant Digits must be considered. It is your responsibility to think about the Significant Digits of a conversion factor. The best approach is usually to think about where the number came from. Is it a definition? a measurement?
Limitations and complications of Significant Digits
Using Significant Digits can be a good simple way to introduce students to the idea of measurement errors. It allows us to begin to relate the measurement scale to measurement quality, and does not require much math to implement. However, Significant Digits are only an approximation to true error analysis, and it is important to avoid getting bogged down in trying to make Significant Digits work well when they really don't.
One type of difficulty with Significant Digits can be seen with reading a scale to the nearest "tenth". (The scale shown with Example 1 illustrates this case.) In this case, 1.1 and 9.1 are both proper measurements. If we assume for simplicity that each measurement is good to +/- 0.1, the uncertainty in the first measurement is about 10% and the uncertainty in the second measurement is about 1%. Clearly, simply saying that both numbers are good to two Significant Digits is only a rough indication of the quality of the measurement.
Further, Significant Digits does not convey the magnitude of the reading uncertainty for any specific scale. The common statement, which I used in the previous paragraph, is that readings are assumed to be good to 1 in the last place shown. But on some scales, it would be much more realistic to suggest that the uncertainty is 2 or even 5 in the last place shown. A similar problem can occur when the errors from many numbers are accumulated in one calculation. Example 10 illustrated this.
Another limitation of Significant Digits is that it deals with only one source of error, that inherent in reading the scale. Real experimental errors have many contributions, including operator error and sometimes even hidden systematic errors. One cannot do better than what the scale reading allows, but the total uncertainty may well be more than what the Significant Digits of the measurements would suggest.
I have found that, even in introductory courses, some of the students will realize some of these limitations. When they point them out to me, I am happy to compliment them on their understanding. I then explain that Significant Digits is a simple and approximate way to start looking at measurement errors, and assure them that more sophisticated -- but more labor-intensive -- ways are available.
Scale Reading: Digital instruments
Some modern measuring instruments have a digital scale. Electronic balances are particularly common. How do you know how many Significant Digits to write down from a digital scale? Good question. Most such instruments will display the proper number of digits. However, you should watch the instrument and see if that seems reasonable. Remember that we usually estimate one digit beyond what is certain. With a digital scale, this is reflected in some fluctuation of the last digit. So if you see the last digit fluctuating by 1 or 2, that is fine. Write down that last digit; you should try to write down a value that is about in the middle of the range the scale shows.
Note
If the fluctuation is more than 2 or so in the last digit, it may mean that the instrument is not working properly. For example, if the balance display is fluctuating much, it may mean that the balance is being influenced by air currents -- or by someone bumping the bench. Regardless of the reason, a large fluctuation may mean that a displayed digit is not really significant.
Scale Reading: Volumetric pipets or volumetric flasks
These measuring instruments have only one calibration line. You adjust the liquid level to the calibration line -- as close as you can; you then have the volume that is shown on the device. A 10 mL volumetric pipet measures 10 mL; that is the only thing it can do. So, how many Significant Digits do we report in such a measurement? Obviously the usual procedures for determining Significant Digits are not applicable.
One key determinant of the quality of a measurement with a volumetric pipet is the tolerance -- the accuracy of the device as guaranteed by the manufacturer. The tolerance may be shown on the instrument; if not, it can be obtained from the catalog or other reference source.
There is no necessary relationship between the tolerance and measurement error. However, it turns out that these instruments have been designed so that the tolerance is close to the typical measurement error. Thus, as an approximation, but a useful one, one can treat the stated tolerance as the measurement error. As a rule of thumb, high quality ("Class A") volumetric glassware will give 4 Significant Digits measurements. (In contrast, ordinary glassware will give about 3 Significant Digits at best.) Of course, this assumes that the instrument is being used by trained personnel. In serious work, one would take care to measure actual experimental errors.
Rounding: What to do with a final 5
There are two points to be made here. The first is to make sure that the final 5 really is a final 5. And then, if it is, what to do.
Is the final 5 really a final 5? This might seem to be simple enough, but with common calculators it is easy to be misled. Calculators know nothing about Significant Digits; how many digits they display depends on various things, including how you set them. It is easy for a calculator to mislead you about a final 5. For example, imagine that the true result of a calculation is 8.347, but that the calculator is set to display two decimal places (two digits beyond the decimal point). It will show 8.35. If you want 2 Significant Digits, you would be tempted to round to 8.4. However, that is clearly incorrect, if you look at the complete result 8.347, which should round to 8.3 for 2 Significant Digits. How do you avoid this problem? If you see a final 5 that you want to round off, increase the number of digits displayed before making your decision.
What to do if you really have a final 5. There are two schools of thought on this.
• Some people will suggest that you always round a final 5 up.
• Others will suggest that you round it up and down each half of the time; the usual way to do this is to round a final 5 to make the previous digit an even number. For example, 0.35 becomes 0.4 and 0.65 becomes 0.6.
What should you do? Well, this is really a rather arcane point, not worth much attention. If your instructor prefers a particular way, do it. It really is not a big deal, one way or the other. If you are looking to decide your own preferred approach, I'd suggest you read a bit about what various people suggest, and why. If you just want my opinion, well, I suggest "rounding even".
All measurements have a degree of uncertainty regardless of precision and accuracy. This is caused by two factors: the limitation of the measuring instrument (systematic error) and the skill of the experimenter making the measurements (random error).
Introduction
The graduated buret in Figure 1 contains a certain amount of water (with yellow dye) to be measured. The amount of water is somewhere between 19 ml and 20 ml according to the marked lines. By checking to see where the bottom of the meniscus lies, referencing the ten smaller lines, the amount of water lies between 19.8 ml and 20 ml. The next step is to estimate the uncertainty between 19.8 ml and 20 ml. Making an approximate guess, the level is less than 20 ml but greater than 19.8 ml, so we report that the measured amount is approximately 19.9 ml. The buret itself may be distorted such that the graduation marks contain inaccuracies, providing readings slightly different from the actual volume of liquid present.
Figure 1: A meniscus as seen in a burette of colored water. '20.00 mL' is the correct depth measurement. Figure used with permission from Wikipedia.
Systematic vs. Random Error
The diagram below illustrates the distinction between systematic and random errors.
Figure 2: Systematic and random errors. Figure used with permission from David DiBiase (Penn State U).
Systematic errors: When we use tools meant for measurement, we assume that they are correct and accurate, however measuring tools are not always right. In fact, they have errors that naturally occur called systematic errors. Systematic errors tend to be consistent in magnitude and/or direction. If the magnitude and direction of the error is known, accuracy can be improved by additive or proportional corrections. Additive correction involves adding or subtracting a constant adjustment factor to each measurement; proportional correction involves multiplying the measurement(s) by a constant.
Random errors: Sometimes called human error, random error is determined by the experimenter's skill or ability to perform the experiment and read scientific measurements. These errors are random since the results yielded may be too high or too low. Often random error determines the precision of the experiment or limits the precision. For example, if we were to time a revolution of a steadily rotating turntable, the random error would be the reaction time. Our reaction time would vary due to a delay in starting (an underestimate of the actual result) or a delay in stopping (an overestimate of the actual result). Unlike systematic errors, random errors vary in magnitude and direction. It is possible to calculate the average of a set of measured positions, however, and that average is likely to be more accurate than most of the individual measurements.
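The contrast above can be sketched numerically. In this hypothetical buret example (the true value, offset, and noise level are all invented for illustration), averaging many readings suppresses the random scatter, but the systematic offset survives averaging and can only be removed by a known correction:

```python
import random

random.seed(1)  # reproducible run

true_value = 19.90        # hypothetical actual volume, in mL
systematic_offset = 0.05  # e.g., mis-printed graduation marks (systematic error)
random_sigma = 0.03       # reading-to-reading scatter (random error)

readings = [true_value + systematic_offset + random.gauss(0, random_sigma)
            for _ in range(1000)]

mean = sum(readings) / len(readings)
# Averaging shrinks the random scatter, but the systematic offset survives:
print(round(mean - true_value, 3))  # close to +0.05, not 0

# A known systematic error can be removed by an additive correction:
corrected = [r - systematic_offset for r in readings]
print(round(sum(corrected) / len(corrected) - true_value, 3))  # near 0
```

This also illustrates the additive correction mentioned for systematic errors: a constant adjustment subtracted from every measurement.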
1. Since Tom must rely on the machine for an absorbance reading and it provides consistently different measurements, this is an example of systematic error.
2. The majority of Claire's variation in time can likely be attributed to random error such as fatigue after multiple laps, inconsistency in swimming form, slightly off timing in starting and stopping the stop watch, or countless other small factors that alter lap times. To a much smaller extent, the stop watch itself may have errors in keeping time resulting in systematic error.
3. The researcher's percent error is about 0.62%.
4. This is known as multiplier or scale factor error.
5. This is called an offset or zero setting error.
6. Susan's percent error is -7.62%. This percent error is negative because the measured value falls below the accepted value. In problem 7, the percent error was positive because it was higher than the accepted value.
7. You would first weigh the empty beaker. After obtaining that weight, add the graphite to the beaker and weigh again. Subtracting the weight of the beaker from the weight of the beaker plus graphite gives the weight of the graphite.
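The percent-error and weighing-by-difference answers above can be sketched as follows. The measured and accepted values here are hypothetical, since the original problem statements are not shown:

```python
def percent_error(measured, accepted):
    """Signed percent error; negative when the measured value falls below the accepted one."""
    return (measured - accepted) / accepted * 100

# Hypothetical values, for illustration only:
print(round(percent_error(9.24, 10.00), 2))  # -7.6 (below the accepted value)

# Weighing by difference, as in answer 7 (masses are hypothetical):
beaker = 52.130                  # g, empty beaker
beaker_plus_graphite = 54.885    # g, beaker with graphite added
graphite = beaker_plus_graphite - beaker
print(round(graphite, 3))        # 2.755 g of graphite
```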
Learning Objectives
• To identify the difference between temperature and heat
• To recognize the different scales used to measure temperature
The concept of temperature may seem familiar to you, but many people confuse temperature with heat. Temperature is a measure of how hot or cold an object is relative to another object (its thermal energy content), whereas heat is the flow of thermal energy between objects with different temperatures.
Three different scales are commonly used to measure temperature: Fahrenheit (expressed as °F), Celsius (°C), and Kelvin (K). Thermometers measure temperature by using materials that expand or contract when heated or cooled. Mercury or alcohol thermometers, for example, have a reservoir of liquid that expands when heated and contracts when cooled, so the liquid column lengthens or shortens as the temperature of the liquid changes.
The Fahrenheit Scale
The Fahrenheit temperature scale was developed in 1717 by the German physicist Gabriel Fahrenheit, who designated the temperature of a bath of ice melting in a solution of salt as the zero point on his scale. Such a solution was commonly used in the 18th century to carry out low-temperature reactions in the laboratory. The scale was measured in increments of 12; its upper end, designated as 96°, was based on the armpit temperature of a healthy person—in this case, Fahrenheit’s wife. Later, the number of increments shown on a thermometer increased as measurements became more precise. The upper point is based on the boiling point of water, designated as 212° to maintain the original magnitude of a Fahrenheit degree, whereas the melting point of ice is designated as 32°.
The Celsius Scale
The Celsius scale was developed in 1742 by the Swedish astronomer Anders Celsius. It is based on the melting and boiling points of water under normal atmospheric conditions. The current scale is an inverted form of the original scale, which was divided into 100 increments. Because of these 100 divisions, the Celsius scale is also called the centigrade scale.
The Kelvin Scale
Lord Kelvin, working in Scotland, developed the Kelvin scale in 1848. His scale uses molecular energy to define the extremes of hot and cold. Absolute zero, or 0 K, corresponds to the point at which molecular energy is at a minimum. The Kelvin scale is preferred in scientific work, although the Celsius scale is also commonly used. Temperatures measured on the Kelvin scale are reported simply as K, not °K.
Figure \(1\): A Comparison of the Fahrenheit, Celsius, and Kelvin Temperature Scales. Because the difference between the freezing point of water and the boiling point of water is 100° on both the Celsius and Kelvin scales, the size of a degree Celsius (°C) and a kelvin (K) are precisely the same. In contrast, both a degree Celsius and a kelvin are 9/5 the size of a degree Fahrenheit (°F).
Converting between Scales
The kelvin is the same size as the Celsius degree, so measurements are easily converted from one to the other. The freezing point of water is 0°C = 273.15 K; the boiling point of water is 100°C = 373.15 K. The Kelvin and Celsius scales are related as follows:
T (in °C) + 273.15 = T (in K)
T (in K) − 273.15 = T (in °C)
Degrees on the Fahrenheit scale, however, are based on an English tradition of using 12 divisions, just as 1 ft = 12 in. The relationship between degrees Fahrenheit and degrees Celsius is given below, where the coefficient for degrees Fahrenheit is exact. (Some calculators have a function that allows you to convert directly between °F and °C.) There is only one temperature for which the numerical value is the same on both the Fahrenheit and Celsius scales: −40°C = −40°F. The relationship between the scales is as follows:
°C = (5/9)*(°F-32)
°F = (9/5)*(°C)+32
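These conversions are simple linear relations; a minimal sketch, checked against the fixed points quoted in the text:

```python
def c_to_k(t_c):
    """T (in K) = T (in °C) + 273.15"""
    return t_c + 273.15

def f_to_c(t_f):
    """°C = (5/9) * (°F - 32); the 5/9 coefficient is exact."""
    return (5 / 9) * (t_f - 32)

def c_to_f(t_c):
    """°F = (9/5) * °C + 32"""
    return (9 / 5) * t_c + 32

# Spot checks against the fixed points quoted in the text:
print(c_to_f(0))    # 32.0  (melting point of ice)
print(c_to_f(100))  # 212.0 (boiling point of water)
print(c_to_k(100))  # 373.15
print(f_to_c(-40))  # -40.0 (the one temperature equal on both scales)
```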
Exercise \(1\)
Convert the temperature of the surface of the sun (5800 K) and the boiling points of gold (3080 K) and liquid nitrogen (77.36 K) to °C and °F.

A student is ill with a temperature of 103.5°F. What is her temperature in °C and K?
The Scientific Method is simply a framework for the systematic exploration of patterns in our world. It just so happens that this framework is extremely useful for the examination of chemistry and its many questions. The scientific method is an iterative process that uses the repeated acquisition and testing of data through experimental procedures to disprove hypotheses. A hypothesis is a proposed explanation of natural phenomena, and after a hypothesis has survived many rounds of testing, it may be accepted as a theory and used to explain the phenomena in question. Thus, the scientific method is not a linear sequence of steps, but a method of inductive reasoning.
Contributors and Attributions
• Harley Brinkman (UCD)
The Scientific Method
Learning Objectives
• To understand the limitations in the scientific method, one must become familiar with the scientific method and its components.
Pseudo-science, essentially "fake science," consists of scientific claims which are made to appear factual when they are actually false. Many people question whether pseudo-science should even contain the word "science," as pseudo-science isn't really even an imitation of science; it disregards the scientific method altogether. Also known as alternative or fringe science, pseudo-science relies on invalid arguments called sophisms, a word Webster's dictionary defines as "an argument apparently correct in form but actually invalid; especially: such an argument used to deceive." Pseudo-science usually lacks supporting evidence and does not abide by the scientific method. That is, pseudo-theories fail to use carefully cultivated and controlled experiments to test a hypothesis. A scientific hypothesis must include observable, empirical, and testable data, and must allow other experts to test it. Pseudo-science does not accomplish these goals. Several examples of pseudo-science include phrenology, astrology, homeopathy, reflexology, and iridology.
Distinguishing Pseudo-Science
In order to distinguish a pseudo-science, one must look at the definition of science and the aspects that make science what it is. Science is a process based on observations, conjectures, and assessments that provides a better understanding of the natural phenomena of the world. Science generally follows a formal system of inquiry which consists of observations, explanations, experiments, and, lastly, hypotheses and predictions. Scientific theories are constantly challenged by experts and revised to fit new data. Pseudo-science, however, is mostly based on beliefs, and it greatly opposes contradictions; its hypotheses are never revised to fit new data or information. Scientists continually try to disprove ideas to achieve a better understanding of the physical world, whereas pseudo-scientists focus on proving theories to make their claims seem plausible. For example, science textbooks come out with new editions every couple of years to correct typos, update information, add new illustrations, etc. Pseudo-science textbooks, however, often come out with only one edition, which is never updated or revised even after the theory has been proven false.
Pseudo-science beliefs often tend to be greatly exaggerated and very vague. Complicated technical language is often used to sound impressive, but it is usually meaningless. For example, a phrase like "energy vibrations" is used to sound remarkable, but a phrase like this is insignificant and doesn't really explain anything. Furthermore, pseudo-science often consists of outrageous, yet unprovable claims. Thus, pseudo-scientists tend to focus on confirming their ideas, rather than finding evidence that refutes them. The following dialogue illustrates the thought processes behind pseudo-science.
1. My friend and I think unicorns exist.
2. Science has no evidence about unicorns.
3. Science therefore cannot prove if unicorns do or do not exist.
4. One day my friend, a very trustworthy person, said she saw a unicorn in the field by her house. There is no other evidence, other than the fact that my friend saw it.
5. Unicorns exist and any scientist who tries to deny the existence of unicorns is a fun-sucking, hostile human being.
The dialogue above features many key characteristics of pseudo-science. The speaker treats his or her point as valid through two facts alone: that her friend had a personal experience, and that science has no proof to show the theory wrong. Finally, the speaker insults anyone who would challenge the theory. In science, challenges to a theory are accepted, as everyone has the same common goal of improving the understanding of the natural world. Below is a table that lays out the key characteristics of science and pseudo-science.
SCIENCE PSEUDO-SCIENCE
Science never proves anything. Pseudoscience aims to prove an idea.
Self-correcting methodology which involves critical thinking. Starts with a conclusion and gives easy answers to complex problems.
An on-going process to develop a better understanding of the physical world by testing all possible hypotheses. Often driven by social, political or commercial goals.
Involves a continual expansion of knowledge due to intense research. The field has not evolved much since its beginning. If any research is done, it is done to justify the claims rather than expand them.
Scientists constantly attempt to refute other scientists' works. An attempt to disprove the beliefs is considered hostile and unacceptable.
When results or observations are not consistent with a scientific understanding, intense research follows. Results or observations that are not consistent with current beliefs are ignored.
Remains questionable at any time. There are two types of theories: those that have been proven wrong by experimentation and data, and every other theory. Thus, no theory can be proven correct; every theory is subject to being refuted. Beliefs of the field usually cannot be tested empirically, so they will likely never be proven wrong; thus, pseudo-scientists believe they are right simply because no one can prove them wrong.
Concepts are based on previous understandings or knowledge. Pseudo-Scientists are often not in touch with main-stream science and are often driven by the egos of the "scientists". Furthermore, famous names and testimonials are often used for support rather than scientific evidence.
Findings must be stated in unambiguous, clear language. Pseudoscience often uses very vague, yet seemingly technical terms.
Phrenology
Phrenology, also known as craniology, was a "science" popular during the early 1800s that was centered around the idea that the brain was an organ of the mind. During this time, most people believed that the brain was divided into distinct sections that all controlled different parts of a person's personality or intelligence. The basis of phrenology revolves around the concept that the brain works like a muscle, and those parts of the brain which are "exercised" the most will be proportionally larger than those parts of the brain that aren't often used. Thus, these scientists pictured the brain as a bumpy surface, with the make-up of the surface differing for every person depending on their personality and intelligence. By the mid 19th century, automated phrenology machines existed, which were basically sets of spring-loaded probes placed on the head to measure the topography of one's skull. The machine then gave an automated reading about a person's characteristics based on this measurement.
Let's consider some of the key characteristics of pseudo-science from our chart, and see how they apply to phrenology.
• Pseudo-scientists are often not in touch with mainstream science: Scientific research since the 1800s has shown that though the brain is indeed divided into sections, each section does not determine a characteristic or personality trait, but instead controls a specific function such as memory or motor skills. Likewise, it has been concluded that the brain conforms to the shape of the skull, rather than the skull conforming to the shape of the brain (meaning the bumps of a person's skull have nothing to do with the shape of the brain). Back in the 1800s, little knowledge existed about the realities of brain structure and function, so the concept wasn't as reflective of pseudo-science as it is today. However, some doctors and scientists still believe in the basic tenets of phrenology. Phrenology today exists as a classic form of pseudo-science, as it goes against the common understanding of how the brain functions.
• Often driven by social, political or commercial goals: Indeed, the main goal of phrenology was a political and social one: to claim the dominance of the white race over other races. "Scientists" measured the brains of both races and concluded that the brains of white people were larger than those of people of African descent, and that white people were therefore smarter and superior. It was later revealed that the scientists were biased while conducting the experiment: they were previously aware of which race each brain belonged to. The experiment was repeated with the scientists unaware of the race, and this time they concluded that the brains were of equal size. The second experiment better conforms to the scientific method, as in this case the scientists objectively measured the brains, while in the first case the scientists' bias led to their conclusions. Thus, this situation demonstrates a two-fold level of defective science: not only was the idea of measuring brains to determine personality and intelligence incorrect altogether, but the methods by which the scientists did so were also flawed. Phrenology was also commercially driven, since phrenology parlors were very widespread and many measuring devices were on the market.
• Pseudo-scientists are often driven by the egos of the "scientists": In the book Phrenology and the Origins of Victorian Scientific Naturalism, John van Wyhe writes of Franz Joseph Gall, the main originator of phrenology, that "the peculiar incentive behind Gall's fascination with explaining individuals' differences may have lain in his hubris" (van Wyhe 18). Of the 12 children in his family, Gall was the sharpest and brightest, and he was naturally interested in the factors distinguishing children. Even as a young schoolboy, Gall noticed that the other children who were just as good at memorization as he was all had protruding eyes, which led him to the idea at the basis of phrenology: that the characteristics of one's head indicate his or her intelligence.
Reflexology
Reflexology is a treatment that involves physically applying pressure to the feet or hands with the belief that each is divided into zones "connected" to other parts of the body. Thus, reflexologists assert that they can make physical changes throughout the body simply by rubbing one's hands or feet. As we did with phrenology, let's go through some of the main characteristics of pseudo-science and see how they apply to reflexology.
• Pseudo-scientists are often not in touch with mainstream science: No scientific research has proven the validity of reflexology or how it would actually work. In 2009, the Australian Medical Journal conducted an extensive study on reflexology and concluded, "The best evidence available to date does not demonstrate convincingly that reflexology is an effective treatment for any medical condition". Despite this lack of evidence, reflexology continues to be practiced.
• Pseudoscience often uses very vague, yet seemingly technical terms: A main focus of reflexology is that pressure on the foot removes any blockage of Qi, the "life energy force," and restores balance, leading to better health. Terms like "vital energy" or "energy blockage," which are used to talk about reflexology, are classic pseudo-science terms; they sound impressive yet have no concrete meaning.
• Famous names and testimonials are often used for support rather than scientific evidence: Because pseudo-science beliefs do not use scientific data for support, they must rely on individual circumstances in which their product or idea appeared to have worked. For example, the home page of well-known reflexologist Laura Norman features a quote from Regis Philbin (past host of Who Wants to Be a Millionaire?) saying "Laura Norman's Reflexology spared me from a kidney stone operation and saved my life," as opposed to a quote from, say, a medical journal citing studies showing that reflexology is an effective form of treatment.
Distinguishing Pseudo-Science from other types of invalid science
An important distinction should be made between pseudo-science and other types of defective science. Take, for example, the "discovery" of N-rays. While attempting to polarize X-rays, physicist René Prosper Blondlot claimed to have discovered a new type of radiation he called N-rays. After Blondlot shared his exciting discovery, many other scientists confirmed his beliefs by saying they too had seen the N-rays. Though the claimed properties of N-rays were impossible, Blondlot asserted that when he put a hot wire in an iron tube, he was able to detect the N-rays using a thread of calcium sulfide that glowed slightly when the rays were sent through an aluminum prism. Blondlot claimed that all substances except some treated metals and green wood emit N-rays. However, Nature magazine was skeptical of Blondlot and sent physicist Robert Wood to investigate. Before Blondlot was about to show Wood the rays, Wood removed the aluminum prism from the machine without telling Blondlot. Without the prism, the rays would be impossible to detect. However, Blondlot claimed to still see the N-rays, demonstrating that the N-rays did not exist; Blondlot just wanted them to exist. This is an example of pathological science, a phenomenon which occurs when scientists practice wishful data interpretation and come up with the results they want to see. This case of pathological science and pseudo-science differ. For one, Blondlot asked for confirmation by other experts, something pseudo-science usually lacks. More importantly, in pathological science a scientist starts by following the scientific method; Blondlot was indeed doing an experiment when he made his discovery and proceeded to experiment when he found the substances that did not emit the rays. Pseudo-science usually involves a complete disregard of the scientific method, while pathological science involves following the scientific method but seeing the results you wish to see.
Another type of invalid science, called hoax science, occurred in 1999 when a team at the Lawrence Berkeley National Laboratory claimed to have discovered elements 116 and 118 after bombarding lead with krypton particles. However, by 2002 it had been discovered that physicist Victor Ninov had intentionally fudged the data to get the ideal results. Thus, the concept of hoax science, in which data is intentionally falsified, differs from both pathological and pseudo-science. In pathological science, scientists wishfully interpret the data and legitimately think they see what they want to see. In hoax science, scientists know they don't see what they want to see, but say they did anyway. Finally, in pseudo-science, scientists don't consider the scientific method at all, as they don't use valid experiments to back up their claims in the first place.
From Pseudo-Science to Science
There have been incidents where what was once considered pseudo-science became a respectable theory. In 1911, German astronomer and meteorologist Alfred Wegener first began developing the idea of continental drift. The observation that the coastlines of Africa and South America seemed to fit together was not new; scientists just couldn't believe that the continents could have drifted far enough to cross the 5,000-mile Atlantic Ocean. At the time, a common theory held that a land bridge had once existed between Africa and Brazil. However, one day in the library Wegener read a study about a certain species that could not have crossed the ocean, yet whose fossils appeared on both sides of the supposed land bridge. This piece of evidence led Wegener to believe that our world had once been one piece and had since drifted apart. However, Wegener's theory encountered much hostility and disbelief. In this time it was the norm for scientists to stay within the scope of their fields: biologists did not study physics, chemists did not study oceanology, and, of course, meteorologists and astronomers like Wegener did not study geology. Thus, Wegener's theory faced much criticism simply because he was not a geologist. Also, Wegener could not explain why the continents moved, just that they did. This lack of a mechanism led to more skepticism, and all these factors combined led to continental drift being viewed as pseudo-science. Today, however, much evidence shows that continental drift is a perfectly acceptable scientific theory, as the modern theory of plate tectonics explains how it happens: the earth's surface is made up of several large plates that often move up to a few inches every year.
Also, the development of paleomagnetism, which allows us to determine the earth's magnetic poles at the time a rock formed, suggests that the earth's magnetic poles have changed many times in the last 175 million years and that at one time South America and Africa were connected.
Limitations of the Scientific Method
Due to the need for completely controlled experiments to test a hypothesis, science cannot prove everything. For example, ideas about God and other supernatural beings can never be confirmed or denied, as no experiment exists that could test their presence. Supporters of intelligent design attempt to convey their beliefs as scientific, but nonetheless the scientific method can never prove this. Science is meant to give us a better understanding of the mysteries of the natural world by refuting previous hypotheses, and the existence of supernatural beings lies outside of science altogether. Another limitation of the scientific method arises when making judgments about whether certain scientific phenomena are "good" or "bad." For example, the scientific method cannot alone say that global warming is bad or harmful to the world, as it can only study its objective causes and consequences. Furthermore, science cannot answer questions about morality, as scientific results lie outside the scope of cultural, religious, and social influences.
Concept Assessment
Determine if each statement is true or false (see answers at bottom of the page)
1. What is considered Pseudo-Science today will always be considered Pseudo-Science
2. A person has a cold and decides to seek reflexology treatment. The next day, the person gets better. This means reflexology is a valid scientific theory
3. Just because "science" is immoral or defective does not necessarily mean it is Pseudo-Science
4. Famous people are used in advertisements for products such as gatorade. This means these products are Pseudo-Science
5. Medically based Pseudo-Science such as homeopathy, reflexology or acupuncture have absolutely no benefits to people
Answers to concept assessment
1. False- just because something is considered pseudo-science today does not mean it always will be. Consider our discussion of continental drift: it used to be considered pseudo-science, but now that there is scientific evidence to support it, the theory is considered a product of science.
2. False- Just because a person got better after having reflexology treatment does not mean the treatment, which has no scientific evidence behind it, is the sole reason for the person's recovery. Many other factors could have led to the person's healing, such as medication or time for the body to fight the cold by itself, so it would be impossible to determine that reflexology caused the recovery.
3. True- Pseudo-science is a specific type of defective science. See the discussion about pathological and hoax science to learn how to distinguish pseudo-science from other types of invalid science.
4. False- Reliance on testimonials or celebrity support is just one of the many characteristics of pseudo-science. Before declaring something science or pseudo-science, it is important to consider various characteristics of both and focus on whether or not the ideas have experimentally determined data to support them. There has indeed been scientific data to support the use of Gatorade.
5. False- Though there is little scientific evidence to support these types of medical treatment, that does not mean they have no value. The placebo effect may be relevant here: people may believe that the methods are working, which may trigger the body to actually feel better.
Chemistry is a quantitative science. Amounts of substances and energies must always be expressed in numbers and units (in order to make some sense of what you are talking about). You should also develop a sensation about quantities every time you encounter them; you should be familiar with the name, prefix, and symbol used for various quantities.
Units of Measure
To learn more about systems of measurement, visit the SI Unit page and the Non-SI Unit page. As the four examples below attest, small errors between these unit systems can have massive ramifications.
Although NASA declared the metric system its official unit system in the 1980s, conversion factors remain an issue. The Mars Climate Orbiter, meant to help relay information back to Earth, is one notable example of the unit-system struggle. The orbiter was part of the Mars Surveyor '98 program, which aimed to better understand the climate of Mars. As the spacecraft journeyed into space in September 1998, it should have entered orbit at an altitude of 140-150 km above Mars, but instead went as close as 57 km. This navigation error occurred because the software that controlled the rotation of the craft's thrusters was not calibrated in SI units: the spacecraft expected newtons, while the computer, which was inadequately tested, worked in pound-force; one pound-force is equal to about 4.45 newtons. Unfortunately, friction and other atmospheric forces destroyed the Mars Climate Orbiter. The project cost $327.6 million in total. Tom Gavin, an administrator for NASA's Jet Propulsion Laboratory in Pasadena, stated, "This is an end-to-end process problem. A single error like this should not have caused the loss of Climate Orbiter. Something went wrong in our system processes in checks and balances that we have that should have caught this and fixed it."

The Mars Climate Orbiter, image courtesy NASA/JPL-Caltech

NASA's Constellation Program: A Possible Casualty of Metric/Imperial Conversions

Another NASA-related conversion concern involves the Constellation project, which is focused mainly on manned spaceflight. Established in 2005, it includes plans for another moon landing. The Constellation project is partially based upon decades-old projects such as the Ares rocket and the Orion crew capsule. These figures and plans are entirely in British Imperial units; converting this work into metric units would cost approximately $370 million.
Work on the Constellation Project, image courtesy NASA/Kim Shiflett
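The Mars Climate Orbiter failure mode above can be sketched in a few lines of arithmetic. This is an illustrative model only (the commanded value of 100 is a made-up number, not from the mission record):

```python
# How a pounds-vs-newtons mix-up distorts a commanded thruster impulse.
LBF_TO_N = 4.45  # one pound-force in newtons, approximately

commanded_lbf = 100.0           # ground software emits a number in pound-force
interpreted_n = commanded_lbf   # flight software reads the same number as newtons

# What actually gets applied, expressed back in pound-force:
actual_lbf = interpreted_n / LBF_TO_N
print(f"Applied {actual_lbf:.1f} lbf instead of {commanded_lbf:.1f} lbf")
```

Every thruster firing is thus off by the same factor of about 4.45, and the small errors accumulate over the cruise to Mars.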
Disneyland Tokyo: A Bumpy Blunder
Tokyo Disneyland’s Space Mountain roller coaster came to a sudden halt just before the end of a ride on December 5, 2003. This startling incident was due to a broken axle. The axle in question fractured because it was smaller than the design’s requirement; because of the incorrect size, the gap between the bearing and the axle was over 1 mm, when it should have been a mere 0.2 mm (the thickness of a dime vs. the thickness of two sheets of common printer paper). The accumulation of excess vibration and stress eventually caused it to break. Though the coaster derailed, there were no injuries. Once again, unit systems caused the accident. In September 1995, the specifications for the coaster’s axles and bearings were changed to metric units. In August 2002, however, the pre-1995 British Imperial plans were used to order 44.14 mm axles instead of the needed 45 mm axles.
Air Canada Flight 143: Unit-Caused Fuel Shortage
A Boeing 767 airplane flying for Air Canada on July 23, 1983 ran low on fuel only an hour into its flight. It was headed to Edmonton from Montreal, but it received low fuel pressure warnings in both fuel pumps at an altitude of 41,000 feet; engine failures followed soon after. Fortunately, the captain was an experienced glider pilot and the first officer knew of an unused air force base about 20 kilometers away. Together, they landed the plane on the runway, and only a few passengers sustained minor injuries. This incident was due partially to the airplane’s fuel indication system, which had been malfunctioning. Maintenance workers resorted to manual calculations in order to fuel the craft. They knew that 22,300 kg of fuel was needed, and they wanted to know how many liters should be pumped. They used 1.77 as their density ratio in performing their calculations. However, 1.77 was given in pounds per liter, not kilograms per liter. The correct number should have been 0.80 kilograms/liter; thus, their final figure accounted for less than half of the necessary fuel.
The Air Canada craft, image courtesy Akradecki
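The arithmetic behind the Flight 143 shortfall can be reproduced directly from the figures quoted above (a sketch; the real refueling procedure involved additional dripstick measurements):

```python
# Fuel-loading error on Air Canada Flight 143: kilograms divided by a
# pounds-per-liter density silently mixes unit systems.
fuel_needed_kg = 22_300

# Correct conversion: jet fuel density ~ 0.80 kg per liter
liters_correct = fuel_needed_kg / 0.80   # ~27,875 L

# Mistaken conversion: 1.77 is pounds per liter, not kg per liter
liters_loaded = fuel_needed_kg / 1.77    # ~12,599 L

shortfall = 1 - liters_loaded / liters_correct
print(f"Loaded {liters_loaded:.0f} L of the {liters_correct:.0f} L needed "
      f"({shortfall:.0%} short)")
```

The ratio 0.80/1.77 is about 0.45, which is why the tanks held "less than half of the necessary fuel."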
Example $1$
If Jimmy walks 5 miles, how many kilometers did he travel?
Solution
$5 \;\cancel{miles} \times \left (\dfrac{1.6\; kilometers }{1\; \cancel{mile}}\right) = 8\; kilometers \nonumber$
Example $2$
A solid rocket booster is ordered with the specification that it is to produce a total of 10 million pounds of thrust. If this number is mistaken for the thrust in Newtons, by how much, in pounds, will the thrust be in error? (1 pound = 4.448 Newtons)
Solution
10,000,000 Newtons x (1 pound / 4.448 Newtons) = 2,200,000 pounds.
10,000,000 pounds - 2,200,000 pounds = 7,800,000 pounds.
The error is a missing 7,800,000 pounds of thrust.
Example $3$
The outer bay tank at the Monterey Bay Aquarium holds 1.3 million gallons. If NASA takes out all the fish in this tank and sends them to swim around in space, what is the theoretical volume of all the fish in liters? Assume there are 3,027,400 liters of water left in the tank after the fish are removed.
Solution
$3,027,400\; \cancel{liters} \times \left(\dfrac{0.264 \;gallons}{1\; \cancel{liter}}\right) = 800,000\;\text{gallons remaining in tank} \nonumber$
The volume of the space fish is 1,300,000 - 800,000 = 500,000 gallons, which converts to 1,892,100 liters worth of fish swimming around the solar system.
Example $4$
A bolt is ordered with a thread diameter of 1.25 inches. What is this diameter in millimeters? If the order was mistaken for 1.25 centimeters, by how many millimeters would the bolt be in error?
Solution
$1.25\; \cancel{\rm{ inches}} \times \dfrac{25.4\; \rm{millimeters}}{1\; \cancel{ \rm{inch}}} = 31.75 \; \rm{millimeters} \nonumber$
Since 1.25 centimeters x (10 millimeters / 1 centimeter) = 12.5 millimeters, the bolt delivered would be 31.75 - 12.5 = 19.25 millimeters too small.
Example $5$
The Mars Climate Orbiter was meant to stop about 160 km away from the surface of Mars, but it ended up within 36 miles of the surface. How far off was it from its target distance (in km)? If the Orbiter is able to function as long as it stays at least 85 km away from the surface, will it still be functional despite the mistake?
Solution
$36 \; \cancel{\text{miles}} \times \dfrac {1.6 \; \text{kilometers} }{1\; \cancel{\text{mile}}} = 57.6 \;\text{kilometers from surface} \nonumber$
The difference then is (in kilometers): 160 - 57.6 kilometers = 102.4 kilometers away from the targeted distance. Because 57.6 km is closer to the surface than the 85 km minimum, the Orbiter would not remain functional after this mistake.
In introductory chemistry we use only a few of the most common metric prefixes, such as milli, centi, and kilo. Our various textbooks and lab manuals contain longer lists of prefixes, but few if any contain a complete list. There is no point of memorizing this, but it is nice to have a place to look them up. You will find prefixes from throughout the range as you read the scientific literature. In particular, the smaller prefixes such as nano, pico, femto, etc., are becoming increasingly common as analytical chemistry and biotechnology develop more sensitive methods. To help you visualize the effect of these prefixes, there is a column "a sense of scale", which gives some examples of the magnitudes represented.
In the table below, upper and lower case in the abbreviations are important, and the "sense of scale" examples (given for some prefixes) are approximate.

```
prefix          abbreviation  meaning  example / a sense of scale
------          ------------  -------  ---------------------------------------------------------
yotta           Y             10^24    yottagram, 1 Yg = 10^24 g
                                       mass of water in Pacific Ocean ~ 1 Yg
                                       energy given off by the sun in 1 second ~ 400 YJ
                                       volume of earth ~ 1 YL
                                       mass of earth ~ 6000 Yg
zetta           Z             10^21    zettameter, 1 Zm = 10^21 m
                                       radius of Milky Way galaxy ~ 1 Zm
                                       volume of Pacific Ocean ~ 1 ZL
                                       world energy production per year ~ 0.4 ZJ
exa             E             10^18    exasecond, 1 Es = 10^18 s
                                       age of universe ~ 0.4 Es (12 billion yr)
peta            P             10^15    petameter, 1 Pm = 10^15 m
                                       1 light-year (distance light travels in one year) ~ 9.5 Pm
                                       the dinosaurs vanished ~ 2 Ps ago
tera            T             10^12    terameter, 1 Tm = 10^12 m
                                       distance from sun to Jupiter ~ 0.8 Tm
giga            G             10^9     gigasecond, 1 Gs = 10^9 s
                                       human life expectancy ~ 1 century ~ 3 Gs
                                       1 light-second (distance light travels in one second) ~ 0.3 Gm
mega            M             10^6     megasecond, 1 Ms = 10^6 s
                                       1 Ms ~ 11.6 days
kilo            k             10^3     kilogram, 1 kg = 10^3 g
hecto           h             10^2     hectogram, 1 hg = 10^2 g
deka (or deca)  da            10^1     dekaliter, 1 daL = 10^1 L
deci            d             10^-1    deciliter, 10^1 dL = 1 L
centi           c             10^-2    centimeter, 10^2 cm = 1 m
milli           m             10^-3    millimole, 10^3 mmol = 1 mol
micro           μ ("mu")      10^-6    microliter, 10^6 μL = 1 L
                                       1 μL ~ a very tiny drop of water
nano            n             10^-9    nanometer, 10^9 nm = 1 m
                                       radius of a chlorine atom in Cl2 ~ 0.1 nm or 100 pm
pico            p             10^-12   picogram, 10^12 pg = 1 g
                                       mass of a bacterial cell ~ 1 pg
femto           f             10^-15   femtometer, 10^15 fm = 1 m
                                       radius of a proton ~ 1 fm
atto            a             10^-18   attosecond, 10^18 as = 1 s
                                       time for light to cross an atom ~ 1 as
                                       bond energy for one C=C double bond ~ 1 aJ
zepto           z             10^-21   zeptomole, 10^21 zmol = 1 mol
                                       1 zmol ~ 600 atoms or molecules
                                       "A picture is worth about 1.7 zmol of words."
yocto           y             10^-24   yoctogram, 10^24 yg = 1 g
                                       1.7 yg ~ mass of a proton or neutron
------          ------------  -------  ---------------------------------------------------------
```
Binary Prefixes
You have probably heard words such as kilobyte in the context of computers. What does it mean? It might seem to mean 1000 bytes, since kilo means 1000. But in the computer world it often means 1024 bytes. That is 2^10, a power of two very close to 1000. Now, in common usage it often does not matter whether the intent was 1000 bytes or 1024 bytes. But they are different numbers and sometimes it does matter. So, a new set of "binary prefixes", distinguished by "bi" in the name or "i" in the abbreviation, was introduced in 1998. By this new system, 1024 bytes would properly be called a kibibyte, or KiB. (Sounds like something you would feed the dog.)
This new system of binary prefixes has been endorsed by the International Electrotechnical Commission (IEC) for use in electrical technology. See the NIST page at http://physics.nist.gov/cuu/Units/binary.html. Whether these will catch on remains to be seen, but at least if you see such an unusual prefix you might want to be aware of this.
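The gap between the decimal and binary readings of a prefix grows with each power, which is easy to demonstrate (the unit names follow the SI/IEC conventions described above):

```python
# Decimal (SI) vs. binary (IEC) interpretations of storage prefixes.
decimal = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
binary  = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (d_name, d_val), (b_name, b_val) in zip(decimal.items(), binary.items()):
    pct = (b_val / d_val - 1) * 100
    print(f"1 {b_name} = {b_val:,} bytes, {pct:.1f}% more than 1 {d_name}")
```

At the kilo level the difference is only 2.4%, but by the tera level a "terabyte" drive labeled in decimal units holds about 10% fewer bytes than a tebibyte.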
Non-SI Units
The metric system of measurement, the International System of Units (SI units), is widely used for quantitative measurements of matter in science and in most countries. However, different systems of measurement existed before the SI system was introduced. Any units used in other systems of measurement (i.e. not included in the SI system of measurement) will be referred to as non-SI units. In most science courses, non-SI units are not used regularly.
Contributors and Attributions
• Ko-Wei Che, Betty Wan Wu, Boris Poutivski (UC Davis)
Chemistry is a quantitative science. Amounts of substances and energies must always be expressed in numbers and units (in order to make some sense of what you are talking about). You should also develop a sensation about quantities every time you encounter them; you should be familiar with the name, prefix, and symbol used for various quantities.
However, due to the many different units we use, expression of quantities is rather complicated. We will deal with the number part of quantities on this page, using SI units. In the table below, powers of ten are expressed as eXXX or EXXX (for example, e15 means 10^15).
```
Words          Number                    Prefix   Symbol   Exponent of 10
-----          ---------------------     ------   ------   --------------
Quadrillion    1,000,000,000,000,000     Peta-    P        e15
Trillion       1,000,000,000,000         Tera-    T        e12
Billion        1,000,000,000             Giga-    G        e9
Million        1,000,000                 Mega-    M        e6
Thousand       1,000                     Kilo-    k        e3
Hundred        100
Ten            10
One            1
Tenth          0.1                       Deci-    d        e-1
Hundredth      0.01                      Centi-   c        e-2
Thousandth     0.001                     Milli-   m        e-3
Millionth      0.000001                  Micro-   µ (mu)   e-6
Billionth      0.000000001               Nano-    n        e-9
Trillionth     0.000000000001            Pico-    p        e-12
-----          ---------------------     ------   ------   --------------
```
By now, you probably realized that every time the number increases by a factor of a thousand, we give a new name, a new prefix, and a new symbol in its expression.
Reading Numbers
After you are familiar with the words associated with these numbers, you should be able to communicate numbers with ease. Consider the following number:
123,456,789,101,234,567
In words, this 18-digit number takes up a few lines:
One hundred twenty-three quadrillion, four hundred fifty-six trillion, seven hundred eighty-nine billion, one hundred one million, two hundred thirty-four thousand, five hundred sixty-seven.
If a quantity makes use of this full number, it has been measured extremely precisely. Most quantities are not measured precisely enough to warrant so many significant figures. The above number may often be expressed as 123e15 and read as one hundred twenty-three quadrillion.
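Incidentally, the eXXX shorthand used here is the same notation most programming languages and calculators accept for powers of ten. A brief Python illustration (not part of the original text):

```python
# eXXX notation denotes a power of ten, exactly as in the table above.
n = 123e15                   # 123 quadrillion
print(n == 123 * 10**15)     # True: the same number
print(f"{n:.3e}")            # scientific notation: 1.230e+17
```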
There are seven basic quantities in science, and these quantities, their symbols, names of their units, and unit symbols are listed below:
``` == Basic Quantity == ==== Unit =====
Name Symbol Symbol Name
============= ====== ====== ========
Length l m meter
Mass m kg Kilogram
Time t s Second
Electric current I A Ampere (C/s)
Temperature T K Kelvin
Amount of substance n mol Mole
Luminous intensity Iv cd Candela
============= ====== ====== ========
```
*The unit ampere, A, is equal to Coulombs per second, (A = C/s).
Prefixes
Prefixes for Decimal Multiples and Submultiples
Prefixes are often used for decimal multiples and submultiples of units. Often, the symbols are used together with units. For example, MeV means million electron volts, a unit of energy. Memorizing these prefixes is not something you will enjoy, but if you do know them by heart, you will appreciate the quantity when you encounter it in your reading. You can come back here to check them in case you forget. The table is arranged in a symmetric fashion for convenience of comparison. Note that increments or decrements by a factor of a thousand (10^3 or 10^-3) are used throughout, aside from hecto and deca (centi and deci).
Multiple Name Symbol Symbol Name Multiple
10^24 yotta Y y yocto 10^-24
10^21 zetta Z z zepto 10^-21
10^18 exa E a atto 10^-18
10^15 peta P f femto 10^-15
10^12 tera T p pico 10^-12
10^9 giga G n nano 10^-9
10^6 mega M µ micro 10^-6
10^3 kilo k m milli 10^-3
10^2 hecto h c centi 10^-2
10^1 deca da d deci 10^-1
When you want to express some large or small quantities, you may also find these prefixes useful.
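As an illustration of how these prefixes compress large and small quantities, here is a minimal Python sketch (the function name and the restriction to steps of a thousand are choices made for this example):

```python
# Pick an SI prefix so the numeric part lands between 1 and 1000.
PREFIXES = {
    -12: "p", -9: "n", -6: "µ", -3: "m", 0: "",
    3: "k", 6: "M", 9: "G", 12: "T",
}

def with_prefix(value: float, unit: str) -> str:
    """Format a value with the SI prefix that keeps it in [1, 1000)."""
    exp = 0
    while abs(value) >= 1000 and exp < 12:
        value, exp = value / 1000, exp + 3
    while 0 < abs(value) < 1 and exp > -12:
        value, exp = value * 1000, exp - 3
    return f"{value:g} {PREFIXES[exp]}{unit}"

print(with_prefix(0.000000001, "m"))   # 1 nm
print(with_prefix(7_500_000, "J"))     # 7.5 MJ
```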
Greek Prefixes
Greek prefixes are often used for naming compounds. You will need the prefixes in order to give a proper name of many compounds. You also need to know them to figure out the formula from their names. The common prefixes are given in this Table.
Note that some of the prefixes may change slightly when they are applied to the names. Some of the examples show the variations.
Note also that some names are given using other conventions. For example, \(\ce{P4O6}\) and \(\ce{P4O10}\) are called phosphorus trioxide and phosphorus pentoxide respectively. These are given according to their empirical formulas.
Prefix Number Example
mono- 1 monatomic ions
bi- or di- 2 bicarbonate, dichloro-
tri- 3 tridentate ligand, trinitrotoluene
tetra- 4 ethylenediaminetetraacetate, tetraethyl lead
penta- 5 bromine pentafluoride
hexa- 6 hexachlorobenzene
hepta- 7 n-heptane
octa- 8 iso-octane
nona- 9 nonane
deca- 10 decane
The International System of Units (SI) is a system of units of measurement that is widely used all over the world. This modern form of the metric system is based around the number 10 for convenience. A set of prefixes has been established, known as the SI prefixes or the metric prefixes. The prefixes indicate whether the unit is a multiple or a fraction of the base unit. They allow the reduction of zeros in a very small or very large number, such as 0.000000001 meter and 7,500,000 joules, to 1 nanometer and 7.5 megajoules respectively. These SI prefixes also have a set of symbols that precede the unit symbol.
However, countries such as the United States, Liberia, and Burma (Myanmar) have not officially adopted the International System of Units as their primary system of measurement. Since the SI units are used nearly globally, though, the scientific and mathematical fields use them to ease the sharing of data through a common set of measurements.
Base Units
The SI contains seven BASE UNITS that each represent a different kind of physical quantity. These are commonly used as a convention.
PHYSICAL QUANTITY NAME OF UNIT ABBREVIATION
Mass Kilogram kg
Length Meter m
Time Second s
Temperature Kelvin K
Amount of Substance Mole mol
Electric Current Ampere A
Luminous Intensity Candela cd
Derived Units
Derived units are created by mathematical relationships between base units and are expressed as combinations of the base quantities.
DERIVED QUANTITY NAME ABBREVIATION
Area Square Meter m²
Volume Cubic Meter m³
Mass Density Kilogram per Cubic Meter kg/m³
Specific Volume Cubic Meter per Kilogram m³/kg
Celsius Temperature degree Celsius °C
Prefixes
Metric units use prefixes to indicate multiples or fractions of an SI unit. Below is a chart illustrating how prefixes are labeled in metric measurements.
SYMBOL PREFIX MULTIPLICATION FACTOR
T Tera 10^12
G Giga 10^9
M Mega 10^6
k Kilo 10^3
h Hecto 10^2
da Deka 10^1
d Deci 10^-1
c Centi 10^-2
m Milli 10^-3
µ Micro 10^-6
n Nano 10^-9
p Pico 10^-12
Temperature
Temperature is usually measured in Celsius (although the U.S. still uses Fahrenheit), but is often converted to the absolute Kelvin scale for many chemistry problems.
• For Celsius to Fahrenheit: $F= \dfrac{9}{5} \times C+32 \nonumber$
• For Fahrenheit to Celsius: $C= \dfrac{5}{9} \times (F - 32) \nonumber$
• For Celsius to Kelvin: $K=C+273.15 \nonumber$
Reference Points:
• Melting Point of ice is 0° C = 32° F
• Boiling Point of water is 100° C = 212° F
The Kelvin scale does not use the degree symbol (°); temperatures are written simply with K and can only be positive, since it is an absolute scale.
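The three conversion formulas above can be captured directly in code. A small Python sketch (the function names are arbitrary):

```python
# Temperature conversions from the formulas above.
def c_to_f(c):
    """Celsius to Fahrenheit."""
    return 9 / 5 * c + 32

def f_to_c(f):
    """Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

def c_to_k(c):
    """Celsius to Kelvin (absolute scale)."""
    return c + 273.15

print(c_to_f(0))               # 32.0, melting point of ice
print(c_to_f(100))             # 212.0, boiling point of water
print(round(c_to_k(25.0), 2))  # 298.15
```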
Mass
Mass is usually measured with a sensitive balance.
• 1 kilogram = 2.205 lbs.
• (Remember that 1 kg = 1000 grams)
Length
The U.S. usually makes measurements in inches and feet, but the SI system prefers meters as the unit for length.
• 1 meter = 3.281 feet.
• 1 inch = 2.54 centimeters (exact)
Volume
SI commonly uses derived units for volume, such as cubic meters, or the liter.
• 1 cm³ (cubic centimeter) = 1 mL (milliliter)
• 1000 cm3 = 1 L = 1 dm3
Energy
• 1 calorie = 4.184 Joules
Amount of Substance
• 1 mole = 6.022 × 10^23 molecules/atoms
• (Avogadro's number)
Problems
Convert to the appropriate SI Units:
1. 1 Day 4 Hours and 20 Minutes
2. 10.8 Lbs.
3. 58.8 Ft.
4. 10,288 grams
5. 128,968,888 mL
6. 1.4 Degrees Celsius
7. 16.13 Cal
8. 18,888,888 km
Contributors and Attributions
• Christina Doan (UCD), Ryan Cheung (UCD)
Learning Objectives
• Know (not memorize) the seven (7) basic quantities and their SI Units.
• Explain what the basic units are.
• Explain what are derived units, and how they are derived.
• Represent all quantities with proper numbers and units whenever possible.
List all the basic quantities and their units you know of and search for those that you do not know yet. Understanding and proper expression of quantities are basic skills for any modern educated person. You have to master all quantities described here.
The Basic Units
Quantities form the basis for science and engineering and any moment of our lives. Unless you have expressed the quantities in numbers and units, you have not expressed anything. Quantities are defined only when they are expressed in numbers and units. Missing units and improper use of units are serious omissions and errors.
Years ago, physicists used either the mks (meter-kilogram-second) system of units or the cgs (centimeter-gram-second) system for length, mass, and time. In addition to these three basic quantities are four others: electric current (or charge), temperature, luminous intensity, and the amount of substance. Chemical quantities are mostly based on the last one. Thus, there are seven basic quantities, and each has a unit.
The International System of Units (Système International d'Unités) was adopted by the General Conference on Weights and Measures in 1960, and the SI units are widely used today. All SI units are based on these basic units.
Seven Basic Quantities and Units
Quantity Unit Symbol
Length Meter m
Mass Kilogram kg
Time Second s
Electric current Ampere A
Temperature Kelvin K
Luminous intensity Candela cd
Amount of substance Mole mol
Close your eyes, and see if you can name the 7 fundamental quantities in science and their (SI) Units. Science is based on only 7 basic quantities; for each, we have to define a standard unit. Think why these are the basic quantities. Are these related to any other quantities? Can they be derived from other quantities?
Derived Units
There are other quantities aside from the seven basic quantities mentioned above. However, all other quantities are related to the basic quantities. Thus, their units can be derived from the seven SI units above. For this reason, other units are called derived units. The table below lists some examples:
Derived Quantities and Their SI Units
Quantity Unit Symbol
Area square meter m2
Volume cubic meter m3
Density kg per cubic meter kg m-3
Velocity meter per second m s-1
acceleration meter per second per second m s-2
Derived units can be expressed in terms of basic quantities. From the specific derived unit, you can reason its relationship with the basic quantities.
For some common quantities, the SI units have special names and symbols. As you use them often, you will feel at home with them. Memorizing them outright is hard, but you will encounter them repeatedly during your study of these quantities. They are collected here to point out that these are special SI symbols.
Special Symbols of Some SI Units
Quantity Unit Explanation
Force N Newton = kg m s^-2
Pressure Pa Pascal = N m^-2
Energy J Joule = N·m
Electric charge C Coulomb = A·s
Electric potential V Volt = J/C
Power W Watt = J/s
Common Units Still in Use
The following units are still in common use for chemistry. There are some other commonly used units too, but their meanings are clear by the time you use them.
Common Units Still in Use
Quantity Symbol Explanation
Volume L liter = 1 dm3, 1 dm = 0.1 m
mL milliliter = 1/1000 L
Molarity M number of moles dissolved in 1 liter solution
Molality * m number of moles dissolved in 1 kg solvent
* The use of m for molality and for meter is sometimes confusing.
Units for Radiation:
The following units are used in special technologies or disciplines. Since most people are not familiar with them, they are explained in more detail here.
Becquerel
the SI unit for radioactivity, symbol Bq, which is 1 disintegration per second (dps). 1 Ci = 3.7e10 Bq.
Curie (Ci)
a unit of radioactivity originally based on the disintegration rate of 1 g of radium. Now a curie is the quantity of radioactive material that has a disintegration rate of 3.700e10 per second (3.7e10 Bq). 1 mCi = 1e-3 Ci; 1 µCi = 1e-6 Ci; 1 MCi = 1e6 Ci.
Gray and Rad
radiation dose units. The gray (Gy) is an SI unit for the absorption of 1 J radiation energy by one kg of material. The rad was a popular unit, which is the absorption of 100 erg of radiation energy by one gram, (1 Gy = 100 rad).
Roentgen (R)
a unit for the measure of X-ray and gamma ray exposure. 1 R = 93 erg per g (1 R = 0.93 rad for X-rays or gamma rays whose energy is above 50 keV).
The unit erg is for energy, 1 J = 10,000,000 erg.
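These radioactivity and dose relationships reduce to simple multiplications. A Python sketch (the constant and function names are invented for this example):

```python
# Conversions among the radiation units defined above.
CI_TO_BQ = 3.7e10     # 1 Ci = 3.7e10 Bq (disintegrations per second)
GY_TO_RAD = 100       # 1 Gy = 100 rad
J_TO_ERG = 1e7        # 1 J = 1e7 erg

def curie_to_becquerel(ci):
    """Activity in Bq given activity in Ci."""
    return ci * CI_TO_BQ

def gray_to_rad(gy):
    """Absorbed dose in rad given dose in Gy."""
    return gy * GY_TO_RAD

print(curie_to_becquerel(1e-3))   # 3.7e7 Bq in one millicurie
print(gray_to_rad(0.5))           # 50 rad
```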
Review Questions
1. What is the SI unit and symbol for force?
Newton (N), named after Isaac Newton, who formulated the laws of motion and force
One N is approximately the weight of a 102 g mass on Earth
1. What is the SI unit and symbol for pressure?
Pascal (Pa), named after Blaise Pascal, who studied the effect of pressure on fluids
1 atm = 101325 Pa = 101.3 kPa
1. What physical quantity uses the unit Joule?
Joule (J) is an energy unit
1 J = 1 N m = 1e7 ergs
1. Which is the SI unit for temperature?
Kelvin (K)
0 °C is the same as 273.15 K
1. What is the SI unit for measuring the amount of substance?
mole (mol), derived from the Latin moles, meaning a mass or heap
one mole contains 6.022e23 atoms or molecules
1. What are the symbols for the seven basic SI units?
m, kg, s, A, K, cd, mol
(for length, mass, time, electric current, temperature, luminous intensity, and amount of substance)
1. What is the unit M used for?
M stands for mol/L, a concentration unit
1. What is the unit A used for?
1 C/s, for an electric current
1. What is the power consumption if the current is 1 A from a source of 10 V?
10 W, since power = current × voltage = (1 C/s)(10 V) = 10 J/s
watt is the unit for power
1. What is the SI unit for measuring radioactivity?
Becquerel (Bq), named after Henri Becquerel, who discovered radioactivity
1 Ci = 3.7e10 Bq
Learning Objectives
• The skill to convert a quantity into various units is important. Unit conversion and dimensional analysis are also scientific methods. There are many examples in Chemistry, and you will encounter them later.
In the field of science, the metric system is used in performing measurements. The metric system is actually easier to use than the English system, as you will see shortly. The metric system uses prefixes to indicate the magnitude of a measured quantity. The prefix itself gives the conversion factor. You should memorize some of the common prefixes, as you will be using them on a regular basis. Common prefixes are shown below:
Table $1$
Prefix Symbol Power Prefix Symbol Power
mega- M 10^6 centi- c 10^-2
kilo- k 10^3 milli- m 10^-3
hecto- h 10^2 micro- µ 10^-6
deca- da 10^1 nano- n 10^-9
deci- d 10^-1 pico- p 10^-12
Metric - Metric Conversions
Suppose you wanted to convert the mass of a $250\; mg$ aspirin tablet to grams. Start with what you know and let the conversion factor units decide how to set up the problem. If a unit to be converted is in the numerator, that unit must be in the denominator of the conversion factor in order for it to cancel.
Notice how the units cancel to give grams. The conversion factor numerator is shown as $1 \times 10^{-3}$ because on most calculators it must be entered in this fashion, not as just 10^-3. If you don't know how to use scientific notation on your calculator, try to find out as soon as possible; look in your calculator's manual, or ask someone who knows. Also, notice how the unit mg is assigned the value of 1, and the prefix milli- is applied to the gram unit. In other words, $1\, mg$ literally means $1 \times 10^{-3}\, g$.
Next, let's try a more involved conversion. Suppose you wanted to convert 250 mg to kg. You may or may not know a direct, one-step conversion. In fact, the better method (foolproof) to do the conversion would be to go to the base unit first, and then to the final unit you want. In other words, convert the milligrams to grams and then go to kilograms:
Example $1$
The world's ocean is estimated to contain $\mathrm{1.4 \times 10^9\; km^3}$ of water.
1. What is the volume in liters?
2. What is the weight if the specific density is 1.1?
3. How many moles of water are present if all the weight is due to water?
4. How many moles of $\ce{H}$ atoms (not $\ce{H2}$) are there in the ocean?
5. How many $\ce{H}$ atoms are present in the ocean?
Solution
$\mathrm{1.4\times10^9\: km^3 \left(\dfrac{1000\: m}{1\: km}\right)^3 \left(\dfrac{10\: dm}{1\: m}\right)^3 = 1.4\times10^{21}\: dm^3}$

$\mathrm{1.4\times10^{21}\: dm^3 \left(\dfrac{1\: L}{1\: dm^3}\right) = 1.4\times10^{21}\: L}$

$\mathrm{1.4\times10^{21}\: L \left(\dfrac{1.1\: kg}{1\: L}\right) = 1.5\times10^{21}\: kg}$

$\mathrm{1.5\times10^{21}\: kg \left(\dfrac{1000\: g}{1\: kg}\right)\left(\dfrac{1\: mol}{18\: g}\right) = 8.3\times10^{22}\: mol\: H_2O}$

$\mathrm{8.3\times10^{22}\: mol\: H_2O \left(\dfrac{2\: mol\: H}{1\: mol\: H_2O}\right) = 1.7\times10^{23}\: mol\: H}$

$\mathrm{1.7\times10^{23}\: mol\: H \left(\dfrac{6.022\times10^{23}\: atoms}{1\: mol}\right) = 1.0\times10^{47}\: H\: atoms}$
In this example, a quantity has been converted from a unit for volume into other units of volume, weight, amount in moles, and number of atoms. Every factor used for the unit conversion is a unity. The numerator and denominator represent the same quantity in different ways.
Even in this simple example, several concepts such as the quantity in moles, Avogadro's number, and specific density (or specific gravity) have been applied in the conversion. If you have not learned these concepts, you may have difficulty in understanding some of the conversion processes. Identify what you do not know and find out in your text or from a resource.
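Chained unit factors like those in Example 1 translate naturally into code, which also makes the arithmetic easy to cross-check. A Python sketch of the same calculation (the variable names are ours):

```python
# Example 1's dimensional-analysis chain, step by step.
volume_L = 1.4e9 * (1e4)**3       # 1 km = 1e4 dm, and 1 dm^3 = 1 L
mass_kg  = volume_L * 1.1         # specific density 1.1 kg/L
mol_h2o  = mass_kg * 1000 / 18    # molar mass of water ~ 18 g/mol
mol_h    = 2 * mol_h2o            # 2 mol H atoms per mol H2O
atoms_h  = mol_h * 6.022e23       # Avogadro's number

print(f"{volume_L:.1e} L")        # 1.4e+21 L
print(f"{atoms_h:.1e} H atoms")   # 1.0e+47 H atoms
```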
Example $2$
A typical city speed for automobiles is 50 km/hr. Some years ago, most people believed that 10 seconds to dash a 100 meter race was the lowest limit. Which speed is faster, 50 km/hr or 10 m/s?
Solution
For comparison, the two speeds must be expressed in the same unit. Let's convert 50 km/hr to m/s.
$\mathrm{50 \;\dfrac{\cancel{km}}{\cancel{hr}} \left(\dfrac{1000\; m}{1\; \cancel{km}}\right) \left(\dfrac{1\; \cancel{hr}}{60\;\cancel{min}}\right) \left(\dfrac{1\;\cancel{min}}{ 60\; s}\right) =13.89\; m/s} \nonumber$
Thus, 50 km/hr is faster.
Note: a different unit can be selected for the comparison (e.g., miles/hour) but the result will be the same (test this out if interested).
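The same comparison can be done in a line of code, following the conversion chain above (the function name is ours):

```python
def kmh_to_ms(v_kmh):
    """Convert a speed from km/hr to m/s (1 km = 1000 m, 1 hr = 3600 s)."""
    return v_kmh * 1000 / 3600

print(round(kmh_to_ms(50), 2))   # 13.89, so 50 km/hr beats the 10 m/s dash
```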
Exercise $1$
The speed of a typhoon is reported to be 100 m/s. What is the speed in km/hr and in miles per hour?
English - Metric Conversions
These conversions are accomplished in the same way as metric - metric conversions. The only difference is the conversion factor used. It would be a good idea to memorize a few conversion factors involving converting mass, volume, length and temperature. Here are a few useful conversion factors.
• length: 2.54 cm = 1 inch (exact)
• mass: 454 g = 1 lb
• volume: 0.946 L = 1 qt
• temperature: oC = (oF - 32)/1.8
All of the above conversions are to three significant figures, except length, which is an exact number. As before, let the units help you set up the conversion.
Suppose you wanted to convert mass of my $23\, lb$ cat to kilograms. One can quickly see that this conversion is not achieved in one step. The pound units will be converted to grams, and then from grams to kilograms. Let the units help you set up the problem:
$\dfrac{23 \, lb}{1} \times \dfrac{454\,g}{1 \, lb} \times \dfrac{1 \, kg}{ 1 \times 10^3 \, g} = 10 \, kg \nonumber$
Let's try a conversion which looks "intimidating" but actually uses the same basic concepts we have already examined. Suppose you wish to convert a pressure of 14 lb/in2 to g/cm2. When setting up the conversion, worry about one unit at a time; for example, convert the pound units to gram units first:
Next, convert in2 to cm2. Set up the conversion without the exponent first, using the conversion factor, 1 in = 2.54 cm. Since we need in2 and cm2, raise everything to the second power:
Notice how the units cancel to the units sought. Always check your units because they indicate whether or not the problem has been set up correctly.
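The pressure conversion just described, done numerically (a sketch; the constant names are ours):

```python
# 14 lb/in^2 -> g/cm^2, one unit at a time as in the text.
G_PER_LB  = 454     # 1 lb = 454 g (3 significant figures)
CM_PER_IN = 2.54    # exact

p_lb_in2 = 14
p_g_cm2  = p_lb_in2 * G_PER_LB / CM_PER_IN**2   # lb and in^2 both cancel
print(round(p_g_cm2))   # about 985 g/cm^2
```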
Example $2$: Convert Quantities into SI units
Mr. Smart is ready for a T-bone steak. He went to market A and found the price to be 4.99 dollars per kilogram. He drove out of town to a roadside market, which sells at 2.29 dollars per pound. Which price is better for Mr. Smart?
Solution
To help Mr. Smart, we have to know that 1.0 kg is equivalent to 2.2046 lb, or equivalently that 1 lb = 453.6 g. (By the way, these two factors express the same relationship.)

$\mathrm{4.99\; \dfrac{\$}{\cancel{kg}} \left( \dfrac{1\; \cancel{kg}}{2.2046\; lb} \right) = 2.26 \;\dfrac{\$}{lb}} \nonumber$

Of course, given the coin denominations of the Canadian money system, there is no point quoting the price in more detail than this; that brings up the issues of significant digits and quantization. The price is therefore 2.26 $/lb, better for Mr. Smart than the price of 2.29 $/lb.
Exercises
1. 1.2e-4 kg
Skill -
Converting a quantity into SI units.
2. $(70-32)\times\dfrac{5}{9}\:\: {^\circ\textrm C}$
Skill -
To convert temperature from one scale to another scale.
3. The price is Cdn$0.60 / L
Skill -
Converting two quantities.
4. Canada at Cdn$0.55 / L
Skill -
Determine the costs per unit common volume.
5. A marathon race covers a distance of 26 miles and 385 yards. If
1.0 mile = 5280 ft, 1 ft = 12 in, and 1 in = 2.54 cm,
express 26 miles in m.
41,843 m (including the 385 yards, the full marathon distance is 42,195 m)
Skill -
Convert quantities into SI units.
The objective of this course is to introduce the student to the fundamental concepts of analytical chemistry with particular emphasis on volumetric chemical analysis. The module is designed to familiarize the learner with the principles that underpin chemical reactivity of different types of chemical reactions. The theories, concepts of volumetric analysis and measurements of data as they apply to analytical chemistry are examined. Special emphasis is placed on the application of basic principles of chemical equilibria to acid-base reactions, precipitation reactions, oxidation-reduction (electron –transfer) reactions, and complex ion reactions. We will look in more detail at the quantitative aspects of acid-base titrations.
• 14.1: Sampling and Statistical Analysis of Data
This activity comprises two fairly distinct study topics: sampling and statistical analysis of data. Under "Sampling", you will be introduced to the concept and challenges of sampling as a means of acquiring a representative laboratory sample from the original bulk specimen. At the end of the subtopic on "sampling", you will appreciate that the sampling method adopted by an analyst is an integral part of any analytical method.
• 14.2: Fundamentals of Volumetric Chemical Analysis, Acid/Base Equilibria & Titrations
Some of the most important processes in chemical and biological systems are acid-base reactions in aqueous solutions. At the outset of this unit, a review of the topic of acid-base equilibria, together with the properties of acids and bases, is undertaken. This is because the concepts of ionic equilibria and reactions are important for a better understanding of the ideas and workings of acid-base neutralization titrations.
• 14.3: Redox Reactions and Titrations
Chemical reactions in which there is a transfer of electrons from one substance to another are known as oxidation-reduction reactions or redox reactions. The introductory sections of the unit explain the fundamental principles of galvanic cells and the thermodynamics of electrochemical reactions. Some applications of the concept of redox equilibria in explaining oxidation-reduction titrations as a technique for volumetric chemical analysis are also discussed.
• 14.4: Complex ion Equilibria and Complexometric Titrations
In this unit, the concept of complex ion formation and the associated stepwise equilibrium reactions will be examined and discussed. Particular emphasis will be given to the application of complex ion reactions in complexometric titrations, titrimetric methods based upon complex formation, as a means of quantitative analysis of metal ions in solution: specifically, how complex ion formation can serve as the basis of a titrimetric method to quantify metal ions in solution.
• Exercises
• Learning Objectives
• Pre-assessment
• Summary
Volumetric Chemical Analysis (Shiundu)
Learning Objectives:
• Describe and explain the importance of the concept of “sampling” in analytical methods of analysis.
• Describe and discuss the sources and types of sampling error and uncertainty in measurement.
• Acquire the techniques for handling numbers associated with measurements: scientific notation and significant figures.
• Explain the concept of data rejection (or elimination) and comparison of measurements.
• Apply simple statistics and error analysis to determine the reliability of analytical chemical measurements and data.
This activity comprises two fairly distinct study topics: sampling and statistical analysis of data. Under "Sampling", you will be introduced to the concept and challenges of sampling as a means of acquiring a representative laboratory sample from the original bulk specimen. At the end of the subtopic on "sampling", you will not only appreciate that the sampling method adopted by an analyst is an integral part of any analytical method, but will also discover that it is usually the most challenging part of an analysis process. Another very important stage in any analytical method of analysis is evaluation of results, where statistical tests (i.e., quantities that describe a distribution of, say, experimentally measured data) are always carried out to determine confidence in our acquired data. In the latter part of this activity, you will be introduced to the challenges encountered by an analytical chemist when determining the uncertainty associated with every measurement during a chemical analysis process, in a bid to determine the most probable result. You will be introduced to ways of describing and reducing, if necessary, this uncertainty in measurements through statistical techniques.
Key Concepts
• Accuracy: refers to how closely the measured value of a quantity corresponds to its “true” value.
• Determinate errors: these are mistakes, which are often referred to as “bias”. In theory, these could be eliminated by careful technique.
• Error analysis: study of uncertainties in physical measurements.
• Indeterminate errors: these are errors caused by the need to make estimates in the last figure of a measurement, by noise present in instruments, etc. Such errors can be reduced, but never entirely eliminated.
• Mean (m): defined mathematically as the sum of the values divided by the number of measurements.
• Median: is the central point in a data set. Half of all the values in a set will lie above the median, half will lie below the median. If the set contains an odd number of datum points, the median will be the central point of that set. If the set contains an even number of points, the median will be the average of the two central points. In populations where errors are evenly distributed about the mean, the mean and median will have the same value.
• Precision: expresses the degree of reproducibility, or agreement between repeated measurements.
• Range: is sometimes referred to as the spread and is simply the difference between the largest and the smallest values in a data set.
• Random Error: error that varies from one measurement to another in an unpre- dictable manner in a set of measurements.
• Sample: a substance or portion of a substance about which analytical information is sought.
• Sampling: operations involved in procuring a reasonable amount of material that is representative of the whole bulk specimen. This is usually the most challenging part of chemical analysis.
• Sampling error: error due to sampling process(es).
• Significant figures: the minimum number of digits that one can use to represent a value without loss of accuracy. It is basically the number of digits that one is certain about.
• Standard deviation (s): this is one measure of how closely the individual results or measurements agree with each other. It is a statistically useful description of the scatter of the values determined in a series of runs.
• Variance (s2): this is simply the square of the standard deviation. It is another way of describing precision. (A related measure, the relative standard deviation expressed as a percentage of the mean, is called the coefficient of variation.)
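Several of the quantities defined above (mean, median, standard deviation, variance, range) are available directly in Python's standard library. The sketch below applies them to a made-up set of replicate measurements:

```python
import statistics

# Hypothetical replicate measurements (e.g., five titration results in mL).
data = [19.8, 20.1, 20.3, 19.9, 20.0]

mean     = statistics.mean(data)
median   = statistics.median(data)
stdev    = statistics.stdev(data)     # sample standard deviation, s
variance = stdev ** 2                 # s^2
spread   = max(data) - min(data)      # the range

print(round(mean, 2))     # 20.02
print(median)             # 20.0
print(round(spread, 2))   # 0.5
```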
Introduction to the activity
A typical analytical method of analysis comprises seven important stages, namely: plan of analysis (determining the sample to be analysed, the analyte, and the level of accuracy needed); sampling; sample preparation (sample dissolution, workup, reaction, etc.); isolation of the analyte (e.g., separation, purification, etc.); measurement of the analyte; standardization of the method (instrumental methods need to be standardized in order to get reliable results); and evaluation of results (statistical tests to establish the most probable data). Of these stages, sampling is often the most challenging for any analytical chemist: the ability to acquire a laboratory sample that is representative of the bulk specimen for analysis. Therefore, sampling is an integral and significant part of any chemical analysis and requires special attention. Furthermore, we know that analytical work in general results in the generation of numerical data and that operations such as weighing, diluting, etc., are common to almost every analytical procedure. The results of such operations, together with instrumental outputs, are often combined mathematically to obtain a result or a series of results. How these results are reported is important in determining their significance. It is important that analytical results be reported in a clear, unbiased manner that truly reflects the very operations that go into the result. Data need to be reported with the proper number of significant digits and rounded off correctly. In short, at the end of, say, a chemical analysis procedure, the analyst is often confronted with the issue of the reliability of the measurement or data acquired, hence the significance of the stage of evaluation of results, where statistical tests are done to determine confidence limits in the acquired data.
In this present activity, procedures and the quantities that describe a distribution of data will be covered and the sources of possible error in experimental measu- rements will be explored.
Sampling errors
Biased or nonrepresentative sampling and contamination of samples during or after their collection are two sources of sampling error that can lead to significant errors. Now, while selecting an appropriate method helps ensure that an analysis is accurate, it does not guarantee that the result of the analysis will be sufficient to solve the problem under investigation or that a proposed answer will be correct. These latter concerns are addressed by carefully collecting the samples to be analyzed; hence the importance of studying proper sampling strategies. It is important to note that the final result in the determination of, say, the copper content in an ore sample would typically be a number (or numbers) indicating the concentration(s) of a compound(s) in the sample.
Uncertainty in measurements
However, there is always some uncertainty associated with each operation or measurement in an analysis, and thus there is always some uncertainty in the final result. Knowing the uncertainty is as important as knowing the final result. Having data that are so uncertain as to be useless is no better than having no data at all. Thus, there is a need to determine some way of describing and reducing, if necessary, this uncertainty. Hence the importance of the study of statistics, which assists us in determining the most probable result and provides the quantities that best describe a distribution of data. This subtopic of statistics will form a significant part of this learning activity.
List of other compulsory readings
Material and Matters (Reading #3)
Measurements and significant figures (Reading #4)
Units and dimensions (Reading #5)
Significant figures (Reading #6)
Significant figures and rounding off (Reading #7)
Measurements (Reading #8)
List of relevant resources
List of relevant useful links
www.chem1.com/acad/webtext/matmeasure/mm1.html
Deals with Units of measurements.
www.chem1.com/acad/webtext/matmeasure/mm2.html
Deals with measurement error.
www.chem1.com/acad/webtext/matmeasure/mm3.html
Deals with significant figures.
www.chem1.com/acad/webtext/matmeasure/mm4.html
Deals with testing reliability of data or measurements.
www.chem1.com/acad/webtext/matmeasure/mm5.html
Covers useful material on simple statistics.
Detailed description of the activity
Studying a problem through the use of statistical data analysis often involves four basic steps, namely: (a) defining the problem, (b) collecting the data, (c) analyzing the data, and (d) reporting the results. In order to obtain accurate data about a problem, an exact definition of the problem must be made; otherwise, it would be extremely difficult to gather data. In collecting data, one must start with a careful definition of the population, the set of all elements of interest in a study, about which we seek to make inferences. Here, all the requirements of sampling, the operations involved in getting a reasonable amount of material that is representative of the whole population, and of experimental design must be met. Sampling is usually the most difficult step in the entire analytical process of chemical analysis, particularly where large quantities of samples (a sample is a subset of the population) are to be analysed. Proper sampling methods should ensure that the sample obtained for analysis is representative of the material to be analyzed and that the sample analyzed in the laboratory is homogeneous. The more representative and homogeneous the samples are, the smaller will be the part of the analysis error that is due to the sampling step. Note that an analysis cannot be more precise than its least precise operation.
The main idea of statistical inference is to take a random finite sample from a population (since it is not practically feasible to test the entire population) and then use the information from the sample to make inferences about particular population characteristics or attributes such as the mean (a measure of central tendency), the standard deviation (a measure of spread), or the proportion of items in the population that have a certain characteristic. A sample is therefore the only realistic way to obtain data, given time and cost constraints; it also saves effort. Furthermore, a sample can, in some cases, provide as much or more accuracy than a corresponding study that attempts to investigate the entire population (careful collection of data from a sample will often provide better information than a less careful study that attempts to look at everything). Note that data can be either qualitative (labels or names used to identify an attribute of each element of the population) or quantitative (numeric values that indicate how much or how many).
Statistical analysis of data: Assessing the reliability of measurements through simple statistics
In these modern times, the public is continuously bombarded with data of all sorts, in various forms such as public opinion polls, government information, and even statements from politicians. Quite often, the public wonders about the "truth" or reliability of such information, particularly in instances where numbers are given. Much of such information takes advantage of the average person's inability to make an informed judgement on the reliability of the data or information being given.
In science, however, data are collected and measurements are made in order to get closer to the "truth" being sought. The reliability of such data or measurements must then be quantitatively assessed before the information is disseminated to stakeholders. Typical activities in a chemistry laboratory involve measurement of quantities that can assume a continuous range of values (e.g., masses, volumes, etc.). These measurements consist of two parts: the reported value itself (never an exactly known number) and the uncertainty associated with the measurement. All such measurements are subject to error, which contributes to the uncertainty of the result. Our main concern here is with the kinds of errors that are inherent in any act of measuring (not outright mistakes such as incorrect use of an instrument or failure to read a scale properly, although such gross errors do sometimes occur and can yield quite unexpected results).
Experimental Error and Data Analysis
Theory:
Any measurement of a physical quantity always involves some uncertainty or experimental error. This means that if we measure some quantity and then repeat the measurement, we will almost certainly obtain a different value the second time. The question then is: is it possible to know the true value of a physical quantity? Strictly, we cannot. However, with greater care during measurements and with the application of more refined experimental methods, we can reduce the errors and thereby gain better confidence that the measurements are closer to the true value. Thus, one should not only report the result of a measurement but also give some indication of the uncertainty of the experimental data.
Experimental error, measured by its accuracy and precision, is defined as the difference between a measurement and the true value, or the difference between two measured values. The terms accuracy and precision have often been used synonymously, but in experimental measurements there is an important distinction between them.
Accuracy measures how close the measured value is to the true value or accepted value. In other words, how correct the measurement is. Quite often however, the true or accepted value of a physical quantity may not be known, in which case it is sometimes impossible to determine the accuracy of a measurement.
Precision refers to the degree of agreement among repeated measurements, or how closely two or more measurements agree with each other. The term is sometimes referred to as repeatability or reproducibility. In fact, a measurement that is highly reproducible tends to give values which are very close to each other. The concepts of precision and accuracy are demonstrated by the series of targets shown in the figure below. If the centre of the target is the "true value", then A is very precise (reproducible) but not accurate; target B demonstrates both precision and accuracy (the goal in a laboratory); the average of target C's scores gives an accurate result but the precision is poor; and target D is neither precise nor accurate.
It is important to note that no matter how keenly planned and executed, all experiments have some degree of error or uncertainty. Thus, one should learn how to identify, correct, or evaluate sources of error in an experiment and how to express the accuracy and precision of measurements when collecting data or reporting results.
Types of experimental errors
Three general types of errors are encountered in typical laboratory measurements: random or statistical errors, systematic errors, and gross errors.
Random (or indeterminate) errors arise from uncontrollable fluctuations in variables that affect experimental measurements and therefore have no specific cause. These errors cannot be positively identified and do not have a definite measurable value; instead, they fluctuate in a random manner. They affect the precision of a measurement and are sometimes referred to as two-sided errors because, in the absence of other types of errors, repeated measurements yield results that fluctuate above and below the true value. With a sufficiently large number of measurements, the data become evenly scattered around an average value or mean. Thus, the precision of measurements subject to random errors can be improved by repeating the measurement or by refining the measurement method or technique.
Systematic (or determinate) errors are instrumental, methodological, or personal mistakes that lead to "skewed" data, consistently deviated in one direction from the true value. This type of error arises from some specific cause and does not lead to scattering of results around the actual value. Systematic errors can be identified and eliminated by careful inspection of the experimental methods or by cross-calibration of instruments.
A determinate error can be further categorized into two: constant determinate error and proportional determinate error.
Constant determinate error (ecd) gives the same amount of error independent of the concentration of the substance being analyzed, whereas proportional determinate error (epd) depends directly on the concentration of the substance being analyzed (i.e., epd = K C), where K is a constant and C is the concentration of the analyte.
Therefore, the total determinate error (Etd) will be the sum of the proportional and constant determinate errors, i.e.,
$\bf{E}_{td}=\bf{e}_{cd}+\bf{e}_{pd}$
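As a hypothetical numerical illustration (the function name and the values of K and e_cd below are assumptions, not from the text), the constant part of the determinate error stays fixed while the proportional part grows linearly with concentration:

```python
def total_determinate_error(C, e_cd=0.05, K=0.02):
    """E_td = e_cd + e_pd, where e_pd = K * C is the proportional determinate error."""
    e_pd = K * C              # grows with analyte concentration
    return e_cd + e_pd        # constant part is independent of C

for C in (1.0, 10.0, 100.0):
    # the proportional term comes to dominate at high concentration
    print(f"C = {C:g}: E_td = {total_determinate_error(C):.2f}")
```

At low concentration the constant error dominates; at high concentration the proportional error does, which is one reason the two kinds are distinguished.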
Gross errors are caused by an experimenter's carelessness or by equipment failure. As a result, one obtains measurements (outliers) that are quite different from the other measurements in a set; the outliers are so far above or below the true value that they are usually discarded when assessing data. The Q test (discussed later) is a systematic way to determine whether a data point should be discarded or not.
Example:
Classify each of the following as determinate or random error:
1. Error arising due to the incomplete precipitation of an analyte in a gravimetric analysis.
2. Error arising due to delayed colour formation by an indicator in an acid-base titration.
Solution:
1. Incomplete precipitation of an analyte in gravimetric analysis results in a determinate error. The mass of the precipitate will be consistently less than the actual mass of the precipitate.
2. Delayed color formation by an indicator in an acid-base titration also introduces a determinate error. Since excess titrant is added after the equivalence point, the calculated concentration of the titrand will be consistently higher than the value that would be obtained using an indicator which changes color exactly at the equivalence point.
Exercise 1:
An analyst determines the concentration of potassium in five replicates of a standard water sample, with an accepted potassium concentration of 15 ppm, by the flame atomic emission spectrophotometry technique. The results obtained in each of the five analyses, in ppm, were: 14.8, 15.12, 15.31, 14.95 and 15.03. Classify the error in the analysis described above for the determination of potassium in the standard water sample as determinate or random.
Exercise 2:
Classify each of the errors described below as 'constant determinate error' or 'proportional determinate error':
1. The error introduced when a balance that is not calibrated is used for weighing samples.
2. The error introduced when preparing equal volumes of magnesium ion solutions of different concentrations from a MgCl2 salt that contains 0.5 g Ca2+ impurity per 1.0 mol (95 g) of MgCl2.
Expressing and Calculating Experimental Error and Uncertainty
An analyst reporting results of an experiment is often required to include the accuracy and precision of the experimental measurements in the report, to lend credence to the data. There are various ways of describing the degree of accuracy or precision of data; the common ones are provided below, with examples or illustrations.
Significant Figures: Except in situations where the quantities under investigation are integers (for example, counting the number of boys in a class), it is usually impossible to obtain the exact value of the quantity under investigation. It is precisely for this reason that it is important to indicate the margin of error in a measurement by clearly indicating the number of significant figures, which are really the meaningful digits in a measurement or calculated quantity.
When significant figures are used, the last figure is understood to be uncertain.
For example, the average of the experimental values 51.60, 51.46, 51.55, and 51.61 is 51.555, and the corresponding standard deviation is ±0.069. It is clear that the number in the second decimal place of the experimental values is subject to uncertainty. This implies that all the numbers in succeeding decimal places are without meaning, and we are forced to round the average value accordingly. We must then decide between 51.55 and 51.56, given that 51.555 is equally spaced between them. As a guide, when rounding a 5, always round to the nearest even number, so that any tendency to round in a set direction is eliminated; there is an equal likelihood that the nearest even number will be the higher or the lower in any given situation. Thus, we can report the above result as 51.56 ± 0.07.
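The round-half-to-even rule (sometimes called banker's rounding) can be demonstrated with Python's decimal module. Note that rounding 51.555 via binary floats is unreliable because that value has no exact binary representation, so the helper below (a sketch; the function name is mine) works from decimal strings:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_even(value: str, places: str) -> str:
    """Round a decimal string with the round-half-to-even rule."""
    return str(Decimal(value).quantize(Decimal(places), rounding=ROUND_HALF_EVEN))

print(round_even("51.555", "0.01"))  # 51.56 (rounds up: 6 is the even neighbour)
print(round_even("51.565", "0.01"))  # 51.56 (rounds down: 6 is the even neighbour)
```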
Significant figures are the most general way to show "how well" a number or measurement is known. Their proper usage becomes even more important in today's world, where spreadsheets, hand-held calculators, and instrumental digital readout systems are capable of generating numbers to almost any degree of apparent precision, which may be quite different from the actual precision associated with a measurement.
Illustration:
A measurement of volume using a graduated cylinder with 1-mL graduation markings will be reported with a precision of ±0.1 mL, while a measurement of length using a meter rule with 1-mm graduations will be reported with a precision of ±0.1 mm. The treatment for digital instruments is different owing to their increased level of accuracy. In fact, most manufacturers report the precision of measurements made by digital instruments as ±1/2 of the smallest unit measurable by the instrument. For instance, if a digital multimeter reads 1.384 volts, the precision of the measurement is ±1/2 of 0.001 V, or ±0.0005 V. Thus, the number of significant figures depends on the quality of the instrument and the fineness of its measuring scale.
To express results with the correct number of significant figures or digits, a few simple rules exist that ensure the final result never contains more significant figures than the least precise data used to calculate it.
• All non-zero digits are significant. Thus 789 km has three significant figures, 1.234 kg has four significant figures, and so on.
• Zeros between non-zero digits are significant. Thus 101 years contains three significant figures, 10,501 m contains five significant figures, and so on.
• The most significant digit in a reported result is the left-most non-zero digit: in 359.742, 3 is the most significant digit.
• Zeros to the left of the first non-zero digit are not significant; their purpose is to indicate the placement of the decimal point. For example, 0.008 L contains one significant figure, 0.000423 g contains three significant figures, and so on.
• If a number is greater than 1, then all the zeros to the right of the decimal point count as significant figures. Thus 22.0 mg has three significant figures and 40.065 has five significant figures. If a number is less than 1, then only the zeros that are at the end of the number and the zeros that are between nonzero digits are significant. For example, 0.090 g has two significant figures, 0.1006 m has four significant figures, and so on.
• For numbers without decimal points, the trailing zeros (i.e., zeros after the last nonzero digit) may or may not be significant. Thus 500 cm may have one significant figure (the digit 5), two significant figures (50), or three significant figures (500); it is not possible to know which is correct without more information. By using scientific notation we avoid such ambiguity: we can express the number 400 as 4 × 10² for one significant figure or 4.00 × 10² for three significant figures.
• If there is a decimal point, the least significant digit in a reported result is the right-most digit (whether zero or not): in 359.742, 2 is the least significant digit. If there is no decimal point present, the right-most non-zero digit is the least significant digit.
• The number of digits between and including the most and least significant digits is the number of significant digits in the result: 359.742 has six significant digits.
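The counting rules above can be sketched for numeric strings in a few lines (a hypothetical helper, not from the text; scientific notation is not handled). Trailing zeros in numbers without a decimal point are treated as not significant, reflecting the ambiguity just noted:

```python
def sig_figs(num: str) -> int:
    """Count significant figures in a numeric string using the rules above."""
    num = num.lstrip('+-')
    if '.' in num:
        # leading zeros are placeholders; all remaining digits are significant
        return len(num.replace('.', '').lstrip('0'))
    # no decimal point: treat the ambiguous trailing zeros as not significant
    return len(num.lstrip('0').rstrip('0'))

assert sig_figs('789') == 3     # all non-zero digits
assert sig_figs('10501') == 5   # captive zeros count
assert sig_figs('0.008') == 1   # leading zeros do not count
assert sig_figs('22.0') == 3    # trailing zero after the decimal point counts
assert sig_figs('500') == 1     # ambiguous trailing zeros not counted
```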
Exercise 1:
Determine the number of significant figures in the following measurements: (a) 478 m (b) 12.01 g (c) 0.043 kg (d) 7000 mL (e) 6.023 × 10²³
Note that the proper number of digits used to express the result of an arithmetic operation (such as addition, subtraction, multiplication, or division) follows the principle stated above: numerical results are reported with a precision near that of the least precise numerical measurement used to generate the number.
Illustration:
For Addition and subtraction
The general guideline when adding or subtracting numerical values is that the answer should have as many decimal places as the component with the fewest decimal places. Thus, 21.1 + 2.037 + 6.13 = 29.267 ≈ 29.3, since the component 21.1 has the fewest decimal places.
For Multiplication and Division
The general guideline is that the answer has the same number of significant figures as the number with the fewest significant figures: Thus
${56 \times 0.003462\times 43.72\over1.684}=5.0333... \approx 5.0$
since one of the measurements (i.e., 56) has only two significant figures.
Exercise 1:
To how many significant figures ought the result of the sum of the values 3.2, 0.030, and 6.31 be reported, and what is the calculated uncertainty?
Exercise 2:
To how many significant figures ought the result of the operation (28.5 x 27) / 352.3 be reported and what is the calculated uncertainty?
Percent Error (% Error): This is sometimes referred to as fractional difference, and measures the accuracy of a measurement by the difference between a measured value or experimental value E and a true or accepted value A. Therefore
$\%\,Error = {|E-A|\over A}\times 100\%$
Percent Difference (% Difference): This measures the precision of two measurements by the difference between the measured experimental values E1 and E2, expressed as a fraction of the average of the two values. Thus
$\%\,Difference={|E_1-E_2|\over (E_1+E_2)/2}\times 100\%$
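Both definitions translate directly into code (the function names are mine; the example values are the St2 mean and true arsenic concentration used later in this section):

```python
def percent_error(E, A):
    """Accuracy: |E - A| / A, expressed as a percentage."""
    return abs(E - A) / A * 100

def percent_difference(E1, E2):
    """Precision: |E1 - E2| relative to the mean of the two values, as a percentage."""
    return abs(E1 - E2) / ((E1 + E2) / 2) * 100

# e.g., a measured value of 2.52 mg/L against an accepted value of 2.35 mg/L
print(f"{percent_error(2.52, 2.35):.1f}%")        # 7.2%
print(f"{percent_difference(51.60, 51.46):.2f}%")
```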
Mean and Standard Deviation
Ordinarily, a single measurement of a quantity is not considered scientifically sufficient to convey any meaningful information about the quality of the measu- rement. One may need to take repeated measurements to establish how consistent the measurements are. When a measurement is repeated several times, we often see the measured values grouped or scattered around some central value. This grouping can be described with two numbers: a single representative number called the mean, which measures the central value, and the standard deviation, which describes the spread or deviation of the measured values about the mean.
The mean (x̄) is the sum of the individual measurements (xi) of some quantity divided by the number of measurements (N). The mean is calculated by the formula:
$\bar{x}={1\over N}\sum_{i=1}^n\bf{x}_i={1\over N}(\bf{x}_1+\bf{x}_2+\bf{x}_3+...+\bf{x}_{N-1}+\bf{x}_N)$
where xi is the ith measured value of x.
The standard deviation of the measured values, represented by the symbol σx, is determined using the formula:
$\bf{\sigma}_x=\sqrt{{1\over{N-1}}\sum_{i=1}^N(\bf{x}_i-\bar{x})^2}$
The standard deviation is sometimes referred to as the root mean square deviation. Note that the larger the standard deviation, the more widely spread the data are about the mean.
The simplest and most frequently asked question is: “What is the typical value that best represents experimental measurements, and how reliable is it?”
Consider a set of N (= 7) measurements of a given property (e.g., mass) arranged in increasing order (i.e., x1, x2, x3, x4, x5, x6 and x7). Several useful and uncomplicated methods are available for finding the most probable value and its confidence interval, and for comparing such results. When the number of measurements N is small, the median is often more appropriate than the mean. In addition to the standard deviation, the range is also used to describe the scatter in a set of measurements or observations. The range is simply the difference between the largest and the smallest values in a data set: Range = xmax − xmin, where xmax and xmin are the largest and smallest observations, respectively.
The median is defined as the value that bisects the set of N ordered observations, i.e., it is the central point in an ordered data set. If N is odd, (N−1)/2 measurements are smaller than the median, which is the central point of the set; in the illustration above, the 4th measurement (x4) would be the median. If the data set contains an even number of points, the median is the average of the two central points.
Example 1: For N = 6 and xi = 2, 3, 3, 5, 6, 7: median = (3+5)/2 = 4; mean = (2 + 3 + 3 + 5 + 6 + 7)/6 = 4.33; and range = 7 − 2 = 5.
Note: The median can thus serve as a check on the calculated mean. In samples where errors are evenly distributed about the mean, the mean and median will have the same value. Often the relative standard deviation is more useful in a practical sense than the standard deviation, as it immediately gives an idea of the level of precision of the data set relative to its individual values.
Relative standard deviation (rel. std. dev.) is defined as the ratio of the standard deviation to the mean. The formula for its evaluation is:
$rel.std.dev.={\bf{\sigma}\over\bar{x}}$
Example 2. Assume that the following values were obtained in the analysis of the weight of iron in 2.0000g portions of an ore sample: 0.3791, 0.3784, 0.3793, 0.3779, and 0.3797 g.
xi (g)   (xi − x̄)² (g²)
0.3791   (0.3791 − 0.37888)² = 4.84 × 10⁻⁸
0.3784   (0.3784 − 0.37888)² = 2.30 × 10⁻⁷
0.3793   (0.3793 − 0.37888)² = 1.76 × 10⁻⁷
0.3779   (0.3779 − 0.37888)² = 9.60 × 10⁻⁷
0.3797   (0.3797 − 0.37888)² = 6.72 × 10⁻⁷
∑xi = 1.8944 g   ∑(xi − x̄)² = 2.09 × 10⁻⁶ g²
The mean = x̄ = 1.8944 g / 5 = 0.37888 g
The standard deviation = σx = √((2.09 × 10⁻⁶ g²)/4) = 7.2 × 10⁻⁴ g ≈ 0.00072 g
Rel. std. dev. = sr = 0.00072g/0.37888g = 0.0019
% Rel. std. dev. = (0.0019) x 100 = 0.19%
To easily see the range and median it is convenient to arrange the data in increasing or decreasing order: 0.3779, 0.3784, 0.3791, 0.3793, and 0.3797 g. Since this data set has an odd number of trials, the median is simply the middle (3rd) datum, 0.3791 g. Note that for a finite data set the median and mean are not necessarily identical. The range is 0.3797 − 0.3779 g, or 0.0018 g.
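The whole of Example 2 can be checked with Python's statistics module, whose stdev uses the same N − 1 (sample) formula given above:

```python
import statistics

iron = [0.3791, 0.3784, 0.3793, 0.3779, 0.3797]   # g Fe per 2.0000 g portion

mean = statistics.mean(iron)
s = statistics.stdev(iron)        # sample standard deviation (divides by N - 1)
rsd = s / mean                    # relative standard deviation

print(f"mean   = {mean:.5f} g")                       # 0.37888 g
print(f"s      = {s:.5f} g")                          # 0.00072 g
print(f"RSD    = {rsd:.4f} ({rsd * 100:.2f}%)")       # 0.0019 (0.19%)
print(f"median = {statistics.median(iron):.4f} g")    # 0.3791 g
print(f"range  = {max(iron) - min(iron):.4f} g")      # 0.0018 g
```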
Example 3. The concentration of arsenic in a standard reference material containing 2.35 mg/L arsenic was determined in a laboratory by four students (St1, St2, St3, and St4), who carried out replicate analyses. The experimental values determined and reported by each student are listed in the table below. Classify each set of results as: accurate; precise; accurate and precise; or neither accurate nor precise.
Trial No Concentration of Arsenic (in mg/L)
St1 St2 St3 St4
1 2.35 2.54 2.25 2.45
2 2.32 2.52 2.52 2.22
3 2.36 2.51 2.10 2.65
4 2.34 2.52 2.58 2.34
5 2.30 2.53 2.54 2.78
6 2.35 2.52 2.01 2.58
Mean 2.34 2.52 2.33 2.50
Solution
• The set of results as obtained by St1 and St2 (see columns 1 and 2) are close to each other. However, the calculated mean value of the six trials (the value reported as the most probable value of arsenic concentration in the reference material) as reported by St1 is close to the reported true value of 2.35 mg/L while that of St2 is relatively far from this true value. It can then be concluded that the analytical result reported by St1 is both precise and accurate while that for St2 is precise but not accurate.
• The set of values from the six trials by students St3 and St4 appear relatively far apart from each other. However, the mean of the analytical results reported by St3 is close to the true value, while that of St4 is not. It can therefore be concluded that the analytical result reported by St3 is accurate but not precise, while that for St4 is neither precise nor accurate.
Reporting the Results of an Experimental Measurement
Results of an experimental measurement should always comprise two parts. The first is the best estimate of the measurement, usually reported as the mean of the measurements. The second is the variation of the measurements, usually reported as their standard deviation. The measured quantity is then known to have a best estimate equal to the average of the experimental values and to lie between (x̄ + σx) and (x̄ − σx). Thus, any experimental measurement should be reported in the form:
$x=\bar{x}\pm\bf{\sigma}_x$
Example 3: Consider the table below, which contains 30 measurements of the mass, m, of a sample of some unknown material.
Table showing measured mass in kg of an unknown sample material
1.09 1.14 1.06
1.01 1.03 1.12
1.10 1.17 1.00
1.14 1.09 1.10
1.16 1.09 1.07
1.11 1.15 1.08
1.04 1.06 1.07
1.16 1.12 1.14
1.13 1.08 1.11
1.17 1.20 1.05
For the 30 measurements, the mean mass (in kg) = (1/30)(33.04 kg) = 1.10 kg
The standard deviation =
$\bf{\sigma}_x=\sqrt{{1\over{N-1}}\sum_{i=1}^N(\bf{x}_i-\bar{x})^2}=\sqrt{{1\over{30-1}}\sum_{i=1}^{30}(\bf{x}_i-1.10\ kg)^2}=0.05\ kg$
The measured mass of the unknown sample is then reported as m = 1.10 ± 0.05 kg.
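The same check applies to the 30 mass measurements (values transcribed from the table above):

```python
import statistics

masses = [1.09, 1.14, 1.06, 1.01, 1.03, 1.12, 1.10, 1.17, 1.00, 1.14,
          1.09, 1.10, 1.16, 1.09, 1.07, 1.11, 1.15, 1.08, 1.04, 1.06,
          1.07, 1.16, 1.12, 1.14, 1.13, 1.08, 1.11, 1.17, 1.20, 1.05]  # kg

x_bar = statistics.mean(masses)
sigma = statistics.stdev(masses)   # N - 1 in the denominator, as in the formula above
print(f"m = {x_bar:.2f} +/- {sigma:.2f} kg")   # m = 1.10 +/- 0.05 kg
```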
Statistical Tests
Sometimes a value in a data set appears so far removed from the rest of the values that one suspects that that value (called an outlier) must have been the result of some unknown large error not present in any of the other trials. Statisticians have devised many rejection tests for the detection of non-random errors. We will consider only one of the tests developed to determine whether an outlier could be rejected on statistical rather than arbitrary grounds and it is the Q test. Its details are presented below.
The Q test: Rejecting data.
Note that we can always reject a data point if something is known to be "wrong" with the data, or we may reject outliers if they fail a statistical test suggesting that the probability of getting such a high (or low) value by chance is so slight that there is probably an error in the measurement and that it can be discarded. The statistical test used here is the Q test, outlined below.
The Q test is a very simple test for the rejection of outliers. In this test one calculates a number called Qexp and compares it with values, termed Qcrit, from a table. If Qexp > Qcrit, then the number can be rejected on statistical grounds. Qexp is calculated as follows:
$\bf{Q}_{exp}={|questionable\ value\ -\ its \ nearest\ neighbour|\over range}$
An example will illustrate the use of this test.
Example 4. Suppose that the following data are available: 25.27, 25.32, 25.34, and 25.61. It looks like the largest datum is suspect. Qexp is then calculated.
Qexp = (25.61 – 25.34)/(25.61-25.27) = 0.79
The values of Qcrit are then looked up in statistical tables. These values depend on the number of trials in the data set, in this case 4. The table below shows the critical values of Q at the 90%, 95% and 99% confidence levels.
Table 4-4: Critical values for the rejection quotient Q (reject if Qexp > Qcrit)
Number of Observations 90% confidence 95% confidence 99% confidence
3 0.941 0.970 0.994
4 0.765 0.829 0.926
5 0.642 0.710 0.821
6 0.560 0.625 0.740
7 0.507 0.568 0.680
8 0.468 0.526 0.634
9 0.437 0.493 0.598
10 0.412 0.466 0.568
From: Skoog, West, and Holler, Analytical Chemistry: An Introduction, 7th ed., Thomson Publishing.
For N = 4, the values are as follows:
Qcrit = 0.765 at 90% confidence
Qcrit = 0.829 at 95% confidence
Qcrit = 0.926 at 99% confidence
Since Qexp > Qcrit at 90% confidence, the value of 25.61 can be rejected with 90% confidence.
What does this mean? It means that in rejecting the datum the experimentalist will be right an average of 9 times out of 10, or that the chances of the point actually being bad are 90%.
Is this the one time out of 10 that the point is good? This is not known! When data are rejected, there is always a risk of rejecting a good point and biasing the results in the process. Since Qexp < Qcrit at the 95% and 99% levels, the datum cannot be rejected at those levels. What this says is that if one wants to be right 95 times or more out of 100, one cannot reject the datum. It is up to you to select the level of confidence you wish to use.
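A minimal sketch of the Q test at the 90% confidence level (the function name is mine; the critical values are transcribed from the 90% column of the table above):

```python
# Critical Q values at 90% confidence (key = number of observations)
Q_CRIT_90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560, 7: 0.507,
             8: 0.468, 9: 0.437, 10: 0.412}

def q_test(data):
    """Return (suspect value, Q_exp, reject?) for the most extreme data point."""
    s = sorted(data)
    rng = s[-1] - s[0]                                # range of the data
    gap_low, gap_high = s[1] - s[0], s[-1] - s[-2]    # gaps at either end
    suspect, gap = (s[0], gap_low) if gap_low > gap_high else (s[-1], gap_high)
    q_exp = gap / rng
    return suspect, q_exp, q_exp > Q_CRIT_90[len(data)]

suspect, q_exp, reject = q_test([25.27, 25.32, 25.34, 25.61])
print(suspect, round(q_exp, 2), reject)   # 25.61 0.79 True
```

Applied to Example 4's data, the largest datum 25.61 gives Qexp ≈ 0.79 > 0.765, so it may be rejected at the 90% level, exactly as worked out above.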
Exercise 1: Figure 1 below shows the separate steps involved in a typical total chemical analysis process in a chemical laboratory. Each step has some random error associated with it; if a major error is made in any single step, it is unlikely that the result of the analysis can be correct even if the remaining steps are performed with little error. Discuss and list the possible sources of error an analyst is likely to commit in each of the analytical steps given in Figure 1 when determining the concentration of iron in a soil sample spectrophotometrically.
Figure 1. Block diagram of the major steps in a typical total chemical analysis process
Exercise 2:
In groups of at least 2 people, obtain twenty 10-cent coins and determine the weight of each coin separately. Calculate the median, mean, range, standard deviation, relative standard deviation, % relative standard deviation, and variance of your measurements.
Exercise 3:
Calculate σ and s (a) for each of the first 3, 10, 15, and 20 measurements you carried out in Exercise 2 to determine the mass of a 10-cent coin. (b) Compare the difference between the σ and s values you obtained in each case and describe your observation on the difference between the two values as the number of replicate analyses increases.
Exercise 4 :
Using the data obtained in Exercise 2, determine the percentage of your results that fall within one, two, and three standard deviations of the mean, i.e., the results within μ ± σ, μ ± 2σ, and μ ± 3σ. Based on your findings, can you conclude that the results you obtained in determining the mass of a 10-cent coin (Exercise 2) yield a normal distribution curve?
Exercise 5 :
Refer to your results from Exercise 3 and (a) calculate the RSD and %RSD for the 3, 10, 15, and 20 measurements; (b) compare the values you obtained and draw a conclusion about what happens to the RSD and %RSD values as the number of replicate analyses increases.
Exercise 6:
(a) Calculate the variances for the values you obtained in measurements i) 1–5, ii) 6–10, iii) 11–15, iv) 16–20, and v) 1–20, in Activity 1.
(b) Add the values you found in i, ii, iii, iv and compare the sum with the value you obtained in v.
(c) Calculate the standard deviations for the measurements given in i, ii, iii, iv and v of question a.
(d) Repeat what you did in question ‘b’ for standard deviation
(e) Based on your findings, give conclusions on the additivity or non-additivity of variance and standard deviation.
Problem 7: Compute the mean, median, range, absolute and relative standard deviations for the following set of numbers: 73.8, 73.5, 74.2, 74.1, 73.6, and 73.5.
Problem 8: Calculate the mean and relative standard deviation in ppt of the following data set: 41.29, 41.31, 41.30, 41.29, 41.35, 41.30, 41.28.
Problem 9: A group of students is asked to read a buret and produces the following data set: 31.45, 31.48, 31.46, 31.46, 31.44, 31.47, and 31.46 mL. Calculate the mean and percent relative standard deviation.
Problem 10: The following data set is available: 17.93, 17.77, 17.47, 17.82, 17.88. Calculate its mean and absolute standard deviation.
Learning Objectives
• Review the concept of chemical equilibria, and in particular ionic equilibria.
• Define and distinguish between acids and bases.
• Distinguish between monoprotic and polyprotic acid-base equilibrium.
• Describe and distinguish between weak acid/base dissociations.
• Have a working knowledge of the fundamentals of volumetric analysis.
• Define and distinguish between equivalence and end point.
• Use the concept of titration to distinguish between blank and back titrations.
• Define neutralization reactions and explain their corresponding titration curve structures.
• Define and explain standardization, indicators, and primary standards and their use.
• Use the concept of polyprotic acid equilibria to do related calculations.
Some of the most important processes in chemical and biological systems are acid-base reactions in aqueous solutions. At the outset of this unit, the topic of acid-base equilibria is reviewed, together with the properties of acids and bases, because the concepts of ionic equilibria and reactions are important for a better understanding of the ideas and workings of acid-base neutralization titrations. This unit discusses the basic principles of titrimetric analytical methods and the use of the equivalence concept in quantitative titration methods. The latter part of this unit will provide you with the opportunity to carry out simple acid-base titration reactions and calculations, as well as demonstrations of simulated acid-base titrations.
Key Concepts
• Arrhenius Acid: A substance that yields hydrogen ions (H+) when dissolved in water.
• Arrhenius Base: A substance that yields hydroxide ions (OH-) when dissolved in water.
• Bronsted acid: A substance capable of donating a proton.
• Bronsted base: A substance capable of accepting a proton.
• Chemical Equilibrium: A state in which the rates of the forward and reverse reactions are equal.
• Chemical Reaction: A process in which a substance (or substances) is changed into one or more new substances.
• End point: The volume of titrant required for the detection of the equivalence point.
• Equilibrium constant: A number equal to the ratio of the equilibrium concentrations of products to the equilibrium concentrations of reactants, each raised to the power of its stoichiometric coefficient.
• Equivalence point: The point at which the acid has completely reacted with or been neutralized by the base.
• Indicators: Substances that have distinctly different colours in acidic and basic media.
• Molar solubility: The number of moles of solute in one litre of a saturated solution (mol/L).
• Monoprotic acid: Each unit of the acid yields one hydrogen ion upon ionization.
• Neutralization reaction: A reaction between an acid and a base.
• Precipitation reaction: A reaction that results in the formation of a precipitate.
• Primary Standard: a high purity compound used to prepare the standard solution or to standardize the solution with.
• Quantitative analysis: The determination of the amount of substances present in a sample.
• Secondary Standard: a second material used as a substitute for a suitable primary standard. This standard solution should always be standardized using a primary standard.
• Solubility product, Ksp: The equilibrium constant for the reaction in which a solid salt dissolves to give its constituent ions in solution. It expresses the equilibrium between a solid and its ions in solution.
• Standardization: The process by which the concentration of a solution is determined.
• Standard Solution: A solution of accurately known concentration.
• Stoichiometry: The quantitative study of reactants and products in a chemical reaction.
• Stoichiometric amounts: The exact molar amounts of reactants and products that appear in the balanced chemical equation.
• Strong acids: Strong electrolytes which are assumed to ionize completely in water.
• Strong bases: Strong electrolytes which are assumed to ionize completely in water.
• Titration: The gradual addition of a solution of accurately known concentration to another solution of unknown concentration until the chemical reaction between the two solutions is complete.
• Volumetric methods of analysis: Methods based on the measurement of the amount of reagent that combines with the analyte. The term volumetric analysis specifically involves the determination of the volume of the reagent solution needed for a complete reaction.
• Volumetric titrimetry: Methods that require that a reagent solution of known concentration (a standard solution, or titrant) be used.
Introduction to activity # 2
This unit deals primarily with the fundamentals of volumetric methods of analysis. Volumetric analysis, sometimes referred to as titrimetric analysis, is a quantitative method of analysis based upon the measurement of volume. These methods are considered important since they are usually rapid, convenient and often accurate. The concept of pH and the pH scale, and the ionization of weak acids and weak bases, are introduced and discussed at length. The unit also looks at the relationship between acid strength and molecular structure.
List of other compulsory readings
Chemical equilibria (Reading #9)
Introduction to acid-base chemistry (Reading #10)
Acid-Base equilibria and Calculations (Reading #11)
Acid-base equilibria of aquatic environment (Reading #12)
A reference text with sub-sections containing sample problems dealing with acids and bases, chemical equilibrium, and quantitative calculations in acids and bases (Reading #13)
Acid Base (Reading #14)
Bronsted-Lowry Acid-Base Reactions (Reading #15)
Chemistry Chapter 16 Complex Ions (Reading #16)
Chemistry Chapter 16 Hydrolysis of Bases (Reading #17)
Chemistry Chapter 16 Titrations (Reading #18)
Chemistry Chapter 16 Hydrolysis of Acids (Reading #19)
Chemistry Chapter 16 Autoionization of Water (Reading #20)
Addition of Strong Base to a weak Acid (Reading #21)
pH curves (titration curves) (Reading #22)
Titration of a Weak Acid with a Strong Base (Reading #23)
Detailed description of the activity
Review of General concepts and Principles of Chemical Equilibria:
The concept of equilibrium is extremely important in chemistry. For example, an industrial chemist who wants to maximize the yield of sulphuric acid must have a clear understanding of the equilibrium constants for all the steps in the process, starting from the oxidation of sulphur as a reactant and ending with the formation of the final product. For a "general" reaction at equilibrium, we write:
$\ce{aA + bB <=> cC + dD}$
where the double arrow, ⇔ , is used to indicate that the chemical reaction proceeds in both directions. Dynamic equilibrium is said to occur when the rate of the forward reaction (represented by the chemical reaction,
$\ce{aA + bB -> cC + dD}$
which implies that a moles of substance A reacts with b moles of substance B to form c moles of C and d moles of D) equals the rate of the reverse reaction represented by the chemical reaction
$\ce{cC + dD -> aA + bB}$
which again implies that $c$ moles of substance $C$ reacts with $d$ moles of substance $D$ to form $a$ moles of $A$ and $b$ moles of $B$. At equilibrium, when there is no net change over time in any of the concentrations of the materials involved in the reaction,

$K_{eq}=\dfrac{[C]^c[D]^d}{[A]^a[B]^b} \nonumber$

where $K_{eq}$ is the equilibrium constant and the square brackets, [ ], indicate the concentration of a species relative to the standard state for that particular phase at equilibrium. The standard states and their standard concentrations are: [solutes] = mol/L; [gases] = atmospheres (atm); and [pure liquids], [pure solids], and [solvents] = unity. For most calculations required for this module, it is sufficient to remember that molar concentrations are to be used in the equilibrium expressions.

Types of Equilibria: There are 4 main types of chemical equilibria that will be discussed in this module:

1. Solubility Equilibria
2. Acid-Base Equilibria
3. Oxidation/Reduction Equilibria
4. Complex Ion Formation Equilibria

In this unit, our discussions will be limited to solubility and acid-base equilibria. The latter two equilibria types will be dealt with in the subsequent two units of this module.

Solubility Equilibria: In solubility equilibrium (see equation below), $a$ moles of the analyte A reacts with $r$ moles of the reagent R to form an insoluble species, $A_aR_r$. Recall that the standard state for a solid is unity. The solid precipitate is assumed to be pure, and thus has an activity of 1; because of this, the concentration of $A_aR_r(s)$ does not appear in the solubility product expression given below.

$aA(aq) + rR(aq) \Leftrightarrow A_aR_r(s) \nonumber$

$K_{sp}=[A]^a[R]^r \nonumber$

where $K_{sp}$ is defined as the solubility product constant.
The sequence of steps for (a) calculating Ksp from solubility data and (b) calculating solubility from Ksp data is given in the figure below. Here, molar solubility is the number of moles of solute in 1 L of a saturated solution (mol/L), and solubility is the number of grams of solute in 1 L of a saturated solution (g/L). Note that both of these expressions refer to the concentration of saturated solutions at some given temperature (usually 25 °C).

Example $1$
Consider the reaction in which a solid salt of mercury chloride dis- solves in water to give its constituent ions in solutions as shown below:
$\ce{Hg2Cl2 (s) <=> Hg2^{2+} (aq) + 2Cl^{-} (aq)}$
If the solubility product, Ksp, is 1.2 x 10-18, then

Hg2Cl2 (s) ⇌ Hg22+ (aq) + 2Cl- (aq)

Initial conc. (M): 0 0
Final conc. (M): x 2x
where x equals solubility only if there is little ion-pair formation. Therefore,
$K_{sp} = 1.2 \times 10^{-18} = [Hg2^{2+}] [Cl^{-}]^2$
This implies that: [Hg22+] [Cl-]2 = (x)(2x)2 = 4x3 = 1.2 x 10-18.
Therefore, the solubility x = 6.7 x 10-7 M.
Since some dissolved Hg2Cl2 may not dissociate into free ion, we say that its solubility is at least 6.7×10-7 M.
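The arithmetic in Example 1 can be checked with a short Python sketch; the function name `molar_solubility_1_2` is chosen here for illustration and is not part of the original text:

```python
def molar_solubility_1_2(ksp):
    """Molar solubility x for a salt dissolving as M2X2(s) <=> M2(2+) + 2 X(-),
    where Ksp = (x)(2x)^2 = 4x^3, so x = (Ksp/4)^(1/3)."""
    return (ksp / 4.0) ** (1.0 / 3.0)

x = molar_solubility_1_2(1.2e-18)
print(f"{x:.1e} M")  # -> 6.7e-07 M, as in the worked example
```

The same closed form applies to any salt with 1:2 dissolution stoichiometry; for other stoichiometries the coefficient and exponent change accordingly.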
Example $2$: solubility of calcium sulfate
The solubility of calcium sulfate (CaSO4) is found to be 0.67 g/L. Calculate the value of Ksp for calcium sulfate.
Solution
Please note that we are given the solubility of CaSO4 and asked to calculate its Ksp. The sequence of conversion steps, according to the Figure above is:
Solubility of CaSO4 → Molar solubility of CaSO4 → [Ca2+] and [SO42-] → Ksp of CaSO4
Now consider the dissociation of CaSO4 in water. Let s be the molar solubility (in mol/L) of CaSO4.
CaSO4 (s) ⇌ Ca2+ (aq) + SO42- (aq)
Initial (M): 0 0
Change (M): -s +s +s
Equilibrium (M): s s
The solubility product of CaSO4 is:
Ksp = [Ca2+][SO42-] = s2
First, we calculate the number of moles of CaSO4 dissolved in 1 L of solution.
$\frac { 0.67 \mathrm { g } ~\mathrm { CaSO } _ { 4 } } { 1 \mathrm { L } \text { solution } } \times \frac { 1 \mathrm { mol }~ \mathrm { CaSO } _ { 4 } } { 136.2 \mathrm { g } ~\mathrm { CaSO } _ { 4 } } = 4.9 \times 10 ^ { - 3 } \mathrm { mol } / \mathrm { L } = \mathrm { s } \nonumber$
From the solubility equilibrium we see that for every mole of CaSO4 that dissolves,
1 mole of Ca2+ and 1 mole of SO42- are produced. Thus, at equilibrium,
[Ca2+] = 4.9 x 10-3 M and [SO42-] = 4.9 x 10-3 M
Now we can calculate Ksp:
Ksp = [Ca2+] [SO4 2-]
= (4.9 x 10-3) (4.9 x 10-3)
= 2.4 x 10-5
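As a check on Example 2, the same conversion chain (g/L to mol/L to Ksp) can be written as a small Python sketch; the function name is illustrative only:

```python
def ksp_1_1_salt(solubility_g_per_L, molar_mass):
    """Ksp for a 1:1 salt such as CaSO4: convert solubility (g/L) to
    molar solubility s (mol/L), then Ksp = [Ca2+][SO4^2-] = s^2."""
    s = solubility_g_per_L / molar_mass
    return s * s

ksp = ksp_1_1_salt(0.67, 136.2)  # ~2.4e-5, as computed above
```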
Exercise $1$
The solubility of lead chromate (PbCrO4) is 4.5 x 10-5 g/L. Calculate the solubility product of this compound.
Answer
Acid-Base Equilibria
Theory of Acids and Bases (Arrhenius and Bronsted-Lowry Theories):
According to the Arrhenius theory of acids, all acids contain H+ ions and all bases contain OH- ions, and an acid-base reaction involves the reaction of hydrogen and hydroxide ions to form water. The corresponding equation is given below:
H+ (aq) + OH- (aq) → H2O (l)

where (aq) stands for the aqueous phase in which the species exists and (l) for the liquid phase.
Problems with Arrhenius Theory are two-fold:
(i) The theory requires bases to have an OH- group. However, we know that ammonia (NH3) does not contain the OH- group but is nonetheless a base.
(ii) The theory does not consider the role of the solvent, water (H2O).
These shortcomings are overcome by the Bronsted-Lowry theory.
Bronsted-Lowry Theories on acids and bases:
Acid: An acid is any substance that can donate a proton to a base.
HA + H2O → A- + H3O+
(HA = acid; H2O = base; A- = conjugate base of HA; H3O+ = conjugate acid of H2O)
Base: A base is any substance that can accept a proton from an acid.
NH3 + H2O → NH4+ + OH-
(NH3 = base; H2O = acid; NH4+ = conjugate acid; OH- = conjugate base)
We now recognize that NH3 acts as a base (proton acceptor) because of its role as a hydrogen atom acceptor in the reaction and H2O acts as an acid (proton donor). Moreover, H2O is now included as a solvent in our consideration. The conjugate acid of NH3 is NH4+ while the conjugate base of water in the reaction is OH-. NH4+/NH3 are referred to as conjugate acid/base pair.
In these two examples, water has acted as either an acid or a base; a solvent that can do so is unique, and water is therefore called an amphiprotic solvent. It undergoes self-ionization in a process called autoprotolysis, as shown below:
H2O + H2O ↔ H3O+ + OH-
In the study of acid-base reactions, the hydrogen ion concentration is key; its value indicates the acidity or basicity of the solution. In pure water, only about 1 water molecule in 107 undergoes autoprotolysis. This implies that only a very small fraction of water molecules is ionized, hence the concentration of water, [H2O], remains virtually unchanged. Therefore, the equilibrium constant for the autoionization of water according to the equation above is
Kc = [H3O+][OH-]
And because we use H+ (aq) and H3O+ (aq) interchangeably to represent the hydrated proton, the equilibrium constant can also be expressed as
Kc = [H+][OH-]
To indicate that the equilibrium constant refers to the autoionization of water, we replace Kc by Kw
Kw = [H3O+][OH-] = [H+][OH-]
where Kw is the ion-product constant, which is the product of the molar concentrations of H+ and OH- ions at a particular temperature.
In pure water at 25oC, the concentrations of H+ and OH- ions are equal and found to be [H+] = 1.0 x 10-7 M and [OH-] = 1.0 x 10-7 M. From the above equation
Kw = (1.0 x 10-7 M) (1.0 x 10-7 M) = 1.0 x 10-14
Note that, whether we have pure water or an aqueous solution of dissolved species, the following relation always holds at 25oC:
Kw = [H+][OH-] = 1.0 x 10-14
Whenever [H+] = [OH-], the aqueous solution is said to be neutral. In an acidic solution, there is an excess of H+ ions and [H+] > [OH-]. In a basic solution, there is an excess of hydroxide ions, so [H+] < [OH-]. In practice, we can change the concentration of either H+ or OH- ions in solution, but we cannot vary both of them independently. If we adjust the solution so that [H+] = 1.0 x 10-6 M, the OH- concentration must change to
$\left[ \mathrm { OH } ^ { - } \right] = \frac { \mathrm { K } _ { \mathrm { w } } } { \left[ \mathrm { H } ^ { + } \right] } = \frac { 1.0 \times 10 ^ { - 14 } } { 1.0 \times 10 ^ { - 6 } } = 1.0 \times 10 ^ { - 8 } \mathrm { M } \nonumber$
Example $3$:
Calculate the concentration of H+ ions in a certain cleaning solution if the concentration of OH- ions is 0.0025 M.
Solution
We are given the concentration of the OH- ions and asked to calculate [H+]. We also know that Kw = [H+][OH-] = 1.0 x 10-14. Hence, by rearranging the equation,
$\left[ \mathrm { H } ^ { + } \right] = \frac { \mathrm { K } _ { w } } { \left[ \mathrm { OH } ^ { - } \right] } = \frac { 1.0 \times 10 ^ { - 14 } } { 0.0025 } = 4.0 \times 10 ^ { - 12 } \mathrm { M } \nonumber$
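The rearrangement of Kw used in Example 3 is easy to script. This is a minimal sketch with illustrative names, assuming 25 °C so that Kw = 1.0 x 10-14:

```python
KW = 1.0e-14  # ion-product constant of water at 25 degrees C

def h_from_oh(oh_molar):
    """[H+] from [OH-] via Kw = [H+][OH-]."""
    return KW / oh_molar

def oh_from_h(h_molar):
    """[OH-] from [H+] via the same relation."""
    return KW / h_molar

h = h_from_oh(0.0025)   # -> 4.0e-12 M (Example 3)
oh = oh_from_h(1.0e-6)  # -> 1.0e-8 M (the adjustment discussed above)
```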
Exercise $2$
Calculate the concentration of OH- ions in an HCl solution whose hydrogen ion concentration is 1.4 x 10-3 M.
Answer
pH as a measure of acidity:
The pH of a solution is defined as the negative logarithm of the hydrogen ion concentration (in mol/L).
$\mathrm { pH } = - \log \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] \text { or } \mathrm { pH } = - \log \left[ \mathrm { H } ^ { + } \right]$
Note that because pH is simply a way of expressing hydrogen ion concentration, acidic and basic solutions at 25oC can be distinguished by their pH values, as follows:
Acidic solutions: [H+] > 1.0 x 10-7 M, pH < 7.00
Basic solutions: [H+] < 1.0 x 10-7 M, pH > 7.00
Neutral solutions: [H+] = 1.0 x 10-7 M, pH = 7.00
Note that pH increases as [H+] decreases.
Now, taking the negative logarithm of both sides of the expression [H+][OH-] = 1.0 x 10-14
yields the expression,
-log ([H+][OH-]) = -log (1.0 x 10-14)
-(log [H+] + log [OH-]) = 14.00
-log [H+] - log [OH-] = 14.00
From the definitions of pH and pOH we obtain pH + pOH = 14.00
This expression provides us with another way of showing the relationship between the H+ ion concentration and the OH- ion concentration.
Example $4$:
The pH of a rain water sample collected in the Western Province of Kenya on a particular day was found to be 4.82. Calculate the H+ ion concentration of the rain water.
Solution:
We know that pH = -log [H+] = 4.82
Therefore, log [H+] = -4.82
This implies that [H+] = antilog (-4.82)
Therefore, [H+] = 10-4.82 = 1.5 x 10-5 M.
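The pH definition and its inverse, as used in Example 4, translate directly into code (function names are illustrative):

```python
import math

def ph_from_h(h_molar):
    """pH = -log10([H+])."""
    return -math.log10(h_molar)

def h_from_ph(ph):
    """[H+] = 10^(-pH)."""
    return 10.0 ** (-ph)

h = h_from_ph(4.82)  # rain-water example: ~1.5e-5 M
```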
Exercise $3$
If the pH of a mixture of orange and passion juice is 3.30, calculate the H+ ion concentration.
Answer
Exercise $4$
Calculate the hydrogen ion concentration in mol/L for solutions with the following pH values: (a) 2.42, (b) 11.21, (c) 6.96, (d) 15.00
Answer
Base/Acid Ionization Constant:
In the reaction below,
NH3 + H2O ⇌ NH4+ + OH-
(NH3 = base; H2O = acid; NH4+ = conjugate acid; OH- = conjugate base)
we can write the following equilibrium expression, called the base ionization constant, Kb.
$\mathrm { K } _ { \mathrm { b } } = \frac { \left[ \mathrm { NH } _ { 4 } ^ { + } \right] \left[ \mathrm { OH } ^ { - } \right] } { \left[ \mathrm { NH } _ { 3 } \right] } = 1.8 \times 10 ^ { - 5 } \nonumber$
Note that water does not explicitly appear in the equilibrium expression because the reaction is taking place in water (water being the solvent). It is important to note that the larger the Kb, the stronger the base. Since NH3 is a known weak base, there will be a reasonable amount of unreacted NH3 in solution when equilibrium is established. Hence the low value of its Kb.
Strength of Acids and Bases
Strong acids are assumed to ionize completely in water. Examples of strong acids are hydrochloric acid (HCl), nitric acid (HNO3), perchloric acid (HClO4) and sulphuric acid (H2SO4). Most acids are weak acids, which ionize only to a limited extent in water.
Strong bases ionize completely in water. Examples of strong bases are Hydroxi- des of alkali metals (e.g., NaOH, KOH, etc).
Dissociation of Weak acids and Bases:
Weak Acids:
If HA is a weak acid, then
HA + H2O ↔ H3O+ + A-
$\mathrm { K } _ { \mathrm { A } } = \frac { \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] \left[ \mathrm { A } ^ { - } \right] } { [ \mathrm { HA } ] }$
Weak base:
If B is a weak base, then
B + H2O ↔ BH+ + OH-
$\mathrm { K } _ { \mathrm { B } } = \frac { \left[ \mathrm { OH } ^ { - } \right] \left[ \mathrm { BH } ^ { + } \right] } { [ \mathrm { B } ] }$
Note that water is not included in both expressions because it is a constant.
Example $5$:
Determine the pH of a 0.10 M acetic acid solution, given that the acid dissociation constant, Ka, is 2.24 x 10-5.
Solution

We need to know that acetic acid is a weak acid, which will only ionize to a limited extent. This can be represented by the equilibrium reaction below:
HAc + H2O ↔ H3O+ + Ac-
$\mathrm { K } _ { \mathrm { A } } = \frac { \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] \left[ \mathrm { A } ^ { - } \right] } { [ \mathrm { HA } ] } \nonumber$
Since both a H3O+ and a A- is produced for each HA that dissociates:
[H3O+] = [A-]
Also,
[HA] = 0.10 M –[H3O+]
Suppose y = [H3O+], then
KA = 2.24 x 10-5 = y2 / (0.10 - y)
Rearranging yields a quadratic equation of the form: y2 + 2.24 x 10-5 y – 2.24 x 10-6 = 0
Note that this quadratic equation can be solved exactly using the quadratic formula

$y = \frac { - b \pm \sqrt { b ^ { 2 } - 4 a c } } { 2 a } \nonumber$

or the solution can be estimated by assuming that the amount of acid dissociated is insignificant compared with the undissociated form (HAc). Let us try the exact solution:
y = [-2.24 x 10-5 + {(2.24 x 10-5)2 + (4 x 2.24 x 10-6)}1/2]/2
y = 0.00149
Therefore, pH = -log (0.00149) = 2.82
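The exact quadratic solution in Example 5 generalizes to any weak monoprotic acid. Here is a Python sketch (function name illustrative) that mirrors the algebra above:

```python
import math

def weak_acid_ph(ka, c0):
    """pH of a weak monoprotic acid HA with dissociation constant ka and
    initial concentration c0 (mol/L). Solves y^2 + ka*y - ka*c0 = 0
    exactly for y = [H3O+], taking the positive root."""
    y = (-ka + math.sqrt(ka * ka + 4.0 * ka * c0)) / 2.0
    return -math.log10(y)

ph = weak_acid_ph(2.24e-5, 0.10)  # ~2.83; the text, rounding y to 0.00149, gives 2.82
```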
Exercise $5$
The Ka for benzoic acid is 6.5 x 10-5. Calculate the pH of a 0.10 M benzoic acid solution.
Answer
Exercise $6$
The pH of an acid solution is 6.20. Calculate the Ka for the acid. The initial acid concentration is 0.010 M.
Answer
Dissociation of Weak bases:
The calculations are essentially the same as for weak acids. The important expression to remember is:
pH + pOH = pKw = 14.00
Also, it can be shown that pKA + pKB = 14.00, where pKA = -log (KA) and pKB = -log (KB). Note the following:
• If you are starting with an acid, acidic conditions or the conjugate acid of a base, then perform your calculations using KA.
• If starting with a base, basic conditions or the conjugate base of an acid, then do your calculations using KB.
• You can readily convert pH to pOH (i.e., pH + pOH = 14.00) and KA to KB (i.e., pKA + pKB = 14.00) values.
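The conversion rules in the notes above can be collected into a few one-line helpers (a sketch; the constant and function names are chosen here for illustration):

```python
PKW = 14.00  # pKw at 25 degrees C

def poh_from_ph(ph):
    """pOH from pH, using pH + pOH = pKw."""
    return PKW - ph

def pkb_from_pka(pka):
    """pKb of a conjugate base from pKa of the acid (pKa + pKb = pKw)."""
    return PKW - pka

def ka_from_kb(kb):
    """Ka of the conjugate acid from Kb, via Ka * Kb = Kw = 1.0e-14."""
    return 1.0e-14 / kb
```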
Fundamentals of Volumetric Analysis
Volumetric or titrimetric analyses are quantitative analytical techniques which employ a titration in comparing an unknown with a standard. In a titration, a measured and controlled volume of a standardized solution (a solution containing a known concentration of reactant «A») is added incrementally from a buret to a sample solution of known volume (measured by a pipette) containing the substance to be determined (the analyte), an unknown concentration of reactant «B». The titration proceeds until reactant «B» is just consumed (stoichiometric completion). This is known as the equivalence point: the titration is complete when sufficient titrant has been added to react with all the analyte. At this point the number of equivalents of «A» added to the unknown equals the number of equivalents of «B» originally present in the unknown.
An indicator, a substance that has distinctly different colours in acidic and basic media, is usually added to the reaction flask to signal when and if all the analyte has reacted. The use of indicators enables the end point to be observed: the titrant reacts with a second chemical, the indicator, after completely reacting with the analyte in solution, and the indicator undergoes a change that can be detected (such as a colour change). The titrant volume required for the detection of the equivalence point is called the end point. Note that the end point and the equivalence point are seldom the same. Ideally, we want the equivalence point and the end point to coincide, but this seldom happens because of the methods used to observe end points. As a result, we get a titration error, the difference between the end point and the equivalence point, which usually leads to overtitration.
The end point is then the point where sufficient indicator has been converted fordetection. The sequence of events can be demonstrated as below:
followed by
The last step does NOT require that all the indicator be converted. In fact, it is best if only a very small percentage needs to react to make the colour change visible.
For volumetric methods of analysis to be useful, the reaction must reach 99%+ completion in a short period of time. In almost all cases, a burette is used to measure out the titrant. When a titrant reacts directly with an analyte (or with the product of a reaction between the analyte and some intermediate compound), the procedure is termed a direct titration. The alternative technique is called a back titration. Here, an intermediate reactant is added in excess of that required to exhaust the analyte, and then the exact degree of excess is determined by subsequent titration of the unreacted intermediate with the titrant. Regardless of the type of titration, an indicator is always used to detect the equivalence point. Most common are the internal indicators, compounds added to the reacting solutions that undergo an abrupt change in a physical property (usually absorbance or colour) at or near the equivalence point. Sometimes the analyte or titrant will serve this function (auto-indicating). External indicators, electrochemical devices such as pH meters, may also be used. Ideally, titrations should be stopped precisely at the equivalence point. However, the ever-present random and systematic errors often result in a titration endpoint, the point at which a titration is stopped, that is not quite the same as the equivalence point. Fortunately, the systematic error, or bias, may be estimated by conducting a blank titration. In many cases the titrant is not available in a stable form of well-defined composition. If this is true, the titrant must be standardized (usually by volumetric analysis) against a compound that is available in a stable, highly pure form (i.e., a primary standard).
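The back-titration bookkeeping described above can be sketched in Python. This sketch assumes, hypothetically, a 1:1 mole ratio both between the intermediate reagent and the analyte and between the titrant and the excess intermediate; real systems need the appropriate stoichiometric ratios:

```python
def back_titration_analyte_moles(c_intermediate, v_intermediate_mL,
                                 c_titrant, v_titrant_mL):
    """Moles of analyte = (total intermediate added) - (excess intermediate
    found by titrating the leftover with the titrant), assuming 1:1 ratios."""
    total_intermediate = c_intermediate * v_intermediate_mL / 1000.0
    excess_intermediate = c_titrant * v_titrant_mL / 1000.0
    return total_intermediate - excess_intermediate

# e.g. 50.0 mL of 0.10 M intermediate, back-titrated with 20.0 mL of 0.10 M titrant
n_analyte = back_titration_analyte_moles(0.10, 50.0, 0.10, 20.0)  # -> 0.003 mol
```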
Note that by accurately measuring the volume of the titrant that is added (using a buret), the amount of the sample can be determined.
For a successful titrimetric analysis, the following need to be true:
• The titrant should either be a standard or should be standardized.
• The reaction should proceed to a stable and well defined equivalence point.
• The equivalence point must be able to be detected.
• The titrant’s and sample’s volume or mass must be accurately known.
• The reaction must proceed by a definite chemistry. There should be nocomplicating side reactions.
• The reaction should be nearly complete at the equivalence point. In other words, chemical equilibrium should favour the formation of products.
• The reaction rate should be fast enough to be practical.
Illustration: In the determination of chloride, 50 ml of a 0.1M AgNO3 solution would be required to completely react with 0.005 moles of chloride present in solution.
Balanced Equation for the reaction: Ag+(aq) + Cl-(aq) → AgCl(s). Here, 1 mole of Ag+ ions reacts stoichiometrically with 1 mole of Cl- ions. Therefore 50 ml (0.05L) of a standard 0.10M AgNO3 which contains 0.005 moles (= 0.10 moles L-1 x 0.050 L) requires an equivalent number of moles of Cl- ions.
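The stoichiometric bookkeeping in the chloride illustration reduces to n = M x V. A minimal sketch (function names illustrative):

```python
def moles_delivered(molarity, volume_mL):
    """Moles of reagent in a measured volume: n = M * V(L)."""
    return molarity * volume_mL / 1000.0

def titrant_volume_mL(analyte_moles, titrant_molarity, mole_ratio=1.0):
    """Volume of titrant needed, given the titrant:analyte mole ratio
    (1.0 for Ag+ + Cl- -> AgCl)."""
    return analyte_moles * mole_ratio / titrant_molarity * 1000.0

n_ag = moles_delivered(0.10, 50.0)  # -> 0.005 mol Ag+
v = titrant_volume_mL(0.005, 0.10)  # -> 50.0 mL, as in the illustration
```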
Since the titrant solution must be of known composition and concentration, we ideally would like to start with a primary standard material, a high-purity compound used to prepare the standard solution or to standardize the solution with. A standard solution is one whose concentration is known. The concentration of a standard solution is usually expressed in molarity (mol/liter). The process by which the concentration of a solution is determined is called standardization. Because of the availability of some substances known as primary standards, in many instances the standardization of a solution is not necessary. Primary standard solutions are analytically pure, and by dissolving a known amount of a primary standard in a suitable medium and diluting to a definite volume, a solution of known concentration is readily prepared. Most standard solutions, however, are prepared from materials that are not analytically pure, and they have to be standardized against a suitable primary standard.
The following are the desired requirements of a primary standard:
• High purity
• Stable in air and solution: the composition should be unaltered in air at ordinary or moderately high temperatures.
• Not hygroscopic.
• Inexpensive
• Large formula weight: the equivalent weight ought to be high in order to reduce the effect of small weighing errors.
• Readily soluble in the solvent under the given conditions of the analysis.
• On titration, no interfering product(s) should be present.
• The primary standard should be colorless before and after titration to avoid interference with indicators.
• Reacts rapidly and stoichiometrically with the analyte.
The following are also the desired requirements of a primary standard solution:
• Have long term stability in solvent.
• React rapidly with the analyte.
• React completely with analyte.
• Be selective to the analyte.
The most commonly used primary standards are:
A. Acidimetric standards.
Sodium carbonate (Na2CO3, equivalent weight 53.00) and borax (Na2B4O7.10H2O, equivalent weight 63.02)
B. Alkalimetric standards.
Sulphamic acid (NH2 SO3H, equivalent weight 97.098),
Potassium hydrogen phthalate (KHC8H4O4, equivalent weight 204.22)
Oxalic acid (H2C2O4.2H2O, Equivalent weight 63.02)
A secondary standard is a second material used as a substitute for a suitable primary standard. A standard solution prepared from a secondary standard should always be standardized using a primary standard.
Exercise $7$
Discuss the following analytical terms: Standard solution; Primary standards; Standardized solution; Standardization; End point of titration; Equi- valence point of titration; and Titration error
Answer
Summary:
The basic requirements or components of a volumetric method are:
1. A standard solution (i.e., titrant) of known concentration which reacts with the analyte in a known and repeatable stoichiometry (i.e., acid/base, precipitation, redox, complexation).
2. A device to measure the mass or volume of the sample (e.g., pipet, graduated cylinder, volumetric flask, analytical balance).
3. A device to measure the volume of the titrant added (i.e., buret).
4. If the titrant-analyte reaction is not sufficiently specific, a pretreatment to remove interferents.
5. A means by which the endpoint can be determined. This may be an internal indicator (e.g., phenolphthalein) or an external indicator (e.g., pH meter).
Apparatus for titrimetric analysis:
The most common apparatus used in volumetric determinations are the pipette, buret, measuring cylinder, volumetric flask and conical (titration) flask. Reliable measurement of volume is often done with the help of a pipette, buret, and volumetric flask. The conical flask is preferred for titration because it has a good "mouth" that minimizes loss of the titrant during titration.
Classification of reactions in volumetric (titrimetric) analysis
Any type of chemical reactions in solution should theoretically be used for ti- trimetric analysis. However, the reactions most often used fall under two main categories:
(a) Those in which no change in oxidation state occurs. These are dependent on combination of ions.
(b) Oxidation-reduction reactions: These involve a change of oxidation state (i.e., the transfer of electrons).
For convenience, however, these two types of reactions are further divided into four main classes:
(i) Neutralization reactions, or acidimetry and alkalimetry: HA + B ⇌ HB+ + A-
(ii) Precipitation reactions: M(aq) + nL(aq) ⇌ MLn(s)
(iii) Oxidation-reduction reactions: Ox + Red ⇌ Red' + Ox'
(iv) Complex ion formation reactions: M(aq) + nL(aq) ⇌ MLn(aq)
In this unit, we shall focus on neutralization reactions. The latter two will be dealt with in the next two units that follow in this module.
General Theory of Titrations
In determining what happens during a titration process, some of the theories of chemical equilibria (previously covered in this unit as well as in an earlier Module entitled General Chemistry) are often used. A full understanding of what happens during a titration experiment enables one to set up a titration and choose an indicator wisely.
Consider a hypothetical titration reaction illustrated as follows:
T + A → Px + Py
where T is the titrant (considered as the standard), A is the titrand (considered as the
unknown analyte whose concentration is desired), and Px and Py are products. Note that the extent of the above hypothetical reaction is determined by the magnitude of the equilibrium constant,
$\mathrm { K } _ { \mathrm { eq } } = \frac { \left[ \mathrm { P } _ { x } \right] \left[ \mathrm { P } _ { y } \right] } { [ \mathrm { T } ] [ \mathrm { A } ] }$
Suppose Ct is the concentration of the titrant which must be known (in the buret) and CA is the concentration of the unknown analyte A in the titration flask before any titrant is added. For the purposes of our illustrations here, we shall assume that both Ct and CA are known.
In order to understand what is occurring in a titration flask, we shall consider a single-step titration to comprise four (4) distinct regions, described below:
• Region 1 –Initial Stage (i.e., before the addition of any titrant):
Here, a pure solution of analyte, A is placed in a titration flask before any volume of reagent is added. At this point there is no titrant, T introduced in the flask, no products Px or Py formed yet and [A] in the titration flask is a function of CA.
• Region 2 –Before equivalence point (i.e., after addition of some titrant but before the equivalence point): Here, the volume of reagent added is not sufficient to react with all of the analyte, so the analyte is in excess. Thus, in this region T is the limiting reagent, and hence there will be very little T in solution (in fact [T] in the flask would be zero if the reaction went totally to completion). Therefore, only A, Px, and Py are present in measurable quantities.
• Region 3 –At equivalence point: In this region, the reagent added is the amount that is chemically equivalent to the amount of substance being determined (analyte). The equivalence point is defined as the point at which there would be neither T nor A present if the reaction went to completion. In reality though, there is often very little of either T or A present and there exists a very simple relationship between [T] and [A]. As is expected, only Px and Py would be present in measurable amounts.
• Region 4 –After equivalence point: Here, the amount of reagent added is higher than the amount of substance being determined. In this region, A now becomes the limiting reagent, and therefore there will be very little A (if any, depending on whether the reaction went totally to completion, in which case [A] = 0) in solution. Only T, Px, and Py are present in measurable amounts in the titration flask.
In view of the aforementioned, it is clear that the method for determining what is truly present in a titration flask during a titration depends on the region under consideration. To demonstrate what happens and how the concentrations of all substances present vary during a titration process, acid/base titrations will be used as examples of titrations in general. The types of acid/base titrations that will be considered in this unit are:
A) Titration of strong acid with strong base
B) Titration of weak acid with strong base
C) Titration of weak base with strong acid
D) Titration of polyprotic weak acids with strong base
Note that the behaviour in each of the stages mentioned above is a function of the type of acid-base titration process. This behaviour is best depicted or described by making plots of pH as a function of the volume of base added. This plot is known as a titration curve.
Let us now examine each of the types of acid/base titrations mentioned above.
Titration of a Strong acid with a Strong Base
The reaction between HCl (considered here as the unknown) and NaOH (the titrant) will be used as an example. As discussed earlier, strong acids are 100% dissociated in water (i.e., HCl + H2O → H3O+ + Cl-), and strong bases are likewise fully dissociated (i.e., NaOH → Na+ + OH- in water). The reaction between HCl and NaOH can therefore be expressed as:
H3O+ +OH- →2H2O
Here, Na+ and Cl- are spectator ions and do not enter into the titration reaction. Cl- is neither added to the flask nor consumed in the course of the titration. Thus the number of moles of Cl- remains constant while its concentration decreases due to dilution. (Remember that the volume of solution in the flask increases as the titration progresses, and this dilution affects the concentrations.)
If Ct is the (fixed) concentration of the titrant in the buret; CA is the (fixed) concentration of the unknown analyte A in the flask before the titration; Vt is the volume of titrant added to the titration flask; and VA is the volume of unknown analyte initially placed in the titration flask, then the concentration of Cl-, [Cl-], which does not depend on the region of the titration, is given by
$\left[ \mathrm { Cl } ^ { - } \right] = \frac { \text { mol } } { \text { Volume } } = \frac { C _ { A } V _ { A } } { \left( V _ { A } + V _ { t } \right) } \text { and } \mathrm { pCl } = - \log \left[ \mathrm { Cl } ^ { - } \right] \nonumber$
Na+ is continuously being added to the titration flask in the course of the titration process but is not reacting. Hence, the concentration of Na+, [Na+] will continuously increase and its concentration will not depend on the region of the titration and is given as
$\left[ \mathrm { Na } ^ { + } \right] = \frac { \mathrm { mol } } { \text { Volume } } = \frac {\mathrm{C}_{t} \mathrm { V } _ { \mathrm { t } } } { \left( \mathrm { V } _ { \mathrm { A } } + \mathrm { V } _ { \mathrm { t } } \right) } \text { and pNa } = - \log \left[ \mathrm { Na } ^ { + } \right] \nonumber$
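Although this module does not use software, the dilution formulas above lend themselves to a quick numerical check. The following Python sketch is our own illustration (the function name `spectator_conc` is not from the text); it reproduces the spectator-ion concentrations that appear in the worked example later in this section.

```python
# Sketch of the spectator-ion dilution formulas above (our own illustration).
# Volumes in L, concentrations in mol/L. Neither [Cl-] nor [Na+] depends on
# the titration region -- only on dilution into the growing total volume.

def spectator_conc(C, V_source, V_A, V_t):
    """moles / total volume = C * V_source / (V_A + V_t)."""
    return C * V_source / (V_A + V_t)

# 100.0 mL of 0.100 mol/L HCl after adding 50.0 mL of 0.100 mol/L NaOH:
Cl = spectator_conc(0.100, 0.100, 0.100, 0.0500)   # V_source = V_A for Cl-
Na = spectator_conc(0.100, 0.0500, 0.100, 0.0500)  # V_source = V_t for Na+
print(round(Cl, 4), round(Na, 4))  # 0.0667 0.0333 (mol/L)
```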
Since the species H3O+ and OH- are involved in the titration reaction, the calculations of [H3O+] and [OH-] in the titration flask will now depend on the titration region. This is now examined below.
Suppose Ca is the concentration of the strong acid that is present in the flask at any point during the titration process, and Cb is the concentration of the strong base actually present in the titration flask at any point in the titration. Note that these values will always be different from Ct and CA.
Let us now examine what happens to the concentrations of [H3O+] and [OH-] in each of the titration regions discussed above.
Region 1: This is simply a solution of the strong acid present in the titration flask.
$\left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = \mathbf { C } _ { \mathrm { A } } \text { and } \mathrm { pH } = - \log \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] \text { and } \mathrm { pOH } = 14.00 - \mathrm { pH }$
Region 2: As titrant (strong base) is added, some of the strong acid gets consumed, but no excess strong base is yet present. Thus, only the remaining strong acid affects the overall pH of the solution in the titration flask.
Moles of acid remaining = (moles of original acid – moles of base added)

$\bf{C}_a={(moles\ of\ acid\ remaining)\over(total\ resultant\ volume)}={(\bf{V}_A\bf{C}_A-\bf{V}_t\bf{C}_t)\over(\bf{V}_A+\bf{V}_t)}$

$[H^+]=\bf{C}_a={(\bf{V}_A\bf{C}_A-\bf{V}_t\bf{C}_t)\over(\bf{V}_A+\bf{V}_t)}\ if\ \bf{C}_a>>2\bf{K}_w^{1/2}$
Region 3: In this region, there is neither excess strong acid nor excess strong base present in the titration flask. The solution simply contains the salt, NaCl, the product of the acid-base reaction. Since neither Na+ nor Cl- affects the pH of the solution mixture, the pH will be that of pure water.

$[ \mathrm { H } ^ { + } ] = \left( K _ { W } \right) ^ { 1 / 2 } \text { or } \mathrm { pH } = 7.00$
Region 4: In this region, all the strong acid is now exhausted and there is excess strong base present. Therefore, the pH of the solution mixture is determined by the excess strong base present.

Moles of base present = (total moles of base added so far – moles of original acid present in flask at beginning of titration)

$\bf{C}_b={(moles\ of\ base\ present)\over(total\ resultant\ volume)}={(\bf{V}_t\bf{C}_t-\bf{V}_A\bf{C}_A)\over(\bf{V}_A+\bf{V}_t)}$

$[OH^-]=\bf{C}_b={(\bf{V}_t\bf{C}_t-\bf{V}_A\bf{C}_A)\over(\bf{V}_A+\bf{V}_t)}\ if\ \bf{C}_b>>2\bf{K}_w^{1/2}$
$[\bf{H}_3O^+]={\bf{K}_w\over[OH^-]}$
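The four-region logic above can be collected into a single short calculation. The Python sketch below is our own illustration (the function name is ours; Kw = 1.0 x 10-14 is assumed); it returns [H3O+] and [OH-] for any point in a strong acid/strong base titration.

```python
# Our own sketch of the Region 1-4 formulas above; not from the source text.
KW = 1.0e-14  # assumed ion product of water at 25 C

def flask_concentrations(C_A, V_A, C_t, V_t):
    """Return ([H3O+], [OH-]) for a strong acid titrated with a strong base.

    C_A, V_A: concentration and initial volume of the acid (mol/L, L)
    C_t, V_t: concentration and delivered volume of the base (mol/L, L)
    """
    diff = (C_A * V_A - C_t * V_t) / (V_A + V_t)  # excess acid (+) or base (-)
    if diff > 0:                  # Regions 1 and 2: excess strong acid
        return diff, KW / diff
    if diff < 0:                  # Region 4: excess strong base
        return KW / -diff, -diff
    return KW ** 0.5, KW ** 0.5   # Region 3: equivalence point, pure water

h, oh = flask_concentrations(0.100, 0.100, 0.100, 0.0500)  # after 50.00 mL
print(h, oh)  # about 3.33e-2 and 3.00e-13 mol/L, as in the worked example
```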
Example $6$:
Consider titration of 100.0 mL of 0.100 mol/L HCl solution with 0.100 mol/L standard NaOH solution.
Region 1: Before addition of any titrant. The 100.0 mL solution contains a strong HCl acid and the total volume is 100 mL (0.100 L).
[Na+] = 0.0 mol/L
$[ \mathrm { Cl } ^-] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol }/ \mathrm { L } ) } { ( 0.100 \mathrm { L } ) } = 0.100 \mathrm { mol } / \mathrm { L } \nonumber$
$\left[ \mathrm { H } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = \mathbf { C } _ { \mathrm { A } } = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.100 \mathrm { L } ) } = 0.100 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { OH } ^-] = \frac { 1.00 \times 10 ^ { - 14 } } { ( 0.100 ) } = 1.00 \times 10 ^ { - 13 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 2: After addition of, say 50.00 mL of NaOH. The solution still contains a strong acid and the total volume is 150.0 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.0500 \mathrm { L } ) ( 0.100 \mathrm { mol } /\mathrm { L } ) } { ( 0.1500 \mathrm { L } ) } = 0.0333 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl }^- ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.1500 \mathrm { L } ) } = 0.0667 \mathrm { mol } / \mathrm { L } \nonumber$
${ \left[ \mathrm { H } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = } { \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.0500 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.1500 \mathrm { L } ) } = 0.0333 \mathrm { mol } / \mathrm { L } } \nonumber$
$[ \mathrm { OH ^-} ] = \frac { 1.00 \times 10 ^ { - 14 } } { ( 0.0333 ) } = 3.00 \times 10 ^ { - 13 } \mathrm { mol } / \mathrm { L }\nonumber$
Region 2 Continued: After addition of, say, 99.00 mL of NaOH. The solution contains very little of the strong acid and the total volume is 199.0 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.0990 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) } { ( 0.1990 \mathrm { L } ) } = 0.0498 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl^- } ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.1990 \mathrm { L } ) } = 0.0502 \mathrm { mol } / \mathrm { L } \nonumber$

${ \left[ \mathrm { H } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = } { \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.0990 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.1990 \mathrm { L } ) } = 5.03 \times 10 ^ { - 4 } \mathrm { mol } / \mathrm { L } } \nonumber$
$[ \mathrm { OH^- } ] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 5.03 \times 10 ^ { - 4 } \right) } = 1.99 \times 10 ^ { - 11 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 2 Continued: After addition of, say, 99.90 mL of NaOH. The solution contains further reduced strong acid and the total volume is 199.90 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.0999 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.1999 \mathrm { L } ) } = 0.04998 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl ^-} ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.1999 \mathrm { L } ) } = 0.05003 \mathrm { mol } / \mathrm { L } \nonumber$
${ \left[ \mathrm { H } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = } { \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.0999 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.1999 \mathrm { L } ) } = 5.00 \times 10 ^ { - 5 } \mathrm { mol } / \mathrm { L } } \nonumber$
$[ \mathrm { OH^- } ] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 5.00 \times 10 ^ { - 5 } \right) } = 2.00 \times 10 ^ { - 10 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 2 Continued: After addition of, say, 99.99 mL of NaOH. The solution contains even less strong acid and the total volume is 199.99 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.09999 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.19999 \mathrm { L } ) } = 0.049998 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl^- } ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) } { ( 0.19999 \mathrm { L } ) } = 0.050003 \mathrm { mol } / \mathrm { L } \nonumber$

${ \left[ \mathrm { H } ^ { + } \right] = \mathbf { C } _ { \mathrm { a } } = } { \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) - ( 0.09999 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.19999 \mathrm { L } ) } = 5.00 \times 10 ^ { - 6 } \mathrm { mol/ } \mathrm { L } } \nonumber$
$[ \mathrm { OH^- } ] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 5.00 \times 10 ^ { - 6 } \right) } = 2.00 \times 10 ^ { - 9 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 3: After addition of 100.00 mL of NaOH (the equivalence point). The solution contains neither excess strong acid nor excess strong base and the total volume is 200.00 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \left[ \mathrm { Cl } ^ { - } \right] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.200 \mathrm { L } ) } = 0.0500 \mathrm { mol } / \mathrm { L } \nonumber$

$\left[ \mathrm { H } ^ { + } \right] = \left[ \mathrm { OH } ^ { - } \right] = 1.00 \times 10 ^ { - 7 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 4: After addition of, say 100.01 mL of NaOH. The solution contains a little strong base and the total volume is 200.01 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.10001 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.20001 \mathrm { L } ) } = 0.0500025 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl ^-} ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.20001 \mathrm { L } ) } = 0.0499975 \mathrm { mol/L } \nonumber$
${ [ \mathrm { OH^- } ] = \mathbf { C } _ { \mathrm { b } } = } { \frac { ( 0.10001 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) - ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.20001 \mathrm { L } ) } = 4.99975 \times 10 ^ { - 6 } \mathrm { mol } / \mathrm { L } } \nonumber$
$\left[ \mathrm { H } ^ {+} \right] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 4.99975 \times 10 ^ { - 6 } \right) } = 2.00 \times 10 ^ { - 9 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 4 Continued: After addition of 100.10 mL of NaOH. The solution contains a little strong base and the total volume is 200.10 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.1001 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) } { ( 0.2001 \mathrm { L } ) } = 0.050025 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl^- } ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/L } ) } { ( 0.2001 \mathrm { L } ) } = 0.049975 \mathrm { mol } / \mathrm { L } \nonumber$
${ [ \mathrm { OH^- } ] = \mathrm { C } _ { \mathrm { b } } = } { \frac { ( 0.1001 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.2001 \mathrm { L } ) } = 4.9975 \times 10 ^ { - 5 } \mathrm { mol } / \mathrm { L } } \nonumber$
$\left[ \mathrm { H } ^ { + } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 4.9975 \times 10 ^ { - 5 } \right) } = 2.001 \times 10 ^ { - 10 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 4 Continued: After addition of 110.00 mL of NaOH. The solution contains more strong base and the total volume is 210.00 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.110 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) } { ( 0.210 \mathrm { L } ) } = 0.05238 \mathrm { mol/L } \nonumber$
$[ \mathrm { Cl^- } ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol/ } \mathrm { L } ) } { ( 0.210 \mathrm { L } ) } = 0.04762 \mathrm { mol } / \mathrm { L } \nonumber$
${ [ \mathrm { OH^- } ] = \mathbf { C } _ { \mathrm { b } } = } { \frac { ( 0.110 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.210 \mathrm { L } ) } = 4.762 \times 10 ^ { - 3 } \mathrm { mol } / \mathrm { L } } \nonumber$
$\left[ \mathrm { H } ^ { + } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { \left( 4.762 \times 10 ^ { - 3 } \right) } = 2.1 \times 10 ^ { - 12 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 4 Continued: After addition of 150.00 mL of NaOH. The solution contains more strong base and the total volume is 250.00 mL.
$\left[ \mathrm { Na } ^ { + } \right] = \frac { ( 0.150 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.250 \mathrm { L } ) } = 0.06 \mathrm { mol } / \mathrm { L } \nonumber$
$[ \mathrm { Cl^- } ] = \frac { ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.250 \mathrm { L } ) } = 0.04 \mathrm { mol } / \mathrm { L } \nonumber$
${ [ \mathrm { OH^- } ] = \mathrm { C } _ { \mathrm { b } } = } { \frac { ( 0.150 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) - ( 0.100 \mathrm { L } ) ( 0.100 \mathrm { mol } / \mathrm { L } ) } { ( 0.250 \mathrm { L } ) } = 0.02 \mathrm { mol } / \mathrm { L } } \nonumber$
$\left[ \mathrm { H } ^ { + } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { ( 0.02 ) } = 5.0 \times 10 ^ { - 13 } \mathrm { mol } / \mathrm { L } \nonumber$
The results of a series of such calculations can now be plotted as a graph of pH versus volume of NaOH to generate what is referred to as a titration curve. In such a plot, it becomes evident that the concentrations of the reactants, but not those of the products or the spectator ions, go through a large change exactly at the equivalence point. It is this change that allows one to pinpoint the equivalence point. The equivalence point can therefore be determined, in this case, by monitoring either the concentration of OH- or H+ ions.
A typical titration curve for a strong acid versus a strong base is given in the figure below. (We’ll take hydrochloric acid and sodium hydroxide as typical of a strong acid and a strong base; i.e., NaOH (aq) + HCl (aq) → NaCl (aq) + H2O (l) ).

It is clear from this figure that the pH rises only a very small amount until quite near the equivalence point. Then there is a really steep rise.
In order for you to appreciate the generation of a titration curve from such calculations as above, you are encouraged to do the exercise problem below.

Exercise: Using the above calculations, plot the corresponding titration curve for the titration of 100.00 mL of 0.10 mol/L HCl solution with 0.10 mol/L standard NaOH solution.
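As a numerical aid to this exercise, the Python sketch below (our own illustration, not from the text; it simply applies the region formulas of this section with an assumed Kw = 1.0 x 10-14) tabulates pH against the volume of NaOH added, showing the steep rise near 100 mL.

```python
import math

def pH_strong_strong(C_A, V_A_mL, C_t, V_t_mL, Kw=1.0e-14):
    """pH for a strong acid/strong base titration. Volumes may be in mL
    because only the ratio (mmol / mL = mol/L) enters the result."""
    diff = (C_A * V_A_mL - C_t * V_t_mL) / (V_A_mL + V_t_mL)
    if diff > 0:
        return -math.log10(diff)           # excess strong acid
    if diff < 0:
        return -math.log10(Kw / -diff)     # excess strong base
    return 7.0                             # equivalence point

# 100.00 mL of 0.10 mol/L HCl titrated with 0.10 mol/L NaOH:
for v in (0, 50, 90, 99, 99.9, 100, 100.1, 101, 110, 150):
    print(f"{v:7.1f} mL   pH = {pH_strong_strong(0.10, 100.0, 0.10, v):5.2f}")
```

Plotting the tabulated pairs reproduces the titration curve asked for in the exercise.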
Let us now consider the case of titration of a weak acid with a strong base and see how it compares with that of a strong acid with a strong base dealt with above.
Titration of a Weak acid with a Strong Base
At first glance, it might seem that weak acid/strong base titrations are just like the strong acid/strong base titration encountered in the preceding section. However, there is a significant difference that makes this case more complicated. Whereas the product of the titration reaction and the spectator ions in a strong acid/strong base titration (such as H2O, Na+, and Cl-) do not affect the pH of the solution in the titration flask and can thus be neglected, the same cannot be said of the weak acid/strong base titration. In fact, when a weak acid is titrated with a strong base, one of the products is a weak base which does affect the pH in all the regions discussed above except Region 1, and this must be taken into consideration.
The titration reaction involving a weak acid (HA) with a strong base such as NaOH is often expressed as:
$\mathrm { HA } + \mathrm { NaOH } \rightarrow \mathrm { NaA } + \mathrm { H } _ { 2 } \mathrm { O } \nonumber$
which can be more accurately represented as:
$\mathrm { HA } + \mathrm { OH } ^ { - } \rightarrow \mathrm { A } ^ { - } + \mathrm { H } _ { 2 } \mathrm { O }$
with a corresponding equilibrium constant expressed as:
$\mathrm { K } _ { \mathrm { eq } } = \frac { \left[ \mathrm { A } ^ { - } \right] } { [ \mathrm { HA } ] \left[ \mathrm { OH } ^ { - } \right] } = \frac { 1 } { \mathrm { K } _ { \mathrm { b } } } = \frac { \mathrm { K } _ { a } } { \mathrm { K } _ { \mathrm { w } } }$
Note that Keq for the titration of a weak acid with a Ka of about 1.0 x 10-5 will be only 1.0 x 10^9 (i.e., $\ k_{eq}=$$\dfrac{1}{K_b}$=$\dfrac{K_a}{K_w}$=$\dfrac{K_a}{10^{-14}}$), a value that is not as large as that for a strong acid. However, this is still considered large enough for the reaction to proceed to completion. In fact, as the Ka of the weak acid decreases, so does the Keq of the titration reaction. Therefore, if the acid is too weak, it cannot be easily titrated.
A typical titration curve for a weak acid versus a strong base is given in the figure below. (We’ll take ethanoic acid and sodium hydroxide as typical of a weak acid and a strong base; i.e., CH3COOH (aq) + NaOH (aq) → CH3COONa (aq) + H2O (l)).

The start of the graph shows a relatively rapid rise in pH, but this slows down as a buffer solution containing ethanoic acid and sodium ethanoate is produced. Beyond the equivalence point (when the sodium hydroxide is in excess) the curve is just the same as the corresponding end of the HCl-NaOH graph shown previously.
Let us now consider the four regions of the titration in the same manner as for the strong acid/strong base titration. Here again, we shall assume that the weak acid of concentration CA is the unknown and that the titrant of concentration Ct, is a strong base.
Region 1: Prior to the addition of any base, the solution in the titration flask contains only the weak acid, HA, so that Ca = CA. In order to calculate [H+], a weak acid equation needs to be used. Thus
$\left[ \mathrm { H } ^ { + } \right] = \left( \mathrm { K } _ { \mathrm { a } } \mathrm { C } _ { \mathrm { A } } \right) ^ { \frac { 1 } { 2 } } , \mathrm { i.e. } , \mathrm { pH } = - \log \sqrt { \mathrm { K } _ { \mathrm { a } } \mathrm { C } _ { \mathrm { A } } }$
Region 2: Upon addition of some titrant, the solution contains some unreacted acid, HA (since the acid is not 100% dissociated) and some conjugate base, A-, due to the titration reaction.
Therefore, moles of acid remaining = original moles of acid-moles of strong base added.
$\bf{C}_a={(moles\ of\ acid\ remaining)\over(Total\ volume)}={\bf{V}_A\bf{C}_A-\bf{V}_t\bf{C}_t\over(\bf{V}_A+\bf{V}_t)}$
Moles of weak base formed = moles of strong base added.
Therefore $C _ { B } = \frac { V _ { t } C _ { t } } { \left( V _ { A } + V _ { t } \right) }$, where CB is the concentration of the weak base formed.

To calculate [H+], we need to use the simplified equation for a mixture of a weak acid and its conjugate base, i.e.,

$\left[ \mathrm { H } ^ { + } \right] = \frac { \mathrm { K } _ { a } \mathrm { C } _ { \mathrm { a } } } { \mathrm { C } _ { \mathrm { B } } } , \text { i.e., } \mathrm { pH } = \mathrm { pK } _ { \mathrm { a } } + \log \frac { \mathrm { C } _ { \mathrm { B } } } { \mathrm { C } _ { \mathrm { a } } }$

Region 3: At the equivalence point, all the weak acid has been completely neutralized by the strong base and only the weak base remains. There is no excess strong base present. Since the solution contains a weak base, the pH of the solution in the flask cannot be equal to 7.00 and must be greater than 7.00.

Moles of weak base present = moles of strong base added.

$C _ { B } = \frac { V _ { t } C _ { t } } { \left( V _ { A } + V _ { t } \right) } \nonumber$

[OH-] can be calculated using the simplified weak base equation, i.e.,

$\left[ \mathrm { OH } ^ { - } \right] = \left( \mathrm { K } _ { \mathrm { b } } \mathrm { C } _ { \mathrm { B } } \right) ^ { \frac { 1 } { 2 } } \nonumber$

Region 4: After the equivalence point, both the weak base, A-, and excess strong base, OH-, will be present.
Moles of weak base = moles of original weak acid. Concentration of weak base,

$\mathbf { C } _ { w } = \frac { V _ { A } C _ { A } } { \left( V _ { A } + V _ { t } \right) } \nonumber$

Moles of strong base present = moles of titrant added – moles of original weak acid. Thus, concentration of strong base,

$\mathbf { C } _ { \mathrm { s } } = \frac { \left( \mathrm { V } _ { \mathrm { t } } \mathrm { C } _ { \mathrm { t } } - \mathrm { V } _ { \mathrm { A } } \mathrm { C } _ { \mathrm { A } } \right) } { \left( \mathrm { V } _ { \mathrm { A } } + \mathrm { V } _ { \mathrm { t } } \right) } \nonumber$

The [OH-] can be calculated by using an equation for a mixture of a weak base and a strong base, i.e., $\left[ \mathrm { OH } ^ { - } \right] = \mathrm { C } _ { \mathrm { s } }$, since the excess strong base suppresses the hydrolysis of the weak base.

Example $7$:
Consider the titration of a 50.0 mL solution of the weak acid butanoic acid (pKa = 4.98, i.e., Ka = 1.05 x 10-5) of concentration 0.10 mol/L with a 0.10 mol/L standard NaOH solution. For convenience, we shall calculate only the pH of the solution mixture in the titration flask.
Region 1: Before addition of NaOH, CA = 0.10 mol/L.
${ \left[ \mathrm { H } ^ { + } \right] = \left( \mathrm { K } _ { \mathrm { a } } \mathrm { C } _ { \mathrm { A } } \right) ^ { \frac { 1 } { 2 } } = \left\{ \left( 1.05 \times 10 ^ { - 5 } \right) ( 0.10 ) \right\} ^ { \frac { 1 } { 2 } } = 1.02 \times 10 ^ { - 3 } \mathrm { mol } / \mathrm { L } } \ { \left[ \mathrm { OH } ^ { - } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { 1.02 \times 10 ^ { - 3 } } = 9.77 \times 10 ^ { - 12 } \mathrm { mol } / \mathrm { L } } \nonumber$
Region 2: After addition of, say, 20.0 mL of NaOH. Total volume in the titra-tion flask becomes 70.0 mL and the solution is a mixture of a weak acid and its conjugate base.
$\mathrm { C } _ { a } = \frac { ( \text { moles of acid remaining } ) } { \text { Total volume } } = \frac { ( 0.050 \mathrm { L } ) ( 0.10 \mathrm { mol } / \mathrm { L } ) - ( 0.020 \mathrm { L } ) ( 0.10 \mathrm { mol } / \mathrm { L } ) } { ( 0.050 \mathrm { L } + 0.020 \mathrm { L } ) } = 0.0429 \mathrm { mol } / \mathrm { L } \nonumber$

$\mathrm { C } _ { B } = \frac { ( 0.10 \mathrm { mol } / \mathrm { L } ) ( 0.020 \mathrm { L } ) } { 0.070 \mathrm { L } } = 0.0286 \mathrm { mol/L } \nonumber$
Using the simplified equation for a mixture of a weak acid and its conjugate baseto solve for [H+] and therefore pH,
\begin{aligned} \left[ H ^ { + } \right] = \frac { K _ { a } C _ { a } } { C _ { B } } & = \frac { \left( 1.05 \times 10 ^ { - 5 } \right) ( 0.0429 ) } { 0.0286 } = 1.57 \times 10 ^ { - 5 } \mathrm { mol } / \mathrm { L } \text { and } p H = - \log \left( 1.57 \times 10 ^ { - 5 } \right) = 4.80 \end{aligned} \nonumber
$\left[ \mathrm { OH } ^ { - } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { 1.57 \times 10 ^ { - 5 } } = 6.37 \times 10 ^ { - 10 } \mathrm { mol } / \mathrm { L } \nonumber$
Region 3: After addition of, say, 50.0 mL of NaOH. The total volume in the titration flask becomes 100.0 mL and the solution contains only a weak base.
${ C _ { B } = \frac { V _ { t } C _ { t } } { \left( V _ { A } + V _ { t } \right) } = \frac { ( 0.050 L ) ( 0.10 m o l / L ) } { 0.10 L } = 0.050 m o l / L } \ { K _ { b } = \frac { 1.00 \times 10 ^ { - 14 } } { K_a } = 9.55 \times 10 ^ { - 10 } } \nonumber$
With the simplified equation for a weak base
${ \left[ \mathrm { OH } ^ { - } \right] = \left( \mathrm { K } _ { \mathrm { b } } \mathrm { C } _ { \mathrm { B } } \right) ^ { \frac { 1 } { 2 } } = \left\{ \left( 9.55 \times 10 ^ { - 10 } \right) ( 0.050 ) \right\} ^ { \frac { 1 } { 2 } } = 6.91 \times 10 ^ { - 6 } \mathrm { mol } / \mathrm { L } } \ { \left[ \mathrm { H } ^ { + } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { 6.91 \times 10 ^ { - 6 } } = 1.45 \times 10 ^ { - 9 } \mathrm { mol } / \mathrm { L } , \text { and } \mathrm { pH } = - \log \left( 1.45 \times 10 ^ { - 9 } \right) = } { 8.84 } \nonumber$
Region 4: After addition of, say, 60.0 mL of NaOH. The total volume in the titration flask becomes 110.0 mL and the solution contains both a weak base and a strong base.
Concentration of weak base,
${C_w}=\frac { V _ { A } C _ { A } } { \left( V _ { A } + V _ { t } \right) } = \frac { ( 0.050 L ) ( 0.10 m o l / L ) } { 0.110 L } = 0.0455 \text { mol } / L \nonumber$
Concentration of strong base,
$C_s=\frac { \left( \mathrm { V } _ { \mathrm { t } } \mathrm { C } _ { \mathrm { t } } - \mathrm { V } _ { \mathrm { A } } \mathrm { C } _ { \mathrm { A } } )\right. } { \left( \mathrm { V } _ { \mathrm { A } } + \mathrm { V } _ { \mathrm { t } } \right) } = \frac { \{ ( 0.060 \mathrm { L } ) ( 0.10 \mathrm { mol } / \mathrm { L } ) - ( 0.050 \mathrm { L } ) ( 0.10 \mathrm { mol } / \mathrm { L } ) \} } { 0.110 \mathrm { L } }=0.00909\ mol/L \nonumber$
Using the simplified equation for a mixture of a strong and weak base,
${ [ \mathrm { OH } ] = \mathrm { C } _ { \mathrm { s } } = 0.00909 \mathrm { mol } / \mathrm { L } } \ { \left[ \mathrm { H } ^ { + } \right] = \frac { 1.00 \times 10 ^ { - 14 } } { 0.00909 } = 1.10 \times 10 ^{-12} \mathrm { mol } / \mathrm { L } , \text { and } \mathrm { pH } = - \log \left( 1.10 \times 10 ^{-12} \right) = 11.96 } \nonumber$
Note that the complete titration curve can be plotted when a series of additional calculations similar to those above are carried out.
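The four weak acid/strong base regions can likewise be collected into one routine. The Python sketch below is our own illustration (function name ours; it uses the simplified equations of this section and an assumed Kw = 1.0 x 10-14) and reproduces the pH values of the butanoic acid example.

```python
import math

KW = 1.0e-14  # assumed ion product of water

def weak_strong_pH(Ka, C_A, V_A, C_t, V_t):
    """pH for a weak acid (Ka) titrated with a strong base, by region."""
    total = V_A + V_t
    mol_acid = C_A * V_A - C_t * V_t      # weak acid remaining (< 0 past eq.)
    if V_t == 0:                          # Region 1: weak acid alone
        return -math.log10(math.sqrt(Ka * C_A))
    if mol_acid > 1e-12:                  # Region 2: HA / A- buffer
        Ca = mol_acid / total
        Cb = (C_t * V_t) / total
        return -math.log10(Ka * Ca / Cb)
    if mol_acid < -1e-12:                 # Region 4: excess strong base
        Cs = -mol_acid / total
        return 14.0 + math.log10(Cs)
    Cb = (C_t * V_t) / total              # Region 3: weak base A- only
    Kb = KW / Ka
    return 14.0 + math.log10(math.sqrt(Kb * Cb))

Ka = 1.05e-5  # butanoic acid, as in the example above
for mL in (0, 20, 50, 60):
    print(f"{mL:4.1f} mL  pH = {weak_strong_pH(Ka, 0.10, 0.050, 0.10, mL/1000):.2f}")
```

The loop prints pH 2.99, 4.80, 8.84, and 11.96, matching the four worked regions.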
Exercise $8$: Titration of weak acid with strong base.
Using the information provided below, plot the titration curve for the titration of 100.00 mL of 0.10 mol/L CH3COOH solution with 0.10 mol/L standard NaOH solution. Hint: Consider the four titration stages given above.
Answer
Illustration
Calculating pH at Initial Stage:
Let n moles of HA (i.e., a monoprotic weak acid such as the CH3COOH) be available in a titration flask. The pH is dependent on extent of dissociation of the weak acid. If Ka of the weak acid is very small (i.e., Ka < 1.0 x 10-4), then it is possible to calculate the pH from the dissociation of weak acid using the relation:
$\mathrm { pH } = - \log \sqrt { \mathrm { K } _ { \mathrm { a } } [ \mathrm { Acid } ] }$
Before equivalence point:
Again, let n moles of HA be available in the titration flask. To this solution, let m moles of NaOH be added. In this titration, “Before Equivalence Point” means n > m.
HA + NaOH → NaA + H2O
Start:   n       m       0       0
Final:  n-m      0       m       m      (since n > m)
Question: What is left in the solution in the titration flask?
The solution left in the flask contains excess weak acid, (n-m) moles, and the salt of the weak acid, an amount equivalent to m moles. The solution is a buffer solution. Therefore, the pH of the solution can be calculated using the Henderson-Hasselbalch equation:
$\mathrm{pH}=\mathrm{pK_a}+\log\frac{[\mathrm{salt}]}{[\mathrm{Acid}]}=\mathrm { pK } _ { \mathrm { a } } + \log \frac { \frac { \mathrm { m } } { \left( \mathrm { V } _ { \text { Acid } } + \mathrm { V } _ { \text { base added } } \right) } } { \frac { ( \mathrm { n } - \mathrm { m } ) } { \left( \mathrm { V } _ { \text { Acid } } + \mathrm { V } _ { \text { base added } } \right) } } = \mathrm { pK } _ { \mathrm { a } } + \log \frac { \mathrm { m } } { ( \mathrm { n } - \mathrm { m } ) }$

At equivalence point: The acid is completely neutralized by the base added. Hence the pH is determined by the salt solution. The resultant salt is that of a weak acid and a strong base, so the anion part of the salt will hydrolyze as follows:

$\begin{array} { l } { \mathrm { NaA } \rightarrow \mathrm { Na } ^ { + } ( \mathrm { aq } ) + \mathrm { A } ^ { - } ( \mathrm { aq } ) } \ { \mathrm { A } ^ { - } + \mathrm { H } _ { 2 } \mathrm { O } \Leftrightarrow \mathrm { HA } + \mathrm { OH } ^ { - } } \end{array}$

$\mathrm{pH}=\frac{1}{2}\left\{\mathrm {pK_w}+\mathrm{pK_a}-\mathrm {pC_{salt \ produced}}\right\}= \frac { 1 } { 2 } \left\{ \mathrm { pK } _ { \mathrm { w } } + \mathrm { pK } _ { \mathrm { a } } + \log \left( \frac { \mathrm { m } } { \mathrm { V } _ { \text { acid } } + \mathrm { V } _ { \text { base } } } \right) \right\}$

After equivalence point: The amount of strong base added is more than the amount required to neutralize the acid, so an excess of strong base remains in solution. The pH can be calculated from the excess strong base left in the solution using the following equation:

$\mathrm { pH } = \mathrm { pK } _ { \mathrm { w } } + \log\left[\frac{(C_{\text{Base}}V_{\text{base added}})-(C_{\text{Acid}}V_{\text{acid}})}{(V_{\text{acid}}+V_{\text{base added}})}\right]$

Exercise $9$: Titration of a weak acid with a weak base. Plot a titration curve for titrating 75 mL of 0.12 mol/L CH3COOH with 0.09 mol/L NH3. Ka for CH3COOH = 1.8 x 10-5 and Kb for NH3 = 1.8 x 10-5.
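The mole-based buffer formula above is easy to check numerically. In the Python sketch below (our own illustration; the pKa of 4.74 for CH3COOH is an assumed textbook value), n is the moles of weak acid originally present and m the moles of strong base added.

```python
import math

def buffer_pH(pKa, n, m):
    """Henderson-Hasselbalch form used above, valid before the
    equivalence point (n > m): pH = pKa + log10(m / (n - m))."""
    return pKa + math.log10(m / (n - m))

pKa = 4.74                  # assumed value for CH3COOH
n = 0.100 * 0.10            # moles in 100.00 mL of 0.10 mol/L CH3COOH
print(round(buffer_pH(pKa, n, n / 2), 2))  # half-neutralized: pH equals pKa
```

At half-neutralization m = n - m, the log term vanishes, and the pH equals the pKa; this is the flattest point of the buffer region.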
Hint: Consider the four titration stages discussed earlier.

Answer

Polyprotic Acids

In our previous discussion of acid-base reactions, we dealt with acids (e.g., HCl, HNO3, and HCN) that contain only one ionizable hydrogen atom per molecule. Acids that have only one ionizable hydrogen atom per molecule are known as monoprotic acids. The corresponding reactions of the monoprotic acids given above with a base like water are as follows:

HCl + H2O → H3O+ + Cl-

HNO3 + H2O → H3O+ + NO3-

HCN + H2O ⇌ H3O+ + CN-

In general, however, acids can be classified by the number of protons per molecule that can be given up in a reaction. For acids that can transfer more than one proton to a base, the term polyprotic acid is used. Diprotic acids contain two ionizable hydrogen atoms per molecule, and their ionization occurs in two stages. Examples of diprotic acids include H2SO4, H2S, and H2CO3. An illustration of the two-stage ionization of H2S is as follows:

H2S + H2O ⇌ H3O+ + HS- (primary ionization)

HS- + H2O ⇌ H3O+ + S2- (secondary ionization)

Each of the above steps is characterized by a different acid ionization constant. The primary ionization step has an acid ionization constant $\mathrm { K } _ { 1 } = \frac { \left[ \mathrm { HS } ^ { - } \right] \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] } { \left[ \mathrm { H } _ { 2 } \mathrm { S } \right] }$, whereas the secondary has an acid ionization constant $\mathrm { K } _ { 2 } = \frac { \left[ \mathrm { S } ^ { 2 - } \right] \left[ \mathrm { H } _ { 3 } \mathrm { O } ^ { + } \right] } { \left[ \mathrm { HS } ^ { - } \right] }$. If K1 ≈ K2, the two steps occur simultaneously and both must be considered in solving any problem involving the pH of solutions of the acid. If, however, K1 ≫ K2 (by at least a factor of 10^4), then only the one reaction involving the species present need be considered.
This allows one to apply the simple monoprotic equations that were dealt with previously in unit 2. Of the two steps above, the primary ionization always takes place to a greater extent than the secondary ionization.

The most common triprotic acid is phosphoric acid, sometimes called ortho-phosphoric acid (H3PO4), which can ionize in solution in three steps as follows:

H3PO4 + H2O ⇌ H3O+ + H2PO4- (primary ionization)

H2PO4- + H2O ⇌ H3O+ + HPO42- (secondary ionization)

HPO42- + H2O ⇌ H3O+ + PO43- (tertiary ionization)

The corresponding experimentally determined acid ionization constants for phosphoric acid (H3PO4) are K1 = 7.5 x 10-3, K2 = 6.2 x 10-8, and K3 = 1.0 x 10-12. Thus, a solution of phosphoric acid will usually comprise a mixture of three different acids (H3PO4, H2PO4-, and HPO42-) and their corresponding conjugate bases. It is important to note that successive ionization constants for a given acid generally have widely different values, as can be seen in the case of phosphoric acid (i.e., K1 ≫ K2 ≫ K3). Thus, as mentioned in the case of the diprotic acid above, no more than two of the successive species associated with the ionization of a particular polyprotic acid are present in significant concentrations at any particular pH. Hence, the equilibria can be treated using only one, or at most two, of the acid ionization constants.

Titration of a Polyprotic Weak Acid with a Strong Base

In accordance with the stepwise dissociation of di- and polyprotic weak acids, their neutralization reactions are also stepwise. For instance, in the titration of orthophosphoric acid (H3PO4) with a strong base such as NaOH, the following stepwise reactions occur:

H3PO4 + NaOH → NaH2PO4 + H2O

NaH2PO4 + NaOH → Na2HPO4 + H2O

Na2HPO4 + NaOH → Na3PO4 + H2O

Accordingly, the H3PO4–NaOH titration curve has not one but three equivalence points.
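The separability criterion stated above (successive constants differing by at least a factor of 10^4) can be checked numerically. A minimal Python sketch, using commonly tabulated ionization constants for phosphoric acid (treat the exact values as assumptions; tabulated values vary slightly between sources):

```python
# Rule of thumb: if successive ionization constants differ by a factor of at
# least 10^4, each ionization step can be treated independently with the
# simple monoprotic equations.
K1, K2, K3 = 7.5e-3, 6.2e-8, 1.0e-12  # commonly tabulated values for H3PO4

for label, ratio in (("K1/K2", K1 / K2), ("K2/K3", K2 / K3)):
    print(f"{label} = {ratio:.1e} -> steps separable: {ratio >= 1e4}")
```

Both ratios exceed 10^4, which is why each equivalence point of phosphoric acid can be treated with the monoprotic formulas one step at a time.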
The first equivalence point is reached after one mole of NaOH has been added per mole of H3PO4; the second after addition of two moles of NaOH; and the third after addition of three moles of NaOH.

Example $8$: If 10.00 mL of 0.10 mol L-1 H3PO4 solution is titrated, the first equivalence point is reached after addition of 10.00 mL, the second after addition of 20.00 mL, and the third after addition of 30.00 mL of 0.10 mol L-1 NaOH solution.

Titration of a Diprotic Weak Acid (H2A) with NaOH

For a diprotic weak acid represented by H2A:

1. The pH at the beginning of the titration is calculated from the ionization (dissociation) of the first proton, i.e., H2A ⇌ H+ + HA-. If Ka1, the acid dissociation constant $\left(= \frac{[\mathrm{H^+}][\mathrm{HA^-}]}{[\mathrm{H_2A}]}\right)$, is not too large and the amount of dissociated H2A can be ignored compared to the analytical concentration of the acid, then $[\mathrm{H^+}] = \sqrt{\mathrm{K_{a1}}[\mathrm{H_2A}]}$. Otherwise the quadratic formula must be used to solve for pH (see section on .... of unit 2).

2. During the titration up to the first equivalence point, an HA-/H2A buffer region (a region where the solution resists any change in pH upon addition of base) is established, such that $\mathrm{pH} = \mathrm{pK_{a1}} + \log\left(\frac{[\mathrm{HA^-}]}{[\mathrm{H_2A}]}\right)$

3. At the first equivalence point, $\mathrm{pH} = \frac{\mathrm{pK_{a1}} + \mathrm{pK_{a2}}}{2}$

4. Beyond the first equivalence point, an A2-/HA- buffer exists, so $\mathrm{pH} = \mathrm{pK_{a2}} + \log\left(\frac{[\mathrm{A^{2-}}]}{[\mathrm{HA^-}]}\right)$

5. At the second equivalence point, the pH is determined from the hydrolysis of the A2- salt (i.e., A2- + H2O ⇌ HA- + OH-), such that $[\mathrm{OH^-}] = \sqrt{\frac{\mathrm{K_w}}{\mathrm{K_{a2}}}[\mathrm{A^{2-}}]}$

6. Beyond the second equivalence point
The pH will be dependent on the concentration of excess strong base added (i.e., concentration of the titrant).
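The six stages above can be combined into a single pH function. A minimal Python sketch (the function name, argument conventions, and example constants are illustrative assumptions; activity effects are ignored):

```python
import math

def diprotic_titration_ph(Vb, Va, Ca, Cb, Ka1, Ka2, Kw=1e-14):
    """pH while titrating Va mL of Ca mol/L diprotic acid H2A with
    Cb mol/L NaOH, using the six-stage approximations in the text."""
    n_acid = Ca * Va            # mmol of H2A initially present
    n_base = Cb * Vb            # mmol of OH- added
    Vt = Va + Vb                # total volume, mL
    eps = 1e-9                  # tolerance for "exactly at" an equivalence point
    if n_base <= eps:                          # 1. before any base is added
        H = math.sqrt(Ka1 * Ca)
    elif n_base < n_acid - eps:                # 2. H2A/HA- buffer
        H = Ka1 * (n_acid - n_base) / n_base
    elif abs(n_base - n_acid) <= eps:          # 3. first equivalence point
        H = math.sqrt(Ka1 * Ka2)
    elif n_base < 2 * n_acid - eps:            # 4. HA-/A2- buffer
        H = Ka2 * (2 * n_acid - n_base) / (n_base - n_acid)
    elif abs(n_base - 2 * n_acid) <= eps:      # 5. second equivalence point
        OH = math.sqrt((Kw / Ka2) * (n_acid / Vt))
        H = Kw / OH
    else:                                      # 6. excess strong base
        H = Kw * Vt / (n_base - 2 * n_acid)
    return -math.log10(H)

# Halfway to the first equivalence point the pH equals pKa1 (here 4.0):
print(round(diprotic_titration_ph(Vb=25, Va=50, Ca=0.10, Cb=0.10,
                                  Ka1=1e-4, Ka2=1e-9), 2))   # -> 4.0
```

Evaluating this function over a grid of titrant volumes gives the data needed to sketch a diprotic titration curve of the kind requested in the exercises below.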
Illustration:
Consider addition of sodium hydroxide solution to dilute ethanedioic acid (oxalic acid). Ethanedioic acid is a diprotic acid, which means that it can give away 2 protons (hydrogen ions) to a base. (Something which can only give away one proton, like HCl, is known as a monoprotic acid.)
The reaction with NaOH takes place in two stages because one of the hydrogens is easier to remove than the other. The two successive reactions can be represented as (writing ethanedioic acid as HOOC–COOH):

HOOC–COOH + NaOH → HOOC–COONa + H2O

HOOC–COONa + NaOH → NaOOC–COONa + H2O

By running NaOH solution into ethanedioic acid solution, the pH curve shows the end points for both of these reactions, as shown in the figure below.
The curve is for the reaction between NaOH and ethanedioic acid solutions of equal concentrations.
Exercise $10$
Plot a titration curve for the titration of 75 mL of 0.12 mol L-1 H2CO3 with 0.09 mol L-1 NaOH, given that Ka1 = 4.3 x 10-7 and Ka2 = 5.6 x 10-11.
Hint: You are expected to consider the six titration stages shown above to collect data in order to plot the titration curve.
Answer
Exercise $11$
In the titration of a triprotic acid, H3A, with NaOH, where Ka1/Ka2 ≥ 10^4 and Ka2/Ka3 ≥ 10^4,
(a) Identify the eight titration stages that need to be considered in order to plot the corresponding titration curve.
(b) Derive the expressions for pH in each of the eight stages.
Answer
Titration of Anions of a Weak Acid with a Strong Acid
For the titration of Na2A salt:
1. The pH at the beginning of the titration is calculated from the hydrolysis of A2- salt i.e.,
A2- + H2O ⇌ HA- + OH-
2. During the titration up to the first equivalence point
A2- + H+ ⇌ HA-
The solution mixture of HA- and A2- is a buffer solution and so,
$\mathrm { pH } = \mathrm { pK } _ { \mathrm { a } 2 } + \log \frac { \left[ \mathrm { A } ^ { 2 - } \right] } { \left[ \mathrm { HA } ^ { - } \right] }$
3. At the first equivalence point
A2- + H+ ⇌ HA-
We have HA- in solution and pH arises from the dissociation of HA-, i.e.,
HA- + H2O ⇌ H3O+ + A2- and
$\mathrm { pH } = \frac { \mathrm { pK } _ { \mathrm { a } 1 } + \mathrm { pK } _ { \mathrm { a } 2 } } { 2 }$
4. Beyond the first equivalence point:
HA- + H+ ⇌ H2A
The resultant solution is a mixture of H2A and HA-, hence a buffer solution, and so
$\mathrm { pH } = \mathrm { pK } _ { \mathrm { a } 1 } + \log \frac { \left[ \mathrm { HA } ^ { - } \right] } { \left[ \mathrm { H } _ { 2 } \mathrm { A } \right] }$
5. At second equivalence point:
HA- + H+ ⇌ H2A
Here, only H2A is left in solution and hence pH is dependent on the dissociation of H2A, and so
H2A + H2O ⇌ H3O+ + HA-
$\left[ \mathrm { H } ^ { + } \right] = \sqrt { \mathrm { K } _ { \mathrm { a } 1 } \left[ \mathrm { H } _ { 2 } \mathrm { A } \right] }$
6. Beyond the second equivalence point:
Excess strong acid is added and the pH is determined by the concentration of excess strong acid.
Exercise $12$
Plot the titration curve of 75 mL of 0.12 mol L-1 CH3COONa with 0.09 mol L-1 HCl. Ka for CH3COOH = 1.80 x 10-5. Hint: Consider the four possible titration stages.
Answer
Exercise $13$
Plot the titration curve of 75 mL of 0.12 mol L-1 Na2CO3 with 0.09 mol L-1 HCl. Ka1 = 4.3 x 10-7 and Ka2 = 5.6 x 10-11. Hint: You are expected to consider six possible titration stages to collect data in order to plot the titration curve.
Answer
Acid-Base Indicators
Example $9$:
In the titration of 30.0 mL of 0.10 M HF with 0.20 M KOH, the pH of the solution mixtures at 10.0, 15.0, and 20.0 mL additions of the strong alkali are 3.76, 8.14, and 12.30, respectively. If phenol red (whose pK value is 7.5, whose acid colour is yellow, and whose base colour is red) were to be used as an indicator for the titration reaction, what colour would the indicator be at the 10, 15, and 20 mL additions of KOH?
Solution
At 10 mL, the pH is 3.76. This is very acidic; the colour will be yellow.
At 15 mL, the pH is 8.14. This is just within the transition range of the indicator (about pK ± 1), so it will be orange, likely a reddish-orange colour.
At 20 mL, the pH is 12.30. This is very basic, and hence the colour will be red.
Since the indicator changes colour near the equivalence point (it does not have to match it exactly), this would be a reasonable choice of indicator.
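The reasoning in this example can be captured with the usual pK ± 1 transition-window rule of thumb. A minimal Python sketch (the ±1 window, the function name, and the colour labels are assumptions for illustration):

```python
def indicator_colour(pH, pK, acid_colour, base_colour):
    """Predict an indicator's colour: one form dominates outside roughly
    pK +/- 1; inside that window the two forms mix."""
    if pH < pK - 1:
        return acid_colour
    if pH > pK + 1:
        return base_colour
    return f"mixture of {acid_colour} and {base_colour}"

# Phenol red (pK ~ 7.5) at the three points in the example above:
for mL, pH in ((10.0, 3.76), (15.0, 8.14), (20.0, 12.30)):
    print(f"{mL:4.1f} mL, pH {pH}: {indicator_colour(pH, 7.5, 'yellow', 'red')}")
```

The mixture of the yellow and red forms at pH 8.14 corresponds to the orange colour described in the solution.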
Exercise $14$
Refer to the example above. The indicator methyl orange has a pK of about 3.5. It is red in its acidic form and yellow in its basic form. What colour will the indicator be at 10, 15, and 20 mL additions of KOH? Would this be a good indicator for this titration?
Answer
Specific Learning Objectives
• Define oxidation, reduction, oxidizing agent and reducing agent.
• Define oxidation state and oxidation number.
• Have a knowledge of the oxidation state rules and assign oxidation numbers to atoms in molecules and ions.
• Identify a redox reaction by assigning oxidation numbers to each element.
• Write the oxidation and reduction half reactions for a redox reaction.
• Distinguish between a redox reaction and a non-redox reaction.
• Write balanced Oxidation/Reduction Equations.
• Apply the Nernst equation in all directions to determine missing quantities.
• Describe and discuss Redox titrations.
• Carry out redox-type titrations and associated calculations.
Summary of learning activity #3
Chemical reactions in which there is a transfer of electrons from one substance to another are known as oxidation-reduction reactions or redox reactions. In this unit, we will consider this class of reactions, examine the oxidation-reduction process, and use the oxidation state and oxidation number concepts both to identify redox reactions and to keep track of electrons transferred in a chemical reaction. The introductory sections of the unit explain the fundamental principles of galvanic cells and the thermodynamics of electrochemical reactions. Some applications of the concept of redox equilibria in explaining oxidation-reduction titrations as a technique for volumetric chemical analysis are also discussed.
Key Concepts
• Nernst equation: relates the electrochemical potential to the concentration of reactants and products participating in a redox reaction.
• Oxidation: the process by which a compound loses or appears to lose electrons. It corresponds to an increase in the oxidation number.
• Oxidizing agent: species that causes oxidation. An oxidizing agent accepts electrons from the species it oxidizes and therefore, an oxidizing agent is always reduced.
• Oxidation number: these are used to keep track of electron transfer. Oxidation numbers are assigned to ionic as well as molecular compounds. Oxidation numbers are assigned per atom.
• Oxidation reaction:refers to the half-reaction that involves loss of electrons.
• Redox reactions: these are oxidation-reduction reactions that involve transfer of electrons. As the electron transfer occurs, substances undergo changes in oxidation number.
• Reduction: the process by which a compound gains or appears to gain electrons. It corresponds to a decrease in oxidation number.
• Reduction reaction: is a half-reaction that involves gain of electrons.
• Reducing agent: species that causes reduction. A reducing agent donates electrons to the species it reduces and therefore, a reducing agent is always oxidized.
Introduction to activity # 3
Oxidation-reduction (redox) reactions or processes are very much a part of the world that we live in. These processes or reactions range from the burning of fossil fuels to the action of the oxidizing power of sodium hypochlorite, the active ingredient in household bleach. It is very likely that all of us have at one time or another come across at least one process or feature that is the result of a redox process. It is very probable that we have seen many examples of corrosion around us, such as rust on iron, tarnish on silverware, and the greening on copper or brass surfaces. All these changes that we observe on an almost day-to-day basis, without thinking of their chemical origin or nature, are examples of processes resulting from redox reactions. Most metallic and nonmetallic elements which find important uses in the modern world are obtained from their ores by oxidation or reduction reactions. All of these examples are some of the consequences of oxidation-reduction reactions, which are basically reactions that involve electron transfer. Oxidation-reduction analysis has been used over the years as an alternative method of analyzing for materials that have multiple oxidation states. This unit, in part, will help us to link what is happening at the submicroscopic level to the macroscopic events in electrochemistry that we see.
List of other compulsory readings
A text that contains vital information on the Nernst Equation, the significance of the equation, and analytical applications in quantitative analysis of metal ions in solution (Reading #24)
A text on “Redox Equilibria in Natural Waters” (Reading #25)
A text with sub-sections containing sample problems dealing with chemical equilibrium, oxidation-reduction reactions (Reading #13)
Balancing Redox Equations by Using Half-reactions; Balancing Redox Equations (Reading #26)
Oxidation Numbers; Rules for Oxidation Numbers (Reading #27)
Reactions in Aqueous solutions (Reading #28)
Balance redox (Reading #29)
List of relevant resources
List of relevant useful links
http://www.chemguide.co.uk/inorganic...xmenu.html#top
http://www.chemguide.co.uk/physical/...tions.html#top
www.voyager.dvc.edu/~Iborowsk...RedoxIndex.htm
Detailed description of activity #3
Redox (Reduction – Oxidation) Reactions
Introduction: Chemical reactions involve the exchange of electrons between two or more atoms or molecules. The species involved exchange electrons, which leads to a change in the charge of the atoms of those species. This change in the charges of atoms (a change in the atom's oxidation number) by means of electron exchange is defined as a Reduction-Oxidation (Redox) reaction. Chemists usually use the terms oxidizing and reducing agents to describe the reactants in redox reactions. Because oxidation is the process of losing electrons, an oxidizing agent is defined simply as any substance that can cause a loss of electrons in another substance in a chemical reaction. Thus, an oxidizing agent gains electrons, and its oxidation number (a positive or negative number which represents the oxidation state of an atom, as defined below) thereby decreases. Reduction, on the other hand, is the process of gaining electrons, and therefore a reducing agent is a substance that can cause another substance to gain electrons. The reducing agent loses electrons, and its oxidation number thereby increases. Therefore, as a reducing agent reduces the other reagent, it is itself oxidized. Conversely, as an oxidizing agent oxidizes the other reagent, it is itself reduced.
To keep track of electrons in redox reactions, it is useful to assign oxidation numbers to the reactants and products. An atom's oxidation number, also referred to as its oxidation state, signifies the number of charges the atom would have in a molecule (or an ionic compound) if electrons were transferred completely.
For example, we can write equations for the formation of HCl and SO2 as follows:
0 0 +1 -1
H2 (g) + Cl2 (g) → 2HCl (g)
0 0 +4 -2
S (s) + O2 (g) → SO2 (g)
The numbers above the element symbols are the oxidation numbers. In both of the reactions shown, there is no charge on the atoms in the reactant molecules, so their oxidation number is zero. For the product molecules, however, it is assumed that complete electron transfer has taken place and that atoms have gained or lost electrons. The oxidation numbers reflect the number of electrons "transferred".
Oxidation numbers enable us to identify elements that are oxidized and reduced at a glance. The elements that show an increase in oxidation number (hydrogen and sulfur in the preceding examples) are oxidized. Chlorine and oxygen are reduced, so their oxidation numbers show a decrease from their initial values. Note that the sum of the oxidation numbers of H and Cl in HCl (+1 and -1) is zero. Likewise, if we add the charges on S (+4) and two atoms of O [2 x (-2)], the total is zero. The reason is that the HCl and SO2 molecules are neutral, so the charges must cancel.
The following rules are used by chemists to assign oxidation numbers:
1. In free elements (i.e., in the uncombined state), each atom has an oxidation number of zero. Thus each atom in H2, Br2, Na, K, O2 and P4 has the same oxidation number: zero.
2. For ions composed of only one atom (i.e., monoatomic ions) the oxidation number is equal to the charge on the ion. Thus Li+ ion has an oxidation number of +1, Ba2+ ion, +2; Fe3+, +3; I- ion, -1; O2- ion, -2; etc. All alkali metals have an oxidation number of +1 and all alkaline earth metals have an oxidation number of +2 in their compounds. Aluminum has an oxidation number of +3 in all its compounds.
3. The oxidation number of oxygen in most compounds (e.g., MgO and H2O) is -2, but in hydrogen peroxide (H2O2) and peroxide ion (O22-), it is -1.
4. The oxidation number of hydrogen is +1, except when it is bonded to metals in binary compounds. In these cases (for example, LiH, NaH, CaH2), its oxidation number is -1.
5. Fluorine has an oxidation number of -1 in all its compounds. Other halogens (Cl, Br, and I) have negative oxidation numbers when they occur as halide ions in their compounds. When combined with oxygen – for example in oxoacids and oxoanions – they have positive oxidation numbers.
6. In a neutral molecule, the sum of the oxidation numbers of all the atoms must be zero. In a polyatomic ion, the sum of oxidation numbers of all the elements in the ion must be equal to the net charge of the ion. For example, in the ammonium ion, NH4+, the oxidation number of N is -3
and that of H is +1. The sum of the oxidation numbers is -3 + 4(+1) = +1, which is equal to the net charge of the ion.
7. Oxidation numbers do not have to be integers. For example, the oxidation number of O in the superoxide ion O2-, is -1/2.
Example $1$:
Using the rules above, assign oxidation numbers to all the elements in the following compounds and ion: (a) Li2O, (b) HNO3, (c) Cr2O72-
Solution
1. By rule 2, we see that lithium has an oxidation number of +1 (Li+) and oxygen’s oxidation number is -2 (O2-).
2. HNO3 yields H+ and NO3- ion in solution. From rule 4 we see that H has an oxidation number of +1. Thus the other group (nitrate ion) must have a net oxidation number of -1. Oxygen has an oxidation number of -2, and if we use x to represent the oxidation number of nitrogen, then the nitrate ion can be written as: [N(x) O3(2-)]- so that x + 3(-2) = -1 or x = +5
3. From Rule 6, we see that the sum of the oxidation numbers in the dichromate ion Cr2O72- must be -2. We know that the oxidation number of O is -2, so all that remains is to determine the oxidation number of Cr, which we can call y. The dichromate ion can be written as: [Cr2(y) O7(2-)]2- so that 2(y) + 7(-2) = -2 or y = +6.
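Rule 6 turns these examples into simple algebra: the assigned oxidation numbers plus the unknown ones must sum to the species' net charge. A minimal Python sketch (the function name and argument layout are our own):

```python
from fractions import Fraction

def unknown_ox_number(known, n_unknown, net_charge):
    """Solve rule 6: oxidation numbers in a species sum to its net charge.
    `known` is a list of (oxidation_number, atom_count) pairs for the atoms
    already assigned; returns the oxidation number shared by the n_unknown
    remaining atoms. Fractions allow non-integer results (rule 7)."""
    assigned = sum(Fraction(ox) * count for ox, count in known)
    return (Fraction(net_charge) - assigned) / n_unknown

print(unknown_ox_number([(-2, 3)], 1, -1))   # N in NO3- (Example 1b)  -> 5
print(unknown_ox_number([(-2, 7)], 2, -2))   # Cr in Cr2O7 2- (Example 1c) -> 6
print(unknown_ox_number([], 2, -1))          # O in superoxide O2- (rule 7) -> -1/2
```

The last call reproduces rule 7's fractional oxidation number for the superoxide ion.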
Exercise $1$
Assign oxidation numbers to all the elements in the following species: (a) NO3-, (b) MnO4-, (c) SbCl5
Answer
Question : How does one tell a redox reaction from an ion exchange reaction? How does one know that electron transfer has taken place?
Answer: Electron transfer is manifested in a change in oxidation number or oxidation state. We know electron transfer has taken place if the oxidation number of an element has changed.
Example $2$:
(The numbers written above the symbols are their corresponding oxidation states)
1. +1 +5 -2 +2 -1 +1 -1 +2 +5 -2
2 AgNO3 (aq) + CuCl2 (aq) → 2 AgCl (s) + Cu(NO3)2 (aq)
2. +1 +5 -2 0 0 +2 +5 -2
2 AgNO3 (aq) + Cu (s) → 2 Ag (s) + Cu(NO3)2 (aq)
Which of the two reactions is a redox reaction? Explain!
Solution: (b) is a redox reaction since reaction involves electron transfer. Note that the oxidation state of Cu has moved from zero to +2, while that of Ag has changed from +1 to zero. Ag+ is the oxidizing agent (it gains an electron) while Cu is the reducing agent (it loses two electrons).
Exercise $2$
Which is the reducing agent in the following reaction?
2 VO + 3Fe3O4 → V2O5 + 9 FeO
Answer
Balancing of a Redox Reaction
The rules for balancing half-reactions are the same as those for ordinary reactions; that is, the number of atoms of each element as well as the net charge on both sides of the equation must be equal.
The following are the steps for balancing a Redox reaction:
1. Identify the species being oxidized and the species being reduced.
2. Write the oxidation and reduction half-reactions. For each half-reaction:
1. Balance the elements except for O and H.
2. Balance the number of electrons gained or lost.
3. Balance the net charges by adding H+ (acidic) or OH- (basic).
4. Balance the O and H by adding H2O.
Strategy for balancing redox reactions in acidic solutions:
Example $3$:
Balance the equation for the reaction between dilute nitric acid and copper metal leading to the production of copper ions and the gas nitric oxide, NO given below:
Cu(s) + H+ (aq) + NO3- (aq) → Cu2+ (aq) + NO(g)
Solution
Step 1: Determine the oxidation state for every atom in the skeleton equation above.
0 +1 +5 -2 +2 +2 -2
Cu(s) + H+ (aq) + NO3- (aq) → Cu2+ (aq) + NO(g)
Changes in Oxidation status:
CU: 0 → +2 Oxidation
N: +5 → +2 Reduction
Step 2: Write the skeleton half reactions.
Oxidation: Cu → Cu2+
Reduction: NO3- → NO
Step 3: Balance each half-reaction "atomically":
• consider all atoms other than H and O (use any of the species that appear in the skeleton equation of step 1 above)
• balance O atoms by adding H2O
• balance H atoms by adding H+
Oxidation: Cu → Cu2+
Reduction: NO3- → NO + 2H2O
Reduction: NO3- + 4H+ → NO + 2H2O
Step 4: Balance the electric charges by adding electrons (electrons have to appear on the right hand side of the oxidation half-reaction and on the left hand side of the reduction half-reaction).
Oxidation: Cu → Cu2+ + 2e-
Reduction: NO3- + 4H+ + 3e- → NO + 2H2O
Step 5: Make the number of electrons the same in both the half reactions by findingthe least common multiple as we prepare to sum the two half equations.
i.e., 3 x Oxidation reaction: 3Cu → 3Cu2+ + 6e-
and 2 x Reduction reaction: 2NO3- + 8H+ + 6e- → 2NO + 4H2O
Step 6: Now combine the two half-reactions.
3Cu + 2NO3 - + 8H+ + 6e- → 3Cu2+ + 6e- + 2NO + 4H2O
Step 7: Simplify the summation.
3Cu + 2NO3 - + 8H+ → 3Cu2+ + 2NO + 4H2O
Step 8: Indicate the state of each species to obtain the fully balanced net ionic equation.
3Cu(s) + 2NO3- (aq) + 8H+ (aq) → 3Cu2+ (aq) + 2NO (g) + 4H2O (l)
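Step 8's result can be sanity-checked mechanically: a redox equation is balanced only if both the atom counts and the net charge match on each side. A minimal Python sketch for the equation just derived (the tuple representation of each species is our own):

```python
# Each species: (coefficient, {element: count}, charge per formula unit).
# 3Cu + 2NO3- + 8H+ -> 3Cu2+ + 2NO + 4H2O
left  = [(3, {"Cu": 1}, 0), (2, {"N": 1, "O": 3}, -1), (8, {"H": 1}, +1)]
right = [(3, {"Cu": 1}, +2), (2, {"N": 1, "O": 1}, 0), (4, {"H": 2, "O": 1}, 0)]

def tally(side):
    """Total atoms of each element and total charge on one side."""
    atoms, charge = {}, 0
    for coeff, formula, q in side:
        charge += coeff * q
        for element, count in formula.items():
            atoms[element] = atoms.get(element, 0) + coeff * count
    return atoms, charge

assert tally(left) == tally(right)   # atoms AND charge must both match
print("balanced:", tally(left))
```

Both sides tally to 3 Cu, 2 N, 6 O, 8 H and a net charge of +6, confirming the balance.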
Strategy for redox reactions in basic solutions (first balance the reaction as though it were in acid, using H+ to balance hydrogen atoms):
Example $4$:
Balance the equation NO3- + Al → NH3 + Al(OH)4- in basic solution.
Solution
Step 1: Write the skeleton equation and determine the oxidation state per atom.
+5 -2 0 -3 +1 +3 -2 +1
NO3- + Al → NH3 + Al(OH)4-
Al is being oxidized. Its oxidation number changes from 0 to +3
N in NO3 - is being reduced. Its oxidation number changes from +5 to -3
Changes in Oxidation status:
Al: 0 → +3 Oxidation
N: +5 → -3 Reduction
Step 2: Write the skeleton half reactions.
Oxidation: Al → Al(OH)4 -
Reduction: NO3 - → NH3
Step 3: Balance each half-reaction "atomically":
NOTE: In basic solutions, no H+ are available to balance H. Hence, we pretend that the solution is acidic and carry out a neutralization reaction at the end.
As was the case with acidic solution above,
• consider all atoms other than H and O (use any of the species that appear in the skeleton equation of step 1 above)
• balance O atoms by adding H2O
• balance H atoms by adding H+
Oxidation: Al + 4H2O → Al(OH)4 -
Oxidation: Al + 4H2O → Al(OH)4- + 4H+
Reduction: NO3- → NH3 + 3H2O
Reduction: NO3- + 9H+ → NH3 + 3H2O
Step 4: Balance the electric charges by adding electrons (electrons have to appear on the right hand side of the oxidation half-reaction and on the left hand side of the reduction half-reaction).
Oxidation: Al + 4H2O → Al(OH)4 - + 4H+ + 3e-
Reduction: NO3- + 9H+ + 8e- → NH3 + 3H2O
Step 5: Make the number of electrons the same in both the half reactions by findingthe least common multiple as we prepare to sum the two half equations.
i.e., 8 x Oxidation reaction: 8Al + 32H2O → 8Al(OH)4 - + 32H+ + 24e-
and 3 x Reduction reaction: 3NO3 - + 27H+ + 24e- → 3NH3 + 9H2O
Step 6: Now combine the two half-reactions.
8Al + 32H2O + 3NO3- + 27H+ + 24e- → 8Al(OH)4- + 32H+ + 24e- + 3NH3 + 9H2O
Step 7: Simplify the summation.
8Al + 23H2O + 3NO3 - → 8Al(OH)4 - + 3NH3 + 5H+
Step 7b: Change to basic solution by adding as many OH- to both sides as there are H+.
8Al + 23H2O + 3NO3- + 5OH- → 8Al(OH)4 - + 3NH3 +5H+ + 5OH-
Neutralization procedure: Combine the H+ and the OH- to form H2O.
8Al + 3NO3- + 23 H2O + 5OH - → 8Al(OH)4- + 3NH3 + 5H2O
Now simplify: (by cancelling out redundant H2O molecules from each side)
8Al + 3NO3- + 18H2O + 5OH - → 8Al(OH)4- + 3NH3
Step 8: Indicate the state of each species.
8Al(s) + 3NO3- (aq) + 18H2O(l) + 5OH- (aq) → 8Al(OH)4-(aq) + 3NH3 (g)
This is the fully balanced net ionic equation.
Example $5$:
Balance the following reaction that occurs in basic solution. Write complete, balanced equations for the oxidation and reduction half-reactions and the net ionic equation.
N2H4 + BrO3- → NO + Br-
Solution
Step 1: Check the oxidation numbers to determine what is oxidized and what is reduced.
Bromine goes from +5 in BrO3- to -1 in Br-. Thus BrO3- is being reduced.
Nitrogen goes from -2 in N2H4 to +2 in NO. Hence N2H4 is being oxidized.
The unbalanced half-reactions are:
Oxidation: N2H4 → NO
Reduction: BrO3- → Br-
Step 2: Balance atoms other than H and O:
Oxidation: N2H4 → 2NO
Reduction: BrO3- → Br-
Step 3: Balance O with H2O and then H with H+.
Oxidation: 2H2O + N2H4 → 2NO + 8H+
Reduction: 6H+ + BrO3 - → Br- + 3H2O
(Now the atoms are balanced but the charges are not)
Step 4:Balance the charge with electrons.
Oxidation: 2H2O + N2H4 → 2NO + 8H+ + 8e-
Reduction: 6H+ + BrO3- + 6e- → Br- + 3H2O
Step 5: Make the number of electrons the same in both the half reactions by findingthe least common multiple as we prepare to sum the two half equations.
3 x Oxidation: 6H2O + 3N2H4 → 6NO + 24H+ + 24e-
4 x Reduction: 24H+ + 4BrO3- + 24e- → 4Br- + 12H2O
Step 6: Now combine the two half-reactions.
6H2O + 3N2H4 + 24H+ + 4BrO3- + 24e- → 6NO + 24H+ + 24e- + 4Br- + 12H2O
Step 7: Simplify the summation.
3N2H4 + 4BrO3- → 6NO + 4Br- + 6H2O
Step 8: Indicate state of each species
3N2H4 (g) + 4BrO3- (aq) → 6NO (g) + 4Br- (aq) + 6H2O (l)
This is the fully balanced net ionic equation.
Exercise $3$
Balance the equation: I- + Br2 → IO3- + Br- in acidic solution.
Oxidation: 3H2O + I- → IO3- + 6e- + 6H+
Reduction: Br2 + 2e- → 2Br-
Overall: 3Br2 + I- + 3H2O → 6Br- + IO3- + 6H+
Answer
Redox Titrations
Redox titration is a titration in which the reaction between the analyte and titrant is an oxidation/reduction reaction. Like acid-base titrations, redox titrations normally require an indicator that clearly changes colour. In the presence of large amounts of reducing agent, the colour of the indicator is characteristic of its reduced form. The indicator normally assumes the colour of its oxidized form when it is present in an oxidizing medium. At or near the equivalence point, a sharp change in the indicator’s colour will occur as it changes from one form to the other, so the equivalence point can be readily identified.
Since all redox titrations involve the flow of electrons, all redox titrations can be monitored by following the electrical potential of the solution. All one needs to monitor the potential of a solution is a reference electrode and an inert electrode. The details of the workings of such a setup are, however, outside the scope of this unit and will not be covered. Nevertheless, the relevant expression that utilizes the experimentally measurable electrochemical potential, E, as a function of titrant volume will be discussed later.
Titrimetric methods that are based on the principles of redox reactions have been used widely in the determination of metals which have two well-defined oxidation states. The process of analysis often involves either:
(i) converting all the metal ions to be analysed (analyte) to a higher oxidation state by use of an oxidizing agent such as sodium peroxide and sodium bismuthate, or
(ii) converting all the analyte metal ions to a lower oxidation state by using a reducing agent such as sulphur dioxide or sodium bisulphite.
In both situations, an excess of reagent is needed which is then destroyed or removed before the sample is titrated.
There are other ways of carrying out quantitative reduction experiments but these are outside the scope of this Unit and will not be covered here.
Redox Titration Curves
To evaluate a redox titration we must know the shape of its titration curve. In an acid–base titration (see previous unit) or a complexation titration (see unit 4), a titration curve shows the change in concentration of hydronium ions, H3O+ (as pH) or Mn+ (as pM) as a function of the volume of titrant. For a redox titration,
it is convenient to monitor electrochemical potential. Nernst equation, which relates the electrochemical potential to the concentration of reactants and products participating in a redox reaction is often used in determining the concentration of an analyte. Consider, for example, a titration in which the analyte in a reduced state, Ared, is titrated against a titrant in an oxidized state, Tox. The titration reaction can therefore be expressed as:
Ared + Tox ⇌ Tred + Aox
The corresponding electrochemical potential for the reaction, Erxn, is the difference between the potentials for the reduction and oxidation half-reactions; i.e.,
$\bf{E}_{rxn}=\bf{E}_{Tox/Tred}-\bf{E}_{Aox/Ared}$
During the titration process, upon each addition of titrant, the reaction between the analyte and titrant reaches a state of equilibrium. Under these equilibrium conditions, the reaction’s electrochemical potential, Erxn, becomes zero, and
$\bf{E}_{Tox/Tred}=\bf{E}_{Aox/Ared}$
This is true since the two redox couples being titrated together are in equilibrium and hence their potentials are equal. Thus, the potential of the reaction mixture can be found by calculating the electrochemical potential, E, for either of the redox pairs.
Consequently, the electrochemical potential for either half-reaction may be used to monitor the titration's progress.
It is significant to note that before the equivalence point is reached during the course of the titration, the resultant mixture consists of appreciable quantities of both the oxidized (Aox) and reduced (Ared) forms of the analyte, but very little unreacted titrant. The potential, therefore, is best calculated using the Nernst equation for the analyte's half-reaction. Although E0Aox/Ared is the standard-state potential for the analyte's half-reaction, a matrix-dependent formal potential (the potential of a redox reaction for a specific set of solution conditions, such as pH and ionic composition) is used in its place.
Illustration of How to Calculate a Redox Titration Curve
Example $6$:
Let us calculate the titration curve for the titration of 50.0 mL of 0.100 mol/L Fe2+ with 0.100 mol/L Ce4+ in a matrix of 1 mol/L HClO4. The reaction in this case is
Fe2+(aq) + Ce4+(aq) ⇌ Ce3+(aq) + Fe3+(aq)
The equilibrium constant for this reaction is quite large (approximately 6 x 1015), so we may assume that the analyte and titrant react completely.
The first task in calculating the titration curve is to calculate the volume of Ce4+ needed to reach the equivalence point. From the stoichiometry of the reaction we know that
Number of moles of Fe2+ = number of moles of Ce4+ or
MFeVFe = MCeVCe
where MFe is the concentration of Fe2+ and VFe is the volume of the Fe2+ solution; MCe is the concentration of Ce4+ and VCe is the volume of the Ce4+ solution. Solving for the volume of Ce4+ (VCe = MFeVFe / MCe) gives the equivalence point volume as 50.0 mL.
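The equivalence-point volume calculation above reduces to a single expression. The following Python sketch (variable names are illustrative, not from the text) shows it:

```python
# Equivalence-point volume from the stoichiometric relation
# MFe * VFe = MCe * VCe, using the values of the worked example.
M_Fe, V_Fe = 0.100, 50.0   # mol/L and mL of the Fe2+ solution
M_Ce = 0.100               # mol/L of the Ce4+ titrant
V_eq = M_Fe * V_Fe / M_Ce  # mL of Ce4+ at the equivalence point
print(V_eq)  # 50.0
```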
Before the equivalence point the concentration of unreacted Fe2+ and the concentration of Fe3+ produced by the reaction are easy to calculate. For this reason we find the electrochemical potential, E using the Nernst equation for the analyte’s half-reaction
$\mathrm { E } = \mathrm { E } _ { \mathrm { Fe } ^ { 3 + } / \mathrm { Fe } ^ { 2 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Fe } ^ { 2 + } \right] } { \left[ \mathrm { Fe } ^ { 3 + } \right] } \nonumber$
Illustration:
The concentrations of Fe2+ and Fe3+ in solution after adding 5.0 mL of titrant (i.e., Ce4+ solution) are:
$\ \left[ \mathrm{Fe}^{2+} \right] = \frac{\text{moles of unreacted }\mathrm{Fe}^{2+}}{\text{total volume}} = \frac{\mathrm{M}_{\mathrm{Fe}}\mathrm{V}_{\mathrm{Fe}} - \mathrm{M}_{\mathrm{Ce}}\mathrm{V}_{\mathrm{Ce}}}{\mathrm{V}_{\mathrm{Fe}} + \mathrm{V}_{\mathrm{Ce}}} = \frac{(0.100 \times 50) - (0.100 \times 5)}{(50 + 5)} = 8.18 \times 10^{-2}\ \mathrm{mol/L} \nonumber$
$\ \left[ \mathrm{Fe}^{3+} \right] = \frac{\text{moles of }\mathrm{Ce}^{4+}\text{ added}}{\text{total volume}} = \frac{\mathrm{M}_{\mathrm{Ce}}\mathrm{V}_{\mathrm{Ce}}}{\mathrm{V}_{\mathrm{Fe}} + \mathrm{V}_{\mathrm{Ce}}} = \frac{0.100 \times 5}{50 + 5} = 9.09 \times 10^{-3}\ \mathrm{mol/L} \nonumber$
Substituting these concentrations into the Nernst equation, along with the formal potential for the Fe3+/Fe2+ half-reaction from a table of reduction potentials, we find that the electrochemical potential is:
$\ \mathrm { E } = + 0.767 \mathrm { V } - 0.05916 \log \left( \frac { 8.18 \times 10 ^ { - 2 } } { 9.09 \times 10 ^ { - 3 } } \right) = + 0.711 \mathrm { V }$
Similar electrochemical potential calculations can be carried out for various volumes of titrant (Ce4+) added.
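The 5.0 mL calculation above can be checked with a short script. The Python sketch below (variable names are illustrative, not from the text) evaluates the concentrations and the Nernst equation:

```python
import math

# Pre-equivalence-point potential for the Fe2+/Ce4+ titration, following
# the worked example. E0_Fe is the formal potential of Fe3+/Fe2+ in
# 1 mol/L HClO4 quoted in the text.
E0_Fe = 0.767              # V
M_Fe, V_Fe = 0.100, 50.0   # mol/L and mL of Fe2+
M_Ce, V_Ce = 0.100, 5.0    # mol/L and mL of Ce4+ added

Fe2 = (M_Fe * V_Fe - M_Ce * V_Ce) / (V_Fe + V_Ce)  # [Fe2+] in mol/L
Fe3 = (M_Ce * V_Ce) / (V_Fe + V_Ce)                # [Fe3+] in mol/L
E = E0_Fe - 0.05916 * math.log10(Fe2 / Fe3)        # Nernst equation

print(f"[Fe2+] = {Fe2:.3e}, [Fe3+] = {Fe3:.3e}, E = {E:.3f} V")
# E comes out as +0.711 V, matching the value computed above.
```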
At the equivalence point, the moles of Fe2+ initially present and the moles of Ce4+ added become equal. Now, because the equilibrium constant for the reaction is large, the concentrations of Fe2+ and Ce4+ become exceedingly small and difficult to calculate without resorting to a complex equilibrium problem. Consequently, we cannot calculate the potential at the equivalence point, Eeq, using just the Nernst equation for the analyte's half-reaction or the titrant's half-reaction. We can, however, calculate Eeq by combining the two Nernst equations. To do so we recognize that the potentials for the two half-reactions are the same; thus,
$\ \mathrm { E } _ { \mathrm { eq } } = \mathrm { E } _ { \mathrm { Fe } ^ { 3+ }/ \mathrm { Fe } ^ { 2 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Fe } ^ { 2 + } \right] } { \left[ \mathrm { Fe } ^ { 3 + } \right] } \mathrm { and }$
$\ \mathrm { E } _ { \mathrm { eq } } = \mathrm { E } _ { \mathrm { Ce } ^ { 4+ }/ \mathrm { Ce } ^ { 3 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Ce } ^ { 3 + } \right] } { \left[ \mathrm { Ce } ^ { 4 + } \right] }$
Adding the two Nernst equations yields
$\ 2 \mathrm { E } _ { \mathrm { eq } } = \mathrm { E } _ { \mathrm { Fe } ^ { 3 + }/ \mathrm { Fe } ^ { 2 + } } ^ { 0 } + \mathrm { E } _ { \mathrm { Ce } ^ { 4 + } / \mathrm { Ce } ^ { 3 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Fe } ^ { 2 + } \right] \left[ \mathrm { Ce } ^ { 3 + } \right] } { \left[ \mathrm { Fe } ^ { 3 + } \right] \left[ \mathrm { Ce } ^ { 4 + } \right] }$
At the equivalence point, the titration reaction’s stoichiometry requires that
[Fe2+] = [Ce4+] and [Fe3+] = [Ce3+]
This makes the log term in the above equation equal to zero, and the equation simplifies to
$\ \bf{E}_{eq}={\bf{E^o}_{Fe^{3+}/{Fe^{2+}}}+\bf{E^o}_{Ce^{4+}/Ce^{3+}}\over 2 }={0.767\ V+1.70\ V\over 2}=1.23\ V$
After the equivalence point, the concentrations of Ce3+ and excess Ce4+ are easy to calculate. The potential, therefore, is best calculated using the Nernst equation for the titrant’s half-reaction,
$\ \mathrm { E } = \mathrm { E } _ { \mathrm { ce } ^ { 4 + } / \mathrm { ce } ^ { 3 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Ce } ^ { 3 + } \right] } { \left[ \mathrm { Ce } ^ { 4 + } \right] }$
Illustration:
After adding 60.0 mL of titrant, the concentrations of Ce3+ and Ce4+ are
$\ [\mathrm {Ce^{3+}}]={initial\ moles\ \mathrm{Fe^{2+}}\over total\ volume}={\bf{M}_{Fe}\bf{V}_{Fe}\over{\bf{V}_{Fe}+\bf{V}_{Ce}}}={(0.100\ mol/L)(50.0\ mL)\over 50.0\ mL \ +\ 60.0\ mL }=4.55x10^{-2}\ mol/L$
$\ [\mathrm{Ce}^{4+}]={moles\ of\ excess\ \mathrm{Ce}^{4+}\over total\ volume}={\bf{M}_{Ce}\bf{V}_{Ce}\ -\ \bf{M}_{Fe}\bf{V}_{Fe}\over \bf{V}_{Fe}\ +\ \bf{V}_{Ce}}={(0.100\ mol/L)(60.0\ mL)\ -\ (0.100\ mol/L)(50.0\ mL)\over 50.0\ mL\ +\ 60.0\ mL}=9.09\times 10^{-3}\ mol/L$
Substituting these concentrations into the Nernst equation for the Ce4+/Ce3+ half-reaction gives the potential as
$\ \mathrm { E } = + 1.70 \mathrm { V } - 0.05916 \log \frac { 4.55 \times 10 ^ { - 2 } } { 9.09 \times 10 ^ { - 3 } } = 1.66 \mathrm { V }$
Additional data for the plot of the titration curve can be generated following the same procedure above.
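One way to generate such additional data is to code the three cases (before, at, and after the equivalence point) as a single function. The Python sketch below does this for the worked example; the function name and structure are illustrative, not from the text:

```python
import math

# Titration curve for 50.0 mL of 0.100 mol/L Fe2+ titrated with
# 0.100 mol/L Ce4+, using the formal potentials quoted in the text
# (1 mol/L HClO4 matrix).
E0_FE, E0_CE = 0.767, 1.70           # V
M_FE, V_FE, M_CE = 0.100, 50.0, 0.100

def potential(V_ce):
    """Solution potential (V) after adding V_ce mL of Ce4+."""
    V_eq = M_FE * V_FE / M_CE
    if V_ce < V_eq:
        # Excess Fe2+: Nernst equation for the analyte half-reaction.
        # The total volume cancels in the concentration ratio, so
        # millimoles can be used directly.
        fe2 = M_FE * V_FE - M_CE * V_ce
        fe3 = M_CE * V_ce
        return E0_FE - 0.05916 * math.log10(fe2 / fe3)
    if V_ce == V_eq:
        # Symmetrical equivalence point: average of the two potentials.
        return (E0_FE + E0_CE) / 2
    # Excess Ce4+: Nernst equation for the titrant half-reaction.
    ce3 = M_FE * V_FE
    ce4 = M_CE * V_ce - M_FE * V_FE
    return E0_CE - 0.05916 * math.log10(ce3 / ce4)

for v in (5.0, 25.0, 50.0, 60.0, 80.0):
    print(f"{v:5.1f} mL  E = {potential(v):.3f} V")
```

Evaluating the function at 5.0 mL, 50.0 mL, and 60.0 mL reproduces the +0.711 V, 1.23 V, and 1.66 V values obtained above.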
How to Sketch a Redox Titration Curve Using a Minimum Number of Calculations:
Example $7$:
Sketch a titration curve for the titration of 50.0 mL of 0.100 mol/L Fe2+ with 0.100 mol/L Ce4+ in a matrix of 1 M HClO4.
Solution
This is the same titration for which we previously calculated the titration curve.
We begin, as usual, by drawing the axes for the titration curve: potential, E, versus volume of titrant added (mL). Having shown that the equivalence point volume is 50.0 mL, we draw a vertical line intersecting the x-axis at this volume.
Before the equivalence point, the solution's electrochemical potential is calculated from the concentration of excess Fe2+ and the concentration of Fe3+ produced by the titration reaction. Using tabulated matrix-dependent formal potentials, we can calculate the corresponding potential E and plot points for 5.0 mL and 45.0 mL of titrant.
After the equivalence point, the solution's electrochemical potential is determined by the concentration of excess Ce4+ and the concentration of Ce3+. Again using tabulated matrix-dependent formal potentials, we calculate E and plot points for 60.0 mL and 80.0 mL of titrant.
To complete an approximate sketch of the titration curve, we draw separate straight lines through the two points before and after the equivalence point. Finally, a smooth curve is drawn to connect the three straight-line segments.
Selecting and Evaluating the End Point
The equivalence point of a redox titration occurs when stoichiometrically equivalent amounts of analyte and titrant react. As with other titrations, any difference between the equivalence point and the end point is a determinate (see unit 1) source of error.
The most obvious question to ask is: Where is the Equivalence Point?
Previously, in discussing acid–base titrations, we noted that the equivalence point is almost identical with the inflection point located in the sharply rising part of any titration curve. When the stoichiometry of a redox titration is symmetrical (i.e., one mole of analyte per mole of titrant), the equivalence point is symmetrical as well. If, however, the stoichiometry is not symmetrical, then the equivalence point will lie closer to the top or bottom of the titration curve's sharp rise. In this case the equivalence point is said to be asymmetrical. The following example shows how to calculate the equivalence point potential in this situation.
Example $8$:
Derive a general equation for the electrochemical potential at the equivalence point for the titration of Fe2+ with MnO4- . The stoichiometry of the reaction is such that
5Fe2+(aq) + MnO4-(aq) + 8H3O+(aq) ⇌ 5Fe3+(aq) + Mn2+(aq) + 12H2O(l)
Solution
The redox half-reactions for the analyte and the titrant are:
Fe2+(aq) ⇌ Fe3+(aq) + e-
MnO4- (aq) + 8H3O+(aq) + 5e- ⇌ Mn2+(aq) + 12H2O(l)
for which the corresponding Nernst equations are:
$\ \mathrm { E } _ { \mathrm { eq } } = \mathrm { E } _ { \mathrm { Fe } ^ { 3 + }/ \mathrm { Fe } ^ { 2 + } } ^ { 0 } - 0.05916 \log \frac { \left[ \mathrm { Fe } ^ { 2 + } \right] } { \left[ \mathrm { Fe } ^ { 3 + } \right] }$
$\ \mathrm { E } _ { \mathrm { eq } } = \mathrm { E } _ { \mathrm { MnO }_4 ^ { - }/ \mathrm { Mn } ^ { 2 + } } ^ { 0 } - {0.05916\over 5} \log \frac { \left[ \mathrm { Mn } ^ { 2 + } \right] } { \left[ \mathrm { MnO }_4 ^ { - } \right]\left[ \mathrm { H_3O } ^ { + } \right]^8 }$
Before adding these two equations together, the second equation must be multiplied by 5 so that the log terms can be combined. At the equivalence point, the stoichiometry requires that
[Fe2+] = 5 x [MnO4- ]
[Fe3+] = 5 x [Mn2+]
Substituting these equalities into the equation for Eeq and rearranging gives
$\ \bf{E}_{eq}={\bf{E}_{Fe^{3+}/Fe^{2+}}^0\ +\ \bf{5E}_{\bf{MnO}_4^-/Mn^{2+}}^0\over 6}-{0.05916\over 6}log{5[\bf{MnO}_4^-][Mn^{2+}]\over 5[Mn^{2+}][\bf{MnO}_4^-][\bf{H}_3O^+]^8}$
$\ ={\bf{E}_{Fe^{3+}/Fe^{2+}}^0\ +\ 5\bf{E}_{\bf{MnO}_4^-/Mn^{2+}}^0\over 6}-{0.05916\over 6}log{1\over [\bf{H}_3O^+]^8}$
$\ ={\bf{E}_{Fe^{3+}/Fe^{2+}}^0\ +\ 5\bf{E}_{\bf{MnO}_4^-/Mn^{2+}}^0\over 6}+{(0.05916)(8)\over 6}log [\bf{H}_3O^+]$
$\ ={\bf{E}_{Fe^{3+}/Fe^{2+}}^0\ +\ 5\bf{E}_{\bf{MnO}_4^-/Mn^{2+}}^0\over 6}-0.0788\ pH$
For this titration, the electrochemical potential at the equivalence point consists of two terms. The first term is a weighted average of the standard-state or formal potentials for the analyte and titrant, in which the weighting factors are the numbers of electrons in their respective redox half-reactions. The second term shows that Eeq is pH-dependent. The figure below shows a typical titration curve for the analysis of Fe2+ by titration with MnO4-, showing the asymmetrical equivalence point. Note that the change in potential near the equivalence point is sharp enough that selecting an end point near the middle of the titration curve's sharply rising portion does not introduce a significant titration error.
Figure: Titration curve for Fe2+ with MnO4- in 1 mol/L H2SO4 . The equivalence point is shown by the symbol •.
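The derived expression is straightforward to evaluate numerically. In the Python sketch below, the formal potential of the MnO4-/Mn2+ couple is taken as +1.51 V, a standard-table value assumed here because the text does not quote one:

```python
# Equivalence-point potential for the Fe2+/MnO4- titration, from the
# expression derived above. E0_MN (+1.51 V) is an assumed
# standard-table value, not a figure given in the text.
E0_FE = 0.767   # V, Fe3+/Fe2+ formal potential (from the text)
E0_MN = 1.51    # V, MnO4-/Mn2+ (assumed)

def E_eq(pH):
    # Weighted average of the two potentials minus the pH term.
    return (E0_FE + 5 * E0_MN) / 6 - 0.0788 * pH

for pH in (0.0, 1.0, 2.0):
    print(f"pH {pH:.0f}: Eeq = {E_eq(pH):.3f} V")
```

The pH term shifts the equivalence point by about 79 mV per pH unit, which is why this titration is run in strongly acidic media.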
Detection of End Point:
The end point of a redox titration can be determined either potentiometrically, by measuring the electrochemical potential with an indicator electrode relative to a reference electrode and plotting this potential against the volume of titrant, or with a colour indicator. However, as in other titrations, it is usually more convenient to use visual indicators.
Finding the End Point with a Visual Indicator
There are three methods used for visual indication of the end point in a redox titration. These are:
A. Self-Indication: A few titrants, such as MnO4-, have oxidized and reduced forms whose colors in solution are significantly different: Solutions of MnO4- are intensely purple. In acidic solutions, however, the reduced form of permanganate, Mn2+, is nearly colourless. When MnO4- is used as an oxidizing titrant, the solution remains colourless until the first drop of excess MnO4- is added. The first permanent tinge of purple signals the end point.
B. Specific Indicators: A few substances indicate the presence of a specific oxidized or reduced species. Starch, for example, forms a dark blue complex with I3- and can be used to signal the presence of excess I3- (colour change: colourless to blue), or the completion of a reaction in which I3- is consumed (colour change: blue to colourless). Another example of a specific indicator is thiocyanate, which forms a soluble red-coloured complex, Fe(SCN)2+, with Fe3+.
C. Redox Indicators: The most important class of visual indicators, however, are substances that do not participate in the redox titration, but whose oxidized and reduced forms differ in colour. When added to a solution containing the analyte, the indicator imparts a colour that depends on the solution's electrochemical potential. Since the indicator changes colour in response to the electrochemical potential, and not to the presence or absence of a specific species, these compounds are called general redox indicators.
The relationship between a redox indicator’s change in color and the solution’s electrochemical potential is easily derived by considering the half-reaction for the indicator
Inox + ne- ⇌ Inred
where Inox and Inred are the indicator’s oxidized and reduced forms, respectively. The corresponding Nernst equation for this reaction is
$\ E=\bf{E}_{\bf{In}_{ox}/\bf{In}_{red}}^0-{0.05916\over n}log{[\bf{In}_{red}]\over [\bf{In}_{ox}]}$
If we assume that the indicator’s color in solution changes from that of Inox to that of Inred when the ratio [Inred]/[Inox] changes from 0.1 to 10, then the end point occurs when the solution’s electrochemical potential is within the range
$\ E=\bf{E}_{\bf{In}_{ox}/\bf{In}_{red}}^0\pm{ 0.05916\over n}$
Table below shows some examples of general redox indicators
Table showing Redox Indicators
Indicator | Colour (reduced form) | Colour (oxidized form) | Solution | E0 (V)
--- | --- | --- | --- | ---
Nitroferroin | Red | Pale blue | 1 mol/L H2SO4 | 1.25
Ferroin | Red | Pale blue | 1 mol/L H2SO4 | 1.06
Diphenylamine sulfonic acid | Colourless | Purple | Dilute acid | 0.84
Diphenylamine | Colourless | Violet | 1 mol/L H2SO4 | 0.76
Methylene blue | Blue | Colourless | 1 mol/L acid | 0.53
Indigo tetrasulfonate | Colourless | Blue | 1 mol/L acid | 0.36
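The transition range E0 ± 0.05916/n can be computed for entries in the table above. The Python sketch below assumes a one-electron couple (n = 1), which holds for ferroin and nitroferroin; a two-electron indicator would use n = 2:

```python
# Colour-transition ranges E0 +/- 0.05916/n for two indicators from the
# table, assuming one-electron couples (n = 1).
n = 1
E0_values = {"nitroferroin": 1.25, "ferroin": 1.06}  # V, from the table

ranges = {name: (E0 - 0.05916 / n, E0 + 0.05916 / n)
          for name, E0 in E0_values.items()}
for name, (lo, hi) in ranges.items():
    print(f"{name}: {lo:.3f} V to {hi:.3f} V")
```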
Exercise $4$
A 25.0 mL solution of sodium ethanedioate (sodium oxalate) of concentration 0.10 mol/L was placed in a titration flask. A solution of potassium manganate(VII) of concentration 0.038 mol/L was run from a burette into the titration flask. To ensure that the reaction takes place at a suitable rate, the solution was heated to nearly 60°C before the potassium manganate(VII) solution was run in from the burette.
1. Write the balanced overall redox reaction given the following half-reactions:
Oxidation half-reaction: C2O42- → 2 CO2 + 2e-
Reduction half-reaction: MnO4- (aq) + 8H3O+(aq) + 5e- ⇌ Mn2+ (aq) + 12H2O(l)
2. Which indicator is the most suitable to use in this titration?
3. Derive the expression for Eeq
4. What volume of the manganate(VII) solution would be needed to reach the end point of the titration?
Learning Objectives
• Define and use the relevant terminologies of complex ion equilibria
• Compare and contrast between complex ion and Lewis acid-base equilibria
• Describe and explain the concept of complex equilibria and stepwise equilibrium reactions.
• Use the concept of chemical equilibria in complexometric titrations and calculations.
• Distinguish among the various types of EDTA titrations and their uses.
• Carry out complexometric titrations and related calculations.
In this unit, the concept of complex ion formation and the associated stepwise equilibrium reactions will be examined and discussed. Particular emphasis will be given to complexometric titrations, titrimetric methods based upon complex formation, as a means of quantitative analysis of metal ions in solution. Here, ethylenediamine tetraacetic acid (EDTA), a tertiary amine that also contains carboxylic acid groups and the most widely used polyaminocarboxylic acid, will be studied as an analytical reagent or titrant that forms very stable complexes with many metal ions. A discussion of the factors that influence the stability of metal-EDTA complexes and their significance, as well as the types of EDTA titrations, will also be covered.
Key Concepts
• Acid-base indicators: acids or bases which exhibit a visual change on neutralization by the basic or acidic titrant at or near the equivalence point.
• Chelation: the process involved in formation of a chelate.
• Chemical stoichiometry: measurement based on exact knowledge of the proportions in which chemical species combine.
• Colorimetric indicator: intensely coloured substances in at least one form (bound or unbound to a metal) that change colour when the metal ion analyte binds with them.
• Complex: a substance composed of two or more components capable of an independent existence.
• Complexation: the association of two or more chemical species that are capable of independent existence by sharing one or more pairs of electrons.
• Complexometric indicator: water-soluble organic molecules that undergo a definite colour change in the presence of specific metal ions and are used in complexometric titrations.
• Complexometric titration: A titration based on the formation of coordination complexes between a metal ion and complexing agent (or chelating agent) to form soluble complexes.
• Complexing agent or ligand: molecules and/or anions with one or more donor atoms that each donate a lone pair of electrons to the metal ion to form a covalent bond.
• Coordination complex: a complex in which a central atom or ion is joined to one or more ligands through what is formally a coordinate covalent bond in which both of the bonding electrons are supplied by a ligand.
• Metal chelate: a species that is simultaneously bound to two or more sites on a ligand.
• Monodentate (or Unidentate) ligand: A ligand that shares a single pair of elec- trons with a central metal ion in a complex.
• Multidentate ligand: A ligand which shares more than one pair of electrons with a central metal ion in a complex. Ligands which share 2, 3, 4, 5, or 6 pairs are referred to as bidentate, tridentate, tetradentate (or quadridentate), pentadentate (or quinquedentate), and hexadentate, respectively.
• Stability constant of a complex: a measure of the extent of formation of the complex at equilibrium.
Introduction
Complex-forming reactions involving many metal ions can serve as a basis for accurate and convenient titrations for such metal ions. These complex ion titration procedures, referred to as complexometric titrations, have high accuracies and offer the possibility of determining metal ions at the millimole level. They have applications in many chemical and biological processes. The processes involved in the formation of complex ions are basically acid-base type reactions in which the metal ion acts as an acid and the anions or molecules act as the base (see unit 2, which deals with acids and bases). In this activity, the theory and applications of complex ion formation, and specifically of complexometric titrations in quantifying metal ions in solution, will be examined. In addition, the significance of using a reagent that forms a chelate, over one that merely forms a complex with a metal ion, in volumetric analysis will be explained. Since much attention has recently been focused on the use of ethylenediamine tetraacetic acid (EDTA) in titrimetry, its various applications will be highlighted in this unit.
Introduction to Complexation Equilibria and Processes
In this introduction, important terminologies that will be encountered when dealing with the topic of complexation titration are provided. Also included is a brief description of what a complex is and how their very nature contrast Lewis acid-base systems.
In the broadest sense, complexation is the association of two or more chemical species that are capable of independent existence by sharing one or more pairs of electrons. Although this type of chemical reaction can be classified as a Lewis acid-base reaction, it is more commonly known as a complexation reaction. As applied to chemical analysis, this definition is generally taken to mean the bonding of a central metal ion, capable of accepting an unshared pair of electrons, with a ligand that can donate a pair of unshared electrons.
Consider the addition of anhydrous copper (II) perchlorate to water. The salt dissolves readily according to the reaction,
$\ce{Cu(ClO4)2 (s) + 4H:O:H -> Cu(H2O)4^{2+} (aq) + 2ClO4^{-}}$
in which a pair of electrons on the oxygen atom of each H2O molecule forms a coordinate covalent bond, a bond in which both electrons originate from one atom (in this case the oxygen atom of H2O), to the Cu2+ ion. In this reaction, Cu2+ acts as a Lewis acid and H2O as a Lewis base. Such binding of solvent molecules to a metal ion is called solvation or, in the special case of the solvent water, hydration. The Cu(H2O)42+ ion is called an aquo complex.
In a complexation reaction, the product of the reaction is termed a complex. The species which donates the electron pairs by acting as a Lewis base is known as a complexing agent or ligand, and the ion which accepts the donated electrons, the Lewis acid, is called the central ion or central atom. Central ions are generally metallic cations. The ligand can be either a neutral molecule such as water or ammonia, or an ion such as chloride, cyanide, or hydroxide. The complex can have either a positive or a negative charge, or it can be neutral.
For most analytical applications, complexation occurs between a dissolved metal ion and a dissolved ligand capable of displacing water from the metal ion. This is illustrated for the reaction between a hydrated copper (II) ion and dissolved NH3 ligand below.
$\ \left[ \mathrm { Cu } \left( \mathrm { H } _ { 2 } \mathrm { O } \right) _ { 4 } \right] ^ { 2 + } + \mathrm { NH } _ { 3 } \stackrel { \mathrm { water } } { \longrightarrow } \left[ \mathrm { CuNH } _ { 3 } \left( \mathrm { H } _ { 2 } \mathrm { O } \right) _ { 3 } \right] ^ { 2 + } + \mathrm { H } _ { 2 } \mathrm { O }$
Normally for reactions that occur in water, $\ce{H2O}$ is omitted and the complexation reaction is written simply a
$\ce{Cu^{2+} + NH3 <=> [CuNH3]^{2+}}$
Classification of Ligands
Ligands are classified according to the number of pairs of electrons which they can share with the central metal atom or ion. A ligand that shares a single pair of electrons (such as ammonia, water, cyanide, F-, Cl-, Br-, I-, CN-, SCN-, NO2-, NH3, H2O, N(CH2CH3)3, CH3COCH3, etc.) is a monodentate or unidentate ligand; a ligand which shares more than one pair of electrons is a multidentate ligand. A multidentate ligand which shares two (such as NH2CH2CH2NH2, C2O42-, etc.), three, four, five, or six pairs of electrons is a bidentate, tridentate (or terdentate), tetradentate (or quadridentate), pentadentate (or quinquedentate), or hexadentate (or sexadentate) ligand, respectively. The maximum number of electron-pair donor groups that a metal ion can accommodate in a complexation reaction is known as its coordination number. Typical values are 2 for Ag+, as in Ag(CN)2-; 4 for Zn2+, as in Zn(NH3)42+; and 6 for Cr3+, as in Cr(NH3)63+.
Nature of Linkage in complex ions
A central metal ion can form a single bond with a ligand which is able to donate a pair of electrons from one of its atoms only, as in the examples given above for the formation of Zn(NH3)42+, Cr(NH3)63+ etc. However, with multidentate (or sometimes known as polydentate) ligands, it can form a bond in more than one location to form a ring structure. Generally, ring formation results in increased stability of the complex. A species that is simultaneously bonded to two or more sites on a ligand is called a metal chelate, or simply a chelate and the process of its formation is called chelation.
A chelate is formed when a ligand coordinates to the same metal atom through two or more donor atoms, using two or more electron pairs simultaneously. An example of a metal-EDTA complex is provided in the figure below.
Note that, all types of bidentate, tridentate, tetradentate, pentadentate and hexa- dentate ligands can act as chelating ligands and their complexes with metals are therefore known as chelates.
Here, EDTA behaves as a hexadentate ligand since six donor groups are involved in bonding the divalent metal cation.
Importance of Chelates
Chelates find application both in industry and in the laboratory wherever fixing of metal ions is required. In analytical chemistry, chelates are used in both qualitative and quantitative analysis. For example, Ni2+, Mg2+, and Cu2+ are quantitatively precipitated by chelating agents. In volumetric analysis, chelating agents (such as ethylenediamine tetraacetic acid, EDTA) are often used as reagents or as indicators for the titration of some metal ions. Because of the stability of chelates, polydentate ligands (also called chelating agents) are often used to sequester or remove metal ions from a chemical system. Ethylenediamine tetraacetic acid (EDTA), for example, is added to certain canned foods to remove transition-metal ions that can catalyze deterioration of the food. The same chelating agent has been used to treat lead poisoning because it binds Pb2+ ions as a chelate, which can then be excreted by the kidneys.
In the subsequent sections, the application of the fundamentals of complex ion formation is demonstrated in complexometric titration. This is achieved after briefly considering the subtopic of complex equilibria.
Complex ion Equilibria
The stability constant of a complex is defined as a measure of the extent of formation of the complex at equilibrium. The stability of a complex depends on the strength of the linkage (i.e., the bond) between the central metal ion and the ligands; the stronger the metal-ligand bond, the more stable the complex.
Metal complexes are formed by replacement of molecules in the solvated shell of a metal ion in aqueous solution with the ligands by stepwise reaction as shown below:
[M(H2O)n] + L ⇌ [M(H2O)n-1L] + H2O
[M(H2O)n-1L] + L ⇌ [M(H2O)n-2L2] + H2O
[M(H2O)n-2L2] + L ⇌ [M(H2O)n-3L3] + H2O
Overall reaction is:
[M(H2O)n] + nL ⇌ [MLn] + nH2O
where L stands for the ligand and n refers to the number of molecules of a particular species. If we ignore the water molecules in the above equations, we can then write the equations and their corresponding equilibrium constants as follows:
M + L ⇌ ML $\ \bf{K}_1=\dfrac{[ML]}{[M][L]}$
ML + L ⇌ ML2 $\ \bf{K}_2=\dfrac{[\bf{ML}_2]}{[ML][L]}$
ML2 + L ⇌ ML3 $\ \bf{K}_3=\dfrac{[\bf{ML}_3]}{[\bf{ML}_2][L]}$
MLn-1 + L ⇌ MLn $\ \bf{K}_n=\dfrac{[\bf{ML}_n]}{[\bf{ML}_{n-1}][L]}$
The equilibrium constants, K1, K2, K3, ...., and Kn are known as the stepwise formation constants or stepwise stability constants or consecutive stability constants.
Note that the values of the stepwise stability constants decrease in the order:
K1 >K2 >K3 >...>Kn
because a ligand already coordinated to the metal ion tends to repel any incoming ligand of the same kind.
The product of the stepwise stability constants is known as the overall stability constant or cumulative stability constant and is designated as $β$, i.e.,
$β=K_1 \times K_2 \times K_3 \times ... \times K_n$
$\ \beta = \frac { [ \mathrm { ML } ] } { [ \mathrm { M } ] [ \mathrm { L } ] } \cdot \frac { \left[ \mathrm { ML } _ { 2 } \right] } { [ \mathrm { ML } ] [ \mathrm { L } ] } \cdot \frac { \left[ \mathrm { ML } _ { 3 } \right] } { \left[ \mathrm { ML } _ { 2 } \right] [ \mathrm { L } ] } \cdots \cdot \frac { \left[ \mathrm { ML } _ { \mathrm { n } } \right] } { \left[ \mathrm { ML } _ { n - 1 } \right] [ L ] }$
As previously mentioned, multidentate ligands which form five- or six-membered rings with central metal ions generally have unusually high stability. To be useful in a titration, the complexation reaction must occur rapidly compared with the rate of addition of the titrant. Complexes which form rapidly are called labile complexes and those which form slowly are called nonlabile or inert complexes. Generally, only titration reactions which form labile complexes are useful.
Consider the simple complexation of copper (II) ion by the unidentate ligand NH3 in water. The reaction between these two species is
$\ce{Cu^{2+} + NH3 <=> [CuNH3]^{2+}}$
(the H2O is omitted for simplicity). In aqueous solution the copper (II) ion is actually hydrated and NH3 replaces H2O. The equilibrium constant for this reaction, the stepwise formation constant K1, is expressed as:
$\ \mathrm { K } _ { 1 } = \frac { \left[ \left[ \mathrm { CuNH } _ { 3 } \right] ^ { 2 + } \right] } { \left[ \mathrm { Cu } ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] } = 2.0 \times 10 ^ { 4 }$
The equilibrium of the addition of a second ammonia molecule,
CuNH32+ + NH3 ⇌ [Cu(NH3)2 ]2+
is described by a second stepwise formation constant, $K_2$,
$\ \mathrm { K } _ { 2 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right)_2 \right] ^ { 2 + } \right] } { \left[ \left[ \mathrm { CuNH } _ { 3 } \right] ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] } = 5.0 \times 10 ^ { 3 }$
The overall process for the addition of the two NH3 molecules to a Cu2+ ion and the equilibrium constant for that reaction are given by the following:
Cu2+ + NH3 ⇌ [CuNH3 ]2+
$\ \mathrm { K } _ { 1 } = \frac { \left[ \left[ \mathrm { CuNH } _ { 3 } \right] ^ { 2 + } \right] } { \left[ \mathrm { Cu } ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] } = 2.0 \times 10 ^ { 4 }$
CuNH32+ + NH3 ⇌ [Cu(NH3)2]2+
$\ \mathrm { K } _ { 2 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 2 } \right] ^ { 2 + } \right] } { \left[ \left[ \mathrm { CuNH } _ { 3 } \right] ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] } = 5.0 \times 10 ^ { 3 }$
Cu2+ + 2NH3 ⇌ [Cu(NH3)2 ]2+
$\ \beta _ { 2 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 2 } \right] ^ { 2 + } \right] } { \left[ \mathrm { Cu } ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] ^ { 2 } } = \mathrm { K } _ { 1 } \times \mathrm { K } _ { 2 } = 1.0 \times 10 ^ { 8 }$
The formation constant β2 (= K1K2) is called an overall formation constant. Recall that the equilibrium constant of a reaction obtained by adding two other reactions is the product of the equilibrium constants of those two reactions; hence β2 = K1 × K2.
Similarly, the stepwise and overall formation constant expressions for the complexation of a third and fourth molecule of NH3 to copper (II) are given by the following:
[Cu(NH3)2 ]2+ + NH3 ⇌ [Cu(NH3)3]2+ $\ \mathrm { K } _ { 3 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 3 } \right] ^ { 2 + } \right] } { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 2 } \right] ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] }$
[Cu(NH3)3 ]2+ + NH3 ⇌ [Cu(NH3)4]2+ $\ \mathrm { K } _ { 4 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 4 } \right] ^ { 2 + } \right] } { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right) _ { 3 } \right] ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] }$
Cu2+ + 4NH3 ⇌ [Cu(NH3)4]2+
$\ \beta _ { 4 } = \frac { \left[ \left[ \mathrm { Cu } \left( \mathrm { NH } _ { 3 } \right)_4 \right] ^ { 2 + } \right] } { \left[ \mathrm { Cu } ^ { 2 + } \right] \left[ \mathrm { NH } _ { 3 } \right] ^ { 4 } } = \mathrm { K } _ { 1 } \times \mathrm { K } _ { 2 } \times \mathrm { K } _ { 3 } \times \mathrm { K } _ { 4 }$
The values of K3 and K4 are 1.0 x 103 and 2.0 x 102, respectively. Therefore, the values of β3 and β4 are 1.0 x 1011 and 2.0 x 1013, respectively.
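The multiplicative relationship between stepwise and overall formation constants is easy to verify numerically. The short Python sketch below is illustrative only (Python is not part of the original text, and the function name is my own); it reproduces the β values quoted above from the stepwise constants:

```python
# Overall formation constants (beta_n) for the Cu(II)-NH3 system,
# computed as running products of the stepwise constants K1..K4
# quoted in the text.

def overall_formation_constants(stepwise_Ks):
    """Return [beta1, beta2, ..., beta_n], where beta_n = K1*K2*...*Kn."""
    betas = []
    product = 1.0
    for k in stepwise_Ks:
        product *= k
        betas.append(product)
    return betas

K = [2.0e4, 5.0e3, 1.0e3, 2.0e2]   # K1..K4 from the text
beta1, beta2, beta3, beta4 = overall_formation_constants(K)
print(beta2, beta3, beta4)   # 1.0e8, 1.0e11, 2.0e13, as stated in the text
```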
The stepwise formation constants of the amine complexes of copper (II) are relatively close together. This means that over a wide range of NH3 concentrations, at least two (normally more) copper (II) amine complexes will exist in solution at the same time, at significant concentrations relative to each other. This is generally true of unidentate ligands and hence limits their use as titrants for the determination of metal ions (save for a few specialized cases, which are beyond the scope of this module).
A major requirement for titration is a single reaction that goes essentially to completion at the equivalence point. This requirement is generally not met by unidentate ligands because their formation constants are not very high.
Dissociation of Complexes
A given complex behaves as a weak electrolyte and dissociates to a small degree. The equilibrium constant for the dissociation of a complex is simply the inverse of its formation constant, Kform, and is known as the instability constant, Kins. For example, the complex ion Ag(NH3)2+ dissociates according to the equilibrium reaction:
[Ag(NH3)2 ]+ ⇌ Ag+ + 2NH3
and its instability constant is given by,
$\ \mathrm { K } _ { \mathrm { ins } } = \frac { 1 } { \mathrm { K } _ { \mathrm { form } } } = \frac { \left[ \mathrm { Ag } ^ { + } \right] \left[ \mathrm { NH } _ { 3 } \right] ^ { 2 } } { \left[ \mathrm { Ag } \left( \mathrm { NH } _ { 3 } \right) _ { 2 } ^ { + } \right] }$
In actual practice, the dissociation of a complex ion, just like the ionization of a polyprotic acid, occurs in steps as shown below:
Ag(NH3)2+ ⇌ Ag(NH3)+ + NH3 $\ \mathrm { K } _ { 1 } = \frac { \left[ \mathrm { Ag } \left( \mathrm { NH } _ { 3 } \right) ^ { + } \right] \left[ \mathrm { NH } _ { 3 } \right] } { \left[ \mathrm { Ag } \left( \mathrm { NH } _ { 3 } \right) _ { 2 } ^ { + } \right] }$
Ag(NH3)+ ⇌ Ag+ + NH3 $\ \mathrm { K } _ { 2 } = \frac { \left[ \mathrm { Ag } ^ { + } \right] \left[ \mathrm { NH } _ { 3 } \right] } { \left[ \mathrm { Ag } \left( \mathrm { NH } _ { 3 } \right) ^ { + } \right] }$
The overall instability constant, Kins = K1 x K2
Exercise $1$
Calculate the percent dissociation of a 0.10 M Ag(NH3)2+ solution if its instability constant, Kins = 6.3 x 10-8.
Answer
Application of Complex Equilibria in Complexation Titration:
The concept behind formation of complexes can be used as stated earlier (see section on importance of chelates), in quantitative analysis of either metal ions or other anions of interest.
An example illustrating the use of complexation titration is the determination of cyanide present in a solution via titration with silver nitrate, as described below.
When a solution of silver nitrate is added to a solution containing cyanide ion (alkali cyanide), a white precipitate is formed when the two species first come into contact with each other. On stirring, the precipitate re-dissolves owing to the formation of a stable alkali salt of the silver-cyanide complex, i.e.,
Ag+ + 2CN- ⇌ [Ag(CN)2]-
When the above reaction is complete (following attainment of the equivalence point), further addition of the silver nitrate solution yields an insoluble silver cyanoargentate (sometimes termed insoluble silver cyanide). The end point of the reaction is indicated by the formation of a permanent precipitate or turbidity. Such a titration experiment can be used to quantify the amount of cyanide present in a solution. Here cyanide is an example of a complexone, another term for a complexing agent.
Note that the formation of a single complex species, in contrast to the stepwise production of complex species, simplifies complexation titrations (i.e., complexometric titrations) and facilitates the detection of end points.
The chelate most commonly used for complexometric titrations is ethylenediaminetetraacetic acid (EDTA), an aminopolycarboxylic acid which is an excellent complexing agent. It is normally represented by either of the following two structures:
Its greatest advantage is that it is inexpensive and chemically inert, and it reacts with many metals with a simple stoichiometry:
EDTA4- + Mn+ → [M−EDTA](n-4)+
where n is the charge on the metal ion, M.
This complexing agent has four (4) ionizable acid groups with the following pKa (= -log Ka, where Ka is the acid dissociation constant) values: pKa1 = 2, pKa2 = 2.7, pKa3 = 6.6 and pKa4 = 10.3 at 20°C. These values suggest that the complexing agent behaves as a dicarboxylic acid with two strongly acidic groups, and that there are two ammonium protons of which the first ionizes in the pH region of about 6.3 and the second at a pH of about 11.5.
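The pH dependence of the availability of the fully deprotonated form can be quantified with the standard alpha-fraction expression for a tetraprotic acid. The Python sketch below is an illustration added here (not part of the original text; the function name is my own) and uses only the four pKa values quoted above:

```python
# Fraction of total EDTA present as the fully deprotonated Y4- species,
# as a function of pH, using the pKa values quoted in the text
# (pKa1 = 2.0, pKa2 = 2.7, pKa3 = 6.6, pKa4 = 10.3).

def alpha_y4(pH, pKa=(2.0, 2.7, 6.6, 10.3)):
    """Return alpha(Y4-), the fraction of EDTA present as Y4- at this pH."""
    Ka = [10.0 ** (-p) for p in pKa]
    h = 10.0 ** (-pH)
    # Denominator terms of the alpha expression:
    # [H]^4, [H]^3*Ka1, [H]^2*Ka1*Ka2, [H]*Ka1*Ka2*Ka3, Ka1*Ka2*Ka3*Ka4
    terms, product = [], 1.0
    for i in range(5):
        terms.append(h ** (4 - i) * product)
        if i < 4:
            product *= Ka[i]
    return terms[-1] / sum(terms)

# Lowering the pH sharply decreases the available Y4-, as the text states:
for pH in (4.0, 7.0, 10.0, 12.0):
    print(pH, alpha_y4(pH))
```

With these pKa values the fraction is roughly one third at pH 10 but vanishingly small at pH 4, which is the quantitative content of the statement that lowering the pH decreases the concentration of Y4-.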
If Mn+ is the metal ion and Y4- stands for the completely ionized form of EDTA, then the metal-EDTA complex can be represented as MY(n-4)+. The stability of such a complex often depends on a number of factors that need due consideration when investigating the application of EDTA titration experiments to the quantification of metal ions in solution. These factors affect the various multiple equilibria shown above, which in turn influence how a complexometric titration is carried out. The next section looks at two important factors that apply to all complexometric titrations.
Factors affecting Stability of Metal-EDTA complexes
• Effect of pH on stability of metal-EDTA complexes
The concentration of each of the complexes shown, say in the examples above, will depend on the pH of the solution. So, to have any properly defined equilibria, the pH of the solution mixture will have to be buffered. Equally important, the concentration of protons, H+, which would otherwise compete with the Mn+ ions, must be held rigorously constant. For instance, at low pH values protonation (the act of transferring or donating a proton, a hydrogen ion H+, to a species) of the Y4- species occurs, and the species HY3-, H2Y2-, H3Y- and even undissociated H4Y may well be present. (The abbreviations H4Y, H3Y-, H2Y2-, HY3-, and Y4- are often used to refer to EDTA and its ions.) Thus, lowering the pH of the solution will decrease the concentration of Y4-. On the other hand, increasing the pH of the solution will cause a tendency to form slightly soluble metallic hydroxides owing to the reaction below:
(MY)(n-4)+ + nOH- ⇌ M(OH)n + Y4-
The extent of hydrolysis of (MY)(n-4)+ (hydrolysis meaning the splitting of water in a reaction such as
-OAc + H2O ⇌ HOAc + OH-)
depends upon the characteristics of the metal ion and is largely controlled by the solubility product of the metallic hydroxide and the stability constant of the complex. The larger the stability constant of the complex, the less the tendency of the metal hydroxide to form.
• The effect of other complexing agents
If another complexing agent (other than Y4-) is also present in the solution, then the concentration of Mn+ in solution will be reduced owing to its ability to further complex with the interfering complexing agent. The relative proportions of the complexes will be dependent on the stability constants of the two types of metal- complexing agent complexes.
EDTA titration has traditionally been used to quantify calcium ions in water, in a process referred to as determining water hardness. Water hardness is customarily expressed as the concentration of calcium in the form of calcium carbonate.
In the following section, we shall use the Ca2+-EDTA titration to illustrate the method of complexometric titration. In this method, a colorimetric indicator is used [these are intensely coloured substances in at least one form (bound or unbound to the metal) that change colour when the metal-ion analyte binds to them].
The reaction between Ca2+ and EDTA proceeds according to the stoichiometry shown below:
Ca2+ + EDTA4− ⇌ CaEDTA2−
with a corresponding equilibrium constant for formation expressed as
$\ \mathrm { K } _ { \mathrm { f } } \left( \mathrm { Ca } - \mathrm { EDTA } ^ { 2 - } \right) = \frac { \left[ \mathrm { Ca } - \mathrm { EDTA } ^ { 2 - } \right] } { \left[ \mathrm { Ca } ^ { 2 + } \right] \left[ \mathrm { EDTA } ^ { 4 - } \right] }$
Note that, whereas the equilibrium constants for acids and bases are often tabulated as dissociation constants, equilibrium constants for complex formation are tabulated as formation constants.
Titration Curves
In the previous unit, we learnt that in the titration of a strong acid with a strong base, a plot of pH against the volume of the strong base added yields a point of inflection at the equivalence point. Similarly, in an EDTA versus metal ion titration, if pM (= -log[Mn+], where Mn+ signifies the metal ion whose concentration is required) is plotted against the volume of EDTA solution added, a point of inflection occurs at the equivalence point. The general shape of a titration curve obtained following the titration of 100 mL of a 0.1 mol/L solution of Ca2+ ion with a 0.1 mol/L Na-EDTA solution at two separate pH conditions is shown below.
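The general shape of such a pM curve can be reproduced with a simple model. In the Python sketch below (added for illustration; not part of the original text), the conditional formation constant kf = 1.0e10 is an assumed, hypothetical value chosen only to produce a visible inflection — as discussed above, the real conditional constant depends on the pH at which the titration is buffered:

```python
import math

def pCa(v_edta_mL, v0_mL=100.0, c_ca=0.1, c_edta=0.1, kf=1.0e10):
    """Approximate pCa (= -log10[Ca2+]) after adding v_edta_mL of titrant.

    kf is an assumed conditional formation constant for Ca-EDTA
    (illustrative value, not taken from the text).
    """
    n_ca = v0_mL / 1000.0 * c_ca            # initial moles of Ca2+
    n_edta = v_edta_mL / 1000.0 * c_edta    # moles of EDTA added
    v_tot = (v0_mL + v_edta_mL) / 1000.0    # total volume in litres
    if n_edta < n_ca:
        # Before equivalence: untitrated Ca2+ dominates.
        ca = (n_ca - n_edta) / v_tot
    elif n_edta == n_ca:
        # At equivalence: [Ca2+] comes from slight dissociation of CaY2-.
        ca = math.sqrt((n_ca / v_tot) / kf)
    else:
        # After equivalence: excess EDTA suppresses [Ca2+].
        excess = (n_edta - n_ca) / v_tot
        ca = (n_ca / v_tot) / (kf * excess)
    return -math.log10(ca)

for v in (50.0, 90.0, 100.0, 110.0, 150.0):
    print(v, round(pCa(v), 2))   # pCa jumps sharply near 100 mL
```

The abrupt rise in pM near 100 mL is the point of inflection referred to in the text; a larger conditional constant (i.e., a more favourable pH) gives a sharper jump.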
Chemistry of EDTA Titrations
EDTA is used to titrate many ions. For instance, EDTA has been used successfully over the years for the determination of water hardness (a measure of the total Ca2+ and Mg2+ ions in water). Water hardness is often conveniently determined by titration of total Ca2+ and Mg2+ ions with EDTA. In this subsection, we shall look at the chemistry of EDTA titrations in general.
In the presence of Eriochrome Black T (EBT) as an indicator, a minor difficulty is usually encountered. Note that metal complexes of EBT are generally red in colour. Therefore, if a colour change is to be observed with the EBT indicator, the pH of the solution must be between 7 and 11 so that the blue form of the indicator dominates when the titrant breaks up the red metal-EBT complex at the end point. At a pH of 10, the endpoint reaction is:
MIn- + Y4- + H+ → MY2- + HIn2-
(Red) (Blue)
EDTA is normally standardized against a solution of Ca2+ ions. In the early stages of the EDTA titration with EBT as indicator, the Ca2+-EBT complex does not dissociate appreciably due to a large excess of untitrated Ca2+ ions in solution (i.e., Ca2+ ions are plentiful in solution). As the titration progresses further and more Ca2+ ions are complexed with the titrant, the equilibrium position shifts to the left (i.e., the previously formed Ca-EBT complex, which is red in colour, begins to dissociate to give back more Ca2+ ions for complexation with the titrant EDTA), causing a gradual change in colour from the red of the Ca-EBT complex.
To avoid this problem of gradual colour change, a small amount of 1:1 EDTA:Mg is often added to the titration flask (this does not affect the stoichiometry of the titration reaction because the quantities of EDTA and Mg are equimolar), because the MgIn complex is sufficiently stable that it will not dissociate appreciably prior to attainment of the equivalence point. (Note that at pH 10, the Ca-EDTA complex is more stable than the Mg-EDTA complex and also, the MgIn complex is more stable than the CaIn complex.)
Note that when 1:1 EDTA:Mg is added to the Ca2+ analyte-containing indicator solution, the following reactions take place:
MgY2- + Ca2+ → Mg2+ + CaY2- (more stable)
Mg2+ + CaIn- → Ca2+ + MgIn- (more stable)
Explanation:
When EDTA titrant is added, it first binds all the Ca2+ as per the reaction shown below (Note that at pH 10, the predominant species of EDTA is HY3-):
Ca2+ + HY3- → CaY2- + H+
Up to and including the end point, EDTA replaces the less strongly bound Eriochrome Black T from the Mg2+-EBT complex (represented as MgIn-) as shown below:
MgIn- + HY3- → HIn2- + MgY2-
Types of EDTA Titrations
Important metal ion-EDTA titration experiments fall into the following categories:
A. Direct Titration
In direct titration, the solution containing the metal ion to be determined is buffered to the desired pH and titrated directly with a standard EDTA solution. It may be necessary to prevent precipitation of the hydroxide of the metal ion (or a basic salt) by the addition of an auxiliary complexing agent (sometimes called a masking agent, since these form stable complexes with potential interferences), such as tartrate or citrate.
At start (i.e., before addition of titrant):
Mn+ + Ind ⇌ [MInd]n+
where Ind is representing the indicator.
During titration:
Mn+ + [H2Y]2- ⇌ [MY]n-4 + 2H+
where [H2Y]2- is representing EDTA titrant
At Endpoint:
[MInd]n+ + [H2Y]2- ⇌ [MY]n-4 + Ind + 2H+
Note that the complexed ([MInd]n+) and free indicator (Ind) have different colours.
At the equivalence point the magnitude of the concentration of the metal ion being determined decreases abruptly. This equivalence point is generally determined by the change in color of a metal indicator that responds to changes in pM.
Example $1$:
Titration of 100 mL of a water sample at pH 13 in the presence of a calcium-specific indicator such as Eriochrome Black T required 14.0 mL of 0.02 M EDTA solution. Calculate the hardness of the water sample as CaCO3 in mg L-1.
Solution
Important information to note:
• Molecular weight of CaCO3 is 100 g/mol
• The stoichiometry for the reaction between Ca2+ and EDTA at pH 13 is given by:
Ca2+ + EDTA4- ⇌ CaEDTA2-
• Both Mg2+ and Ca2+ contribute to water hardness. Both metal ions have the same stoichiometry with EDTA, hence the titration includes the sum of Mg and Ca ions in the water sample.
• The 14.0 mL of 0.02 M EDTA contains ($\dfrac{14.0\ mL}{1000\ mL/L}\ x\ 0.02\ moles/L$) = 2.80 x 10-4 moles of EDTA.
• From the above 1:1 stoichiometry, the number of Ca2+ ions present in the 100 mL water sample (equivalent to the combined Ca2+ and Mg2+ ions responsible for water hardness) should be equal to the number of moles of the titrant, EDTA.
• Hence, number of moles of Ca2+ ions present in the 100 mL water sample = 2.80 x 10-4 moles.
• 2.80 x 10-4 moles of Ca2+ is equivalent to (2.80 x 10-4 moles) x 100 g mole-1 CaCO3 = 2.80 x 10-2 g = 2.80 x 10-2 g x 1000 mg g-1 = 28.0 mg of Ca (as CaCO3).
• Therefore, the hardness of the water is 28.0 mg present in the 100 mL of water $\ =\dfrac{28.0\ mg}{100\ mL\ /\ 1000\ mL\ L^{-1}}=280\ mg\ L^{-1}$ hardness.
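The arithmetic in this worked example can be wrapped in a small Python function (added here for illustration only; the function name and parameters are my own):

```python
# Water hardness (as mg/L CaCO3) from an EDTA titration, assuming the
# 1:1 Ca2+ (plus Mg2+) : EDTA stoichiometry used in the text.

def hardness_mg_per_L(v_edta_mL, c_edta_M, v_sample_mL, mw_caco3=100.0):
    """Return hardness in mg/L CaCO3 equivalents."""
    moles_edta = v_edta_mL / 1000.0 * c_edta_M   # = moles of Ca2+ (1:1)
    mg_caco3 = moles_edta * mw_caco3 * 1000.0    # grams -> milligrams
    return mg_caco3 / (v_sample_mL / 1000.0)     # divide by sample volume, L

print(hardness_mg_per_L(14.0, 0.02, 100.0))   # 280.0 mg/L, as in the text
```

The same function applies directly to the exercise that follows.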
Exercise $2$
A 50.00 mL water sample required 21.76 mL of 0.0200 mol/L EDTA to titrate water hardness at pH 13.0. What was the hardness in mg L-1 of CaCO3 ?
Answer
B. Back titration
This is for the determination of metal ions that cannot be titrated directly with EDTA, say in alkaline solution (e.g., Mn2+ and Al3+), due to precipitation of their hydroxides.
In back titration, a known excess amount of a standard EDTA solution is added to the solution of the analyte. The resulting solution mixture is then buffered to the desired pH, and the excess EDTA is titrated with a standard solution of a second metal ion. Examples of salts often used as the source of the second metal include solutions of ZnCl2, ZnSO4, MgCl2 or MgSO4. The end point is then detected with the aid of an appropriate metal indicator that responds to the second metal ion introduced in the back titration.
The following steps apply:
Mn+ + [H2Y]2- ⇌ [MY]n-4 + 2H+
where [H2Y]2- is representing EDTA titrant
Zn2+ + [H2Y]2- ⇌ [ZnY]2- + 2H+
At Endpoint:
Zn2+ + Ind ⇌ [ZnInd]2+
Back titration becomes necessary if analyte:
• precipitates in the absence of EDTA,
• reacts too slowly with EDTA, or
• blocks the indicator.
Example $2$:
A 3208 g sample of nickel ore was processed to remove interferences, and 50.00 mL of 0.1200 mol L-1 EDTA was added in excess to react with the Ni2+ ions in solution. The excess EDTA was titrated with 24.17 mL of 0.0755 mol L-1 standard Mg2+. Calculate the %Ni in the ore.
Solution
• The stoichiometry for the reaction between Ni2+ (or Mg2+) and EDTA can be represented as: Ni2+ + EDTA4− ⇌ NiEDTA2−
• The stoichiometry is 1:1, i.e., for every mole of EDTA present, an equivalent number of moles of Ni2+ is used up.
• Total number of moles of EDTA initially available in the 50.00 mL (=0.050 L) solution of 0.1200 mol L-1 EDTA = (0.1200 moles L-1 x 0.050 L) = 6.0 x 10-3 moles.
• Number of moles of the titrant Mg2+ ions present in the 24.17 mL (= 0.02417 L) of 0.0755 mol L-1 = (0.02417 L x 0.0755 moles L-1) = 1.82 x 10-3 moles
• Therefore, the moles of EDTA that must have reacted with the available Ni2+ ions originally present = (6.0 x 10-3 - 1.82 x 10-3) moles = 4.18 x 10-3 moles.
• Hence, number of moles of Ni2+ ions originally present in the 50.00 mL (= 0.050 L) solution = 4.18 x 10-3 moles.
• This is = 4.18 x 10-3 moles x 58.7 g/mol (atomic weight of Ni = 58.7 g/mol) = 0.245366 g
• Therefore, %Ni in the ore $\ =\dfrac{0.245366\ g}{3208\ g}$ x 100% = 0.00765%
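The back-titration arithmetic above can likewise be expressed as a short Python function (illustrative only; the names are my own). It reproduces the text's answer to within rounding of the intermediate mole values:

```python
# Percent Ni by back titration: excess EDTA is added, and the leftover
# EDTA is titrated with standard Mg2+ (1:1 stoichiometry throughout).

def percent_ni(sample_g, v_edta_mL, c_edta_M, v_mg_mL, c_mg_M, mw_ni=58.7):
    """Return %Ni in the sample from an excess-EDTA back titration."""
    n_edta_total = v_edta_mL / 1000.0 * c_edta_M   # all EDTA added
    n_mg = v_mg_mL / 1000.0 * c_mg_M               # = unreacted EDTA
    n_ni = n_edta_total - n_mg                     # EDTA consumed by Ni2+
    return n_ni * mw_ni / sample_g * 100.0

print(percent_ni(3208.0, 50.00, 0.1200, 24.17, 0.0755))  # ~0.00764 %
```

The small difference from the text's 0.00765 % comes from the text rounding the intermediate mole values before the final division.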
C. Replacement or substitution titration
Substitution titration may be used for the metal ions that do not react (or react unsatisfactorily) with a metal indicator (e.g., Ca2+, Pb2+, Hg2+, Fe3+), or for metal ions that form EDTA complexes that are more stable than those of other metals such as magnesium and calcium.
In this method, there is quantitative displacement of the second metal (Mg2+ or Zn2+) from its complex by the analyte metal. It is usually applied to the determination of metal ions that form weak complexes with the indicator, for which the colour change would otherwise be unclear and vague.
The metal cation Mn+ to be determined may be treated with the magnesium complex of EDTA leading to the following reaction:
Mn+ + MgY2- ⇌ (MY)(n-4)+ + Mg2+
The amount of magnesium ion set free is equivalent to the cation present and can be titrated with a standard solution of EDTA.
The following steps apply:
1. Replacement step:
Ca2+ + [MgY]2- ⇌ [CaY]2- + Mg2+
2. [Mg Ind]2+ + [H2Y]2- ⇌ [MgY]2- + Ind + 2H+
D. Alkalimetric Titration
When a solution of Na2H2Y is added to a solution containing metallic ions, complexes are formed with the liberation of two equivalents of hydrogen ion, i.e.,
Mn+ + H2Y2− ⇌ (MY)(n-4) + 2H+
The hydrogen ions thus set free can be titrated with a standard solution of sodium hydroxide using an acid-base indicator or a potentiometric end point. Alternatively, an iodate-iodide mixture is added as well as the EDTA solution, and the liberated iodine is titrated with a standard thiosulphate solution.
Note that the solution of the metal to be determined must be accurately neutralized before titration; this is often a difficult matter on account of the hydrolysis of many salts, and constitutes a weak feature of alkalimetric titration.
Example $3$:
A 10.00 mL solution of FeSO4 was added to 50.00 mL of 0.05 mol L-1 Na2H2Y. The H+ released required 18.03 mL of 0.080 mol L-1 NaOH for titration. What was the molar concentration of the FeSO4 solution?
Solution
• The stoichiometry for the reaction between Fe2+ and Na2H2Y can be represented as: Fe2+ + Na2H2Y → Na2FeY + 2H+. Here 1 mole of Fe2+ ions yields 2 moles of H+ ions.
• The stoichiometry for the acid-base neutralization reaction between the released H+ ions and NaOH in the titration process can be represented as:
H+ + OH- → H2O. This is a 1:1 reaction (i.e., for every 1 mole of H+ ions released, an equivalent number of moles of hydroxide is needed for neutralization).
• Therefore, number of moles of H+ released = number of moles of OH- ions present in the 18.03 mL (= 0.01803 L) of the 0.080 mol L-1 NaOH solution = 0.01803 L x 0.080 mol L-1 = 1.4424 x 10-3 moles.
• Since for every mole of Fe2+ ions consumed, twice as many moles of H+ ions are released, then the number of moles of Fe2+ ions consumed = $\ \dfrac{1}{2}$ x 1.4424x10-3 moles = 7.212×10-4 moles
• Therefore, 7.212 x 10-4 moles of Fe2+ ions were present in the original 10.00 mL (= 0.0100 L) solution of FeSO4. Thus, the molar concentration of the FeSO4 solution $\ =\dfrac{7.212\ x\ 10^{-4}\ moles}{0.0100\ L}$ = 0.07212 M
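This alkalimetric calculation can also be checked with a few lines of Python (illustrative only; the function name is my own):

```python
# Molarity of FeSO4 from an alkalimetric titration: each Fe2+ releases
# two H+ on complexation with Na2H2Y, and the H+ is titrated with NaOH.

def feso4_molarity(v_naoh_mL, c_naoh_M, v_fe_mL):
    """Return the molar concentration of the FeSO4 solution."""
    n_oh = v_naoh_mL / 1000.0 * c_naoh_M   # moles OH- = moles H+ released
    n_fe = n_oh / 2.0                      # 1 Fe2+ : 2 H+
    return n_fe / (v_fe_mL / 1000.0)

print(feso4_molarity(18.03, 0.080, 10.00))   # 0.07212 M, as in the text
```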
1. Explain or define each of the following terms: (a) Precision (b) Accuracy(c) Determinate error (d) Indeterminate error
2. Normally, we perform only a small number of replicate analyses. Explain!
3. A soil sample was analysed for its magnesium content and the results of the analysis are as tabulated below:
Experiment number    Mg Content (mg/g)
1                    19.6
2                    20.1
3                    20.4
4                    19.8
Calculate: (a) the mean concentration of magnesium in the soil sample (b) the median of the data set (c) the absolute deviation from the mean of the second data point (d) the standard deviation of the data set (e) the confidence interval for the data set at the 90% confidence interval.
4. If the true value for magnesium content in the soil of question 3 above, is 20.0 mg/g, calculate the relative error of the mean.
5. In an experiment that involved measurement of temperature, mass, volume and pressure of a gas, values were reported with their corresponding absolute errors as follows: mass = 0.3124 g ± 0.1 mg; temperature = 283 K ± 1 K; volume = 150.3 mL ± 0.2 mL; and pressure = 1257.4 Pa ± 0.1 Pa. (a) Which of the above measurements will dominate in determining the error in the gas constant, R? (b) Explain your choice.
6. To how many significant figures ought the result of the operation \(\ \dfrac{24\ x\ 6.32}{100.0} \) be reported and what is the calculated uncertainty?
7. If an analysis is conducted in triplicate and the average obtained in the measurements is 14.35 with a corresponding standard deviation of 0.37, express the uncertainty at the 95% confidence level. The t value at the 95% level from a table is 4.303.
8. Label each of the components in the following equations as acids, bases, conjugate acids or conjugate bases: (a) HCl + H2O → H3O+ + Cl- (b) NH3 + H2O → NH4+ + OH-
9. Write equations to show the dissociation of H2SO4, HNO3 and HCl.
10. Determine the base dissociation constant, Kb of a base whose solution of concentration 0.01 mole/L has a pH of 8.63.
11. Calculate the pH of a solution in which the hydronium (H3O+) ion concentration is: (a) 1.0 M, (b) 0.01 M, (c) 1.0 x 10-7 M (d) 3.5 x 10-9
12. Determine the pOH and the hydronium (H3O+) ion concentration of a 0.01M NaOH solution.
13. What is the hydroxide (OH-) ion concentration of a solution if the pH is: (a) 8.00 (b) 5.30, and (c) 4.68
14. Calculate the hydronium ion concentration if the pH is (a) 2.78 (b) 6.95 (c) 8.30
15. Write the equation for the Ka of the monoprotic benzoic acid, C6H5COOH whose sodium salt is C6H5COONa.
16. Periodic acid, HIO4, is a moderately strong acid. In a 0.10 M solution, the [H3O+] = 3.8 x 10-2 M. Calculate the Ka and pKa for periodic acid.
17. Is a solution whose [H3O+] = 4.6 x 10-8 M, acidic, neutral or basic? Explain your answer!
18. Define each of the following terminologies: (a) Standard solution (b) Primary standard (c) Standardized solution (d) Standardization (e) End point of a titration (f) Equivalence point of a titration (g) Titration error (h) Titration curve
19. Calculate the pH of a 0.10 M solution of Ca(OH)2. Hint: Ca(OH)2 is a strong base and dissociates to yield 2 moles of OH- ions (Ca(OH)2 → Ca2+ + 2OH-).
20. Characterize each of the following acids as monoprotic, diprotic, or triprotic and give the corresponding ionization reactions for each hydrogen ion for each acid: (a) CH3COOH (b) H2SO4 (c) H3PO4 (d) C6H4(COOH)2.
21. A 25.0 mL solution of KIO3 was placed in a titration flask. A 20.0 mL solution of KI and 10.0 mL of dilute sulfuric acid were added to the flask. The liberated iodine was titrated using a solution of Na2S2O3 of concentration 0.2 mol/L while using starch as the indicator. The end point was reached when 24.0 mL of the S2O32- solution had been run in.
1. This is an example of a substitution titration (an indirect titration). First the analyte IO3- is reacted with excess I- to produce stoichiometrically equivalent amount of I2.
IO3- + I- → I2 (not balanced)
The liberated I2 is titrated with standard S2O32-
S2O32- + I2 → S4O62- + 2I- (not balanced)
Write the two balanced redox reactions responsible for the determination of IO3-
2. What was the concentration of the original IO3- solution?
3. Which indicator is the most suitable to use in this titration?
22. Assign oxidation numbers to each atom in the following species: (a) NO3- (b) CaHAsO4
23. Determine whether the following changes are oxidation, reduction, or neither and show the oxidation number change that proves your point. (a) SO32- to SO42- (b) Cl2 to ClO3- (c) N2O4 to NH3 (d) NO to NO3- (e) PbO to PbCl42-
24. Consider the following unbalanced oxidation/reduction reaction for the next two questions: Hg2+ (aq) + N2O4 (aq) → 6 NO3- (aq) + Hg22+ (aq)
1. What is being oxidized in the reaction?
2. What is being reduced in the reaction?
25. Find the oxidation number of N in NO3- and Hg in Hg2+.
26. Give the complete balanced equation in acidic media for the reaction in question 24 above.
27. Identify the reducing reagent in the following redox reaction,
Hg2+ (aq) + Cu(s) → Cu2+ (aq) + Hg(l)
28. Select the spectator ions in the following reaction:
Pb(NO3)2 (aq) + 2NaCl(aq) → PbCl2(s) + 2NaNO3 (aq) (a) Na+ (aq), NO3- (aq) (b) Pb2+ (aq), NO3- (aq) (c) Na+ (aq), Cl- (aq) (d) Pb2+ (aq) ,Cl- (aq), Na+ (aq), NO3- (aq) (e) Pb2+ (aq), Cl- (aq).
29. Permanganate ion is converted to manganese (II) ion by oxalic acid. What is the oxidizing agent? What is the reducing agent? Balance the reaction:
MnO4- (aq) + H2C2O4 (aq) → Mn2+ (aq) + CO2 (g) (in acidic, aqueous solution).
30. Balance the following reaction that occurs in acidic medium using the ion-electron method: Fe2+ + MnO4- → Fe3+ + Mn2+
Unit I: Sampling and Statistical Analysis of data
At the end of this unit the student should be able to:
- Define and use the concept of sampling in quantitative chemical analy- sis.
- Define and distinguish the various types of errors encountered in quanti- tative experimental measurements.
- Explain the difference between accuracy and precision.
- Perform basic statistical analysis of experimental data involving descriptive statistics.
UNIT II: Fundamentals of volumetric chemical analysis, Acid-Base Reactions & Titrations
At the end of this unit the student should be able to:
- Identify acids and bases using the Bronsted-Lowry and Lewis concepts of acids and bases.
- Use acid-base theories to distinguish between strong and weak acid/ base.
- Use the concept of diprotic and polyprotic acid equilibria to do related calculations.
- Explain the basic concepts of acid-base equilibria and carry out associated calculations.
- Apply the general principles of chemical equilibrium to precipitation, acid-base, complexation, reactions and titrations.
- Define and apply the principles and steps involved in acid-base equilibria and solubility equilibria.
- Evaluate the pH in the course of acid-base titrations.
Unit III: Redox reactions and titrations
At the end of the unit the student should be able to:
- Define and describe the concept of redox reactions, with examples.
- Write balanced net ionic reactions for Oxidation/Reduction equations.
- Carry out Redox-type titration experiments and associated calculations.
Unit IV: Complex-ion equilibria and complexometric titrations
At the end of the unit the student will be able to:
- Define and understand the use of terminologies relevant in complex ion equilibria.
- Describe and explain the fundamental principles of complex equilibria and stepwise equilibrium reactions.
- Apply the principles of chemical equilibria to complexometric titrations.
- Carry out complexometric titrations and related calculations.
Unit Number
Learning Objective(s)
UNIT I: Sampling and Statistical Analysis of data
- Explain the notion of Sampling as an integral part of Analytical Methods of Analysis.
- Identify and describe the sources of sampling error.
- Have a knowledge of some important basic principles of error analysis.
- Identify and discuss the various types and sources of experimental errors.
- Explain and use the concept of significant figures.
- Define and distinguish betweenabsolute vs. relative error; random vs. systematic error;
- Describe the relationship between error and probability.
- Apply simple statistics and error analysis to determine the reliability of analytical chemical procedures.
- Clearly and correctly report measurements and the uncertainties in them.
UNIT II: Fundamentals of Volumetric Chemical Analysis, Acid/Base Equilibria & Titrations
- Perform stoichiometry & titration calculations.
- Use equilibrium constants for acid base reactions.
- Distinguish between equivalence and end point, blank and back titrations.
- Have a working knowledge of endpoint detection and its significance.
- Explain weak acid/base dissociations.
- Explain and sketch precisely the titration curves (pH profiles) of different types of acid-base reactions.
- Explain the concept of diprotic acid/ base neutralizations.
- Identify some common acid-base indicators and be able to specify which ones to use for various titrations.
UNIT III: Redox Reactions and Titrations
- Define Oxidation/Reduction Reactions, Oxidation and Reduction, and Oxidation numbers.
- Define Oxidizing and Reducing agents, with Examples.
- Assign oxidation numbers based on the Rules of assignment.
- Know the steps needed to balance Oxidation/Reduction reactions in acidic and basic solutions.
- Carry out oxidation/Reduction titration experiments and related calculations.
UNIT IV: Complex-ion equilibria and complexometric titrations
- Understand the concept of stepwise Equilibrium processes.
- Define and discuss Polyprotic acidequilibria and titrations.
- Understand the concept of Complexometric Titrations and their applications.
The fields of science and technology rely very much on measurements of physical and chemical properties of materials, for these are central in defining the nature of substances or quantifying them. Some of the concepts involved in measurements, such as mass, volume and concentration, you will have come across in the earlier modules. There are others you will be meeting for the first time in this module, which have to do with determining the identity of a substance based on quantitative analysis. The set of pre-assessment questions below is meant to help you assess your level of mastery of the measurement concepts most often used by chemists to determine a specific chemical property. Some questions which will be new to you have been included. Such questions are meant to give you some idea of what to expect in this module, which deals with various aspects of volumetric analysis.
Pre-assessment
For each of the following test items, select the option that you think is the correct one.
1. Each response below lists an ion by name and by chemical formula. Also, each ion is classified as monatomic or polyatomic and as a cation or anion. Which particular option is incorrect?
1. carbonate, CO3 2-, polyatomic anion.
2. ammonium, NH4 +, polyatomic cation.
3. magnesium, Mg2+, monatomic cation.
4. hydroxide, OH-, monatomic anion.
5. sulfite, SO3 2-, polyatomic anion.
2. Equal numbers of moles of different compounds
1. May or may not have the same number of atoms
2. Have the same number of molecules
3. Have equal weights
4. Have the same number of atoms
5. A & B
3. What do you understand by the term “molecular mass of a molecule”?
1. The summation of atomic masses, in grams, of all the atoms in the molecule.
2. The mass, in grams of the molecule.
3. The gram molecular weight of a substance.
4. The mass, in grams of a substance.
5. The mass, in grams of the heaviest atom of the molecule
4. Calculate the molecular mass of NaCl.
1. 58
2. 23
3. 35
4. 28
5. 51
5. All of the substances listed below are fertilizers that contribute nitrogen to the soil. Which of these is the richest source of nitrogen on a mass percentage basis?
1. Urea, (NH2)2CO
2. Ammonium nitrate, NH4NO3
3. Guanidine, HNC(NH2)2
4. Ammonia, NH3
5. Potassium nitrate, KNO3
6. What do 2.4 moles of CO and CO2 have in common?
1. same mass
2. contain the same mass of carbon and oxygen
3. contain the same mass of oxygen
4. contain the same number of molecules
5. contain the same number of total atoms
7. Which of the following equations best represents the reaction shown in the diagram above? This can be answered by simple elimination.
1. 8A + 4B → C + D
2. 4A + 8B → 4C + 4D
3. 2A + B →C + D
4. 4A + 2B → 4C + 4D
5. 2A + 4B → C + D
8. Which of the following can be used to measure a more accurate volume of a liquid?
1. Beaker
2. Graduated cylinder
3. Burette
4. Bottle
5. All
9. How can one prepare 750 mL of a 0.5 M H2SO4 solution from a 2.5 M H2SO4 stock solution?
1. by mixing 250 mL of the stock solution with 500 mL of water
2. by mixing 150 mL of the stock solution with 600 mL of water
3. by mixing 600 mL of the stock solution with 150 mL of water
4. by mixing 375 mL of the stock solution with 375 mL of water
5. none
10. If the concentration of H+ ions in an aqueous solution is 2.5 x 10^-4 M, then
1. its pH is less than 7
2. the solution is acidic
3. its pOH is greater than 7
4. its OH- concentration is less than the concentration of OH- in neutral solution
5. All
11. What do you understand by the term “Quantitative analysis”?
1. Involves determining the individual constituents of a given sample.
2. Involves the determination of the relative or absolute amount of an analyte in a given sample
3. Involves the addition of measured volume of a known concentration of reagent into a solution of the substance to be determined (analyte).
4. Involves determining the level of purity of an analyte.
5. Involves determining the quality of a sample.
12. Which of the following statements does not appropriately describe a stage in a titration process?
1. Before equivalence point is reached, the volume of a reagent added to the analyte does not make the reaction complete (when there is excess of analyte).
2. At equivalence point the reagent added is the amount that is chemically equivalent to the amount of substance being determined (analyte).
3. After equivalence point, the amount of reagent added is higher than the amount of substance being determined.
4. After equivalence point, the amount of reagent added cannot be higher than the amount of substance being determined.
5. At the beginning of a titration, the number of moles of the reagent added is always less than that of the analyte present (when there is excess analyte).
13. Which of the following is correct about titration of a polyprotic weak acid such as orthophosphoric acid (H3PO4) with a strong base such as NaOH?
1. H3PO4 titration curve has only one equivalence point.
2. H3PO4 titration curve has only two equivalence points.
3. H3PO4 titration curve has only three equivalence points.
4. H3PO4 titration curve does not have any equivalence point.
5. H3PO4 titration curve has only seven equivalence points.
14. Which of the following reactions is not a redox reaction?
1. H2SO4 + BaCl2 → BaSO4 + 2HCl
2. CuSO4 + Zn → ZnSO4 + Cu
3. 2NaI + Cl2 → 2NaCl + I2
4. C + O2 → CO2
5. None
Answer questions 15 to 18 based on the following chemical equation:
CuSO4 + Zn → ZnSO4 + Cu
15. Which of the following is a reducing agent if the reaction is a redox reaction?
1. CuSO4
2. Zn
3. ZnSO4
4. Cu
5. the reaction is not a redox reaction
16. Which of the following species gained electrons?
1. CuSO4
2. Zn
3. ZnSO4
4. Cu
5. None
17. What is the number of electrons gained per mole of the oxidizing agent?
1. 1 mole
2. 2 moles
3. 3 moles
4. 4 moles
5. 0 moles
18. What is the number of electrons lost per mole of the reducing agent?
1. 1 mole
2. 2 moles
3. 3 moles
4. 4 moles
5. 0 moles
19. Which one of the following is a redox reaction?
1. H+ (aq) + OH- (aq) → H2O (l)
2. 2 KBr (aq) + Pb(NO3)2 (aq) → 2 KNO3 (aq) + PbBr2 (s)
3. CaBr2 (aq) + H2SO4 (aq) → CaSO4 (s) + 2 HBr (g)
4. 2 Al (s) + 3 H2SO4 (aq) → Al2(SO4)3 (aq) + 3H2 (g)
5. CO3 2- (aq) + HSO4 - (aq) → HCO3 - (aq) + SO4 2- (aq)
20. In the reaction, Zn(s) + 2HCl (aq) → ZnCl2 (aq) + H2 (g), what is the oxidation number of H2?
1. +1
2. -1
3. 0
4. +2
5. -2
9.2: Answer Key
1. D
2. D
3. A
4. A
5. D
6. D
7. C
8. C
9. B
10. E
11. B
12. D
13. C
14. E
15. B
16. A
17. B
18. B
19. D
20. C | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Volumetric_Chemical_Analysis_%28Shiundu%29/Pre-assessment/9.1%3A_Questions.txt |
Glossary
Accuracy: this is the closeness of a result to the correct answer.
Acid: A substance that yields hydrogen ions (H+) when dissolved in water.
Base: A substance that yields hydroxide ions (OH-) when dissolved in water.
Base ionization constant (Kb): The equilibrium constant for the base ionization.
Bronsted acid: A substance that is able to donate a proton.
Bronsted base: A substance that is capable of accepting a proton.
Chemical equation: An equation that uses chemical symbols to show what happens during a chemical reaction.
Chemical equilibrium: A state in which the rates of the forward and reverse reactions are equal.
Chemical reaction:A process in which a substance (or substances) is changed into one or more new substances.
Complex ion: An ion containing a central metal cation bonded to one or more molecules or ions.
Common ion effect: The shift in equilibrium caused by the addition of a compound having an ion in common with the dissolved substances.
Determinate errors: these are mistakes, which are often referred to as “bias”. In theory, these could be eliminated by careful technique.
Diprotic acid: Each unit of the acid yields two hydrogen ions upon ionization.
End point: The pH at which the indicator changes colour.
Equilibrium constant (Keq): A number equal to the ratio of the equilibrium concentrations of products to the equilibrium concentrations of reactants, each raised to the power of its stoichiometric coefficient.
Equivalence point: The point at which the acid has completely reacted with or been neutralized by the base.
Homogeneous sample: sample is the same throughout.
Hydronium ion:The hydrated proton, H3O+.
Indeterminate errors: these are errors caused by the need to make estimates in the last figure of a measurement, by noise present in instruments, etc. Such errors can be reduced, but never entirely eliminated.
Law of mass action: For a reversible reaction at equilibrium and at a constant temperature, a certain ratio of reactant and product concentrations has a constant value, Keq (the equilibrium constant).
Lewis acid:A substance that can accept a pair of electrons.
Lewis base: A substance that can donate a pair of electrons.
Monoprotic acid: Each unit of the acid yields one hydrogen ion upon ionization.
Neutralization reaction: A reaction between an acid and a base.
Oxidation reaction: The half-reaction that involves the loss of electrons.
Oxidation-reduction reaction: A reaction that involves the transfer of electron(s) or the change in the oxidation state of reactants.
Oxidizing agent: A substance that can accept electrons from another substance or increase the oxidation numbers in another substance.
pH: The negative logarithm of the hydrogen ion concentration.
Precision: the reproducibility of a data set; a measure of the ability to obtain the same number (not necessarily the correct number) in every trial.
Redox reaction: A reaction in which there is either a transfer of electrons or a change in the oxidation numbers of the substances taking part in the reaction.
Reducing agent: A substance that can donate electrons to another substance or decrease the oxidation numbers in another substance.
Representative sample: a sample whose content is the same overall as the material from which it is taken from.
Sampling: this is used to describe the process involved in finding a reasonable amount of material that is representative of the whole.
Significant figures: The number of meaningful digits in a measured or calculated quantity.
Solution: A homogeneous mixture of two or more substances.
Standard solution: A solution of accurately known concentration.
Stoichiometry: The quantitative study of reactants and products in a chemical reaction.
Know why some people's stomachs burn after they swallow an aspirin tablet? Or why a swig of grapefruit juice with breakfast can raise blood levels of some medicines in certain people? Understanding some of the basics of the science of pharmacology will help answer these questions, and many more, about your body and the medicines you take.
So, then, what's pharmacology? Despite the field's long, rich history and importance to human health, few people know much about this biomedical science. One pharmacologist joked that when she was asked what she did for a living, her reply prompted an unexpected question: "Isn't 'farm ecology' the study of how livestock impact the environment?"
Of course, this booklet isn't about livestock or agriculture. Rather, it's about a field of science that studies how the body reacts to medicines and how medicines affect the body. Pharmacology is often confused with pharmacy, a separate discipline in the health sciences that deals with preparing and dispensing medicines.
For thousands of years, people have looked in nature to find chemicals to treat their symptoms. Ancient healers had little understanding of how various elixirs worked their magic, but we know much more today. Some pharmacologists study how our bodies work, while others study the chemical properties of medicines. Others investigate the physical and behavioral effects medicines have on the body. Pharmacology researchers study drugs used to treat diseases, as well as drugs of abuse. Since medicines work in so many different ways in so many different organs of the body, pharmacology research touches just about every area of biomedicine.
01: ABCs of Pharmacology
Did you know that, in some people, a single glass of grapefruit juice can alter levels of drugs used to treat allergies, heart disease, and infections? Fifteen years ago, pharmacologists discovered this "grapefruit juice effect" by luck, after giving volunteers grapefruit juice to mask the taste of a medicine. Nearly a decade later, researchers figured out that grapefruit juice affects medicines by lowering levels of a drug-metabolizing enzyme, called CYP3A4, in the intestines.
More recently, Paul B. Watkins of the University of North Carolina at Chapel Hill discovered that other juices like Seville (sour) orange juice—but not regular orange juice—have the same effect on the body's handling of medicines. Each of 10 people who volunteered for Watkins' juice-medicine study took a standard dose of Plendil® (a drug used to treat high blood pressure) diluted in grapefruit juice, sour orange juice, or plain orange juice. The researchers measured blood levels of Plendil at various times afterward. The team observed that both grapefruit juice and sour orange juice increased blood levels of Plendil, as if the people had received a higher dose. Regular orange juice had no effect. Watkins and his coworkers have found that a chemical common to grapefruit and sour oranges, dihydroxybergamottin, is likely the molecular culprit. Another similar molecule in these fruits, bergamottin, also contributes to the effect.
Many scientists are drawn to pharmacology because of its direct application to the practice of medicine. Pharmacologists study the actions of drugs in the intestinal tract, the brain, the muscles, and the liver—just a few of the most common areas where drugs travel during their stay in the body. Of course, all of our organs are constructed form cells and inside all of our cells are genes. Many pharmacologists study how medicines interact with cell parts and genes, which in turn influences how cells behave. Because pharmacology touches on such diverse areas, pharmacologists must be broadly trained in biology, chemistry, and more applied areas of medicine, such as anatomy and physiology. | textbooks/chem/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.01%3A_A_Drug%27s_Life.txt |
How does aspirin zap a headache? What happens after you rub some cortisone cream on a patch of poison ivy-induced rash on your arm? How do decongestant medicines such as Sudafed® dry up your nasal passages when you have a cold? As medicines find their way to their "job sites" in the body, hundreds of things happen along the way. One action triggers another, and medicines work to either mask a symptom, like a stuffy nose, or fix a problem, like a bacterial infection.
A Model for Success
Turning a molecule into a good medicine is neither easy nor cheap. The Center for the Study of Drug Development at Tufts University in Boston estimates that it takes over $800 million and a dozen years to sift a few promising drugs from about 5,000 failures. Of this small handful of candidate drugs, only one will survive the rigors of clinical testing and end up on pharmacy shelves.
That's a huge investment for what may seem a very small gain and, in part, it explains the high cost of many prescription drugs. Sometimes, problems do not show up until after a drug reaches the market and many people begin taking the drug routinely. These problems range from irritating side effects, such as dry mouth or drowsiness, to life-threatening problems like serious bleeding or blood clots. The outlook might be brighter if pharmaceutical scientists could do a better job of predicting how potential drugs will act in the body (a science called pharmacodynamics), as well as what side effects the drugs might cause.
One approach that can help is computer modeling of a drug's properties. Computer modeling can help scientists at pharmaceutical and biotechnology companies filter out, and abandon early on, any candidate drugs that are likely to behave badly in the body. This can save significant amounts of time and money.
Computer software can examine the atom-by-atom structure of a molecule and determine how durable the chemical is likely to be inside a body's various chemical neighborhoods. Will the molecule break down easily? How well will the small intestines take it in? Does it dissolve easily in the watery environment of the fluids that course through the human body? Will the drug be able to penetrate the blood-brain barrier? Computer tools not only drive up the success rate for finding candidate drugs, they can also lead to the development of better medicines with fewer safety concerns.
A drug's life in the body. Medicines taken by mouth (oral) pass through the liver before they are absorbed into the bloodstream. Other forms of drug administration bypass the liver, entering the blood directly.
Drugs enter different layers of skin via intramuscular, subcutaneous, or transdermal delivery methods.
Scientists have names for the four basic stages of a medicine's life in the body: absorption, distribution, metabolism, and excretion. The entire process is sometimes abbreviated ADME. The first stage is absorption. Medicines can enter the body in many different ways, and they are absorbed when they travel from the site of administration into the body's circulation. A few of the most common ways to administer drugs are oral (swallowing an aspirin tablet), intramuscular (getting a flu shot in an arm muscle), subcutaneous (injecting insulin just under the skin), intravenous (receiving chemotherapy through a vein), or transdermal (wearing a skin patch). A drug faces its biggest hurdles during absorption. Medicines taken by mouth are shuttled via a special blood vessel leading from the digestive tract to the liver, where a large amount may be destroyed by metabolic enzymes in the so-called "first-pass effect." Other routes of drug administration bypass the liver, entering the bloodstream directly or via the skin or lungs.
Once a drug gets absorbed, the next stage is distribution. Most often, the bloodstream carries medicines throughout the body. During this step, side effects can occur when a drug has an effect in an organ other than the target organ. For a pain reliever, the target organ might be a sore muscle in the leg; irritation of the stomach could be a side effect. Many factors influence distribution, such as the presence of protein and fat molecules in the blood that can put drug molecules out of commission by grabbing onto them.
Drugs destined for the central nervous system (the brain and spinal cord) face an enormous hurdle: a nearly impenetrable barricade called the blood-brain barrier. This blockade is built from a tightly woven mesh of capillaries cemented together to protect the brain from potentially dangerous substances such as poisons or viruses. Yet pharmacologists have devised various ways to sneak some drugs past this barrier.
After a medicine has been distributed throughout the body and has done its job, the drug is broken down, or metabolized. The breaking down of a drug molecule usually involves two steps that take place mostly in the body's chemical processing plant, the liver. The liver is a site of continuous and frenzied, yet carefully controlled, activity. Everything that enters the bloodstream—whether swallowed, injected, inhaled, absorbed through the skin, or produced by the body itself—is carried to this largest internal organ. There, substances are chemically pummeled, twisted, cut apart, stuck together, and transformed.
Medicines and Your Genes
How you respond to a drug may be quite different from how your neighbor does. Why is that? Despite the fact that you might be about the same age and size, you probably eat different foods, get different amounts of exercise, and have different medical histories. But your genes, which are different from those of anyone else in the world, are really what make you unique. In part, your genes give you many obvious things, such as your looks, your mannerisms, and other characteristics that make you who you are. Your genes can also affect how you respond to the medicines you take. Your genetic code instructs your body how to make hundreds of thousands of different molecules called proteins. Some proteins determine hair color, and some of them are enzymes that process, or metabolize, food or medicines. Slightly different, but normal, variations in the human genetic code can yield proteins that work better or worse when they are metabolizing many different types of drugs and other substances. Scientists use the term pharmacogenetics to describe research on the link between genes and drug response.
One important group of proteins whose genetic code varies widely among people are "sulfation" enzymes, which perform chemical reactions in your body to make molecules more water-soluble, so they can be quickly excreted in the urine. Sulfation enzymes metabolize many drugs, but they also work on natural body molecules, such as estrogen. Differences in the genetic code for sulfation enzymes can significantly alter blood levels of the many different kinds of substances metabolized by these enzymes. The same genetic differences may also put some people at risk for developing certain types of cancers whose growth is fueled by hormones like estrogen.
Pharmacogeneticist Rebecca Blanchard of Fox Chase Cancer Center in Philadelphia has discovered that people of different ethnic backgrounds have slightly different "spellings" of the genes that make sulfation enzymes. Lab tests revealed that sulfation enzymes manufactured from genes with different spellings metabolize drugs and estrogens at different rates. Blanchard and her coworkers are planning to work with scientists developing new drugs to include pharmacogenetic testing in the early phases of screening new medicines.
The biotransformations that take place in the liver are performed by the body's busiest proteins, its enzymes. Every one of your cells has a variety of enzymes, drawn from a repertoire of hundreds of thousands. Each enzyme specializes in a particular job. Some break molecules apart, while others link small molecules into long chains. With drugs, the first step is usually to make the substance easier to get rid of in urine.
Many of the products of enzymatic break-down, which are called metabolites, are less chemically active than the original molecule. For this reason, scientists refer to the liver as a "detoxifying" organ. Occasionally, however, drug metabolites can have chemical activities of their own—sometimes as powerful as those of the original drug. When prescribing certain drugs, doctors must take into account these added effects. Once liver enzymes are finished working on a medicine, the now-inactive drug undergoes the final stage of its time in the body, excretion, as it exits via the urine or feces.
Perfect Timing
Pharmacokinetics is an aspect of pharmacology that deals with the absorption, distribution, and excretion of drugs. Because they are following drug actions in the body, researchers who specialize in pharmacokinetics must also pay attention to an additional dimension: time.
Pharmacokinetics research uses the tools of mathematics. Although sophisticated imaging methods can help track medicines as they travel through the body, scientists usually cannot actually see where a drug is going. To compensate, they often use mathematical models and precise measures of body fluids, such as blood and urine, to determine where a drug goes and how much of the drug or a break-down product remains after the body processes it. Other sentinels, such as blood levels of liver enzymes, can help predict how much of a drug is going to be absorbed.
Studying pharmacokinetics also uses chemistry, since the interactions between drug and body molecules are really just a series of chemical reactions. Understanding the chemical encounters between drugs and biological environments, such as the bloodstream and the oily surfaces of cells, is necessary to predict how much of a drug will be taken in by the body. This concept, broadly termed bioavailability, is a critical feature that chemists and pharmaceutical scientists keep in mind when designing and packaging medicines. No matter how well a drug works in a laboratory simulation, the drug is not useful if it can't make it to its site of action. | textbooks/chem/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.02%3A_A_Drug%27s_Life.txt |
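As a sketch of the kind of mathematical model pharmacokineticists use, the snippet below implements the standard one-compartment intravenous-bolus model, C(t) = (dose/Vd)·e^(−kt), together with the half-life relation t½ = ln 2 / k. The dose, volume of distribution, and elimination rate constant are hypothetical illustrative values, not data for any particular drug.

```python
import math

def concentration(dose_mg, vd_l, k_per_h, t_h):
    """Plasma concentration (mg/L) at time t for a one-compartment
    IV bolus model: C(t) = (dose / Vd) * exp(-k * t)."""
    return (dose_mg / vd_l) * math.exp(-k_per_h * t_h)

def half_life(k_per_h):
    """Elimination half-life: t1/2 = ln(2) / k."""
    return math.log(2) / k_per_h

# Hypothetical numbers: 500 mg IV dose, 40 L volume of distribution,
# elimination rate constant 0.1 per hour.
c0 = concentration(500, 40, 0.1, 0)   # initial concentration, 12.5 mg/L
c8 = concentration(500, 40, 0.1, 8)   # concentration 8 hours later
print(f"C(0) = {c0:.2f} mg/L, C(8 h) = {c8:.2f} mg/L")
print(f"half-life = {half_life(0.1):.1f} h")
```

Fitting a curve like this to measured blood or urine levels at several time points is one way researchers infer where a drug goes and how fast the body clears it without ever "seeing" the drug directly.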
While it may seem obvious now, scientists did not always know that drugs have specific molecular targets in the body. In the mid-1880s, the French physiologist Claude Bernard made a crucial discovery that steered researchers toward understanding this principle. By figuring out how a chemical called curare works, Bernard pointed to the nervous system as a new focus for pharmacology. Curare—a plant extract that paralyzes muscles—had been used for centuries by Native Americans in South America to poison the tips of arrows. Bernard discovered that curare causes paralysis by blocking chemical signals between nerve and muscle cells. His findings demonstrated that chemicals can carry messages between nerve cells and other types of cells.
Since Bernard's experiments with curare, researchers have discovered many nervous system messengers, now called neurotransmitters. These chemical messengers are called agonists, a generic term pharmacologists use to indicate that a molecule triggers some sort of response when encountering a cell (such as muscle contraction or hormone release).
Nerve cells use a chemical messenger called acetylcholine (balls) to tell muscle cells to contract. Curare (half circles) paralyzes muscles by blocking acetylcholine from attaching to its muscle cell receptors.
The Right Dose
One of the most important principles of pharmacology, and of much of research in general, is a concept called "dose-response." Just as the term implies, this notion refers to the relationship between some effect—let's say, lowering of blood pressure—and the amount of a drug. Scientists care a lot about dose-response data because these mathematical relationships signify that a medicine is working according to a specific interaction between different molecules in the body.
Sometimes, it takes years to figure out exactly which molecules are working together, but when testing a potential medicine, researchers must first show that three things are true in an experiment. First, if the drug isn't there, you don't get any effect. In our example, that means no change in blood pressure. Second, adding more of the drug (up to a certain point) causes an incremental change in effect (lower blood pressure with more drug). Third, taking the drug away (or masking its action with a molecule that blocks the drug) means there is no effect. Scientists most often plot data from dose-response experiments on a graph. A typical "dose-response curve" demonstrates the effects of what happens (the vertical Y-axis) when more and more drug is added to the experiment (the horizontal X-axis).
Dose-response curves determine how much of a drug (X-axis) causes a particular effect, or a side effect, in the body (Y-axis).
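One common mathematical form for a dose-response curve is the Hill equation, E = Emax·D^n / (EC50^n + D^n). The sketch below uses hypothetical parameter values to show the three properties a dose-response experiment must demonstrate: no drug gives no effect, more drug gives incrementally more effect, and the effect eventually levels off near the maximum.

```python
def hill_response(dose, ec50, hill_n=1.0, e_max=100.0):
    """Percent of maximal effect predicted by the Hill equation:
    E = Emax * dose^n / (EC50^n + dose^n)."""
    return e_max * dose**hill_n / (ec50**hill_n + dose**hill_n)

# Hypothetical EC50 of 10 (arbitrary concentration units).
print(hill_response(0, ec50=10))     # 0.0  -- no drug, no effect
print(hill_response(10, ec50=10))    # 50.0 -- half-maximal at dose == EC50
print(hill_response(1000, ec50=10))  # approaches Emax (100)
```

The EC50 (the dose giving half the maximal effect) is the single number pharmacologists most often read off a dose-response curve when comparing the potency of different drugs.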
One of the first neurotransmitters identified was acetylcholine, which causes muscle contraction. Curare works by tricking a cell into thinking it is acetylcholine. By fitting—not quite as well, but nevertheless fitting—into receiving molecules called receptors on a muscle cell, curare prevents acetylcholine from attaching and delivering its message. No acetylcholine means no contraction, and muscles become paralyzed.
Most medicines exert their effects by making physical contact with receptors on the surface of a cell. Think of an agonist-receptor interaction like a key fitting into a lock. Inserting a key into a door lock permits the doorknob to be turned and allows the door to be opened. Agonists open cellular locks (receptors), and this is the first step in a communication between the outside of the cell and the inside, which contains all the mini machines that make the cell run. Scientists have identified thousands of receptors. Because receptors have a critical role in controlling the activity of cells, they are common targets for researchers designing new medicines.
Curare is one example of a molecule called an antagonist. Drugs that act as antagonists compete with natural agonists for receptors but act only as decoys, freezing up the receptor and preventing agonists' use of it. Researchers often want to block cell responses, such as a rise in blood pressure or an increase in heart rate. For that reason, many drugs are antagonists, designed to blunt overactive cellular responses.
The key to agonists fitting snugly into their receptors is shape. Researchers who study how drugs and other chemicals exert their effects in particular organs—the heart, the lungs, the kidneys, and so on—are very interested in the shapes of molecules. Some drugs have very broad effects because they fit into receptors on many different kinds of cells. Some side effects, such as dry mouth or a drop in blood pressure, can result from a drug encountering receptors in places other than the target site. One of a pharmacologist's major goals is to reduce these side effects by developing drugs that attach only to receptors on the target cells.
That is much easier said than done. While agonists may fit nearly perfectly into a receptor's shape, other molecules may also brush up to receptors and sometimes set them off. These types of unintended, nonspecific interactions can cause side effects. They can also affect how much drug is available in the body.
Steroids for Surgery
In today's culture, the word "steroid" conjures up notions of drugs taken by athletes to boost strength and physical performance. But steroid is actually just a chemical name for any substance that has a characteristic chemical structure consisting of multiple rings of connected atoms. Some examples of steroids include vitamin D, cholesterol, estrogen, and cortisone—molecules that are critical for keeping the body running smoothly. Various steroids have important roles in the body's reproductive system and the structure and function of membranes. Researchers have also discovered that steroids can be active in the brain, where they affect the nervous system. Some steroids may thus find use as anesthetics, medicines that sedate people before surgery by temporarily slowing down brain function.
A steroid is a molecule with a particular chemical structure consisting of multiple "rings" (hexagons and pentagon, below).
Douglas Covey of Washington University in St. Louis, Missouri, has uncovered new roles for several of these neurosteroids, which alter electrical activity in the brain. Covey's research shows that neurosteroids can either activate or tone down receptors that communicate the message of a neurotransmitter called gamma-aminobutyrate, or GABA. The main job of this neurotransmitter is to dampen electrical activity throughout the brain. Covey and other scientists have found that steroids that activate the receptors for GABA decrease brain activity even more, making these steroids good candidates for anesthetic medicines. Covey is also investigating the potential of neuroprotective steroids in preventing the nerve-wasting effects of certain neurodegenerative disorders.
Prescribing drugs is a tricky science, requiring physicians to carefully consider many factors. Your doctor can measure or otherwise determine many of these factors, such as weight and diet. But another key factor is drug interactions. You already know that every time you go to the doctor, he or she will ask whether you are taking any other drugs and whether you have any drug allergies or unusual reactions to any medicines.
Interactions between different drugs in the body, and between drugs and foods or dietary supplements, can have a significant influence, sometimes "fooling" your body into thinking you have taken more or less of a drug than you actually have taken.
By measuring the amounts of a drug in blood or urine, clinical pharmacologists can calculate how a person is processing a drug. Usually, this important analysis involves mathematical equations, which take into account many different variables. Some of the variables include the physical and chemical properties of the drug, the total amount of blood in a person's body, the individual's age and body mass, the health of the person's liver and kidneys, and what other medicines the person is taking. Clinical pharmacologists also measure drug metabolites to gauge how much drug is in a person's body. Sometimes, doctors give patients a "loading dose" (a large amount) first, followed by smaller doses at later times. This approach works by getting enough drug into the body before it is metabolized (broken down) into inactive parts, giving the drug the best chance to do its job.
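The "loading dose" idea above can be illustrated with the standard textbook relations LD = Ctarget·Vd / F for the loading dose and rate = CL·Ctarget / F for the maintenance dosing rate, where F is the fraction of the dose that reaches the bloodstream (bioavailability), Vd is the volume of distribution, and CL is clearance. The numbers below are hypothetical, chosen only to show the arithmetic, not values for any real drug.

```python
def loading_dose(target_mg_per_l, vd_l, bioavailability=1.0):
    """Loading dose (mg) needed to reach a target plasma
    concentration right away: LD = Ctarget * Vd / F."""
    return target_mg_per_l * vd_l / bioavailability

def maintenance_rate(target_mg_per_l, clearance_l_per_h, bioavailability=1.0):
    """Dosing rate (mg/h) that holds the target concentration at
    steady state: rate = CL * Ctarget / F."""
    return clearance_l_per_h * target_mg_per_l / bioavailability

# Hypothetical: target 15 mg/L, Vd 30 L, oral bioavailability 0.8,
# clearance 2 L/h.
print(loading_dose(15, 30, 0.8))     # 562.5 mg up front
print(maintenance_rate(15, 2, 0.8))  # 37.5 mg/h thereafter
```

The large first dose fills the body's "volume" before metabolism can remove much drug; the smaller later doses only need to replace what the liver and kidneys clear each hour.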
Nature's Drugs
Feverfew for migraines, garlic for heart disease, St. John's wort for depression. These are just a few of the many "natural" substances ingested by millions of Americans to treat a variety of health conditions. The use of so-called alternative medicines is widespread, but you may be surprised to learn that researchers do not know in most cases how herbs work—or if they work at all—inside the human body.
Herbs are not regulated by the Food and Drug Administration, and scientists have not performed careful studies to evaluate their safety and effectiveness. Unlike many prescription (or even over-the-counter) medicines, herbs contain many—sometimes thousands—of ingredients. While some small studies have confirmed the usefulness of certain herbs, like feverfew, other herbal products have proved ineffective or harmful. For example, recent studies suggest that St. John's wort is of no benefit in treating major depression. What's more, because herbs are complicated concoctions containing many active components, they can interfere with the body's metabolism of other drugs, such as certain HIV treatments and birth control pills.
1.05: Pump It Up
Bacteria have an uncanny ability to defend themselves against antibiotics. In trying to figure out why this is so, scientists have noted that antibiotic medicines that kill bacteria in a variety of different ways can be thwarted by the bacteria they are designed to destroy. One reason, says Kim Lewis of Northeastern University in Boston, Massachusetts, may be the bacteria themselves. Microorganisms have ejection systems called multidrug-resistance (MDR) pumps—large proteins that weave through cell-surface membranes. Researchers believe that microbes have MDR pumps mainly for self-defense. The pumps are used to monitor incoming chemicals and to spit out the ones that might endanger the bacteria.
Many body molecules and drugs (yellow balls) encounter multidrug-resistance pumps (blue) after passing through a cell membrane. © LINDA S. NYE
Lewis suggests that plants, which produce many natural bacteria-killing molecules, have gotten "smart" over time, developing ways to outwit bacteria. He suspects that evolution has driven plants to produce natural chemicals that block bacterial MDR pumps, bypassing this bacterial protection system. Lewis tested his idea by first genetically knocking out the gene for the MDR pump from the common bacterium Staphylococcus aureus (S. aureus). He and his coworkers then exposed the altered bacteria to a very weak antibiotic called berberine that had been chemically extracted from barberry plants. Berberine is usually woefully ineffective against S. aureus, but it proved lethal for bacteria missing the MDR pump. What's more, Lewis found that berberine also killed unaltered bacteria given another barberry chemical that inhibited the MDR pumps. Lewis suggests that by co-administering inhibitors of MDR pumps along with antibiotics, physicians may be able to outsmart disease-causing microorganisms.
MDR pumps aren't just for microbes. Virtually all living things have MDR pumps, including people. In the human body, MDR pumps serve all sorts of purposes, and they can sometimes frustrate efforts to get drugs where they need to go. Chemotherapy medicines, for example, are often "kicked out" of cancer cells by MDR pumps residing in the cells' membranes. MDR pumps in membranes all over the body—in the brain, digestive tract, liver, and kidneys—perform important jobs in moving natural body molecules like hormones into and out of cells.
Pharmacologist Mary Vore of the University of Kentucky in Lexington has discovered that certain types of MDR pumps do not work properly during pregnancy, and she suspects that estrogen and other pregnancy hormones may be partially responsible. Vore has recently focused efforts on determining if the MDR pump is malformed in pregnant women who have intrahepatic cholestasis of pregnancy (ICP). A relatively rare condition, ICP often strikes during the third trimester and can cause significant discomfort such as severe itching and nausea, while also endangering the growing fetus. Vore's research on MDR pump function may also lead to improvements in drug therapy for pregnant women.
Got It?
Explain the difference between an agonist and an antagonist.
How does grapefruit juice affect blood levels of certain medicines?
What does a pharmacologist plot on the vertical and horizontal axes of a dose-response curve?
Name one of the potential risks associated with taking herbal products.
What are the four stages of a drug's life in the body?
Scientists became interested in the workings of the human body during the "scientific revolution" of the 15th and 16th centuries. These early studies led to descriptions of the circulatory, digestive, respiratory, nervous, and excretory systems. In time, scientists came to think of the body as a kind of machine that uses a series of chemical reactions to convert food into energy.
02: Body Heal Thyself
Since blood is the body's primary internal transportation system, most drugs travel via this route. Medicines can find their way to the bloodstream in several ways, including the rich supply of blood vessels in the skin. You may remember, as a young child, the horror of seeing blood escaping your body through a skinned knee. You now know that the simplistic notion of skin literally "holding everything inside" isn't quite right. You survived the scrape just fine because blood contains magical molecules that can make a clot form within minutes after your tumble. Blood is a rich concoction containing oxygen-carrying red blood cells and infection-fighting white blood cells. Blood cells are suspended in a watery liquid called plasma that contains clotting proteins, electrolytes, and many other important molecules.
Burns: More Than Skin Deep
More than simply a protective covering, skin is a highly dynamic network of cells, nerves, and blood vessels. Skin plays an important role in preserving fluid balance and in regulating body temperature and sensation. Immune cells in skin help the body prevent and fight disease. When you get burned, all of these protections are in jeopardy. Burn-induced skin loss can give bacteria and other microorganisms easy access to the nutrient-rich fluids that course through the body, while at the same time allowing these fluids to leak out rapidly. Enough fluid loss can thrust a burn or trauma patient into shock, so doctors must replenish skin lost to severe burns as quickly as possible.
In the case of burns covering a significant portion of the body, surgeons must do two things fast: strip off the burned skin, then cover the unprotected underlying tissue. These important steps in the immediate care of a burn patient took scientists decades to figure out, as they performed carefully conducted experiments on how the body responds to burn injury. In the early 1980s, researchers doing this work developed the first version of an artificial skin covering called Integra® Dermal Regeneration Template™, which doctors use to drape over the area where the burned skin has been removed. Today, Integra Dermal Regeneration Template is used to treat burn patients throughout the world.
Blood also ferries proteins and hormones such as insulin and estrogen, nutrient molecules of various kinds, and carbon dioxide and other waste products destined to exit the body.
While the bloodstream would seem like a quick way to get a needed medicine to a diseased organ, one of the biggest problems is getting the medicine to the correct organ. In many cases, drugs end up where they are not needed and cause side effects, as we've already noted. What's more, drugs may encounter many different obstacles while journeying through the bloodstream. Some medicines get "lost" when they stick tightly to certain proteins in the blood, effectively putting the drugs out of business.
Scientists called physiologists originally came up with the idea that all internal processes work together to keep the body in a balanced state. The bloodstream links all our organs together, enabling them to work in a coordinated way. Two organ systems are particularly interesting to pharmacologists: the nervous system (which transmits electrical signals over wide distances) and the endocrine system (which communicates messages via traveling hormones). These two systems are key targets for medicines.
Skin consists of three layers, making up a dynamic network of cells, nerves, and blood vessels.
Acetylsalicylate is the aspirin of today. Adding a chemical tag called an acetyl group (shaded yellow box, right) to a molecule derived from willow bark (salicylate, above) makes the molecule less acidic (and easier on the lining of the digestive tract), but still effective at relieving pain.
Like curare's effects on acetylcholine, the interactions between another drug—aspirin—and metabolism shed light on how the body works. This little white pill has been one of the most widely used drugs in history, and many say that it launched the entire pharmaceutical industry.
As a prescribed drug, aspirin is 100 years old. However, in its most primitive form, aspirin is much older. The bark of the willow tree contains a substance called salicin, a known antidote to headache and fever since the time of the Greek physician Hippocrates, around 400 B.C. The body converts salicin to an acidic substance called salicylate. Despite its usefulness dating back to ancient times, early records indicate that salicylate wreaked havoc on the stomachs of people who ingested this natural chemical. In the late 1800s, a scientific breakthrough turned willow-derived salicylate into a medicine friendlier to the body. Bayer® scientist Felix Hoffmann discovered that adding a chemical tag called an acetyl group (see figure, page 20) to salicylate made the molecule less acidic and a little gentler on the stomach, but the chemical change did not seem to lessen the drug's ability to relieve his father's rheumatism. This molecule, acetylsalicylate, is the aspirin of today.
Aspirin works by blocking the production of messenger molecules called prostaglandins. Because of the many important roles they play in metabolism, prostaglandins are important targets for drugs and are very interesting to pharmacologists. Prostaglandins can help muscles relax and open up blood vessels, they give you a fever when you're infected with bacteria, and they also marshal the immune system by stimulating the process called inflammation. Sunburn, bee stings, tendonitis, and arthritis are just a few examples of painful inflammation caused by the body's release of certain types of prostaglandins in response to an injury.
Inflammation leads to pain in arthritis.
Aspirin belongs to a diverse group of medicines called NSAIDs, a nickname for the tongue-twisting title nonsteroidal antiinflammatory drugs. Other drugs that belong to this large class of medicines include Advil®, Aleve®, and many other popular pain relievers available without a doctor's prescription. All these drugs share aspirin's ability to knock back the production of prostaglandins by blocking an enzyme called cyclooxygenase. Known as COX, this enzyme is a critical driver of the body's metabolism and immune function.
COX makes prostaglandins and other similar molecules collectively known as eicosanoids from a molecule called arachidonic acid. Named for the Greek word eikos, meaning "twenty," each eicosanoid contains 20 atoms of carbon.
You've also heard of the popular pain reliever acetaminophen (Tylenol®), which is famous for reducing fever and relieving headaches. However, scientists do not consider Tylenol an NSAID, because it does little to halt inflammation (remember that part of NSAID stands for "anti-inflammatory"). If your joints are aching from a long hike you weren't exactly in shape for, aspirin or Aleve may be better than Tylenol because inflammation is the thing making your joints hurt.
To understand how enzymes like COX work, some pharmacologists use special biophysical techniques and X rays to determine the three-dimensional shapes of the enzymes. These kinds of experiments teach scientists about molecular function by providing clear pictures of how all the folds and bends of an enzyme—usually a protein or group of interacting proteins—help it do its job. In drug development, one successful approach has been to use this information to design decoys to jam up the working parts of enzymes like COX. Structural studies unveiling the shapes of COX enzymes led to a new class of drugs used to treat arthritis. Researchers designed these drugs to selectively home in on one particular type of COX enzyme called COX-2.
By designing drugs that target only one form of an enzyme like COX, pharmacologists may be able to create medicines that are great at stopping inflammation but have fewer side effects. For example, stomach upset is a common side effect caused by NSAIDs that block COX enzymes. This side effect results from the fact that NSAIDs bind to different types of COX enzymes—each of which has a slightly different shape. One of these enzymes is called COX-1. While both COX-1 and COX-2 enzymes make prostaglandins, COX-2 beefs up the production of prostaglandins in sore, inflamed tissue, such as arthritic joints. In contrast, COX-1 makes prostaglandins that protect the digestive tract, and blocking the production of these protective prostaglandins can lead to stomach upset, and even bleeding and ulcers.
Very recently, scientists have added a new chapter to the COX story by identifying COX-3, which may be Tylenol's long-sought molecular target. Further research will help pharmacologists understand more precisely how Tylenol and NSAIDs act in the body.
Scientists know a lot about the body's organ systems, but much more remains to be discovered. To design "smart" drugs that will seek out diseased cells and not healthy ones, researchers need to understand the body inside and out. One system in particular still puzzles scientists: the immune system.
Even though researchers have accumulated vast amounts of knowledge about how our bodies fight disease using white blood cells and thousands of natural chemical weapons, a basic dilemma persists—how does the body know what to fight? The immune system constantly watches for foreign invaders and is exquisitely sensitive to any intrusion perceived as "non-self," like a transplanted organ from another person. This protection, however, can run afoul if the body slips up and views its own tissue as foreign. Autoimmune disease, in which the immune system mistakenly attacks and destroys body tissue that it believes to be foreign, can be the terrible consequence.
The "Anti" Establishment
Common over-the-counter medicines used to treat pain, fever, and inflammation have many uses. Here are some of the terms used to describe the particular effects of these drugs:
• ANTIPYRETIC—this term means fever-reducing; it comes from the Greek word pyresis, which means fire.
• ANTI-INFLAMMATORY—this word describes a drug's ability to reduce inflammation, which can cause soreness and swelling; it comes from the Latin word flamma, which means flame.
• ANALGESIC—this description refers to a medicine's ability to treat pain; it comes from the Greek word algos, which means pain.
Antibodies are Y-shaped molecules of the immune system.
The powerful immune army presents significant roadblocks for pharmacologists trying to create new drugs. But some scientists have looked at the immune system through a different lens. Why not teach the body to launch an attack on its own diseased cells? Many researchers are pursuing immunotherapy as a way to treat a wide range of health problems, especially cancer. With advances in biotechnology, researchers are now able to tailor-produce in the lab modified forms of antibodies—our immune system's front-line agents.
Antibodies are spectacularly specific proteins that seek out and mark for destruction anything they do not recognize as belonging to the body. Scientists have learned how to join antibody-making cells with cells that grow and divide continuously. This strategy creates cellular "factories" that work around the clock to produce large quantities of specialized molecules, called monoclonal antibodies, that attach to and destroy single kinds of targets. Recently, researchers have also figured out how to produce monoclonal antibodies in the egg whites of chickens. This may reduce production costs of these increasingly important drugs.
Doctors are already using therapeutic monoclonal antibodies to attack tumors. A drug called Rituxan® was the first therapeutic antibody approved by the Food and Drug Administration to treat cancer. This monoclonal antibody targets a unique tumor "fingerprint" on the surface of immune cells, called B cells, in a blood cancer called non-Hodgkin's lymphoma. Another therapeutic antibody for cancer, Herceptin®, latches onto breast cancer cell receptors that signal growth to either mask the receptors from view or lure immune cells to kill the cancer cells. Herceptin's actions prevent breast cancer from spreading to other organs.
Researchers are also investigating a new kind of "vaccine" as therapy for diseases such as cancer. The vaccines are not designed to prevent cancer, but rather to treat the disease when it has already taken hold in the body. Unlike the targeted-attack approach of antibody therapy, vaccines aim to recruit the entire immune system to fight off a tumor. Scientists are conducting clinical trials of vaccines against cancer to evaluate the effectiveness of this treatment approach.
The body machine has a tremendously complex collection of chemical signals that are relayed back and forth through the blood and into and out of cells. While scientists are hopeful that future research will point the way toward getting a sick body to heal itself, it is likely that there will always be a need for medicines to speed recovery from the many illnesses that plague humankind.
A Shock to the System
A body-wide syndrome caused by an infection called sepsis is a leading cause of death in hospital intensive care units, striking 750,000 people every year and killing more than 215,000. Sepsis is a serious public health problem, causing more deaths annually than heart disease. The most severe form of sepsis occurs when bacteria leak into the bloodstream, spilling their poisons and leading to a dangerous condition called septic shock. Blood pressure plunges dangerously low, the heart has difficulty pumping enough blood, and body temperature climbs or falls rapidly. In many cases, multiple organs fail and the patient dies.
Despite the obvious public health importance of finding effective ways to treat sepsis, researchers have been frustratingly unsuccessful. Kevin Tracey of the North Shore-Long Island Jewish Research Institute in Manhasset, New York, has identified an unusual suspect in the deadly crime of sepsis: the nervous system. Tracey and his coworkers have discovered an unexpected link between cytokines, the chemical weapons released by the immune system during sepsis, and a major nerve that controls critical body functions such as heart rate and digestion. In animal studies, Tracey found that electrically stimulating this nerve, called the vagus nerve, significantly lowered blood levels of TNF, a cytokine that is produced when the body senses the presence of bacteria in the blood. Further research has led Tracey to conclude that production of the neurotransmitter acetylcholine underlies the inflammation-blocking response. Tracey is investigating whether stimulating the vagus nerve can be used as a component of therapy for sepsis and as a treatment for other immune disorders.
2.05: A Closer Look
One protruding end (green) of the MAO B enzyme anchors the protein inside the cell. Body molecules or drugs first come into contact with MAO B (in the hatched blue region) and are worked on within the enzyme's "active site," a cavity nestled inside the protein (the hatched red region). To get its job done, MAO B uses a helper molecule (yellow), which fits right next to the active site where the reaction takes place.
REPRINTED WITH PERMISSION FROM J. BIOL. CHEM. (2002) 277:23973-6 HTTP://WWW.JBC.ORG
Seeing is believing. The cliché could not be more apt for biologists trying to understand how a complicated enzyme works. For decades, researchers have isolated and purified individual enzymes from cells, performing experiments with these proteins to find out how they do their job of speeding up chemical reactions. But to thoroughly understand a molecule's function, scientists have to take a very, very close look at how all the atoms fit together and enable the molecular "machine" to work properly.
Researchers called structural biologists are fanatical about such detail, because it can deliver valuable information for designing drugs—even for proteins that scientists have studied in the lab for a long time. For example, biologists have known for 40 years that an enzyme called monoamine oxidase B (MAO B) works in the brain to help recycle communication molecules called neurotransmitters. MAO B and its cousin MAO A work by removing molecular pieces from neurotransmitters, part of the process of inactivating them. Scientists have developed drugs to block the actions of MAO enzymes, and by doing so, help preserve the levels of neurotransmitters in people with such disorders as Parkinson's disease and depression.
However, MAO inhibitors have many undesirable side effects. Tremors, increased heart rate, and problems with sexual function are some of the mild side effects of MAO inhibitors, but more serious problems include seizures, large dips in blood pressure, and difficulty breathing. People taking MAO inhibitors cannot eat foods containing the substance tyramine, which is found in wine, cheese, dried fruits, and many other foods. Most of the side effects occur because drugs that attach to MAO enzymes do not have a perfect fit for either MAO A or MAO B.
Dale Edmondson of Emory University in Atlanta, Georgia, has recently uncovered new knowledge that may help researchers design better, more specific drugs to interfere with these critical brain enzymes. Edmondson and his coworkers Andrea Mattevi and Claudia Binda of the University of Pavia in Italy got a crystal-clear glimpse of MAO B by determining its three-dimensional structure. The researchers also saw how one MAO inhibitor, Eldepryl®, attaches to the MAO B enzyme, and the scientists predict that their results will help in the design of more specific drugs with fewer side effects.
Got It?
Define metabolism.
How does aspirin work?
Name three functions of blood.
Give two examples of immunotherapy.
What is a technique scientists use to study a protein's three-dimensional structure?
Long before the first towns were built, before written language was invented, and even before plants were cultivated for food, the basic human desires to relieve pain and prolong life fueled the search for medicines. No one knows for sure what the earliest humans did to treat their ailments, but they probably sought cures in the plants, animals, and minerals around them.
03: Drugs from Nature Then and Now
Times have changed, but more than half of the world's population still relies entirely on plants for medicines, and plants supply the active ingredients of most traditional medical products. Plants have also served as the starting point for countless drugs on the market today. Researchers generally agree that natural products from plants and other organisms have been the most consistently successful source for ideas for new drugs, since nature is a master chemist. Drug discovery scientists often refer to these ideas as "leads," and chemicals that have desirable properties in lab tests are called lead compounds.
Natural Cholesterol-Buster
Having high cholesterol is a significant risk factor for heart disease, a leading cause of death in the industrialized world. Pharmacology research has made major strides in helping people deal with this problem. Scientists Michael Brown and Joseph Goldstein, both of the University of Texas Southwestern Medical Center at Dallas, won the 1985 Nobel Prize in physiology or medicine for their fundamental work determining how the body metabolizes cholesterol. This research, part of which first identified cholesterol receptors, led to the development of the popular cholesterol-lowering "statin" drugs such as Mevacor® and Lipitor®.
New research from pharmacologist David Mangelsdorf, also at the University of Texas Southwestern Medical Center at Dallas, is pointing to another potential treatment for high cholesterol. The "new" substance has the tongue-twisting name guggulsterone, and it isn't really new at all. Guggulsterone comes from the sap of the guggul tree, a species native to India, and has been used in India's Ayurvedic medicine since at least 600 B.C. to treat a wide variety of ailments, including obesity and cholesterol disorders. Mangelsdorf and his coworker David Moore of Baylor College of Medicine in Houston, Texas, found that guggulsterone blocks a protein called the FXR receptor that plays a role in cholesterol metabolism, converting cholesterol in the blood to bile acids. According to Mangelsdorf, since elevated levels of bile acids can actually boost cholesterol, blocking FXR helps to bring cholesterol counts down.
Sap from the guggul tree, a species native to India, contains a substance that may help fight heart disease.
Relatively speaking, very few species of living things on Earth have actually been seen and named by scientists. Many of these unidentified organisms aren't necessarily lurking in uninhabited places. A few years ago, for instance, scientists identified a brand-new species of millipede in a rotting leaf pile in New York City's Central Park, an area visited by thousands of people every day.
Scientists estimate that Earth is home to at least 250,000 different species of plants, and that up to 30 million species of insects crawl or fly somewhere around the globe. Equal numbers of species of fungi, algae, and bacteria probably also exist. Despite these vast numbers, chemists have tested only a few of these organisms to see whether they harbor some sort of medically useful substance.
Pharmaceutical chemists seek ideas for new drugs not only in plants, but in any part of nature where they may find valuable clues. This includes searching for organisms from what has been called the last unexplored frontier: the seawater that blankets nearly three-quarters of Earth.
Cancer Therapy Sees the Light
A novel drug delivery system called photodynamic therapy combines an ancient plant remedy, modern blood transfusion techniques, and light. Photodynamic therapy has been approved by the Food and Drug Administration to treat several cancers and certain types of age-related macular degeneration, a devastating eye disease that is the leading cause of blindness in North America and Europe. Photodynamic therapy is also being tested as a treatment for some skin and immune disorders.
The key ingredient in this therapy is psoralen, a plant-derived chemical that has a peculiar property: It is inactive until exposed to light. Psoralen is the active ingredient in a Nile-dwelling weed called ammi. This remedy was used by ancient Egyptians, who noticed that people became prone to sunburn after eating the weed. Modern researchers explained this phenomenon by discovering that psoralen, after being digested, goes to the skin's surface, where it is activated by the sun's ultraviolet rays. Activated psoralen attaches tenaciously to the DNA of rapidly dividing cancer cells and kills them. Photopheresis, a method that exposes a psoralen-like drug to certain wavelengths of light, is approved for the treatment of some forms of lymphoma, a cancer of white blood cells.
Some forms of cancer can be treated with photodynamic therapy, in which a cancer-killing molecule is activated by certain wavelengths of light.
JOSEPH FRIEDBERG
Marine animals fight daily for both food and survival, and this underwater warfare is waged with chemicals. As with plants, researchers have recognized the potential use of this chemical weaponry to kill bacteria or raging cancer cells. Scientists isolated the first marine-derived cancer drug, now known as Cytosar-U®, decades ago. They found this chemical, a staple for treating leukemia and lymphoma, in a Caribbean sea sponge. In recent years, scientists have discovered dozens of similar ocean-derived chemicals that appear to be powerful cancer cell killers. Researchers are testing these natural products for their therapeutic properties.
For example, scientists have unearthed several promising drugs from sea creatures called tunicates. More commonly known as sea squirts, tunicates are a group of marine organisms that spend most of their lives attached to docks, rocks, or the undersides of boats. To an untrained eye they look like nothing more than small, colorful blobs, but tunicates are evolutionarily more closely related to vertebrates like ourselves than to most other invertebrate animals.
One tunicate living in the crystal waters of West Indies coral reefs and mangrove swamps turned out to be the source of an experimental cancer drug called ecteinascidin. Ken Rinehart, a chemist who was then at the University of Illinois at Urbana-Champaign, discovered this natural substance. PharmaMar, a pharmaceutical company based in Spain, now holds the licenses for ecteinascidin, which it calls Yondelis™, and is conducting clinical trials on this drug. Lab tests indicate that Yondelis can kill cancer cells, and the first set of clinical studies has shown that the drug is safe for use in humans. Further phases of clinical testing—to evaluate whether Yondelis effectively treats soft-tissue sarcomas (tumors of the muscles, tendons, and supportive tissues)—and other types of cancer—are under way.
Miracle Cures
A penicillin-secreting Penicillium mold colony inhibits the growth of bacteria (zig-zag smear growing on culture dish).
CHRISTINE L. CASE
Led by the German scientist Paul Ehrlich, a new era in pharmacology began in the late 19th century. Although Ehrlich's original idea seems perfectly obvious now, it was considered very strange at the time. He proposed that every disease should be treated with a chemical specific for that disease, and that the pharmacologist's task was to find these treatments by systematically testing potential drugs.
The approach worked: Ehrlich's greatest triumph was his discovery of salvarsan, the first effective treatment for the sexually transmitted disease syphilis. Ehrlich discovered salvarsan after screening 605 different arsenic-containing compounds. Later, researchers around the world had great success in developing new drugs by following Ehrlich's methods. For example, testing of sulfur-containing dyes led to the 20th century's first "miracle drugs"—the sulfa drugs, used to treat bacterial infections. During the 1940s, sulfa drugs were rapidly replaced by a new, more powerful, and safer antibacterial drug, penicillin—originally extracted from the soil-dwelling fungus Penicillium.
Yondelis is an experimental cancer drug isolated from the marine organism Ecteinascidia turbinata.
PHARMAMAR
Animals that live in coral reefs almost always rely on chemistry to ward off hungry predators. Because getting away quickly isn't an option in this environment, lethal chemical brews are the weaponry of choice for these slow-moving or even sedentary animals. A powerful potion comes from one of these animals, a stunningly gorgeous species of snail found in the reefs surrounding Australia, Indonesia, and the Philippines. The animals, called cone snails, have a unique venom containing dozens of nerve toxins. Some of these toxins instantly shock prey, like the sting of an electric eel or the poisons of scorpions and sea anemones. Others cause paralysis, like the venoms of cobras and puffer fish.
Pharmacologist Baldomero Olivera of the University of Utah in Salt Lake City, a native of the Philippines whose boyhood fascination with cone snails matured into a career studying them, has discovered one cone snail poison that has become a potent new pain medicine. Olivera's experiments have shown that the snail toxin is 1,000 times more powerful than morphine in treating certain kinds of chronic pain. The snail-derived drug, named Prialt™ by the company (Elan Corporation, plc in Dublin, Ireland) that developed and markets it, jams up nerve transmission in the spinal cord and blocks certain pain signals from reaching the brain. Scientists predict that many more cone snail toxins will be drug leads, since 500 different species of this animal populate Earth.
A poison produced by the cone snail C. geographus has become a powerful new pain medicine.
Prospecting Biology?
The cancer drug Taxol originally came from the bark and needles of yew trees.
Are researchers taking advantage of nature when it comes to hunting for new medicines? Public concern has been raised about scientists scouring the world's tropical rain forests and coral reefs to look for potential natural chemicals that may end up being useful drugs. While it is true that rainforests in particular are home to an extraordinarily rich array of species of animals and plants, many life-saving medicines derived from natural products have been discovered in temperate climates not much different from our kitchens and backyards.
Many wonder drugs have arisen from non-endangered species, such as the bark of the willow tree, which was the original source of aspirin. The antibiotic penicillin, from an ordinary mold, is another example. Although scientists first found the chemical that became the widely prescribed cancer drug Taxol® in the bark of an endangered species of tree called the Pacific yew, researchers have since found a way to manufacture Taxol in the lab, starting with an extract from the needles of the much more abundant European yew. In many cases, chemists have also figured out ways to make large quantities of rainforest- and reef-derived chemicals in the lab (see main text).
Searching nature's treasure trove for potential medicines is often only the first step. Having tapped natural resources to hunt for new medicines, pharmaceutical scientists then work to figure out ways to cultivate natural products or to make them from scratch in the lab. Chemists play an essential role in turning marine and other natural products, which are often found in minute quantities, into useful medicines.
In the case of Yondelis, chemist Elias J. Corey of Harvard University in Cambridge, Massachusetts, deciphered nature's instructions on how to make this powerful medicinal molecule. That's important, because researchers must harvest more than a ton of Caribbean sea squirts to produce just 1 gram of the drug. By synthesizing drugs in a lab, scientists can produce thousands more units of a drug, plenty to use in patients if it proves effective against disease.
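The ton-to-gram yield above makes the case for synthesis by simple arithmetic. The sketch below scales the figure from the text; the 500-gram "trial supply" is a hypothetical quantity chosen only for illustration.

```python
# Illustrative arithmetic only. The one-tonne-per-gram yield comes from the
# text; the trial quantity below is hypothetical.

TONNES_PER_GRAM = 1.0   # biomass of sea squirts harvested per gram of drug (from text)
KG_PER_TONNE = 1000.0

def biomass_needed_kg(grams_of_drug: float) -> float:
    """Kilograms of sea squirts required to extract a given mass of drug."""
    return grams_of_drug * TONNES_PER_GRAM * KG_PER_TONNE

# A hypothetical supply of 500 g of drug would require ~500 tonnes of biomass,
# which is why laboratory synthesis matters.
print(biomass_needed_kg(500))  # 500000.0
```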
Scientists are also beginning to use a relatively new procedure called combinatorial genetics to custom-make products that don't even exist in nature. Researchers have discovered ways to remove the genetic instructions for entire metabolic pathways from certain microorganisms, alter the instructions, and then put them back. This method can generate new and different "natural" products.
Toxicogenetics: Poisons and Your Genes
Just as your genes help determine how you respond to certain medicines, your genetic code can also affect your susceptibility to illness. Why is it that two people with a similar lifestyle and a nearly identical environment can have such different propensities to getting sick? Lots of factors contribute, including diet, but scientists believe that an important component of disease risk is the genetic variability of people's reactions to chemicals in the environment.
On hearing the word "chemical," many people think of smokestacks and pollution. Indeed, our world is littered with toxic chemicals, some natural and some synthetic. For example, nearly all of us would succumb quickly to the poisonous bite of a cobra, but it is harder to predict which of us will develop cancer from exposure to carcinogens like cigarette smoke.
Toxicologists are researchers who study the effects of poisonous substances on living organisms. One toxicologist, Serrine Lau of the University of Texas at Austin, is trying to unravel the genetic mystery of why people are more or less susceptible to kidney damage after coming into contact with some types of poisons. Lau and her coworkers study the effects of a substance called hydroquinone (HQ), an industrial pollutant and a contaminant in cigarette smoke and diesel engine exhaust. Lau is searching for genes that play a role in triggering cancer in response to HQ exposure. Her research and the work of other so-called toxicogeneticists should help scientists find genetic "signatures" that can predict risk of developing cancer in people exposed to harmful carcinogens.
3.04: Is It Chemistry or Genetics?
Regardless of the way researchers find new medicines, drug discovery often takes many unexpected twists and turns. Scientists must train their eyes to look for new opportunities lurking in the outcomes of their experiments. Sometimes, side trips in the lab can open up entirely new avenues of discovery.
Take the case of cyclosporine, a drug discovered three decades ago that suppresses the immune system and thereby prevents the body from rejecting transplanted organs. Still a best-selling medicine, cyclosporine was a research breakthrough. The drug made it possible for surgeons to save the lives of many critically ill patients by transplanting organs. But it's not hard to imagine that the very properties that make cyclosporine so powerful in putting a lid on the immune system can cause serious side effects, by damping immune function too much.
Years after the discovery of cyclosporine, researchers looking for less toxic versions of this drug found a natural molecule called FK506 that seemed to produce the same immune-suppressing effects at lower doses. The researchers found, to their great surprise, that cyclosporine and FK506 were chemically very different. To try to explain this puzzling result, Harvard University organic chemist Stuart Schreiber (then at Yale University in New Haven, Connecticut) decided to take on the challenge of figuring out how to make FK506 in his lab, beginning with simple chemical building blocks.
Schreiber succeeded, and he and scientists at Merck & Co., Inc. (Whitehouse Station, New Jersey) used the synthetic FK506 as a tool to unravel the molecular structure of the receptor for FK506 found on immune cells. According to Schreiber, information about the receptor's structure from these experiments opened his eyes to consider an entirely new line of research.
Schreiber reasoned that by custom-making small molecules in the lab, scientists could probe the function of the FK506 receptor to systematically study how the immune system works. Since then, he and his group have continued to use synthetic small molecules to explore biology. Although Schreiber's strategy is not truly genetics, he calls the approach chemical genetics, because the method resembles the way researchers go about their studies to understand the functions of genes.
In one traditional genetic approach, scientists alter the "spelling" (nucleotide components) of a gene and put the altered gene into a model organism—for example, a mouse, a plant, or a yeast cell—to see what effect the gene change has on the biology of that organism. Chemical genetics harnesses the power of chemistry to custom-produce any molecule and introduce it into cells, then look for biological changes that result. Starting with chemicals instead of genes gives drug development a step up. If the substance being tested produces a desired effect, such as stalling the growth of cancer cells, then the molecule can be chemically manipulated in short order since the chemist already knows how to make it.
Blending Science
These days, it's hard for scientists to know what to call themselves. As research worlds collide in wondrous and productive ways, the lines get blurry when it comes to describing your expertise. Craig Crews of Yale University, for example, combines molecular pharmacology, chemistry, and genetics. In fact, because of his multiple scientific curiosities, Crews is a faculty member in three different Yale departments: molecular, cellular, and developmental biology; chemistry; and pharmacology. You might wonder how he has time to get anything done.
The herb feverfew (bachelor's button) contains a substance called parthenolide that appears to block inflammation.
He's getting plenty done—Crews is among a new breed of researchers delving into a growing scientific area called chemical genetics (see main text). Taking this approach, scientists use chemistry to attack biological problems that traditionally have been solved through genetic experiments such as the genetic engineering of bacteria, yeast, and mice. Crews' goal is to explore how natural products work in living systems and to identify new targets for designing drugs. He has discovered how an inflammation-fighting ingredient in the medicinal herb feverfew may work inside cells. He found that the ingredient, called parthenolide, appears to disable a key process that gets inflammation going. In the case of feverfew, a handful of controlled scientific studies in people have hinted that the herb, also known by its plant name "bachelor's button," is effective in combating migraine headaches, but further studies are needed to confirm these preliminary findings.
To translate pharmacology research into patient care, potential drugs ultimately have to be tested in people. This multistage process is known as clinical trials, and it has led researchers to validate life-saving treatments for many diseases, such as childhood leukemia and Hodgkin's disease. Clinical trials, though costly and very time-consuming, are the only way researchers can know for sure whether experimental treatments work in humans.
Scientists conduct clinical trials in three phases (I, II, and III), each providing the answer to a different fundamental question about a potential new drug: Is it safe? Does it work? Is it better than the standard treatment? Typically, researchers do years of basic work in the lab and in animal models before they can even consider testing an experimental treatment in people. Importantly, scientists who wish to test drugs in people must follow strict rules that are designed to protect those who volunteer to participate in clinical trials. Special groups called Institutional Review Boards, or IRBs, evaluate all proposed research involving humans to determine the potential risks and anticipated benefits. The goal of an IRB is to make sure that the risks are minimized and that they are reasonable compared to the knowledge expected to be gained by performing the study. Clinical studies cannot go forward without IRB approval. In addition, people in clinical studies must agree to the terms of a trial by participating in a process called informed consent and signing a form, required by law, that says they understand the risks and benefits involved in the study.
Phase I studies test a drug's safety in a few dozen to a hundred people and are designed to figure out what happens to a drug in the body—how it is absorbed, metabolized, and excreted. Phase I studies usually take several months. Phase II trials test whether or not a drug produces a desired effect. These studies take longer—from several months to a few years—and can involve up to several hundred patients. A phase III study further examines the effectiveness of a drug as well as whether the drug is better than current treatments. Phase III studies involve hundreds to thousands of patients, and these advanced trials typically last several years. Many phase II and phase III studies are randomized, meaning that one group of patients gets the experimental drug being tested while a second, control group gets either a standard treatment or placebo (that is, no treatment, often masked as a "dummy" pill or injection). Also, usually phase II and phase III studies are "blinded"—the patients and the researchers do not know who is getting the experimental drug. Finally, once a new drug has completed phase III testing, a pharmaceutical company can request approval from the Food and Drug Administration to market the drug.
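The three phases described above can be summarized in a small data structure. The participant ranges below are rough numeric encodings of the text's wording ("a few dozen to a hundred," "up to several hundred," "hundreds to thousands"), not regulatory limits.

```python
# A compact summary of the clinical-trial phases described in the text.
# Participant ranges are approximate encodings of the text's phrasing.

CLINICAL_PHASES = {
    "I":   {"question": "Is it safe?",
            "participants": (24, 100),      # "a few dozen to a hundred"
            "duration": "several months"},
    "II":  {"question": "Does it work?",
            "participants": (100, 500),     # "up to several hundred"
            "duration": "several months to a few years"},
    "III": {"question": "Is it better than the standard treatment?",
            "participants": (300, 3000),    # "hundreds to thousands"
            "duration": "several years"},
}

for phase, info in CLINICAL_PHASES.items():
    lo, hi = info["participants"]
    print(f"Phase {phase}: {info['question']} (~{lo}-{hi} participants)")
```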
Got It?
Scientists are currently testing cone snail toxins for the treatment of which health problem?
How are people protected when they volunteer to participate in a clinical trial?
Why do plants and marine organisms have chemicals that could be used as medicines?
What is a drug "lead?"
Name the first marine-derived cancer medicine.
As you've read so far, the most important goals of modern pharmacology are also the most obvious. Pharmacologists want to design, and be able to produce in sufficient quantity, drugs that will act in a specific way without too many side effects. They also want to deliver the correct amount of a drug to the proper place in the body. But turning molecules into medicines is more easily said than done. Scientists struggle to fulfill the twin challenges of drug design and drug delivery.
04: Molecules to Medicines
Medicine Hunting
While sometimes the discovery of potential medicines falls to researchers' good luck, most often pharmacologists, chemists, and other scientists looking for new drugs plod along methodically for years, taking suggestions from nature or clues from knowledge about how the body works.
Finding chemicals' cellular targets can educate scientists about how drugs work. Aspirin's molecular target, the enzyme cyclooxygenase, or COX (see page 22), was discovered this way in the early 1970s in Nobel Prize-winning work by pharmacologist John Vane, then at the Royal College of Surgeons in London, England. Another example is colchicine, a relatively old drug that is still widely used to treat gout, an excruciatingly painful type of arthritis in which needle-like crystals of uric acid clog joints, leading to swelling, heat, pain, and stiffness. Lab experiments with colchicine led scientists to this drug's molecular target, a cell-scaffolding protein called tubulin. Colchicine works by attaching itself to tubulin, causing certain parts of a cell's architecture to crumble, and this action can interfere with a cell's ability to move around. Researchers suspect that in the case of gout, colchicine works by halting the migration of immune cells called granulocytes that are responsible for the inflammation characteristic of gout.
A Drug By Another Name
Drugs used to treat bone ailments may be useful for treating infectious diseases like malaria.
As pet owners know, you can teach some old dogs new tricks. In a similar vein, scientists have in some cases found new uses for "old" drugs. Remarkably, the potential new uses often have little in common with a drug's product label (its "old" use). For example, chemist Eric Oldfield of the University of Illinois at Urbana-Champaign discovered that one class of drugs called bisphosphonates, which are currently approved to treat osteoporosis and other bone disorders, may also be useful for treating malaria, Chagas' disease, leishmaniasis, and AIDS-related infections like toxoplasmosis.
Previous research by Oldfield and his coworkers had hinted that the active ingredient in the bisphosphonate medicines Fosamax®, Actonel®, and Aredia® blocks a critical step in the metabolism of parasites, the microorganisms that cause these diseases. To test whether this was true, Oldfield gave the medicines to five different types of parasites, each grown along with human cells in a plastic lab dish. The scientists found that small amounts of the osteoporosis drugs killed the parasites while sparing human cells. The researchers are now testing the drugs in animal models of the parasitic diseases and so far have obtained cures—in mice—of certain types of leishmaniasis. If these studies prove that bisphosphonate drugs work in larger animal models, the next step will be to find out if the medicines can thwart these parasitic diseases in humans.
Current estimates indicate that scientists have identified roughly 500 to 600 molecular targets where medicines may have effects in the body. Medicine hunters can strategically "discover" drugs by designing molecules to "hit" these targets. That has already happened in some cases. Researchers knew just what they were looking for when they designed the successful AIDS drugs called HIV protease inhibitors. Previous knowledge of the three-dimensional structure of certain HIV proteins (the target) guided researchers to develop drugs shaped to block their action. Protease inhibitors have extended the lives of many people with AIDS.
However, sometimes even the most targeted approaches can end up in big surprises. The New York City pharmaceutical firm Pfizer had a blood pressure-lowering drug in mind when its scientists instead discovered Viagra®, a best-selling drug approved to treat erectile dysfunction. Initially, researchers had planned to create a heart drug, using knowledge they had about molecules that make blood clot and molecular signals that instruct blood vessels to relax. What the scientists did not know was how their candidate drug would fare in clinical trials.
Colchicine, a treatment for gout, was originally derived from the stem and seeds of the meadow saffron (autumn crocus).
NATIONAL AGRICULTURE LIBRARY, ARS, USDA
Sildenafil (Viagra's chemical name) did not work very well as a heart medicine, but many men who participated in the clinical testing phase of the drug noted one side effect in particular: erections. Viagra works by boosting levels of a natural molecule called cyclic GMP that plays a key role in cell signaling in many body tissues. This molecule does a good job of opening blood vessels in the penis, leading to an erection.
4.02: 21st-Century Science
21st-Century Science
While strategies such as chemical genetics can quicken the pace of drug discovery, other approaches may help expand the number of molecular targets from several hundred to several thousand. Many of these new avenues of research hinge on biology.
Relatively new brands of research that are stepping onto center stage in 21st-century science include genomics (the study of all of an organism's genetic material), proteomics (the study of all of an organism's proteins), and bioinformatics (using computers to sift through large amounts of biological data). The "omics" revolution in biomedicine stems from biology's gradual transition from a gathering, descriptive enterprise to a science that will someday be able to model and predict biology. If you think 25,000 genes is a lot (the number of genes in the human genome), realize that each gene can give rise to multiple protein variants, each with a different molecular job. Scientists estimate that humans have hundreds of thousands of protein variants. Clearly, there's lots of work to be done, which will undoubtedly keep researchers busy for years to come.
A Chink in Cancer's Armor
Doctors use the drug Gleevec to treat a form of leukemia, a disease in which abnormally high numbers of immune cells (larger, purple circles in photo) populate the blood.
Recently, researchers made an exciting step forward in the treatment of cancer. Years of basic research investigating circuits of cellular communication led scientists to tailor-make a new kind of cancer medicine. In May 2001, the drug Gleevec™ was approved to treat a rare cancer of the blood called chronic myelogenous leukemia (CML). The Food and Drug Administration described Gleevec's approval as "…a testament to the groundbreaking scientific research taking place in labs throughout America."
Researchers designed this drug to halt a cell-communication pathway that is always "on" in CML. Their success was founded on years of experiments in the basic biology of how cancer cells grow. The discovery of Gleevec is an example of the success of so-called molecular targeting: understanding how diseases arise at the level of cells, then figuring out ways to treat them. Scores of drugs, some to treat cancer but also many other health conditions, are in the research pipeline as a result of scientists' eavesdropping on how cells communicate.
Finding new medicines and cost-effective ways to manufacture them is only half the battle. An enormous challenge for pharmacologists is figuring out how to get drugs to the right place, a task known as drug delivery.
Ideally, a drug should enter the body, go directly to the diseased site while bypassing healthy tissue, do its job, and then disappear. Unfortunately, this rarely happens with the typical methods of delivering drugs: swallowing and injection. When swallowed, many medicines made of protein are never absorbed into the bloodstream because they are quickly chewed up by enzymes as they pass through the digestive system. If the drug does get to the blood from the intestines, it falls prey to liver enzymes. For doctors prescribing such drugs, this first-pass effect means that several doses of an oral drug are needed before enough makes it to the blood. Drug injections also cause problems, because they are expensive, difficult for patients to self-administer, and unwieldy if the drug must be taken daily. Both methods of administration also result in fluctuating levels of the drug in the blood, which is inefficient and can be dangerous.
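The first-pass effect described above can be sketched as simple arithmetic: oral bioavailability is roughly the fraction of drug absorbed from the gut multiplied by the fraction that escapes liver metabolism. The numbers in this Python sketch are hypothetical, chosen only to illustrate why oral doses must be larger than the amount that actually needs to reach the blood.

```python
# Minimal sketch of the first-pass effect. All fractions are hypothetical
# illustration values, not data for any real drug.

def oral_bioavailability(f_abs: float, f_liver: float) -> float:
    """Fraction of an oral dose reaching circulation: gut absorption
    times survival of first-pass liver metabolism."""
    return f_abs * f_liver

def oral_dose_for_target(target_mg: float, f_abs: float, f_liver: float) -> float:
    """Oral dose needed so that `target_mg` actually reaches the blood."""
    return target_mg / oral_bioavailability(f_abs, f_liver)

# If only 50% is absorbed from the gut and the liver destroys 60% of what
# gets through, F = 0.5 * 0.4 = 0.2, so reaching a 10 mg systemic target
# takes a 50 mg oral dose.
print(oral_dose_for_target(10, f_abs=0.5, f_liver=0.4))  # 50.0
```

Skin patches, inhalers, and nasal sprays sidestep this penalty by entering the bloodstream without passing through the liver first.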
What to do? Pharmacologists can work around the first-pass effect by delivering medicines via the skin, nose, and lungs. Each of these methods bypasses the intestinal tract and can increase the amount of drug getting to the desired site of action in the body. Slow, steady drug delivery directly to the bloodstream—without stopping at the liver first—is the primary benefit of skin patches, which makes this form of drug delivery particularly useful when a chemical must be administered over a long period.
Hormones such as testosterone, progesterone, and estrogen are available as skin patches. These forms of medicines enter the blood via a meshwork of small arteries, veins, and capillaries in the skin. Researchers also have developed skin patches for a wide variety of other drugs. Some of these include Duragesic® (a prescription-only pain medicine), Transderm Scop® (a motion-sickness drug), and Transderm Nitro® (a blood vessel-widening drug used to treat chest pain associated with heart disease). Despite their advantages, however, skin patches have a significant drawback. Only very small drug molecules can get into the body through the skin.
Inhaling drugs through the nose or mouth is another way to rapidly deliver drugs and bypass the liver. Inhalers have been a mainstay of asthma therapy for years, and doctors prescribe nasal steroid drugs for allergy and sinus problems.
Researchers are investigating insulin powders that can be inhaled by people with diabetes who rely on insulin to control their blood sugar daily. This still-experimental technology stems from novel uses of chemistry and engineering to manufacture insulin particles of just the right size. Too large, and the insulin particles could lodge in the lungs; too small, and the particles will be exhaled. If clinical trials with inhaled insulin prove that it is safe and effective, then this therapy could make life much easier for people with diabetes.
Reading a Cell MAP
Scientists try hard to listen to the noisy, garbled "discussions" that take place inside and between cells. Less than a decade ago, scientists identified one very important cellular communication stream called MAP (mitogen-activated protein) kinase signaling. Today, molecular pharmacologists such as Melanie H. Cobb of the University of Texas Southwestern Medical Center at Dallas are studying how MAP kinase signaling pathways malfunction in unhealthy cells.
Kinases are enzymes that add phosphate groups (red-yellow structures) to proteins (green), assigning the proteins a code. In this reaction, an intermediate molecule called ATP (adenosine triphosphate) donates a phosphate group from itself, becoming ADP (adenosine diphosphate).
Some of the interactions between proteins in these pathways involve adding and taking away tiny molecular labels called phosphate groups. Kinases are the enzymes that add phosphate groups to proteins, and this process is called phosphorylation. Marking proteins in this way assigns the proteins a code, instructing the cell to do something, such as divide or grow. The body employs many, many signaling pathways involving hundreds of different kinase enzymes. Some of the important functions performed by MAP kinase pathways include instructing immature cells how to "grow up" to be specialized cell types like muscle cells, helping cells in the pancreas respond to the hormone insulin, and even telling cells how to die.
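A toy model can make the phosphorylation "code" concrete: a kinase adds a phosphate label to a protein at the cost of one ATP, and the pattern of labels is the instruction the cell reads. The site names below borrow MAP kinase naming conventions but, like all quantities here, are illustrative rather than a simulation of real chemistry.

```python
# Toy model of kinase phosphorylation as described in the text: each
# phosphate mark added to a protein consumes one ATP (ATP -> ADP).
# Site names and ATP counts are illustrative only.

def phosphorylate(protein: set, site: str, atp: int) -> int:
    """Add a phosphate mark at `site`; each mark costs one ATP."""
    if atp < 1:
        raise ValueError("no ATP available")
    protein.add(site)   # the phosphate "label" assigned by the kinase
    return atp - 1      # the spent ATP becomes ADP

signals = set()         # phosphorylation state of one protein
atp_pool = 3
atp_pool = phosphorylate(signals, "Thr-202", atp_pool)
atp_pool = phosphorylate(signals, "Tyr-204", atp_pool)

# The pattern of marks is the code the cell reads (e.g., "divide" or "grow").
print(sorted(signals), atp_pool)  # ['Thr-202', 'Tyr-204'] 1
```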
Since MAP kinase pathways are key to so many important cell processes, researchers consider them good targets for drugs. Clinical trials are under way to test various molecules that, in animal studies, can effectively lock up MAP kinase signaling when it's not wanted, for example, in cancer and in diseases involving an overactive immune system, such as arthritis. Researchers predict that if drugs to block MAP kinase signaling prove effective in people, they will likely be used in combination with other medicines that treat a variety of health conditions, since many diseases are probably caused by simultaneous errors in multiple signaling pathways.
Proteins that snake through membranes help transport molecules into cells. HTTP://WWW.PHARMACOLOGY.UCLA.EDU
Scientists are solving the dilemma of drug delivery with a variety of other clever techniques. Many of the techniques are geared toward sneaking drugs through the cell's gatekeeping membranes. The challenge is a chemistry problem—most drugs are water-soluble, but membranes are oily. Water and oil don't mix, and thus many drugs can't enter the cell. To make matters worse, size matters too. Membranes are usually constructed to permit the entry of only small nutrients and hormones, often through private cellular alleyways called transporters.
Many pharmacologists are working hard to devise ways to work not against, but with nature, by learning how to hijack molecular transporters to shuttle drugs into cells. Gordon Amidon, a pharmaceutical chemist at the University of Michigan-Ann Arbor, has been studying one particular transporter in mucosal membranes lining the digestive tract. The transporter, called hPEPT1, normally serves the body by ferrying small, electrically charged particles and small protein pieces called peptides into and out of the intestines.
Amidon and other researchers discovered that certain medicines, such as the antibiotic penicillin and certain types of drugs used to treat high blood pressure and heart failure, also travel into the intestines via hPEPT1. Recent experiments revealed that the herpes drug Valtrex® and the AIDS drug Retrovir® also hitch a ride into intestinal cells using the hPEPT1 transporter. Amidon wants to extend this list by synthesizing hundreds of different molecules and testing them for their ability to use hPEPT1 and other similar transporters. Recent advances in molecular biology, genomics, and bioinformatics have sped the search for molecules that Amidon and other researchers can test.
Scientists are also trying to slip molecules through membranes by cloaking them in disguise. Steven Regen of Lehigh University in Bethlehem, Pennsylvania, has manufactured miniature chemical umbrellas that close around and shield a molecule when it encounters a fatty membrane and then spread open in the watery environment inside a cell. So far, Regen has only used test molecules, not actual drugs, but he has succeeded in getting molecules that resemble small segments of DNA across membranes. The ability to do this in humans could be a crucial step in successfully delivering therapeutic molecules to cells via gene therapy.
4.05: Act Like a Membrane
Researchers know that high concentrations of chemotherapy drugs will kill every single cancer cell growing in a lab dish, but getting enough of these powerful drugs to a tumor in the body without killing too many healthy cells along the way has been exceedingly difficult. These powerful drugs can do more harm than good by severely sickening a patient during treatment.
Some researchers are using membrane-like particles called liposomes to package and deliver drugs to tumors. Liposomes are oily, microscopic capsules that can be filled with biological cargo, such as a drug. They are very, very small—only one one-thousandth the width of a single human hair. Researchers have known about liposomes for many years, but getting them to the right place in the body hasn't been easy. Once in the bloodstream, these foreign particles are immediately shipped to the liver and spleen, where they are destroyed.
Anesthesia Dissected
Scientists who study anesthetic medicines have a daunting task—for the most part, they are "shooting in the dark" when it comes to identifying the molecular targets of these drugs. Researchers do know that anesthetics share one common ingredient: Nearly all of them somehow target membranes, the oily wrappings surrounding cells. However, despite the fact that anesthesia is a routine part of surgery, exactly how anesthetic medicines work in the body has remained a mystery for more than 150 years. It's an important problem, since anesthetics have multiple effects on key body functions, including critical processes such as breathing.
Scientists define anesthesia as a state in which no movement occurs in response to what should be painful. The problem is that even though a patient loses the pain response, the anesthesiologist can't tell what is happening inside the person's organs and cells. Further complicating the issue, scientists know that many different types of drugs—with little physical resemblance to each other—can all produce anesthesia. This makes it difficult to track down causes and effects.
Anesthesiologist Robert Veselis of the Memorial Sloan-Kettering Institute for Cancer Research in New York City clarified how certain types of these mysterious medicines work. Veselis and his coworkers measured electrical activity in the brains of healthy volunteers receiving anesthetics while they listened to different sounds. To determine how sedated the people were, the researchers measured reaction time to the sounds the people heard. To measure memory effects, they quizzed the volunteers at the end of the study about word lists they had heard before and during anesthesia. Veselis' experiments show that the anesthetics they studied affect separate brain areas to produce the two different effects of sedation and memory loss. The findings may help doctors give anesthetic medicines more effectively and safely and prevent reactions with other drugs a patient may be taking.
Materials engineer David Needham of Duke University in Durham, North Carolina, is investigating the physics and chemistry of liposomes to better understand how the liposomes and their cancer-fighting cargo can travel through the body. Needham worked for 10 years to create a special kind of liposome that melts at just a few degrees above body temperature. The end result is a tiny molecular "soccer ball" made from two different oils that wrap around a drug. At room temperature, the liposomes are solid, and they stay solid at body temperature, so they can be injected into the bloodstream. The liposomes are designed to spill their drug cargo into a tumor when heat is applied to the cancerous tissue. Heat is known to perturb tumors, making the blood vessels surrounding cancer cells extra-leaky. As the liposomes approach the warmed tumor tissue, the "stitches" of the miniature soccer balls begin to dissolve, rapidly leaking the liposome's contents.
Needham and Duke oncologist Mark Dewhirst teamed up to do animal studies with the heat-activated liposomes. Experiments in mice and dogs revealed that, when heated, the drug-laden capsules flooded tumors with a chemotherapy drug and killed the cancer cells inside. Researchers hope to soon begin the first stage of human studies testing the heat-triggered liposome treatment in patients with prostate and breast cancer. The results of these and later clinical trials will determine whether liposome therapy can be a useful weapon for treating breast and prostate cancer and other hard-to-treat solid tumors.
David Needham designed liposomes resembling tiny molecular "soccer balls" made from two different oils that wrap around a drug.
LAWRENCE MAYER, LUDGER ICKENSTEIN, KATRINA EDWARDS
G proteins act like relay batons to pass messages from circulating hormones into cells.
1. A hormone (red) encounters a receptor (blue) in the membrane of a cell.
2. A G protein (green) becomes activated and makes contact with the receptor to which the hormone is attached.
3. The G protein passes the hormone's message to the cell by switching on a cell enzyme (purple) that triggers a response.
Imagine yourself sitting on a cell, looking outward to the bloodstream rushing by. Suddenly, a huge glob of something hurls toward you, slowing down just as it settles into a perfect dock on the surface of your cell perch. You don't realize it, but your own body sent this substance—a hormone called epinephrine—to protect you, telling you to get out of the way of a car that just about sideswiped yours while drifting out of its lane. Your body reacts, whipping up the familiar, spine-tingling, "fight-or-flight" response that gears you to respond quickly to potentially threatening situations such as this one.
How does it all happen so fast?
Getting into a cell is a challenge, a strictly guarded process kept in control by a protective gate called the plasma membrane. Figuring out how molecular triggers like epinephrine communicate important messages to the inner parts of cells earned two scientists the Nobel Prize in physiology or medicine in 1994. Getting a cellular message across the membrane is called signal transduction, and it occurs in three steps. First, a message (such as epinephrine) encounters the outside of a cell and makes contact with a molecule on the surface called a receptor. Next, a connecting transducer, or switch molecule, passes the message inward, sort of like a relay baton. Finally, in the third step, the signal gets amplified, prompting the cell to do something: move, produce new proteins, even send out more signals.
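The three-step relay described above can be caricatured as a toy simulation. This is a minimal, purely illustrative sketch: every function name is invented for this example, and real cellular signaling involves vastly more molecular machinery than shown here.

```python
# Toy model of the three signal transduction steps described in the text:
# (1) a hormone docks at a surface receptor, (2) a G-protein switch relays
# the message inward, (3) an enzyme amplifies one signal into many responses.
# All names here are hypothetical and for illustration only.

def receive(hormone, receptor):
    """Step 1: contact is made only if the hormone matches its receptor."""
    return hormone == receptor

def relay(contact_made):
    """Step 2: the G-protein switch turns on only while the receptor is occupied."""
    return "G-protein ON" if contact_made else "G-protein OFF"

def amplify(switch_state, gain=100):
    """Step 3: an activated enzyme multiplies one message into many cellular responses."""
    return gain if switch_state == "G-protein ON" else 0

contact = receive("epinephrine", "epinephrine")
switch = relay(contact)
responses = amplify(switch)
print(switch, responses)  # one docking event yields many downstream responses
```

The amplification step is the key point: a single hormone molecule never enters the cell, yet it can trigger hundreds of enzymatic responses inside it, which is why the "fight-or-flight" reaction happens so fast.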
One of the Nobel Prize winners, pharmacologist Alfred G. Gilman of the University of Texas Southwestern Medical Center at Dallas, uncovered the identity of the switch molecule, called a G protein. Gilman named the switch, which is actually a huge family of switch molecules, not after himself but after the type of cellular fuel it uses: an energy currency called GTP. As with any switch, G proteins must be turned on only when needed, then shut off. Some illnesses, including fatal diseases like cholera, occur when a G protein is errantly left on. In the case of cholera, the poisonous weaponry of the cholera bacterium "freezes" in place one particular type of G protein that controls water balance. The effect is constant fluid leakage, causing life-threatening diarrhea.
In the few decades since Gilman and the other Nobel Prize winner, the late National Institutes of Health scientist Martin Rodbell, made their fundamental discovery about G protein switches, pharmacologists all over the world have focused on these signaling molecules. Research on G proteins and on all aspects of cell signaling has prospered, and as a result scientists now have an avalanche of data. In the fall of 2000, Gilman embarked on a groundbreaking effort to begin to untangle and reconstruct some of this information to guide the way toward creating a "virtual cell." Gilman leads the Alliance for Cellular Signaling, a large, interactive research network. The group has a big dream: to understand everything there is to know about signaling inside cells. According to Gilman, Alliance researchers focus lots of attention on G proteins and also on other signaling systems in selected cell types. Ultimately, the scientists hope to test drugs and learn about disease through computer modeling experiments with the virtual cell system.
Exercise \(1\)
1. What is a liposome?
2. Name three drug delivery methods.
3. Describe how G proteins work.
4. What do kinases do?
5. Discuss the "omics" revolution in biomedical research.
05: Medicines for the Future
The advances in drug development and delivery described in this booklet reflect scientists' growing knowledge about human biology. This knowledge has allowed them to develop medicines targeted to specific molecules or cells. In the future, doctors may be able to treat or prevent diseases with drugs that actually repair cells or protect them from attack. No one knows which of the techniques now being developed will yield valuable future medicines, but it is clear that thanks to pharmacology research, tomorrow's doctors will have an unprecedented array of weapons to fight disease.
Careers in Pharmacology
Wanna be a pharmacologist? If you choose pharmacology as a career, here are some of the places you might find yourself working:
College or University. Most basic biomedical research across the country is done by scientists at colleges and universities. Academic pharmacologists perform research to determine how medicines interact with living systems. They also teach pharmacology to graduate, medical, pharmacy, veterinary, dental, or undergraduate students.
Pharmaceutical Company. Pharmacologists who work in industry participate in drug development as part of a team of scientists. A key aspect of pharmaceutical industry research is making sure new medicines are effective and safe for use in people.
Hospital or Medical Center. Most clinical pharmacologists are physicians who have specialized training in the use of drugs and combinations of drugs to treat various health conditions. These scientists often work with patients and spend a lot of time trying to understand issues relating to drug dosage, including side effects and drug interactions.
Government Agency. Pharmacologists and toxicologists play key roles in formulating drug laws and chemical regulations. Federal agencies such as the National Institutes of Health and the Food and Drug Administration hire many pharmacologists for their expertise in how drugs work. These scientists help develop policies about the safe use of medicines.
You can learn more about careers in pharmacology by contacting professional organizations such as the American Society for Pharmacology and Experimental Therapeutics (http://www.aspet.org/) or the American Society for Clinical Pharmacology and Therapeutics (http://www.ascpt.org/).