Introduction
In the past few years, nanotechnology research has expanded out of the chemistry department and into the fields of medicine, energy, aerospace, and even computing and information technology. With bulk materials, the surface area to volume ratio is insignificant in relation to the number of atoms in the bulk; however, when the particles are only 1 to 100 nm across, different properties begin to arise. For example, commercial grade zinc oxide has a surface area range of 2.5 to 12 m2/g, while nanoparticle zinc oxide can have surface areas as high as 54 m2/g. The nanoparticles have superior UV blocking properties when compared to the bulk material, making them useful in applications such as sunscreen. Many useful properties of nanoparticles arise from their small size, making it very important to be able to determine their surface area.
Overview of BET Theory
The BET theory was developed by Stephen Brunauer (Figure $1$ ), Paul Emmett (Figure $2$ ), and Edward Teller (Figure $3$ ) in 1938. The first letter of each author’s surname was taken to name this theory. The BET theory was an extension of the Langmuir theory, developed by Irving Langmuir (Figure $4$ ) in 1916.
The Langmuir theory relates the monolayer adsorption of gas molecules (Figure $5$ ), also called adsorbates, onto a solid surface to the gas pressure of a medium above the solid surface at a fixed temperature according to \ref{1} , where θ is the fractional coverage of the surface, P is the gas pressure, and α is a constant.
$\Theta \ =\ \frac{\alpha \cdot P}{1\ +\ (\alpha \cdot P)} \label{1}$
The Langmuir theory is based on the following assumptions:
• All surface sites have the same adsorption energy for the adsorbate, which is usually argon, krypton or nitrogen gas. The surface site is defined as the area on the sample where one molecule can adsorb onto.
• Adsorption of the adsorbate at one site occurs independently of adsorption at neighboring sites.
• Activity of adsorbate is directly proportional to its concentration.
• Adsorbates form a monolayer.
• Each active site can be occupied only by one particle.
The Langmuir theory has a few flaws that are addressed by the BET theory. The BET theory extends the Langmuir theory to multilayer adsorption (Figure $1$ ) with three additional assumptions:
• Gas molecules will physically adsorb on a solid in layers infinitely.
• The different adsorption layers do not interact.
• The theory can be applied to each layer.
How does BET Work?
Adsorption is defined as the adhesion of atoms or molecules of gas to a surface. It should be noted that adsorption should not be confused with absorption, in which a fluid permeates a liquid or solid. The amount of gas adsorbed depends not only on the exposed surface area but also on the temperature, gas pressure, and strength of interaction between the gas and solid. In BET surface area analysis, nitrogen is usually used because of its availability in high purity and its strong interaction with most solids. Because the interaction between gaseous and solid phases is usually weak, the surface is cooled using liquid N2 to obtain detectable amounts of adsorption. Known amounts of nitrogen gas are then released stepwise into the sample cell. Relative pressures less than atmospheric pressure are achieved by creating conditions of partial vacuum. After the saturation pressure is reached, no more adsorption occurs regardless of any further increase in pressure. Highly precise and accurate pressure transducers monitor the pressure changes due to the adsorption process. After the adsorption layers are formed, the sample is removed from the nitrogen atmosphere and heated to cause the adsorbed nitrogen to be released from the material and quantified. The data collected is displayed in the form of a BET isotherm, which plots the amount of gas adsorbed as a function of the relative pressure. There are five types of adsorption isotherms possible.
Type I Isotherm
Type I is a pseudo-Langmuir isotherm because it depicts monolayer adsorption (Figure $6$ ). A type I isotherm is obtained when P/Po < 1 and c > 1 in the BET equation, where P/Po is the partial pressure value and c is the BET constant, which is related to the adsorption energy of the first monolayer and varies from solid to solid. The characterization of microporous materials, those with pore diameters less than 2 nm, gives this type of isotherm.
Type II Isotherm
A type II isotherm (Figure $7$ ) is very different than the Langmuir model. The flatter region in the middle represents the formation of a monolayer. A type II isotherm is obtained when c > 1 in the BET equation. This is the most common isotherm obtained when using the BET technique. At very low pressures, the micropores fill with nitrogen gas. At the knee, monolayer formation is beginning and multilayer formation occurs at medium pressure. At the higher pressures, capillary condensation occurs.
Type III Isotherm
A type III isotherm (Figure $8$ ) is obtained when the c < 1 and shows the formation of a multilayer. Because there is no asymptote in the curve, no monolayer is formed and BET is not applicable.
Type IV Isotherm
Type IV isotherms (Figure $9$ ) occur when capillary condensation occurs. Gases condense in the tiny capillary pores of the solid at pressures below the saturation pressure of the gas. At the lower pressure regions, it shows the formation of a monolayer followed by a formation of multilayers. BET surface area characterization of mesoporous materials, which are materials with pore diameters between 2 - 50 nm, gives this type of isotherm.
Type V Isotherm
Type V isotherms (Figure $10$ ) are very similar to type IV isotherms and are not applicable to BET.
Calculations
The BET Equation, \ref{2} , uses the information from the isotherm to determine the surface area of the sample, where X is the weight of nitrogen adsorbed at a given relative pressure (P/Po), Xm is the monolayer capacity, which is the volume of gas adsorbed at standard temperature and pressure (STP), and C is a constant. STP is defined as 273 K and 1 atm.
$\frac{1}{X[(P_{0}/P)-1]} = \frac{1}{X_{m}C} + \frac{C-1}{X_{m}C} (\frac{P}{P_{0}}) \label{2}$
Multi-point BET
Ideally five data points, with a minimum of three data points, in the P/P0 range 0.025 to 0.30 should be used to successfully determine the surface area using the BET equation. At relative pressures higher than 0.5, there is the onset of capillary condensation, and at relative pressures that are too low, only monolayer formation is occurring. When the BET equation is plotted, the graph should be linear with a positive slope. If such a graph is not obtained, then the BET method is insufficient for obtaining the surface area.
• The slope and y-intercept can be obtained using least squares regression.
• The monolayer capacity Xm can be calculated with \ref{3} , where s is the slope and i is the y-intercept of the BET plot.
• Once Xm is determined, the total surface area St can be calculated with the following equation, where Lav is Avogadro’s number, Am is the cross sectional area of the adsorbate and equals 0.162 nm2 for an absorbed nitrogen molecule, and Mv is the molar volume and equals 22414 mL, \ref{4} .
$X_{m}\ = \frac{1}{s\ +\ i} = \frac{C-1}{Cs} \label{3}$
$S\ = \frac{X_{m} L_{av} A_{m}}{M_{v}} \label{4}$
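As a minimal illustration of the multi-point calculation described above, the following Python sketch fits the linearized BET equation, \ref{2} , to isotherm data and then evaluates \ref{3} and \ref{4} . The pressures and adsorbed amounts below are invented values used only to demonstrate the arithmetic; they are not data from the text.

```python
import numpy as np

# Hypothetical isotherm data (illustration only): relative pressures P/P0 in the
# recommended 0.025 - 0.30 range and adsorbed amounts X (e.g., cm3/g at STP).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
X     = np.array([48.0, 55.0, 61.0, 67.0, 73.0, 80.0])

# Linearized BET equation (2): y = 1/(X[(P0/P)-1]) plotted against P/P0.
y = 1.0 / (X * (1.0 / p_rel - 1.0))

# Least-squares regression gives the slope s and y-intercept i.
s, i = np.polyfit(p_rel, y, 1)

# Equation (3): monolayer capacity; the BET constant follows from C = s/i + 1.
X_m = 1.0 / (s + i)            # same units as X (cm3/g at STP)
C   = s / i + 1.0

# Equation (4): total surface area, assuming N2 (Am = 0.162 nm^2 = 0.162e-18 m^2).
N_A = 6.022e23                 # Avogadro's number, 1/mol
A_m = 0.162e-18                # cross-sectional area of adsorbed N2, m^2
M_v = 22414.0                  # molar volume of gas at STP, cm^3/mol
S_t = X_m * N_A * A_m / M_v    # m^2 per gram of sample

print(f"Xm = {X_m:.1f} cm3/g, C = {C:.1f}, surface area = {S_t:.1f} m2/g")
```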
Single-point BET can also be used by setting the intercept to 0 and ignoring the value of C. The data point at a relative pressure of 0.3 matches the multi-point BET result most closely. Single-point BET can therefore be used ahead of the more accurate multi-point BET to determine the appropriate relative pressure range for the multi-point measurement.
Sample Preparation and Experimental Setup
Prior to any measurement the sample must be degassed to remove water and other contaminants before the surface area can be accurately measured. Samples are degassed in a vacuum at high temperatures. The highest temperature possible that will not damage the sample’s structure is usually chosen in order to shorten the degassing time. IUPAC recommends that samples be degassed for at least 16 hours to ensure that unwanted vapors and gases are removed from the surface of the sample. Generally, samples that can withstand higher temperatures without structural changes require shorter degassing times. A minimum of 0.5 g of sample is required for BET analysis to successfully determine the surface area.
Samples are placed in glass cells to be degassed and analyzed by the BET machine. Glass rods are placed within the cell to minimize the dead space in the cell. Sample cells typically come in sizes of 6, 9 and 12 mm and come in different shapes. 6 mm cells are usually used for fine powders, 9 mm cells for larger particles and small pellets and 12 mm are used for large pieces that cannot be further reduced. The cells are placed into heating mantles and connected to the outgas port of the machine.
After the sample is degassed, the cell is moved to the analysis port (Figure $11$ ). Dewars of liquid nitrogen are used to cool the sample and maintain it at a constant temperature. A low temperature must be maintained so that the interaction between the gas molecules and the surface of the sample will be strong enough for measurable amounts of adsorption to occur. The adsorbate, nitrogen gas in this case, is injected into the sample cell with a calibrated piston. The dead volume in the sample cell must be calibrated before and after each measurement. To do that, helium gas is used for a blank run, because helium does not adsorb onto the sample.
Shortcomings of BET
The BET technique has some disadvantages when compared to NMR, which can also be used to measure the surface area of nanoparticles. BET measurements can only be used to determine the surface area of dry powders. This technique requires a lot of time for the adsorption of gas molecules to occur. A lot of manual preparation is required.
The Surface Area Determination of Metal-Organic Frameworks
The BET technique was used to determine the surface areas of metal-organic frameworks (MOFs), which are crystalline compounds of metal ions coordinated to organic molecules. Possible applications of MOFs, which are porous, include gas purification and catalysis. An isoreticular MOF (IRMOF) with the chemical formula Zn4O(pyrene-1,2-dicarboxylate)3 (Figure $12$ ) was used as an example to see if BET could accurately determine the surface area of microporous materials. The predicted surface area was calculated directly from the geometry of the crystals and agreed with the data obtained from the BET isotherms. Data was collected at a constant temperature of 77 K and a type II isotherm (Figure $13$ ) was obtained.
The isotherm data obtained from partial pressure range of 0.05 to 0.3 is plugged into the BET equation, \ref{2} , to obtain the BET plot (Figure $14$ ).
Using \ref{5} , the monolayer capacity is determined to be 391.2 cm3/g.
$X_{m}\ = \frac{1}{(2.66\times 10^{-3})\ +\ (-5.212\times 10^{-5})} \label{5}$
Now that Xm is known, then \ref{6} can be used to determine that the surface area is 1702.3 m2/g.
$S\ =\frac{391.2\ cm^{3} \ast 0.162\ nm^{2} \ast 6.02\times 10^{23}}{22.414\ L} \label{6}$
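As a quick numerical check of the IRMOF example above, the short snippet below reproduces the surface area from the monolayer capacity quoted in the text; only values already given above are used.

```python
# Numerical check of the IRMOF example (values taken from the text above).
X_m = 391.2          # monolayer capacity, cm^3/g at STP
N_A = 6.02e23        # Avogadro's number, 1/mol
A_m = 0.162e-18      # cross-sectional area of adsorbed N2, m^2
M_v = 22414.0        # molar volume at STP, cm^3/mol

S_t = X_m * N_A * A_m / M_v
print(f"S = {S_t:.1f} m2/g")   # ~1702 m2/g, matching the value quoted above
```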
Dynamic light scattering (DLS), also known as photon correlation spectroscopy (PCS) or quasi-elastic light scattering (QLS), is a spectroscopic method used in the fields of chemistry, biochemistry, and physics to determine the size distribution of particles (polymers, proteins, colloids, etc.) in solution or suspension. In a DLS experiment, a laser normally provides the monochromatic incident light, which impinges onto a solution containing small particles in Brownian motion. Through the Rayleigh scattering process, particles whose sizes are sufficiently small compared to the wavelength of the incident light scatter the incident light in all directions, with intensities and frequencies that vary as a function of time. Since the scattering pattern of the light is highly correlated to the size distribution of the analyzed particles, size-related information about the sample can then be acquired by mathematically processing the spectral characteristics of the scattered light.
Herein, a brief introduction to the basic theory of DLS is presented, followed by descriptions of and guidance on the instrument itself and the sample preparation and measurement process. Finally, data analysis of the DLS measurement, applications of DLS, and a comparison against other size-determination techniques are summarized.
DLS Theory
The theory of DLS can be introduced utilizing a model system of spherical particles in solution. According to Rayleigh scattering (Figure $1$ ), when a sample contains particles with diameters smaller than the wavelength of the incident light, each particle will diffract the incident light in all directions, with an intensity $I$ determined by \ref{1} , where $I_0$ and $λ$ are the intensity and wavelength of the unpolarized incident light, $R$ is the distance to the particle, $θ$ is the scattering angle, $n$ is the refractive index of the particle, and $r$ is the radius of the particle.
$I\ =\ I_{0} \frac{1\ +\cos^{2}\theta}{2R^{2}} \left(\frac{2\pi }{\lambda }\right)^{4}\left(\frac{n^{2}\ -\ 1}{n^{2}\ +\ 2}\right)^{2}r^{6} \label{1}$
If that diffracted light is projected as an image onto a screen, it will generate a “speckle" pattern (Figure $2$ ); the dark areas represent regions where the diffracted light from the particles arrives out of phase, interfering destructively, and the bright areas represent regions where the diffracted light arrives in phase, interfering constructively.
In practice, particle samples are normally not stationary but moving randomly due to collisions with solvent molecules as described by the Brownian motion, \ref{2}, where $\overline{(\Delta x)^{2}}$ is the mean squared displacement in time t, and D is the diffusion constant, which is related to the hydrodynamic radius a of the particle according to the Stokes-Einstein equation, \ref{3} , where kB is Boltzmann constant, T is the temperature, and μ is viscosity of the solution. Importantly, for a system undergoing Brownian motion, small particles should diffuse faster than large ones.
$\overline{(\Delta x)^{2}}\ =\ 2D\Delta t \label{2}$
$D\ =\frac{k_{B}T}{6\pi \mu a} \label{3}$
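To make the Stokes-Einstein relation, \ref{3} , concrete, the short sketch below converts a diffusion constant into a hydrodynamic radius for particles in water at room temperature. The numerical values (viscosity, temperature, and the diffusion constant) are illustrative assumptions, not data from the text.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 298.15            # temperature, K
mu  = 8.9e-4            # viscosity of water at 25 C, Pa*s (assumed)
D   = 2.0e-11           # hypothetical measured diffusion constant, m^2/s

# Stokes-Einstein equation (3): a = k_B*T / (6*pi*mu*D)
a = k_B * T / (6.0 * math.pi * mu * D)
print(f"hydrodynamic radius = {a*1e9:.1f} nm")   # ~12 nm for these inputs
```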
As a result of the Brownian motion, the distance between particles is constantly changing and this results in a Doppler shift between the frequency of the incident light and the frequency of the scattered light. Since the distance between particles also affects the phase overlap/interfering of the diffracted light, the brightness and darkness of the spots in the “speckle” pattern will in turn fluctuate in intensity as a function of time when the particles change position with respect to each other. Then, as the rate of these intensity fluctuations depends on how fast the particles are moving (smaller particles diffuse faster), information about the size distribution of particles in the solution could be acquired by processing the fluctuations of the intensity of scattered light. Figure $3$ shows the hypothetical fluctuation of scattering intensity of larger particles and smaller particles.
In order to mathematically process the fluctuation of intensity, there are several principles/terms to be understood. First, the intensity correlation function is used to describe the rate of change in scattering intensity by comparing the intensity $I(t)$ at time $t$ to the intensity $I(t + τ)$ at a later time $(t + τ)$, and is quantified and normalized by \ref{4} and \ref{5}, where braces indicate averaging over $t$.
$G_{2} ( \tau ) =\ \langle I(t)I(t\ +\ \tau)\rangle \label{4}$
$g_{2}(\tau )=\frac{\langle I(t)I(t\ +\ \tau)\rangle}{\langle I(t)\rangle ^{2}} \label{5}$
Second, since it is not possible to know how each particle moves from the fluctuation, the electric field correlation function is instead used to correlate the motion of the particles relative to each other, and is defined by \ref{6} and \ref{7} , where E(t) and E(t + τ) are the scattered electric fields at times t and t+ τ.
$G_{1}(\tau ) =\ \langle E(t)E(t\ +\ \tau )\rangle \label{6}$
$g_{1}(\tau ) = \frac{\langle E(t)E(t\ +\ \tau)\rangle}{\langle E(t) E(t)\rangle} \label{7}$
For a monodisperse system undergoing Brownian motion, g1(τ) will decay exponentially with a decay rate Γ that is related through the Brownian motion to the diffusivity by \ref{8} , \ref{9} , and \ref{10} , where q is the magnitude of the scattering wave vector and q2 reflects the distance the particle travels, n is the refractive index of the solution, and θ is the angle at which the detector is located.
$g_{1}(\tau )=\ e^{- \Gamma \tau} \label{8}$
$\Gamma \ =\ Dq^{2} \label{9}$
$q = \frac{4\pi n}{\lambda } \sin\frac{\Theta }{2} \label{10}$
For a polydisperse system however, g1(τ) can no longer be represented as a single exponential decay and must be represented as an intensity-weighted integral over a distribution of decay rates $G(Γ)$ by \ref{11} where G(Γ) is normalized, \ref{12} .
$g_{1}(\tau )= \int ^{\infty}_{0} G(\Gamma )e^{-\Gamma \tau} d\Gamma \label{11}$
$\int ^{\infty}_{0} G(\Gamma ) d\Gamma\ =\ 1 \label{12}$
Third, the two correlation functions above can be equated using the Siegert relationship, based on the assumption that the scattered light is a Gaussian random process (which it usually is), and can be expressed as \ref{13} , where β is a factor that depends on the experimental geometry, and B is the long-time value of g2(τ), which is referred to as the baseline and is normally equal to 1. Figure $4$ shows the decay of g2(τ) for a small-particle sample and a large-particle sample.
$g_{2}(\tau ) =\ B\ +\ \beta [g_{1}(\tau )]^{2} \label{13}$
When determining the size of particles in solution using DLS, g2(τ) is calculated based on the time-dependent scattering intensity and is converted through the Siegert relationship to g1(τ), which usually is an exponential decay or a sum of exponential decays. The decay rate Γ is then mathematically determined from the g1(τ) curve (as discussed in the data analysis section below), and the values of the diffusion constant D and hydrodynamic radius a can be easily calculated afterwards.
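A minimal sketch of that workflow is shown below for a simulated monodisperse sample: g2(τ) is generated from an assumed particle size, converted to g1(τ) through the Siegert relationship, and the decay rate (and hence D and a) is recovered from a linear fit of ln g1(τ). All numerical parameters (laser wavelength, detector angle, coherence factor, solvent properties) are assumptions chosen purely for illustration.

```python
import numpy as np

# --- assumed experimental parameters (illustrative only) ---
beta, B = 0.8, 1.0                                  # Siegert coherence factor and baseline
n, lam  = 1.33, 633e-9                              # refractive index of water, laser wavelength (m)
theta   = np.deg2rad(173.0)                         # detector angle (assumed backscatter geometry)
k_B, T, mu = 1.380649e-23, 298.15, 8.9e-4           # SI units

q = 4.0 * np.pi * n / lam * np.sin(theta / 2.0)     # equation (10)

# simulate g2(tau) for a monodisperse 50 nm (radius) sample
a_true = 50e-9
D_true = k_B * T / (6.0 * np.pi * mu * a_true)      # equation (3)
Gamma  = D_true * q**2                              # equation (9)
tau    = np.linspace(1e-6, 2e-3, 200)               # delay times, s
g2     = B + beta * np.exp(-2.0 * Gamma * tau)      # equations (8) and (13)

# --- recover the size from g2(tau) ---
g1        = np.sqrt((g2 - B) / beta)                # invert the Siegert relation
Gamma_fit = -np.polyfit(tau, np.log(g1), 1)[0]      # slope of ln g1(tau) is -Gamma
D_fit     = Gamma_fit / q**2
a_fit     = k_B * T / (6.0 * np.pi * mu * D_fit)
print(f"recovered hydrodynamic radius = {a_fit*1e9:.1f} nm")
```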
Experimental
Instrument of DLS
In a typical DLS experiment, light from a laser passes through a polarizer to define the polarization of the incident beam and then shines on the scattering medium. When the sizes of the analyzed particles are sufficiently small compared to the wavelength of the incident light, the incident light scatters in all directions; this is known as Rayleigh scattering. The scattered light then passes through an analyzer, which selects a given polarization, and finally enters a detector, where the position of the detector defines the scattering angle θ. In addition, the intersection of the incident beam and the beam intercepted by the detector defines a scattering region of volume V. As for the detector used in these experiments, a phototube is normally used whose dc output is proportional to the intensity of the scattered light beam. Figure $5$ shows a schematic representation of the light-scattering experiment.
In modern DLS experiments, the spectral distribution of the scattered light is also measured. In these cases, a photomultiplier is the main detector, but the pre- and post-photomultiplier systems differ depending on the frequency change of the scattered light. The three different methods used are the filter (f > 1 MHz), homodyne (f > 10 GHz), and heterodyne (f < 1 MHz) methods, as schematically illustrated in Figure $6$. Note that the homodyne and heterodyne methods use no monochromator or “filter” between the scattering cell and the photomultiplier, and that optical mixing techniques are used for the heterodyne method.
As for an actual DLS instrument, take the Zetasizer Nano (Malvern Instruments Ltd.) as an example (Figure $7$), it actually looks like nothing other than a big box, with components of power supply, optical unit (light source and detector), computer connection, sample holder, and accessories. The detailed procedure of how to use the DLS instrument will be introduced afterwards.
Sample Preparation
Although different DLS instruments may have different analysis ranges, we are usually looking at particles with sizes ranging from nanometers to micrometers in solution. For several kinds of samples, DLS can give results with rather high confidence, such as monodisperse suspensions of unaggregated nanoparticles that have radii > 20 nm, or polydisperse nanoparticle solutions or stable solutions of aggregated nanoparticles that have radii in the 100 - 300 nm range with a polydispersity index of 0.3 or below. For other more challenging samples, such as solutions containing large aggregates, bimodal distributions, very dilute samples, very small nanoparticles, heterogeneous samples, or unknown samples, the results given by DLS may not be reliable, and one must be aware of the strengths and weaknesses of this analytical technique.
Then, for the sample preparation procedure, one important question is how much material should be submitted, or what the optimal concentration of the solution is. Generally, when doing a DLS measurement, it is important to submit enough material in order to obtain sufficient signal, but if the sample is overly concentrated, then light scattered by one particle might be scattered again by another (known as multiple scattering), making the data processing less accurate. An ideal sample submission for DLS analysis has a volume of 1 – 2 mL and is sufficiently concentrated as to have strong color hues, or opaqueness/turbidity in the case of a white or black sample. Alternatively, 100 - 200 μL of highly concentrated sample can be diluted to 1 mL or analyzed in a low-volume microcuvette.
In order to get high quality DLS data, there are also other issues to be concerned with. The first is to minimize particulate contaminants, as it is common for a single particle contaminant to scatter a million times more than a suspended nanoparticle, by using ultra high purity water or solvents, extensively rinsing pipettes and containers, and sealing samples tightly. The second is to filter the sample through a 0.2 or 0.45 μm filter to remove visible particulates from the sample solution. The third is to avoid probe sonication, which can eject particulates from the sonication tip, and to use bath sonication instead.
Measurement
Now that the sample is readily prepared and put into the sample holder of the instrument, the next step is to actually do the DLS measurement. Generally the DLS instrument will be provided with software that can help you to do the measurement rather easily, but it is still worthwhile to understand the important parameters used during the measurement.
Firstly, the laser light source with an appropriate wavelength should be selected. As for the Zetasizer Nano series (Malvern Instruments Ltd.), either a 633 nm “red” laser or a 532 nm “green” laser is available. One should keep in mind that the 633 nm laser is least suitable for blue samples, while the 532 nm laser is least suitable for red samples, since otherwise the sample will just absorb a large portion of the incident light.
Then, for the measurement itself, one has to select the appropriate stabilization time and the duration time. Normally, longer stabilization/duration times result in a more stable signal with less noise, but the time cost should also be considered. Another important parameter is the temperature of the sample: since many DLS instruments are equipped with temperature-controllable sample holders, one can measure the size distribution at different temperatures and obtain extra information about the thermal stability of the sample analyzed.
Next, since they are used in the calculation of particle size from the light scattering data, the viscosity and refractive index of the solution are also needed. Normally, for solutions with low concentration, the viscosity and refractive index of the solvent/water can be used as an approximation.
Finally, to get data with better reliability, the DLS measurement on the same sample is normally conducted multiple times, which helps eliminate unexpected results and also provides error bars for the size distribution data.
Data Analysis
Although size distribution data could be readily acquired from the software of the DLS instrument, it is still worthwhile to know about the details about the data analysis process.
Cumulant method
As mentioned in the Theory section above, the decay rate Γ is mathematically determined from the g1(τ) curve; if the sample solution is monodisperse, g1(τ) can be regarded as a single exponential decay function e-Γτ, and the decay rate Γ can in turn be easily calculated. However, in most practical cases the sample solution is polydisperse, g1(τ) is the sum of many single exponential decay functions with different decay rates, and the fitting process becomes significantly more difficult.
There are, however, a few methods developed to meet this mathematical challenge: linear fit and cumulant expansion for monomodal distributions, and exponential sampling and CONTIN regularization for non-monomodal distributions. Among all these approaches, cumulant expansion is the most common method and will be illustrated in detail in this section.
Generally, the cumulant expansion method is based on two relations: one between g1(τ) and the moment-generating function of the distribution, and one between the logarithm of g1(τ) and the cumulant-generating function of the distribution.
To start with, the form of g1(τ) is equivalent to the definition of the moment-generating function M(-τ, Γ) of the distribution G(Γ), \ref{14} .
$g_{1}(\tau ) =\ \int _{0}^{\infty} G(\Gamma )e^{- \Gamma \tau} d\Gamma \ =\ M(-\tau ,\Gamma) \label{14}$
The mth moment of the distribution $m_{m}(\Gamma)$ is given by the mth derivative of M(-τ, Γ) with respect to τ, \ref{15} .
$m_{m}(\Gamma )=\ \int ^{\infty}_{0} G(\Gamma )\Gamma^{m} e^{-\Gamma \tau} d\Gamma \mid_{- \tau = 0} \label{15}$
Similarly, the logarithm of g1(τ) is equivalent to the definition of the cumulant-generating function K(-τ, Γ), \ref{16} , and the mth cumulant of the distribution km(Γ) is given by the mth derivative of K(-τ, Γ) with respect to τ, \ref{17} .
$ln\ g_{1}(\tau )= ln\ M(-\tau ,\Gamma)\ =\ K(-\tau , \Gamma) \label{16}$
$k_{m}(\Gamma )=\frac{d^{m}K(-\tau , \Gamma )}{d(-\tau )^{m} } \mid_{-\tau = 0} \label{17}$
By making use of the fact that the cumulants, except for the first, are invariant under a change of origin, the km(Γ) can be rewritten in terms of the moments about the mean as \ref{18} , \ref{19} , \ref{20} , and \ref{21} , where μm are the moments about the mean, defined as given in \ref{22} .
\begin{align} k_{1}(\tau ) &=\ \int _{0}^{\infty} G(\Gamma )\Gamma d\Gamma = \bar{\Gamma } \label{18} \\[4pt] k_{2}(\tau ) &=\ \mu _{2} \label{19} \\[4pt] k_{3}(\tau ) &=\ \mu _{3} \label{20} \\[4pt] k_{4}(\tau ) &=\ \mu _{4} - 3\mu ^{2}_{2} \cdots \label{21} \end{align}
$\mu_{m}\ =\ \int _{0}^{\infty} G(\Gamma )(\Gamma \ -\ \bar{\Gamma})^{m} d\Gamma \label{22}$
Based on the Taylor expansion of K(-τ, Γ) about τ = 0, the logarithm of g1(τ) is given as \ref{23} .
$ln\ g_{1}(\tau )=\ K(-\tau , \Gamma )=\ -\bar{\Gamma} \tau \ +\frac{k_{2}}{2!}\tau ^{2}\ -\frac{k_{3}}{3!}\tau^{3}\ +\frac{k_{4}}{4!}\tau^{4} \cdots \label{23}$
Importantly, we can look back at the Siegert relationship in its logarithmic form, \ref{24} .
$ln(g_{2}(\tau )-B)=ln\beta \ +\ 2ln\ g_{1}(\tau ) \label{24}$
The measured data of g2(τ) can then be fitted for the parameters km using the relationship of \ref{25} , where $\bar{\Gamma }$ (k1), k2, and k3 describe the average, variance, and skewness (or asymmetry) of the decay rates of the distribution, and the polydispersity index $\gamma \ =\ \frac{k_{2}}{\bar{\Gamma}^{2}}$ is used to indicate the width of the distribution. Parameters beyond k3 are seldom used, to prevent overfitting the data. Finally, the size distribution can be easily calculated from the decay rate distribution as described in the theory section above. Figure $6$ shows an example of data fitting using the cumulant method.
$ln(g_{2}(\tau )-B)= ln\beta \ +\ 2(-\bar{\Gamma} \tau \ +\frac{k_{2}}{2!}\tau^{2}\ -\frac{k_{3}}{3!}\tau^{3} \cdots ) \label{25}$
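As a sketch of how the cumulant fit in \ref{25} might be carried out in practice, the snippet below fits a low-order polynomial to ln(g2(τ) − B) for simulated, slightly polydisperse data and extracts the mean decay rate and polydispersity index. The simulated decay rates, coherence factor, and baseline are assumptions chosen only for illustration.

```python
import numpy as np

beta, B = 0.8, 1.0
tau = np.linspace(1e-6, 1e-3, 200)                      # delay times, s

# simulate a slightly polydisperse sample: two decay rates mixed 50/50 (assumed)
Gammas, weights = np.array([3000.0, 4500.0]), np.array([0.5, 0.5])
g1 = (weights * np.exp(-np.outer(tau, Gammas))).sum(axis=1)
g2 = B + beta * g1**2                                   # Siegert relationship (13)

# cumulant fit, equation (25): ln(g2 - B) = ln(beta) - 2*Gamma_bar*tau + k2*tau^2 - ...
coeffs    = np.polyfit(tau, np.log(g2 - B), 2)          # quadratic in tau
Gamma_bar = -coeffs[1] / 2.0                            # first cumulant (mean decay rate)
k2        = coeffs[0]                                   # coefficient of tau^2, i.e., k2
PDI       = k2 / Gamma_bar**2                           # polydispersity index

print(f"mean decay rate = {Gamma_bar:.0f} 1/s, PDI = {PDI:.3f}")
```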
When using the cumulant expansion method however, one should keep in mind that it is only suitable for monomodal distributions (Gaussian-like distribution centered about the mean), and for non-monomodal distributions, other methods like exponential sampling and CONTIN regularization should be applied instead.
Three Index of Size Distribution
Now that the size distribution is able to be acquired from the fluctuation data of the scattered light using cumulant expansion or other methods, it is worthwhile to understand the three kinds of distribution index usually used in size analysis: number weighted distribution, volume weighted distribution, and intensity weighted distribution.
First of all, based on all the theories discussed above, it should be clear that the size distribution given by DLS experiments is the intensity weighted distribution, as it is always the intensity of the scattering that is being analyzed. So for the intensity weighted distribution, the contribution of each particle is related to the intensity of light scattered by that particle. For example, using the Rayleigh approximation, the relative contribution for very small particles will be proportional to $a^{6}$.
For the number weighted distribution, given by image analysis as an example, each particle is given equal weighting irrespective of its size, which means proportional to $a^{0}$. This index is most useful where the absolute number of particles is important, or where high resolution (particle by particle) is required.
For the volume weighted distribution, given by laser diffraction as an example, the contribution of each particle is related to the volume of that particle, which is proportional to $a^{3}$. This is often extremely useful from a commercial perspective as the distribution represents the composition of the sample in terms of its volume/mass, and therefore its potential money value.
When comparing particle size data for the same sample represented using different distribution index, it is important to know that the results could be very different from number weighted distribution to intensity weighted distribution. This is clearly illustrated in the example below (Figure $9$ ), for a sample consisting of equal numbers of particles with diameters of 5 nm and 50 nm. The number weighted distribution gives equal weighting to both types of particles, emphasizing the presence of the finer 5 nm particles, whereas the intensity weighted distribution has a signal one million times higher for the coarser 50 nm particles. The volume weighted distribution is intermediate between the two.
Furthermore, based on the different orders of correlation between the particle contribution and the particle size a, it is possible to convert particle size data from one type of distribution to another, which is also why DLS software can give size distributions in three different forms (number, volume, and intensity), where the first two are actually derived from the raw data of the intensity weighted distribution.
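The sketch below illustrates that conversion for the two-population example discussed above (equal numbers of 5 nm and 50 nm particles): starting from the number weighting, the volume and intensity weightings are obtained by multiplying by a3 and a6, respectively, and renormalizing. This is a simplified sketch in the Rayleigh regime, not the full conversion performed by commercial DLS software.

```python
import numpy as np

diameters = np.array([5.0, 50.0])        # nm
number    = np.array([0.5, 0.5])         # equal numbers of both particle sizes

volume    = number * diameters**3        # weight each particle by a^3
intensity = number * diameters**6        # Rayleigh regime: weight by a^6

for name, w in [("number", number), ("volume", volume), ("intensity", intensity)]:
    w = w / w.sum()                      # renormalize to fractions
    print(f"{name:9s} weighted: 5 nm -> {w[0]:.2e}, 50 nm -> {w[1]:.2e}")
# The intensity-weighted signal of the 50 nm particles is one million times
# larger than that of the 5 nm particles, as noted in the text.
```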
An Example of an Application
As the DLS method can be used to determine size distributions in many areas, such as polymers, proteins, metal nanoparticles, or carbon nanomaterials, here we give an example of the application of DLS in the size-controlled synthesis of monodisperse gold nanoparticles.
The size and size distribution of gold particles are controlled by subtle variation of the structure of the polymer, which is used to stabilize the gold nanoparticles during the reaction. These variations include monomer type, polymer molecular weight, end-group hydrophobicity, end-group denticity, and polymer concentration; a total of 88 different trials were conducted based on these variations. By using the DLS method, the authors were able to determine the gold particle size distribution for all these trials rather easily, and the correlation between polymer structure and particle size could also be plotted without further processing the data. Although other sizing techniques such as UV-visible spectroscopy and TEM are also used in this paper, it is the DLS measurement that provides a much easier and more reliable approach towards size distribution analysis.
Comparison with TEM and AFM
Since DLS is not the only method available to determine the size distribution of particles, it is also necessary to compare DLS with other commonly used sizing techniques, especially TEM and AFM.
First of all, it has to be made clear that both TEM and AFM measure particles that are deposited on a substrate (Cu grid for TEM, mica for AFM), while DLS measures particles that are dispersed in a solution. In this way, DLS measures bulk phase properties and gives more comprehensive information about the size distribution of the sample. For AFM or TEM, it is very common that only a relatively small sampling area is analyzed, and the size distribution in the sampling area may not be the same as the size distribution of the original sample, depending on how the particles are deposited.
On the other hand, for DLS the calculation process is highly dependent on mathematical and physical assumptions and models, namely a monomodal distribution (cumulant method) and a spherical shape for the particles; the results can therefore be inaccurate when analyzing non-monomodal distributions or non-spherical particles. In contrast, since the size determination process for AFM or TEM is nothing more than measuring sizes from the image and then applying statistics, these two methods can provide much more reliable data when dealing with “irregular” samples.
Another important issue to consider is the time cost and complexity of the size measurement. Generally speaking, the DLS measurement is a much easier technique, which requires less operation time and also cheaper equipment. In contrast, it can be quite troublesome to analyze the size distribution data coming from TEM or AFM images without specially programmed software.
In addition, there are some special issues to consider when choosing size analysis techniques. For example, if the original sample is already on a substrate (e.g., synthesized by the CVD method), or the particles cannot be stably dispersed in solution, the DLS method is clearly not suitable. Conversely, when the particles tend to have a similar imaging contrast to the substrate (e.g., carbon nanomaterials on a TEM grid), or tend to self-assemble and aggregate on the surface of the substrate, the DLS approach might be a better choice.
In general research work, however, the best way to do size distribution analysis is to combine these analysis methods and obtain complementary information from different aspects. One thing to keep in mind is that, since DLS actually measures the hydrodynamic radius of the particles, the size from a DLS measurement is always larger than the size from an AFM or TEM measurement. In conclusion, a comparison between DLS and AFM/TEM is shown in Table $1$.
DLS AFM/TEM
Sample Preparation Solution Substrate
Measurement Easy Difficult
Sampling Bulk Small area
Shape of Particles Sphere No Requirement
Polydispersity Low No Requirement
Size Range nm to μm nm to μm
Size Info. Hydrodynamic radius Physical size
Table $1$ Comparison between DLS, AFM, and TEM.
Conclusion
In general, relying on the fluctuating Rayleigh scattering of small particles that move randomly in solution, DLS is a very useful and rapid technique for determining the size distribution of particles in the fields of physics, chemistry, and biochemistry, especially for monomodally dispersed spherical particles. By combining it with other techniques such as AFM and TEM, a comprehensive understanding of the size distribution of the analyte can be readily acquired.
Introduction
The physical properties of colloids (nanoparticles) and suspensions are strongly dependent on the nature and extent of the particle-liquid interface. The behavior of aqueous dispersions between particles and liquid is especially sensitive to the ionic and electrical structure of the interface.
Zeta potential is a parameter that measures the electrochemical equilibrium at the particle-liquid interface. It measures the magnitude of electrostatic repulsion/attraction between particles and thus has become one of the fundamental parameters known to affect the stability of colloidal particles. It should be noted that the term stability, when applied to colloidal dispersions, generally means the resistance to change of the dispersion with time. Figure $1$ illustrates the basic concept of zeta potential.
From the fundamental theory’s perspective, zeta potential is the electrical potential in the interfacial double layer (DL) at the location of the slipping plane (shown in Figure $1$ ). We can regard zeta potential as the potential difference between the dispersion medium and the stationary layer of fluid attached to the particle. Therefore, from an experimental point of view, zeta potential is a key factor in processes such as the preparation of colloidal dispersions, the utilization of colloidal phenomena, and the destruction of unwanted colloidal dispersions. Moreover, zeta potential analysis and measurement nowadays have many real-world applications. In the field of biomedical research, zeta potential measurement, in contrast to chemical methods of analysis which can disrupt the organism, has the particular merit of providing information referring to the outermost regions of an organism. It is also largely utilized in water purification and treatment. Zeta potential analysis has established optimum coagulation conditions for removal of particulate matter and organic dyestuffs from aqueous waste products.
Brief History and Development of Zeta Potential
Zeta potential is a scientific term for electrokinetic potential in colloidal dispersions. In the literature, it is usually denoted using the Greek letter zeta, ζ, hence the name zeta potential or ζ-potential. The earliest theory for calculating zeta potential from experimental data was developed by Marian Smoluchowski in 1903 (Figure $2$ ). Even today, this theory is still the most well-known and widely used method for calculating zeta potential.
Interestingly, this theory was originally developed for electrophoresis; later on, people started to apply it to the calculation of zeta potential. The main reason this theory is powerful is its universality and validity for dispersed particles of any shape and any concentration. However, there are still some limitations to this early theory, as it was mainly determined experimentally. The main limitations are that Smoluchowski’s theory neglects the contribution of surface conductivity and only works for particles whose sizes are much larger than the thickness of the interfacial layer, characterized by κa (1/κ is called the Debye length and a is the particle radius).
Overbeek and Booth, as early pioneers in this direction, started to develop more rigorous electrokinetic theories that were able to incorporate surface conductivity for electrokinetic applications. Modern rigorous electrokinetic theories that are valid for almost any κa come mostly from Ukrainian (Dukhin) and Australian (O’Brien) scientists.
Principle of Zeta Potential Analysis
Electrokinetic Phenomena
Because an electric double layer (EDL) exists between a surface and a solution, any relative motion between the rigid and mobile parts of the EDL results in the generation of an electrokinetic potential. As described above, zeta potential is essentially an electrokinetic potential which arises from electrokinetic phenomena, so it is important to understand the different situations in which an electrokinetic potential can be produced. There are generally four fundamental electrokinetic phenomena: electrophoresis, electro-osmosis, streaming potential, and sedimentation potential, as shown in Figure $3$.
Calculations of Zeta Potential
There are many different ways of calculating zeta potential. In this section, the methods of calculating zeta potential in electrophoresis and electroosmosis will be introduced.
Zeta Potential in Electrophoresis
Electrophoresis is the movement of charged colloidal particles or polyelectrolytes, immersed in a liquid, under the influence of an external electric field. In this case, the electrophoretic velocity, ve (m s-1), is the velocity during electrophoresis, and the electrophoretic mobility, ue (m2 V-1 s-1), is the magnitude of the velocity divided by the magnitude of the electric field strength. The mobility is counted as positive if the particles move toward lower potential and negative in the opposite case. We therefore have the relationship ve = ueE, where E is the externally applied field.
Thus, the formulae that account for zeta potential in the electrophoresis case are given in \ref{1} and \ref{2} , where εrs is the relative permittivity of the electrolyte solution, ε0 is the electric permittivity of vacuum, and η is the viscosity.
$\mathit{u}_{e}\ =\frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } \label{1}$
$\mathit{v}_{e}\ =\frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } E \label{2}$
There are two cases regarding the size of κa:
1. κa < 1: the formula is similar, \ref{3} .
2. κa > 1: the formula is rather complicated and we need to solve equation for zeta potential, \ref{4} , where $y^{ek}=\ e\zeta /kT$, m is about 0.15 for aqueous solution.
$\mathit{u}_{e} = \frac{2}{3} \frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } \label{3}$
$\frac{3}{2}\frac{\eta e}{\varepsilon _{rs} \varepsilon _{0}kT} \mathit{u_{e}} =\frac{3}{2}y^{ek} -\frac{6[\frac{y^{ek}}{2}-\frac{ln\ 2}{\zeta}\{1-e^{-\zeta y^{ek}}\}]}{2+ \frac{ka}{1+3m/\zeta ^{2}}e^{\frac{-\zeta y^{ek}}{2}}} \label{4}$
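As a simple numerical illustration of \ref{1} (the Smoluchowski limit, applicable for large κa), the snippet below converts an electrophoretic mobility into a zeta potential for an aqueous dispersion at room temperature. The mobility value and the solvent properties are assumptions chosen for illustration, not data from the text.

```python
eps_0  = 8.854e-12       # vacuum permittivity, F/m
eps_rs = 78.5            # relative permittivity of water at 25 C (assumed)
eta    = 8.9e-4          # viscosity of water, Pa*s (assumed)
u_e    = 2.5e-8          # hypothetical measured electrophoretic mobility, m^2 V^-1 s^-1

# Smoluchowski relation, equation (1): u_e = eps_rs * eps_0 * zeta / eta
zeta = u_e * eta / (eps_rs * eps_0)
print(f"zeta potential = {zeta*1000:.1f} mV")   # ~32 mV for these inputs
```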
Zeta Potential in Electroosmosis
Electroosmosis is the motion of a liquid through an immobilized set of particles, a porous plug, a capillary, or a membrane, in response to an applied electric field. Similar to electrophoresis, it has an electroosmotic velocity, veo (m s-1), defined as the uniform velocity of the liquid far from the charged interface. Usually, the measured quantity is the volume flow rate of liquid divided by the electric field strength, Qeo,E (m4 V-1 s-1), or divided by the electric current, Qeo,I (m3 C-1). Therefore, the relationship is given by \ref{5} .
$Q_{eo} =\ \int \int v_{eo} dS \label{5}$
Thus the formulae that account for zeta potential in electroosmosis are given in the equations below.
As with electrophoresis there are two cases regarding the size of κa:
• κa >> 1 and there is no surface conduction, where Ac is the cross-sectional area and KL is the bulk conductivity of the liquid.
• κa < 1, \ref{6} , where $\Delta u \ =\frac{K^{\sigma }}{K_{L}}$ is the Dukhin number, which accounts for surface conductivity, and $K^{\sigma}$ is the surface conductivity of the particle.
$Q_{eo , E} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} Ac \nonumber$
$Q_{eo , I} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} \frac{1}{K_{L}} \nonumber$
$Q_{eo , I} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} \frac{1}{K_{L}(1+2\Delta u)} \label{6}$
Relationship Between Zeta Potential and Particle Stability in Electrophoresis
Using the above theoretical methods, we can calculate zeta potential for particles in electrophoresis. The following table summarizes the stability behavior of the colloid particles with respect to zeta potential. Thus, we can use zeta potential to predict the stability of colloidal particles in the electrokinetic phenomena of electrophoresis.
Zeta Potential (mV) Stability behavior of the particles
0 to ±5 Rapid Coagulation or Flocculation
±10 to ±30 Incipient Instability
±30 to ±40 Moderate Stability
±40 to ±60 Good Stability
More than ±61 Excellent Stability
Table $1$ Stability behavior of the colloid particles with respect to zeta potential.
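The table above can be turned into a simple lookup, as sketched below, to label a measured zeta potential with the corresponding stability class. The handling of the boundaries between ranges is an assumption, since the ranges in the table do not join exactly.

```python
def stability(zeta_mV: float) -> str:
    """Classify colloid stability from the zeta potential (mV), per the table above."""
    z = abs(zeta_mV)
    if z <= 5:
        return "rapid coagulation or flocculation"
    elif z <= 30:
        return "incipient instability"
    elif z <= 40:
        return "moderate stability"
    elif z <= 60:
        return "good stability"
    return "excellent stability"

print(stability(-45.0))   # -> good stability
```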
Instrumentation
In this section, a market-available zeta potential analyzer will be used as an example of how experimentally zeta potential is analyzed. Figure $4$ shows an example of a typical zeta potential analyzer for electrophoresis.
The measuring principle is described in the following diagram, which shows the detailed mechanism of the zeta potential analyzer (Figure $5$ ).
When a voltage is applied to the solution in which particles are dispersed, the particles are attracted to the electrode of the opposite polarity, accompanied by the fixed layer and part of the diffuse double layer, i.e., the internal side of the "sliding surface". Using the formula supplied for this specific analyzer and the accompanying computer program, we can obtain the zeta potential for electrophoresis with this typical zeta potential analyzer (Figure $6$ ).
Introduction
All liquids have a natural internal resistance to flow termed viscosity. Viscosity is the result of frictional interactions within a given liquid and is commonly expressed in two different ways.
Dynamic Viscosity
The first is dynamic viscosity, also known as absolute viscosity, which measures a fluid’s resistance to flow. In precise terms, dynamic viscosity is the tangential force per unit area necessary to move one plane past another at unit velocity at unit distance apart. As one plane moves past another in a fluid, a velocity gradient is established between the two layers (Figure $1$ ). Viscosity can be thought of as a drag coefficient proportional to this gradient.
The force necessary to move a plane of area A past another in a fluid is given by Equation \ref{1} where $V$ is the velocity of the liquid, Y is the separation between planes, and η is the dynamic viscosity.
$F = \eta A \frac{V}{Y} \label{1}$
V/Y also represents the velocity gradient (sometimes referred to as shear rate). Force over area is equal to τ, the shear stress, so the equation simplifies to Equation \ref{2} .
$\tau = \eta \frac{V}{Y} \label{2}$
For situations where V does not vary linearly with the separation between plates, the differential formula based on Newton’s equations is given in Equation \ref{3}.
$\tau = \eta \frac{\delta V}{\delta Y} \label{3}$
Kinematic Viscosity
Kinematic viscosity, the other type of viscosity, requires knowledge of the density, ρ, and is given by Equation \ref{4} , where v is the kinematic viscosity and the $\eta$ is the dynamic viscosity.
$\nu = \frac{\eta }{\rho } \label{4}$
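For example, the conversion in the kinematic viscosity relation above can be carried out directly, as in the short sketch below for water at room temperature; the viscosity and density values are assumed for illustration.

```python
eta = 0.89e-3     # dynamic viscosity of water at 25 C, Pa*s (0.89 cP, assumed)
rho = 997.0       # density of water at 25 C, kg/m^3 (assumed)

nu = eta / rho    # kinematic viscosity, m^2/s
print(f"kinematic viscosity = {nu:.3e} m^2/s = {nu*1e4*100:.3f} cSt")
# 1 stokes = 1e-4 m^2/s, so nu*1e4 gives stokes and *100 gives centistokes.
```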
Units of Viscosity
Viscosity is commonly expressed in Stokes, Poise, Saybolt Universal Seconds, degree Engler, and SI units.
Dynamic Viscosity
The SI unit for dynamic (absolute) viscosity is N·s/m2, Pa·s, or kg/(m·s), where N stands for newton and Pa for pascal. Poise are CGS units expressed as dyne·s/cm2 or g/(cm·s). They are related to the SI unit by 1 g/(cm·s) = 1/10 Pa·s. One poise is equal to 100 centipoise, the centipoise (cP) being the most commonly used unit of viscosity. Table $1$ shows the interconversion factors for dynamic viscosity.
Table $1$: The interconversion factors for dynamic viscosity.
Unit Pa·s Dyne·s/cm2 or g/(cm·s) (Poise) Centipoise (cP)
Pa·s 1 10 1000
Dyne·s/cm2 or g/(cm·s) (Poise) 0.1 1 100
Centipoise (cP) 0.001 0.01 1
Table $2$ lists the dynamic viscosities of several liquids at various temperatures in centipoise. The effect of the temperature on viscosity is clearly evidenced in the drastic drop in viscosity of water as the temperature is increased from near ambient to 60 degrees Celsius. Ketchup has a viscosity of 1000 cP at 30 degrees Celsius or more than 1000 times that of water at the same temperature!
Table $2$: Viscosities of common liquids (*at 0% evaporation volume).
Liquid $\eta$ (cP) Temperature (°C)
Water 0.89 25
Water 0.47 60
Milk 2.0 18
Olive Oil 107.5 20
Toothpaste 70,000 - 100,000 18
Ketchup 1000 30
Custard 1,500 85-90
Crude Oil (WTI)* 7 15
Kinematic Viscosity
The CGS unit for kinematic viscosity is the stokes (St), which is equal to 10-4 m2/s. Dividing by 100 yields the more commonly used centistokes (cSt). The SI unit for kinematic viscosity is m2/s. The Saybolt Universal second, commonly used in the oilfield for petroleum products, represents the time required to efflux 60 milliliters from a Saybolt Universal viscometer at a fixed temperature according to ASTM D-88. The Engler scale is often used in Britain and quantifies the viscosity of a given liquid in comparison to water in an Engler viscometer for 200 cm3 of each liquid at a set temperature.
Newtonian versus Non-Newtonian Fluids
One of the invaluable applications of the determination of viscosity is identifying a given liquid as Newtonian or non-Newtonian in nature.
• Newtonian liquids are those whose viscosities remain constant for all values of applied shear stress.
• Non-Newtonian liquids are those liquids whose viscosities vary with applied shear stress and/or time.
Moreover, non-Newtonian liquids can be further subdivided into classes by their viscous behavior with shear stress:
• Pseudoplastic fluids whose viscosity decreases with increasing shear rate
• Dilatants in which the viscosity increases with shear rate.
• Bingham plastic fluids, which require some force threshold be surpassed to begin to flow and which thereafter flow proportionally to increasing shear stress.
Measuring Viscosity
Viscometers are used to measure viscosity. There are seven different classes of viscometer:
1. Capillary viscometers.
2. Orifice viscometers.
3. High temperature high shear rate viscometers.
4. Rotational viscometers.
5. Falling ball viscometers.
6. Vibrational viscometers.
7. Ultrasonic Viscometers.
Capillary Viscometers
Capillary viscometers are the most widely used viscometers when working with Newtonian fluids and measure the flow rate through a narrow, usually glass tube. In some capillary viscometers, an external force is required to move the liquid through the capillary; in this case, the pressure difference across the length of the capillary is used to obtain the viscosity coefficient.
Capillary viscometers require a liquid reservoir, a capillary of known dimensions, a pressure controller, a flow meter, and a thermostat be present. These viscometers include, Modified Ostwald viscometers, Suspended-level viscometers, and Reverse-flow viscometers and measure kinematic viscosity.
The equation governing this type of viscometry is the Poiseuille law (Equation \ref{5} ), where Q is the overall flow rate, ΔP the pressure difference, a the internal radius of the tube, η the dynamic viscosity, and l the path length of the fluid.
$Q\ =\frac{\pi \Delta Pa^{4}}{8\eta l} \label{5}$
Here, Q is equal to V/t; the volume of the liquid measured over the course of the experiment divided by the time required for it to move through the capillary where V is volume and t is time.
For gravity-type capillary viscometers, those relying on gravity to move the liquid through the tube rather than an applied force, Equation \ref{6} is used to find viscosity, obtained by substituting the experimental values into Equation \ref{5} , where ρ is the density, g is the acceleration due to gravity, and h is the height of the liquid column (so that ΔP = ρgh).
$\eta \ =\frac{\pi gha^{4}}{8lV} \rho t \label{6}$
An example of a capillary viscometer (Ostwald viscometer) is shown in Figure $2$.
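A sketch of the gravity-driven capillary calculation in \ref{6} is shown below; the viscometer dimensions and the measured efflux time are invented values used only to demonstrate the arithmetic, not specifications of any real instrument.

```python
import math

# assumed viscometer geometry and measurement (illustrative values only)
a   = 0.3e-3        # capillary internal radius, m
h   = 0.10          # effective height of the liquid column, m
l   = 0.12          # capillary length, m
V   = 5.0e-6        # effluxed volume, m^3
rho = 997.0         # liquid density, kg/m^3
t   = 180.0         # measured efflux time, s
g   = 9.81          # acceleration due to gravity, m/s^2

# equation (6): eta = (pi*g*h*a^4 / (8*l*V)) * rho * t
eta = math.pi * g * h * a**4 / (8.0 * l * V) * rho * t
print(f"dynamic viscosity = {eta*1000:.2f} mPa*s")
```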
Orifice Viscometers
Commonly found in the oil industry, orifice viscometers consist of a reservoir, an orifice, and a receiver. These viscometers report viscosity in units of efflux time as the measurement consists of measuring the time it takes for a given liquid to travel from the orifice to the receiver. These instruments are not accurate as the set-up does not ensure that the pressure on the liquid remains constant and there is energy lost to friction at the orifice. The most common types of these viscometer include Redwood, Engler, Saybolt, and Ford cup viscometers. A Saybolt viscometer is represented in Figure $3$.
High Temperature, High Shear Rate Viscometers
These viscometers, also known as cylinder-piston type viscometers, are employed when viscosities above 1000 poise need to be determined, especially for non-Newtonian fluids. In a typical set-up, fluid in a cylindrical reservoir is displaced by a piston. As the pressure varies, this type of viscometry is well-suited for determining the viscosities over varying shear rates, ideal for characterizing fluids whose primary environment is a high temperature, high shear rate environment, e.g., motor oil. A typical cylinder-piston type viscometer is shown in Figure $4$.
Rotational Viscometers
Well-suited for non-Newtonian fluids, rotational viscometers measure the rate at which a solid rotates in a viscous medium. Since the rate of rotation is controlled, the amount of force necessary to spin the solid can be used to calculate the viscosity. They are advantageous in that a wide range of shear stresses and temperatures and be sampled across. Common rotational viscometers include: the coaxial-cylinder viscometer, cone and plate viscometer, and coni-cylinder viscometer. A cone and plate viscometer is shown in Figure $5$.
Falling Ball Viscometer
This type of viscometer relies on the terminal velocity achieved by a ball falling through the viscous liquid whose viscosity is being measured. A sphere is the simplest object to use because its velocity can be determined by rearranging Stokes’ law, Equation \ref{7} , into Equation \ref{8} , where r is the sphere’s radius, η the dynamic viscosity, v the terminal velocity of the sphere, σ the density of the sphere, ρ the density of the liquid, and g the acceleration due to gravity.
$6\pi r\eta v\ =\ \frac{4}{3} \pi r^{3} (\sigma - \rho)g \label{7}$
$\eta\ =\frac{\frac{4}{3} \pi r^{2}(\sigma - \rho)g}{6\pi v} \label{8}$
A typical falling ball viscometric apparatus is shown in Figure $6$.
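The rearranged Stokes' law expression, \ref{8} , translates directly into code; the sphere and fluid properties below are assumptions chosen only to show the arithmetic.

```python
import math

r     = 1.0e-3      # sphere radius, m (assumed)
sigma = 7800.0      # density of the sphere (e.g., steel), kg/m^3 (assumed)
rho   = 1260.0      # density of the liquid (e.g., a viscous oil), kg/m^3 (assumed)
v     = 0.050       # measured terminal velocity, m/s (assumed)
g     = 9.81        # acceleration due to gravity, m/s^2

# equation (8): eta = (4/3)*pi*r^2*(sigma - rho)*g / (6*pi*v)
eta = (4.0 / 3.0) * math.pi * r**2 * (sigma - rho) * g / (6.0 * math.pi * v)
print(f"dynamic viscosity = {eta:.3f} Pa*s")
```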
Vibrational Viscometers
Often used in industry, these viscometers are attached to fluid production processes where a constant viscosity of the product is desired. Viscosity is measured by the damping of an electromechanical resonator immersed in the liquid to be tested. The resonator is either a cantilever, an oscillating beam, or a tuning fork. The power needed to keep the oscillator oscillating at a given frequency, the decay time after stopping the oscillation, or the change observed when the waveform is varied are the respective ways in which this type of viscometer works. A typical vibrational viscometer is shown in Figure $7$.
Ultrasonic Viscometers
This type of viscometer is most like the vibrational viscometer in that it obtains viscosity information by exposing a liquid to an oscillating system. These measurements are continuous and instantaneous. Both ultrasonic and vibrational viscometers are commonly found on liquid production lines, where they constantly monitor the viscosity.
Cyclic Voltammetry Measurements
Introduction
Cyclic voltammetry (CV) is one type of potentiodynamic electrochemical measurement. Generally speaking, the operating process is a potential-controlled reversible experiment, in which the electric potential is scanned until a final potential is reached, at which point the scan direction is reversed and the potential is scanned back to the initial value, as shown in Figure $1$ -a. As the voltage applied to the system changes with time, the current changes with time accordingly, as shown in Figure $1$ -b. Thus the curve of current versus voltage, illustrated in Figure $1$ -c, can be constructed from the data in Figure $1$ -a and Figure $1$ -b.
Cyclic voltammetry is a very important analytical characterization technique in the field of electrochemistry. Any process that includes electron transfer can be investigated with it, for example, the investigation of catalytic reactions, analysis of the stoichiometry of complex compounds, and determination of the band gap of photovoltaic materials. In this module, I will focus on the application of CV measurements to the characterization of solar cell materials.
Although CV was first practiced using a hanging mercury drop electrode, based on the work of Nobel Prize winner Heyrovský (Figure $2$ ), it did not gain widespread use until solid electrodes such as Pt, Au and carbonaceous electrodes were adopted, particularly to study anodic oxidations. A major advance was made when mechanistic diagnostics and accompanying quantitation became possible through computer simulations. Now, the application of computers and related software packages makes the analysis of data much quicker and easier.
The Components of a CV System
As shown in Figure $3$, the components of a CV system are as follows:
• The epsilon includes the potentiostat and the current-voltage converter. The potentiostat is required for controlling the applied potential, and the current-to-voltage converter is used for measuring the current; both are contained within the epsilon (Figure $3$ ).
• The input system is a function generator (Figure $3$ ). Operators can change parameters, including scan rate and scan range, through this part. The output is a computer screen, which shows data and curves directly to the operators.
• All electrodes must work in electrolyte solution.
• Sometimes, oxygen and water from the atmosphere dissolve in the solution and are reduced or oxidized when a voltage is applied, making the data less accurate. To prevent this from happening, the solution is purged by bubbling an inert gas (nitrogen or argon) through it.
• The key component of the CV system is the electrochemical cell, which is connected to the epsilon. The electrochemical cell contains three electrodes: the counter electrode (C in Figure $3$ ), the working electrode (W in Figure $3$ ), and the reference electrode (R in Figure $3$ ). All of them must be immersed in an electrolyte solution when working.
In order to better understand the electrodes mentioned above, three kinds of electrodes will be discussed in more detail.
• Counter electrodes (C in Figure $3$ ) are non-reactive high surface area electrodes, for which platinum gauze is the common choice.
• The working electrode (W in Figure $3$ ) is commonly an inlaid disc electrode (Pt, Au, graphite, etc.) of well-defined area. Other geometries may be used in appropriate circumstances, such as dropping or hanging mercury hemisphere, cylinder, band, array, and grid electrodes.
• For the reference electrode (R in Figure $3$ ), aqueous Ag/AgCl or calomel half cells are commonly used, and can be obtained commercially or easily prepared in the laboratory. Sometimes, a simple silver or platinum wire is used in conjunction with an internal potential reference provided by ferrocene, when a suitable conventional reference electrode is not available. Ferrocene undergoes a one-electron oxidation at a low potential, around 0.5 V versus a saturated calomel electrode (SCE). It has also been used as a standard in electrochemistry, with Fc+/Fc = 0.64 V versus a normal hydrogen electrode (NHE).
Cyclic voltammetry systems employ different types of potential waveforms (Figure $4$ ) that can be used to satisfy different requirements. Potential waveforms reflect the way potential is applied to this system. These different types are referred to by characteristic names, for example, cyclic voltammetry, and differential pulse voltammetry. The cyclic voltammetry analytical method is the one whose potential waveform is generally an isosceles triangle (Figure $4$ a).
Physical Principles of CV Systems
As mentioned above, there are two main parts of a CV system: the electrochemical cell and the epsilon. Figure $6$ shows a schematic circuit diagram of the electrochemical cell.
In a voltammetric experiment, a potential is applied to the system using the working electrode (W in Figure $7$ ) and the reference electrode (R in Figure $7$ ), and the current response is measured using the working electrode and a third electrode, the counter electrode (C in Figure $7$ ). The typical current-voltage curve for ferricyanide/ferrocyanide, whose equilibrium potential follows \ref{1} , is shown in Figure $7$.
$E_{eq} \ =\ E^{\circ ' } \ +\ ( 0.059/n ) \ log( [ reactant ] / [ product ] ) \label{1}$
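As a short illustration of \ref{1}, the sketch below evaluates the equilibrium potential of a one-electron couple at 25 °C; the formal potential used for the ferricyanide/ferrocyanide couple is an assumed illustrative value, not one taken from this text.

```python
import math

def nernst_potential(E0_formal, n, c_reactant, c_product):
    """Equilibrium potential (V) at 25 degrees C, as in Equation (1)."""
    return E0_formal + (0.059 / n) * math.log10(c_reactant / c_product)

# Assumed formal potential of ~0.36 V for ferricyanide/ferrocyanide (n = 1).
print(nernst_potential(0.36, 1, 1e-3, 1e-3))   # equal concentrations: E_eq = E0'
print(nernst_potential(0.36, 1, 1e-2, 1e-3))   # 10:1 ratio shifts E_eq by ~59 mV
```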
What Useful Information Can We Get From The Data Collected
The information we are able to obtain from CV experimental data is the current-voltage curve. From the curve we can determine the redox potential, gain insights into the kinetics of electron-transfer reactions, and detect the presence of reaction intermediates.
Why CV For The Characterizations Of Solar Cell Materials
Despite some limitations, cyclic voltammetry is very well suited to a wide range of applications. Moreover, in some areas of research, cyclic voltammetry is one of the standard techniques used for characterization. Due to the characteristic shapes of its curves, it has been described as ‘electrochemical spectroscopy’. In addition, the system is quite easy to operate, and sample preparation is relatively simple.
The band gap of a semiconductor is a very important value to determine for photovoltaic materials. Figure $8$ shows the relative energy levels involved in light harvesting by an organic solar cell. The energy difference (Eg) between the lowest unoccupied molecular orbital (LUMO) and the highest occupied molecular orbital (HOMO) determines the efficiency. The oxidation and reduction of an organic molecule involve electron transfers (Figure $9$ ), and CV measurements can be used to determine the potential change during redox. Through the analysis of data obtained by CV measurement the electronic band gap is obtained.
The Example Of The Analysis Of CV Data In Solar Cell Material Characterization
Graphene nanoribbons (GNRs) are long, narrow sheets of graphene formed from the unzipping of carbon nanotubes (Figure $10$ ). GNRs can be both semiconducting and semi-metallic, depending on their width, and they represent a particularly versatile variety of graphene. The high surface area, high aspect ratio, and interesting electronic properties of GNRs render them promising candidates for applications as energy-storage materials.
Graphene nanoribbons can be oxidized to give oxidized graphene nanoribbons (XGNRs), which are readily soluble in water. Cyclic voltammetry is an effective method to characterize the band gap of semiconductor materials. To test the band gap of oxidized graphene nanoribbons (XGNRs), the operating parameters can be set as follows:
• 0.1M KCl solution
• Working electrode: evaporated gold on silicon.
• Scan rate: 10 mV/s.
• Scan range: 0 ~ 3000 mV for the oxidation reaction; -3000 ~ 0 mV for the reduction reaction.
• Samples preparation: spin coat an aqueous solution of the oxidized graphene nanoribbons onto the working electrode, and dry at 100 °C.
To make sure that the results are accurate, two samples can be tested under the same conditions to check whether the redox peaks appear at the same position. The amount of XGNRs will vary from sample to sample, thus the peak heights will also vary. Typical curves obtained from the oxidation reaction (Figure $9$ a) and reduction reaction (Figure $9$ b) are shown in Figure $10$ and Figure $11$, respectively.
From the curves shown in Figure $11$ and Figure $12$ the following conclusions can be obtained:
• Two reduction peaks, with an onset of about -0.75 eV (Figure $9$ b).
• One oxidation peak, with an onset of about 0.85 eV (Figure $9$ a).
• The calculated band gap = 1.60 eV, as illustrated in the short calculation below.
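A minimal sketch of this band-gap estimate, using the onset potentials quoted above, is given below; for a one-electron process the onset difference in volts maps directly onto an energy difference in electron-volts.

```python
def electrochemical_band_gap(onset_ox_V, onset_red_V):
    """Band gap (eV) estimated from oxidation and reduction onset potentials (V)."""
    return onset_ox_V - onset_red_V

# Onset values quoted in the text for the XGNR sample.
print(electrochemical_band_gap(0.85, -0.75))   # 1.60 eV
```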
In conclusion, CV is an efficient method with many applications; in the field of solar cells it provides the band gap information needed for research.
Applications of Cyclic Voltammetry in Proton Exchange Membrane Fuel Cells
Introduction
Proton exchange membrane fuel cells (PEMFCs) are one promising alternative to traditional combustion engines. This method takes advantage of the exothermic hydrogen oxidation reaction in order to generate energy and water (Table $1$ ).
| | Acidic Electrolyte | Acidic Redox Potential at STP (V) | Basic Electrolyte | Basic Redox Potential at STP (V) |
|---|---|---|---|---|
| Anode half-reaction | $2H_{2} \rightarrow \ 4H^{+}\ +\ 4e^{-}$ | | $2H_{2}\ +\ 4OH^{-} \rightarrow \ 4H_{2}O\ +\ 4e^{-}$ | |
| Cathode half-reaction | $O_{2} + 4e^{-}\ +\ 4H^{+} \rightarrow \ 2H_{2}O$ | 1.23 | $O_{2}\ +\ 4e^{-}\ +\ 2H_{2}O \rightarrow \ 4OH^{-}$ | 0.401 |
Table $1$ Summary of oxidation-reduction reactions in PEMFC in acidic and basic electrolytes.
The basic PEMFC consists of an anode and a cathode separated by a proton exchange membrane (Figure $13$ ). This membrane is a key component of the fuel cell because for the redox couple reactions to successfully occur, protons must be able to pass from the anode to the cathode. The membrane in a PEMFC is usually composed of Nafion, which is a polyfluorinated sulfonic acid, and exclusively allows protons to pass through. As a result, electrons and protons travel from the anode to the cathode through an external circuit and through the proton exchange membrane, respectively, to complete the circuit and form water.
PEMFCs present many advantages compared to traditional combustion engines. They are more efficient and have a greater energy density than traditional fossil fuels. Additionally, the fuel cell itself is very simple with few or no moving parts, which makes it long-lasting, reliable, and very quiet. Most importantly, however, the operation of a PEMFC results in zero emissions as the only byproduct is water (Table $2$ ). However, the use of PEMFCs has been limited because of the slow reaction rate for the oxygen reduction half-reaction (ORR). Standard rate constants, k°, for reduction-oxidation reactions such as these tend to be on the order of $10^{-10}$ – $10^{-9}$ cm/s, where $10^{-9}$ cm/s corresponds to the faster reaction and $10^{-10}$ cm/s to the slower one. Compared to the hydrogen oxidation half-reaction (HOR), which has a rate constant of k° ≈ 1 x $10^{-9}$ cm/s, the rate constant for the ORR is only k° ≈ 1 x $10^{-10}$ cm/s. Thus, the ORR is the kinetic rate-limiting half-reaction and its reaction rate must be increased for PEMFCs to be a viable alternative to combustion engines. Because cyclic voltammetry can be used to examine the kinetics of the ORR reaction, it is a critical technique in evaluating potential solutions to this problem.
| Advantages | Disadvantages |
|---|---|
| More efficient than combustion | ORR half-reaction too slow for commercial use |
| Greater energy density than fossil fuels | Hydrogen fuel is not readily available |
| Long-lasting | Water circulation must be managed to keep the proton exchange membrane hydrated |
| Reliable | |
| Quiet | |
| No harmful emissions | |
Table $2$ Summary of advantages and disadvantages of PEMFCs as an alternative to combustion engines.
Cyclic Voltammetry
Overview
Cyclic voltammetry is a key electrochemical technique that, among its other uses, can be employed to examine the kinetics of oxidation-reduction reactions in electrochemical systems. Specifically, data collected with cyclic voltammetry can be used to determine the rate of reaction. In its simplest form, this technique requires a simple three-electrode cell and a potentiostat (Figure $14$ ).
A potential applied to the working electrode is varied linearly with time and the response in the current is measured (Figure $14$ ). Typically the potential is cycled between two values, once in the forward direction and once in the reverse direction. For example, in Figure $15$, the potential is cycled between 0.8 V and -0.2 V, with the forward scan moving from positive to negative potential and the reverse scan moving from negative to positive potential. Various parameters can be adjusted, including the scan rate, the number of scan cycles, and the direction of the potential scan, i.e., whether the forward scan moves from positive to negative voltages or vice versa. For publication, data is typically collected at a scan rate of 20 mV/s with at least 3 scan cycles.
Reading a Voltammogram
From a cyclic voltammetry experiment, a graph called a voltammogram will be obtained. Because both the oxidation and reduction half-reactions occur at the working electrode surface, steep changes in the current will be observed when either of these half-reactions occurs. A typical voltammogram will feature two peaks, where one peak corresponds to the oxidation half-reaction and the other to the reduction half-reaction. In an oxidation half-reaction in an electrochemical cell, electrons flow from the species in solution to the electrode, resulting in an anodic current, ia. Frequently, this oxidation peak appears when scanning from negative to positive potentials (Figure $16$ ). In a reduction half-reaction in an electrochemical cell, electrons flow from the electrode to the species in solution, resulting in a cathodic current, ic. This type of current is most often observed when scanning from positive to negative potentials. When the starting reactant is completely oxidized or completely reduced, peak anodic current, ipa, and peak cathodic current, ipc, respectively, are reached. Then, the current decays as the oxidized or reduced species leaves the electrode surface. The shape of these anodic and cathodic peaks can be modeled with the Nernst equation, \ref{2} , where n = the number of electrons transferred and E°' (formal reduction potential) = (Epa + Epc)/2.
$E_{eq}\ =\ E^{\circ '} \ +\ (0.059/n)\ log\ ( [ reactant ] / [ product ] ) \label{2}$
Important Values from the Voltammogram
Several key pieces of information can be obtained through examination of the voltammogram, including ipa, ipc, and the anodic and cathodic peak potentials. ipa and ipc both serve as important measures of catalytic activity: the larger the peak currents, the greater the activity of the catalyst. Values for ipa and ipc can be obtained through one of two methods: physical examination of the graph or the Randles-Sevcik equation. To determine the peak currents directly from the graph, a vertical tangent line from the peak current is intersected with an extrapolated baseline. In contrast, the Randles-Sevcik equation uses information about the electrode and the experimental parameters to calculate the peak current, \ref{3} , where A = electrode area; D = diffusion coefficient; C = concentration; v = scan rate.
$i_{p} \ =\ (2.69 \times 10^{5})n^{3/2}AD^{1/2}C\nu ^{1/2} \label{3}$
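As a sketch of how \ref{3} is used, the snippet below evaluates the peak current for an assumed set of experimental parameters; the constant 2.69 x $10^{5}$ applies at 25 °C with A in cm2, D in cm2/s, C in mol/cm3 and the scan rate in V/s. The numbers are illustrative assumptions, not values taken from the text.

```python
def randles_sevcik_peak_current(n, A, D, C, v):
    """Peak current i_p (A) from the Randles-Sevcik equation at 25 degrees C.

    n -- number of electrons transferred
    A -- electrode area (cm^2)
    D -- diffusion coefficient (cm^2/s)
    C -- bulk concentration (mol/cm^3)
    v -- scan rate (V/s)
    """
    return 2.69e5 * n**1.5 * A * D**0.5 * C * v**0.5

# Assumed values: 1 mM analyte (1e-6 mol/cm^3), 0.07 cm^2 electrode,
# D = 6.5e-6 cm^2/s, 20 mV/s scan rate -> peak current of a few microamps.
print(randles_sevcik_peak_current(n=1, A=0.07, D=6.5e-6, C=1e-6, v=0.020))
```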
Anodic peak potential, Epa, and cathodic peak potential, Epc, can also be obtained from the voltammogram by determining the potential at which ipa and ipc, respectively, occur. These values are an indicator of the relative magnitude of the reaction rate. If the exchange of electrons between the oxidizing and reducing agents is fast, they form an electrochemically reversible couple. These redox couples fulfill the relationship ΔEp = Epa – Epc ≈ 0.059/n. In contrast, a nonreversible couple will have a slow exchange of electrons and ΔEp > 0.059/n. However, it is important to note that ΔEp is dependent on scan rate.
Analysis of Reaction Kinetics
The Tafel and Butler-Volmer equations allow for the calculation of the reaction rate from the current-potential data generated by the voltammogram. In these analyses, the rate of the reaction can be expressed as two values: k° and io. k°, the standard rate constant, is a measure of how fast the system reaches equilibrium: the larger the value of k°, the faster the reaction. The exchange current density, io, is the current flow at the surface of the electrode at equilibrium: the larger the value of io, the faster the reaction. While both io and k° can be used, io is more frequently used because it is directly related to the overpotential through the current-overpotential and Butler-Volmer equations. When the reaction is at equilibrium, k° and io are related by \ref{4} , where CO,eq and CR,eq = equilibrium concentrations of the oxidized and reduced species, respectively, and a = symmetry factor.
$i_{O} \ =\ nFk^{\circ }C_{O, eq} ^{1-a} C_{R, eq} ^{a} \label{4}$
Tafel equation
In its simplest form, the Tafel equation is expressed as \ref{5} , where a and b can be a variety of constants. Any equation which has the form of \ref{5} is considered a Tafel equation.
$E-E^{\circ} \ =\ a\ +\ b\ log(i) \label{5}$
For example, the relationship between current, potential, the concentration of reactants and products, and k˚ can be expressed as \ref{6} , where CO(0,t) and CR(0,t) = concentrations of the oxidized and reduced species respectively at a specific reaction time, F = Faraday constant, R = gas constant, and T = temperature.
$C_{O}(0,t)\ -\ C_{R}(0,t)e^{ {[nF/RT] (E-E^{\circ } ) } } \ =\ [i/nFk^{\circ } ][e^{ {[anF/RT](E-E^{\circ } ) } } ] \label{6}$
At very large overpotentials, this equation reduces to a Tafel equation, \ref{7} , where a = -[RT/(1-a)nF]ln(io) and b = [RT/(1-a)nF].
$E-E^{\circ } \ =\ [RT/(1-a)nF] ln(i)\ -\ [RT/(1-a)nF]ln(i_{0}) \label{7}$
The linear relationship between E-E° and log(i) can be exploited to determine io through the formation of a Tafel plot (Figure $17$ ) of E-E° versus log(i). The resulting anodic and cathodic branches of the graph have slopes of [(1-a)nF/2.3RT] and [-anF/2.3RT], respectively. An extrapolation of these two branches results in a y-intercept = log(io). Thus, this plot directly relates potential and current data collected by cyclic voltammetry to io.
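The following minimal sketch shows one way the Tafel extrapolation described above can be carried out numerically: the high-overpotential branch is fitted to a straight line in E-E° versus log(i), and the intercept at zero overpotential gives log(io). The current-potential arrays are synthetic illustrative data, not measurements from this text.

```python
import numpy as np

def exchange_current_from_tafel(overpotential_V, current_A):
    """Estimate i0 (A) from a linear Tafel branch.

    Fits overpotential = a + b*log10(|i|); the overpotential-axis intercept
    gives log10(i0) = -a/b.
    """
    log_i = np.log10(np.abs(current_A))
    b, a = np.polyfit(log_i, overpotential_V, 1)   # slope, intercept
    return 10 ** (-a / b)

# Synthetic anodic branch generated with i0 = 1e-8 A and (1-a)n = 0.5 at 298 K.
eta = np.linspace(0.15, 0.40, 10)
i = 1e-8 * np.exp(0.5 * 96485 * eta / (8.314 * 298))
print(exchange_current_from_tafel(eta, i))   # recovers ~1e-8 A
```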
Butler-Volmer Equation
While the Butler-Volmer equation resembles the Tafel equation, and in some cases can even be reduced to the Tafel formulation, it uniquely provides a direct relationship between io and the overpotential, η. Without simplification, the Butler-Volmer equation is known as the current-overpotential equation, \ref{8} .
$i/i_{O}\ =\ [C_{O}(0,t)/C_{O,eq}]e^{ { -[anF/RT] (E-E^{\circ } ) } } \ -\ [ C_{R}(0,t)/C_{R,eq} ] e^{ { [ (1-a)nF/RT] (E-E^{\circ } ) } } \label{8}$
If the solution is well-stirred, the bulk and surface concentrations can be assumed to be equal and \ref{8} can be reduced to the Butler-Volmer equation, \ref{9} .
$i\ =\ i_{O}[ e^{\{ -[ anF/RT] (E-E^{\circ } )\} } - e^{ [ (1-a)nF/RT] (E-E^{\circ }) } ] \label{9}$
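A minimal sketch of \ref{9} is given below, computing the net current for a few overpotentials with an assumed exchange current and symmetry factor (the sign convention follows the equation above, with the cathodic term written first); the values are illustrative only.

```python
import math

F = 96485.0   # C/mol
R = 8.314     # J/(K mol)

def butler_volmer_current(i0, eta, n=1, alpha=0.5, T=298.15):
    """Net current (A) at overpotential eta (V), cathodic term first as in Eq. (9)."""
    return i0 * (math.exp(-alpha * n * F * eta / (R * T))
                 - math.exp((1 - alpha) * n * F * eta / (R * T)))

# Assumed i0 = 1e-7 A; negative overpotentials give a net cathodic (positive)
# current in this sign convention, positive overpotentials a net anodic one.
for eta in (-0.10, -0.05, 0.05, 0.10):
    print(eta, butler_volmer_current(i0=1e-7, eta=eta))
```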
Cyclic Voltammetry in ORR Catalysis Research
Platinum Catalysis
While the issue of a slow ORR reaction rate has been addressed in many ways, it is most often overcome with the use of catalysts. Traditionally, platinum catalysts have demonstrated the best performance: at 30 °C, the ORR io on a Pt catalyst is 2.8 x $10^{-7}$ A/cm2, compared to the limiting case of the ORR where io = 1 x $10^{-10}$ A/cm2. Pt is particularly effective as a catalyst for the ORR in PEMFCs because its binding energy for both O and OH is the closest to ideal of all the bulk metals, its activity is the highest of all the bulk metals, its selectivity for O2 adsorption is close to 100%, and it is extremely stable under a variety of acidic and basic conditions as well as high operating voltages (Figure $18$ ).
Metal-Nitrogren-Carbon Composite Catalysis
Nonprecious metal catalysts (NPMCs) show great potential to reduce the cost of the catalyst without sacrificing catalytic activity. The best NPMCs currently in development have comparable or even better ORR activity and stability than platinum-based catalysts in alkaline electrolytes; in acidic electrolytes, however, NPMCs perform significantly worse than platinum-based catalysts.
In particular, transition metal-nitrogen-carbon composite catalysts (M-N-C) are the most promising type of NPMC. The highest-performing members of this group catalyze the ORR at potentials within 60 mV of the highest-performing platinum catalysts (Figure $19$ ). Additionally, these catalysts have excellent stability: after 700 hours at 0.4 V, they do not show any performance degradation. In a comparison of high-performing PANI-Co-C and PANI-Fe-C (PANI = polyaniline), Zelenay and coworkers used cyclic voltammetry to compare the activity and performance of these two catalysts in H2SO4. The Co-PANI-C catalyst was found to have no reduction-oxidation features on its voltammogram, whereas Fe-PANI-C was found to have two redox peaks at ~0.64 V (Figure $20$ ). These Fe-PANI-C peaks have a full width at half maximum of ~100 mV, which is indicative of the reversible one-electron Fe3+/Fe2+ reduction-oxidation (theoretical FWHM = 96 mV). Zelenay and coworkers also determined the exchange current density using the Tafel analysis and found that Fe-PANI-C has a significantly greater io (io = 4 x $10^{-8}$ A/cm2) compared to Co-PANI-C (io = 5 x $10^{-10}$ A/cm2). These differences not only demonstrate the higher ORR activity of Fe-PANI-C when compared to Co-PANI-C, but also suggest that the ORR-active sites and reaction mechanisms are different for these two catalysts. While the structure of Fe-PANI-C has been examined (Figure $21$ ), the structure of Co-PANI-C is still being investigated.
While the majority of the M-N-C catalysts show some ORR activity, the magnitude of this activity is highly dependent upon a variety of factors; cyclic voltammetry is critical in the examination of the relationships between each factor and catalytic activity. For example, the activity of M-N-Cs is highly dependent upon the synthetic procedure. In their in-depth examination of Fe-PANI-C catalysts, Zelenay and coworkers optimized the synthetic procedure for this catalyst by examining three synthetic steps: the first heating treatment, the acid-leaching step, and the second heating treatment. Their synthetic procedure involved the formation of a PANI-Fe-carbon black suspension that was vacuum-dried onto a carbon support. Then, the intact catalyst underwent a one-hour heating treatment followed by acid leaching and a three-hour heating treatment. The heating treatments were performed at 900˚C, which was previously determined to be the optimal temperature to achieve maximum ORR activity (Figure $21$ ).
To determine the effects of the synthetic steps on the intact catalyst, the Fe-PANI-C catalysts were analyzed by cyclic voltammetry after the first heat treatment (HT1), after the acid-leaching (AL), and after the second heat treatment (HT2). Compared to HT1, both the AL and HT2 steps showed increases in the catalytic activity. Additionally, HT2 was found to increase the catalytic activity even more than AL (Figure $22$ ). Based on this data, Zelenay and coworkers concluded that HT1 likely creates active sites in the catalytic surface, while the AL step removes impurities that block the surface pores, exposing more active sites. However, the AL step is also known to oxidize some of the catalytic area. Thus, the additional increase in activity after HT2 is likely a result of “repairing” the catalytic surface oxidation.
Conclusion
With further advancements in catalytic research, PEMFCs will become a viable and advantageous technology for the replacement of combustion engines. The analysis of catalytic activity and reaction rate that cyclic voltammetry provides is critical in comparing novel catalysts to the current highest-performing catalyst: Pt.
Chronocoulometry: A Technique for Electroplating
Fundamentals of Electrochemistry
A chemical reaction that involves a change in the charge of a chemical species is called an electrochemical reaction. As the name suggests, these reactions involve electron transfer between chemicals. Many of these reactions occur spontaneously when the various chemicals come in contact with one another. In order to force a nonspontaneous electrochemical reaction to occur, a driving force needs to be provided. This is because every chemical species has a relative reduction potential. These values provide information on the ability of the chemical to take up extra electrons. Conversely, we can think of relative oxidation potentials, which indicate the ability of a chemical to give away electrons. It is important to note that these values are relative and need to be defined against a reference reaction. A list of standard reduction potentials (standard indicating measurement against the normal hydrogen electrode, as seen in Figure $23$ ) for common electrochemical half-reactions is given in Table $3$. Nonspontaneous electrochemical systems, often called electrolytic cells, as mentioned previously, require a driving force to occur. This driving force is an applied voltage, which forces reduction of the chemical that is less likely to gain an electron.
| Oxidant | Reductant | E° (V vs NHE) |
|---|---|---|
| 2H2O + 2e- | H2 (g) + 2OH- | -0.8227 |
| Cu2O (s) + H2O + 2e- | 2Cu (s) + 2OH- | -0.360 |
| Sn4+ + 2e- | Sn2+ | +0.15 |
| Cu2+ + 2e- | Cu (s) | +0.337 |
| O2 (g) + 2H+ + 2e- | H2O2 (aq) | +0.70 |
Table $3$ List of standard reduction potentials of various half reactions.
Design of an Electrochemical Cell
A schematic of an electrochemical cell is seen in Figure $24$. Any electrochemical cell must have two electrodes – a cathode, where the reduction half-reaction takes place, and an anode, where the oxidation half-reaction occurs. Examples of half-reactions can be seen in Table $3$. The two electrodes are electrically connected in two ways – the electrolyte solution and the external wire. The electrolyte solution typically includes a small amount of the electroactive analyte (the chemical species that will actually participate in electron transfer) and a large amount of supporting electrolyte (the chemical species that assist in the movement of charge, but are not actually involved in electron transfer). The external wire provides a path for the electrons to travel from the oxidation half-reaction to the reduction half-reaction. As mentioned previously, when an electrolytic reaction (nonspontaneous) is being forced to occur, a voltage needs to be applied. This requires the wires to be connected to a potentiostat. As its name suggests, a potentiostat controls voltage (i.e., “potentio” = potential measured in volts). The components of an electrochemical cell and their functions are also given in Table $4$.
| Component | Function |
|---|---|
| Electrode | Interface between ions and electrons |
| Anode | Electrode at which the oxidation half reaction takes place |
| Cathode | Electrode at which the reduction half reaction takes place |
| Electrolyte solution | Solution that contains supporting electrolyte and electroactive analyte |
| Supporting electrolyte | Not a part of the faradaic process; only a part of the capacitive process |
| Electroactive analyte | The chemical species responsible for all faradaic current |
| Potentiostat | DC voltage source; sets the potential difference between the cathode and anode |
| Wire | Connects the electrodes to the potentiostat |
Table $4$ Various components of an electrochemical cell and their respective functions.
Chronocoulometry: an Electroanalytical Technique
Theory
Chronocoulometry, as indicated by the name, is a technique in which the charge is measured (i.e., “coulometry”) as a function of time (i.e., “chrono”). There are various types of coulometry. The one discussed here is potentiostatic coulometry, in which the potential (or voltage) is set and, as a result, charge flows through the cell. Example input and output graphs can be seen in Figure $25$. The input is a potential step that spans the reduction potential of the electroactive species. If this potential step is applied to an electrochemical cell that does not contain an electroactive species, only capacitive current will flow (Figure $26$ ), in which the ions migrate in such a way that charges are aligned (positive next to negative), but no charge is transferred. Once an electroactive species is introduced into the system, however, faradaic current begins to flow. This current is a result of the electron transfer between the electrode and the electroactive species.
Electroplating: an Application of Chronocoulometry
Electroplating is an electrochemical process that utilizes techniques such as chronocoulometry to electrodeposit a charged chemical from a solution as a neutral chemical on the surface of another chemical. These chemicals are typically metals. The science of electroplating dates back to the early 1800s when Luigi Valentino Brugnatelli (Figure $27$ ) electroplated gold from solution onto silver medals. By the mid 1800s, the process of electroplating was patented by cousins George and Henry Elkington (Figure $28$ ). The Elkingtons brought electroplated goods to the masses by producing consumer products such as artificial jewelry and other commemorative items (Figure $29$ ).
Recent scientific studies have taken interest in studying electroplating. Trejo and coworkers have demonstrated that a quartz microbalance can be used to measure the change in mass over time during electrodeposition via chronocoulometry. Figure $30$ a shows the charge transferred at various potential steps. Figure $30$ b shows the change in mass as a function of potential step. It is clear that the magnitude of the potential step is directly related to the amount of charge transferred and consequently the mass of the electroactive species deposited.
The effect of electroplating via chronocoulometry on the localized surface plasmon resonance (LSPR) has been studied on metallic nanoparticles. An LSPR is the collective oscillation of electrons as induced by an electric field (Figure $31$ ). In various studies by Mulvaney and coworkers, a clear effect on the LSPR frequency was seen as potentials were applied (Figure $32$ ). In initial studies, no evidence of electroplating was reported. In more recent studies by the same group, it was shown that nanoparticles could be electroplated using chronocoulometry (Figure $33$ ). Such developments can lead to an expansion of the applications of both electroplating and plasmonics.
Thermogravimetric Analysis
TGA and SWNTS
Thermogravimetric analysis (TGA) and the associated differential thermal analysis (DTA) are widely used for the characterization of both as-synthesized and side-wall functionalized single walled carbon nanotubes (SWNTs). Under oxygen, SWNTs will pyrolyze, leaving any inorganic residue behind. In contrast, in an inert atmosphere, since most functional groups are labile or decompose upon heating while SWNTs are stable up to 1200 °C, any weight loss before 800 °C is used to determine the functionalization ratio of side-wall functionalized SWNTs. The following properties of SWNTs can be determined using TGA:
1. The mass of metal catalyst impurity in as synthesized SWNTs.
2. The number of functional groups per SWNT carbon (CSWNT).
3. The mass of a reactive species absorbed by a functional group on a SWNT.
Quantitative determination of these properties is used to define the purity of SWNTs and the extent of their functionalization.
An Overview of Thermogravimetric Analysis
The main function of TGA is the monitoring of the thermal stability of a material by recording the change in mass of the sample with respect to temperature. Figure $1$ shows a simple diagram of the inside of a typical TGA.
Inside the TGA, there are two pans, a reference pan and a sample pan. The pan material can be either aluminium or platinum. The type of pan used depends on the maximum temperature of a given run. As platinum melts at 1760 °C and aluminium melts at 660 °C, platinum pans are chosen when the maximum temperature exceeds 660 °C. Under each pan there is a thermocouple which reads the temperature of the pan. Before the start of each run, each pan is balanced on a balance arm. The balance arms should be calibrated to compensate for the differential thermal expansion between the arms. If the arms are not calibrated, the instrument will only record the temperature at which an event occurred and not the change in mass at a certain time. To calibrate the system, the empty pans are placed on the balance arms and the pans are weighed and zeroed.
As well as recording the change in mass, the heat flow into the sample pan (differential scanning calorimetry, DSC) can also be measured and the difference in temperature between the sample and reference pan (differential thermal analysis, DTA). DSC is quantitative and is a measure of the total energy of the system. This is used to monitor the energy released and absorbed during a chemical reaction for a changing temperature. The DTA shows if and how the sample phase changed. If the DTA is constant, this means that there was no phase change. Figure $2$ shows a DTA with typical examples of an exotherm and an endotherm.
When the sample melts, the DTA dips which signifies an endotherm. When the sample is melting it requires energy from the system. Therefore the temperature of the sample pan decreases compared with the temperature of the reference pan. When the sample has melted, the temperature of the sample pan increases as the sample is releasing energy. Finally the temperatures of the reference and sample pans equilibrate resulting in a constant DTA. When the sample evaporates, there is a peak in the DTA. This exotherm can be explained in the same way as the endotherm.
Typically the sample mass range should be between 0.1 to 10 mg and the heating rate should be 3 to 5 °C/min.
Determination of the Mass of Iron Catalyst Impurity in HiPCO SWNTs
SWNTs are typically synthesized using metal catalysts. Those prepared using the HiPco method, contain residual Fe catalyst. The metal (i.e., Fe) is usually oxidized upon exposure to air to the appropriate oxide (i.e., Fe2O3). While it is sometimes unimportant that traces of metal oxide are present during subsequent applications it is often necessary to quantify their presence. This is particularly true if the SWNTs are to be used for cell studies since it has been shown that the catalyst residue is often responsible for observed cellular toxicity.
In order to calculate the mass of catalyst residue the SWNTs are pyrolyzed under air or O2, and the residue is assumed to be the oxide of the metal catalyst. Water can be added to the raw SWNTs, which enhances the low-temperature catalytic oxidation of carbon. A typical TGA plot of a sample of raw HiPco SWNTs is shown in Figure $3$.
The weight gain (of ca. 5%) at 300 °C is due to the formation of metal oxide from the incompletely oxidized catalyst. To determine the mass of iron catalyst impurity in the SWNT, the residual mass must be calculated. The residual mass is the mass that is left in the sample pan at the end of the experiment. From this TGA diagram, it is seen that 70% of the total mass is lost at 400 °C. This mass loss is attributed to the removal of carbon. The residual mass is 30%. Given that this is due to both oxide and oxidized metal, the original total mass of residual catalyst in raw HiPCO SWNTs is ca. 25%.
Determining the Number of Functional Groups on SWNTs
The limitation of using SWNTs in any practical application is their solubility; for example, SWNTs have little to no solubility in most solvents due to aggregation of the tubes. Aggregation/roping of nanotubes occurs as a result of the high van der Waals binding energy of ca. 500 eV per μm of tube contact. The van der Waals force between the tubes is so great that it takes tremendous energy to pry them apart, making it very difficult to combine nanotubes with other materials, such as in composite applications. The functionalization of nanotubes, i.e., the attachment of “chemical functional groups”, provides a path to overcome these barriers. Functionalization can improve solubility as well as processability, and has been used to align the properties of nanotubes to those of other materials. In this regard, covalent functionalization provides a higher degree of fine-tuning for the chemical and physical properties of SWNTs than non-covalent functionalization.
Functionalized nanotubes can be characterized by a variety of techniques, such as atomic force microscopy (AFM), transmission electron microscopy (TEM), UV-vis spectroscopy, and Raman spectroscopy, however, the quantification of the extent of functionalization is important and can be determined using TGA. Because any sample of functionalized-SWNTs will have individual tubes of different lengths (and diameters) it is impossible to determine the number of substituents per SWNT. Instead the extent of functionalization is expressed as number of substituents per SWNT carbon atom (CSWNT), or more often as CSWNT/substituent, since this is then represented as a number greater than 1.
Figure $4$ shows a typical TGA for a functionalized SWNT. In this case it is polyethyleneimine (PEI) functionalized SWNTs prepared by the reaction of fluorinated SWNTs (F-SWNTs) with PEI in the presence of a base catalyst.
In the present case the molecular weight of the PEI is 600 g/mol. When the sample is heated, the PEI thermally decomposes leaving behind the unfunctionalized SWNTs. The initial mass loss below 100 °C is due to residual water and ethanol used to wash the sample.
In the following example the total mass of the sample is 25 mg.
• The initial mass, Mi = 25 mg = mass of the SWNTs, residues and the PEI.
• After the initial moisture has evaporated there is 68% of the sample left. 68% of 25 mg is 17 mg. This is the mass of the PEI and the SWNTs.
• At 300 °C the PEI starts to decompose and all of the PEI has been removed from the SWNTs at 370 °C. The mass loss during this time is 53% of the total mass of the sample. 53% of 25 mg is 13.25 mg.
• The molecular weight of this PEI is 600 g/mol. Therefore there is 0.013 g / 600 g/mol = 0.022 mmole of PEI in the sample.
• 15% of the sample is the residual mass; this is the mass of the remaining SWNTs. 15% of 25 mg is 3.75 mg. The molecular weight of carbon is 12 g/mol, so there is 0.3125 mmol of carbon in the sample.
• There is 93.4 mol% of carbon and 6.5 mol% of PEI in the sample. These steps are reproduced in the short sketch below.
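The sketch below simply condenses the arithmetic of the worked example above into a few lines, using the percentages and molecular weights quoted in the text.

```python
# Values quoted in the worked example above.
total_mass_mg = 25.0    # initial sample mass
pei_loss_frac = 0.53    # fraction of the total mass lost as PEI (300-370 degrees C)
residual_frac = 0.15    # fraction remaining as SWNT carbon
mw_pei        = 600.0   # g/mol for this PEI
mw_carbon     = 12.0    # g/mol

mol_pei    = total_mass_mg * pei_loss_frac / mw_pei      # mg / (g/mol) = mmol
mol_carbon = total_mass_mg * residual_frac / mw_carbon   # mmol

frac_pei = mol_pei / (mol_pei + mol_carbon)
print(f"PEI: {mol_pei:.3f} mmol, C: {mol_carbon:.3f} mmol, "
      f"{100 * frac_pei:.1f} mol% PEI")
# ~0.022 mmol PEI, ~0.313 mmol C, ~6.5-6.6 mol% PEI (within rounding of the text)
```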
Determination of the Mass of a Chemical Absorbed by Functionalized SWNTs
Solid-state 13C NMR of PEI-SWNTs shows the presence of carboxylate substituents that can be attributed to carbamate formation as a consequence of the reversible CO2 absorption by the primary amine substituents of the PEI. Desorption of CO2 is accomplished by heating under argon at 75 °C.
The quantity of CO2 absorbed per PEI-SWNT unit may be determined by initially exposing the PEI-SWNT to a CO2 atmosphere to maximize absorption. The gas flow is switched to either Ar or N2 and the sample heated to liberate the absorbed CO2 without decomposing the PEI or the SWNTs. An example of the appropriate TGA plot is shown in Figure $5$.
The sample was heated to 75 °C under Ar, and an initial mass loss due to moisture and/or atmospherically absorbed CO2 is seen. In the temperature range of 25 °C to 75 °C the flow gas was switched from an inert gas to CO2. In this region an increase in mass is seen; the increase is due to CO2 absorption by the PEI(10000 Da)-SWNT. Switching the carrier gas back to Ar resulted in the desorption of the CO2.
The total normalized mass of CO2 absorbed by the PEI(10000)-SWNT can be calculated as follows;
Solution Outline
1. Minimum mass = mass of absorbant = Mabsorbant
2. Maximum mass = mass of absorbant and absorbed species = Mtotal
3. Absorbed mass = Mabsorbed = Mtotal - Mabsorbant
4. % of absorbed species= (Mabsorbed/Mabsorbant)*100
5. 1 mole of absorbed species = MW of absorbed species
6. Number of moles of absorbed species = (Mabsorbed/MW of absorbed species)
7. The number of moles of absorbed species per gram of sample = (1 g/Mtotal)*(number of moles of absorbed species)
Solution
1. Mabsorbant = Mass of PEI-SWNT = 4.829 mg
2. Mtotal = Mass of PEI-SWNT and CO2 = 5.258 mg
3. Mabsorbed = Mtotal - Mabsorbant = 5.258 mg - 4.829 mg = 0.429 mg
4. % of absorbed species= % of CO2 absorbed = (Mabsorbed/Mabsorbant)*100 = (0.429/4.829)*100 = 8.8%
5. 1 mole of absorbed species = MW of absorbed species = MW of CO2 = 44 therefore 1 mole = 44g
6. Number of moles of absorbed species = Mabsorbed/MW of absorbed species = 0.429 mg / (44 g/mol) = 9.75 μmol
7. The number of moles of absorbed species per gram of sample = (1 g/Mtotal)*(number of moles of absorbed species) = (1 g/5.258 mg)*(9.75 μmol) = 1.85 mmol of CO2 absorbed per gram of sample, as computed in the short sketch below.
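The seven steps above can be reproduced directly; the sketch below uses the masses quoted in the text for the CO2-loaded PEI(10000)-SWNT sample.

```python
m_absorbant_mg = 4.829   # step 1: mass of the PEI-SWNT absorbant
m_total_mg     = 5.258   # step 2: mass of absorbant plus absorbed CO2
mw_co2         = 44.0    # g/mol

m_absorbed_mg = m_total_mg - m_absorbant_mg              # step 3: 0.429 mg
pct_absorbed  = 100 * m_absorbed_mg / m_absorbant_mg     # step 4: ~8.8-8.9 %
mol_absorbed  = m_absorbed_mg / mw_co2                   # steps 5-6: mmol (9.75 umol)
per_gram      = mol_absorbed / (m_total_mg / 1000.0)     # step 7: mmol CO2 per g of sample

print(f"{m_absorbed_mg:.3f} mg CO2, {pct_absorbed:.2f} %, "
      f"{1000 * mol_absorbed:.2f} umol, {per_gram:.2f} mmol/g")   # ~1.85 mmol/g
```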
TGA/DSC-FTIR Characterization of Oxide Nanoparticles
Metal Oxide Nanoparticles
The binary compound of one or more oxygen atoms with at least one metal atom that forms a structure ≤100 nm is classified as a metal oxide (MOx) nanoparticle. MOx nanoparticles have exceptional physical and chemical properties (especially if they are smaller than 10 nm) that are strongly related to their dimensions and to their morphology. These enhanced features are due to the increased surface to volume ratio, which has a strong impact on the measured binding energies. Based on theoretical models, binding or cohesive energy is inversely related to particle size with a linear relationship \ref{1} .
$E_{NP} = E_{bulk} \cdot [1 - c \cdot r^{-1}] \label{1}$
where ENP and Ebulk are the binding energies of the nanoparticle and the bulk material, respectively, c is a material constant and r is the radius of the cluster. As seen from \ref{1} , nanoparticles have lower binding energies than the bulk material, which means lower electron cloud density and therefore more mobile electrons. This is one of the features that have been identified to contribute to a series of physical and chemical properties.
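A minimal sketch of \ref{1} is given below; the bulk binding energy and the material constant c are assumed illustrative values (they are not specified in the text), so the output only shows the qualitative trend of the binding energy approaching the bulk value as the radius grows.

```python
def nanoparticle_binding_energy(E_bulk, c, r):
    """E_NP = E_bulk * (1 - c/r), with c and r in the same length units."""
    return E_bulk * (1.0 - c / r)

E_bulk = 4.0   # eV per atom (assumed illustrative value)
c      = 0.5   # nm (assumed material constant)
for r_nm in (2, 5, 10, 50):
    print(r_nm, "nm ->", round(nanoparticle_binding_energy(E_bulk, c, r_nm), 3), "eV")
```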
Synthesis of Metal Oxide Nanoparticles
To date, numerous synthetic methods have been developed, with the most common ones presented in Table $1$. These methods have been successfully applied for the synthesis of a variety of materials with 0-D to 3-D complex structures. Among them, the solvothermal methods are by far the most popular ones due to their simplicity. Between the two classes of solvothermal methods, slow decomposition methods, usually called thermal decomposition methods, are preferred over the hot injection methods since they are less complicated, less dangerous and avoid the use of additional solvents.
Table $1$ Methods for synthesizing MOx nanoparticles.

| Method | Characteristics | Advantages | Disadvantages |
|---|---|---|---|
| Solvothermal: slow decomposition | Slow heating of M-precursor in the presence of ligand/surfactant precursor | Safe, easily carried out, variety of M-precursors to use | Poor control of nucleation/growth stages – particle size |
| Solvothermal: hot injection | Injection of M-precursor into solution at high temperature | Excellent control of particle distribution | Hazardous, reproducibility depends on individual |
| Template directed | Use of organic molecules or preexistent nanoparticles as templates for directing nanoparticle formation | High yield and high purity of nanoparticles | Template removal in some cases causes particle deformation or loss |
| Sonochemical | Ultrasound influences particle nucleation | Mild synthesis conditions | Limited applicability |
| Thermal evaporation | Thermal evaporation of metal oxides | Monodisperse particle formation, excellent control in shape and structure | Extremely high temperatures and vacuum system required |
| Gas phase catalytic growth | Use of catalyst that serves as a preferential site for absorbing metal reactants | Excellent control in shape and structure | Limited applicability |
A general schematic diagram of the stages involving the nanoparticles formation is shown in Figure $6$. As seen, first step is the M-atom generation by dissociation of the metal-precursor. Next step is the M-complex formulation, which is carried out before the actual particle assembly stage. Between this step and the final particle formulation, oxidation of the activated complex occurs upon interaction with an oxidant substance. The x-axis is a function of temperature or time or both depending on the synthesis procedure.
In all cases, the particles synthesized consist of MOx nanoparticle structures stabilized by one or more types of ligand(s), as seen in Figure $7$. The ligands are usually long-chained organic molecules that have one or more functional groups. These molecules protect the nanoparticles from attracting each other under van der Waals forces and therefore prevent them from aggregating.
Even though often not referred to specifically, all particles synthesized are stabilized by organic (hydrophilic, hydrophobic or amphoteric) ligands. The detection and the understanding of the structure of these ligands can be of critical importance for understanding and controlling the properties of the synthesized nanoparticles.
Metal Oxide Nanoparticles Synthesized via slow decomposition
In this work, we refer to MOx nanoparticles synthesized via slow decomposition of a metal complex. In Table $2$, a number of different MOx nanoparticles are presented, synthesized via metal complex dissociation. Metal–MOx and mixed MOx nanoparticles are not discussed here.
| Metal Oxide | Shape | Size (approx.) |
|---|---|---|
| Cerium oxide | dots | 5 - 20 nm |
| Iron oxide | dots, cubes | 8.5 - 23.4 nm |
| Manganese oxide | multipods | > 50 nm |
| Zinc oxide | hexagonal pyramid | 15 - 25 nm |
| Cobalt oxide | dots | ~10 nm |
| Chromium oxide | dots | 12 nm |
| Vanadium oxide | dots | 9 - 15 nm |
| Molybdenum oxide | dots | 5 nm |
| Rhodium oxide | dots, rods | 16 nm |
| Palladium oxide | dots | 18 nm |
| Ruthenium oxide | dots | 9 - 14 nm |
| Zirconium oxide | rods | 7 x 30 nm |
| Barium oxide | dots | 20 nm |
| Magnesium oxide | dots | 4 - 8 nm |
| Calcium oxide | dots, rods | 7 - 12 nm |
| Nickel oxide | dots | 8 - 15 nm |
| Titanium oxide | dots and rods | 2.3 - 30 nm |
| Tin oxide | dots | 2 - 5 nm |
| Indium oxide | dots | ~5 nm |
| Samaria | square | ~10 nm |
Table $2$ Examples of MOx nanoparticles synthesized via decomposition of metal complexes.
A significant number of metal oxides synthesized using slow decomposition is reported in the literature. If we use the periodic table to map the different MOx nanoparticles (Figure $8$ ), we notice that most of the alkali and transition metals generate MOx nanoparticles, while only a few of the poor metals seem to do so using this synthetic route. Moreover, two of the rare earth metals (Ce and Sm) have been reported to successfully give metal oxide nanoparticles via slow decomposition.
Among the different characterization techniques used for defining these structures, transmission electron microscopy (TEM) holds the lion’s share. Nevertheless, most of the modern characterization methods are more important when it comes to understanding the properties of nanoparticles. X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), nuclear magnetic resonance (NMR), IR spectroscopy, Raman spectroscopy, and thermogravimetric analysis (TGA) methods are systematically used for characterization.
Synthesis and Characterization of WO3-x nanorods
The synthesis of WO3-x nanorods is based on the method published by Lee et al. A slurry mixture of Me3NO∙2H2O, oleylamine and W(CO)6 was heated up to 250 °C at a rate of 3 °C/min (Figure $9$ ). The mixture was aged at this temperature for 3 hours before cooling down to room temperature.
Multiple color variations were observed between 100 - 250 °C with the final product having a dark blue color. Tungsten oxide nanorods (W18O49 identified by XRD) with a diameter of 7±2 nm and 50±2 nm long were acquired after centrifugation of the product solution. A TEM image of the W18O49 nanorods is shown in Figure $10$.
Thermogravimetric Analysis (TGA)/Differential Scanning Calorimetry (DSC)
Thermogravimetric analysis (TGA) is a technique widely used for determining the organic and inorganic content of various materials. Its basic principle is the high precision measurement of weight gain/loss with increasing temperature under inert or reactive atmospheres. Each weight change corresponds to physical (crystallization, phase transformation) or chemical (oxidation, reduction, reaction) processes that take place upon increasing the temperature. The sample is placed into a platinum or alumina pan and, along with an empty or standard pan, is placed onto two high precision balances inside a high temperature oven. A method for pretreating the samples is selected and the procedure is initiated. Differential scanning calorimetry (DSC) is a technique usually accompanying TGA and is used for calculating enthalpy energy changes or heat capacity changes associated with phase transitions and/or ligand-binding energy cleavage.
In Figure $11$ the TGA/DSC plot acquired for the ligand decomposition of WO3-x nanorods is presented. The sample was heated at a constant rate under N2 atmosphere up to 195 °C to remove moisture and then up to 700 °C to remove the oleylamine ligands. It is important to use an inert gas for performing such a study to avoid any premature oxidation and/or capping agent combustion. 26.5% of the weight loss is due to oleylamine evaporation, which corresponds to about 0.004 moles per gram of sample. After isothermal heating at 700 °C for 25 min the flow was switched to air for oxidizing the ligand-free WO3-x to WO3. From the DSC curve we noticed the following changes of the weight corrected heat flow:
1. From 0 – 10 min assigned to water evaporation.
2. From 65 – 75 min assigned to OA evaporation.
3. From 155 – 164 min assigned to WO3-x oxidation.
4. From 168 – 175 min is also due to further oxidation of W5+ atoms.
The heat flow increase during the WO3-x to WO3 oxidation is proportional to the crystal phase defects (or W atoms of oxidation state +5) and can be used for performing qualitative studies between different WOx nanoparticles.
The detailed information about the procedure used to acquire the TGA/DSC plot shown in Figure $11$ is as follows.
• Select gas (N2 with flow rate 50 mL/min.)
• Ramp 20 °C/min to 200 °C.
• Isothermal for 20 min.
• Ramp 5 °C/min to 700 °C.
• Isothermal for 25 min.
• Select gas (air).
• Isothermal for 20 min.
• Ramp 10 °C/min to 850 °C.
• Cool down
Fourier Transform Infrared Spectroscopy
Fourier transform infrared spectroscopy (FTIR) is the most popular spectroscopic method used for characterizing organic and inorganic compounds. The basic modification of an FTIR from a regular IR instrument is a device called an interferometer, which generates a signal that allows very fast IR spectrum acquisition. For doing so, the generated interferogram has to be “expanded” using a Fourier transformation to generate a complete IR frequency spectrum. In the case of performing FTIR transmission studies the intensity of the transmitted signal is measured and the IR fingerprint is generated, \ref{2} .
$T = \frac{I}{I_{b}} = e^{-c \varepsilon l} \label{2}$
where I is the intensity of the sample, Ib is the intensity of the background, c is the concentration of the compound, ε is the molar extinction coefficient and l is the distance that the light travels through the material. A transformation of transmission to absorption spectra is usually performed and the actual concentration of the component can be calculated by applying the Beer-Lambert law, \ref{3} .
$A = -ln(T) = c \varepsilon l \label{3}$
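A minimal sketch combining \ref{2} and \ref{3} is shown below: a measured transmittance is converted to absorbance and then to concentration. The molar extinction coefficient and path length are assumed illustrative values, and the natural-logarithm form follows the equations as written above.

```python
import math

def concentration_from_transmittance(T, epsilon, path_length):
    """Concentration from the Beer-Lambert law, c = -ln(T) / (epsilon * l).

    Units must be consistent, e.g. epsilon in L/(mol cm) and l in cm give mol/L.
    """
    A = -math.log(T)            # absorbance, natural-log form as in Equation (3)
    return A / (epsilon * path_length)

# Assumed values: 40 % transmittance, epsilon = 200 L/(mol cm), 1 cm path length.
print(concentration_from_transmittance(0.40, 200.0, 1.0))   # ~4.6e-3 mol/L
```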
A qualitative IR-band map is presented in Figure $12$.
The absorption bands between 4000 and 1600 cm-1 represent the group frequency region and are used to identify the stretching vibrations of different bonds. At lower frequencies (from 1600 to 400 cm-1) vibrations due to intramolecular bond bending occur upon IR excitation and therefore are usually not taken into account.
TGA/DSC-FTIR Characterization
TGA/DSC is a powerful tool for identifying the different compounds evolved during controlled pyrolysis and therefore provides qualitative and quantitative information about the volatile components of a sample. In metal oxide nanoparticle synthesis, TGA/DSC-FTIR studies can thus provide qualitative and quantitative information about the volatile compounds associated with the nanoparticles.
TGA–FTIR results presented below were acquired using a Q600 Simultaneous TGA/DSC (SDT) instrument online with a Nicolet 5700 FTIR spectrometer. This system has a digital mass flow control and two gas inlets, giving the capability to switch the reacting gas during each run. It allows simultaneous weight change and differential heat flow measurements up to 1500 °C, while at the same time the outflow line is connected to the FTIR for performing gas phase compound identification. Gram-Schmidt thermographs were usually constructed to present the species evolution with time in three dimensions.
Selected IR spectra are presented in Figure $13$. Four regions with intense peaks are observed: between 4000 – 3550 cm-1 due to O-H bond stretching assigned to H2O, which is always present, and due to N-H group stretching assigned to the amine group of oleylamine; between 2400 – 2250 cm-1 due to O=C=O stretching; between 1900 – 1400 cm-1, which is mainly due to C=O stretching; and between 800 – 400 cm-1, which cannot be resolved as explained previously.
The peak intensity evolution with time can be more easily observed in Figure $14$ and Figure $15$. As seen, CO2 evolution increases significantly with time, especially after switching the flow from N2 to air. H2O seems to be present in the outflow stream up to 700 °C, while the majority of the N-H amine peaks seem to disappear at about 75 min. C=N compounds are not expected to be present in the stream, which leaves the bands between 1900 – 1400 cm-1 assigned to C=C and C=O stretching vibrations. Unsaturated olefins resulting from the cracking of the oleylamine molecule are possible at elevated temperatures, as is the presence of CO, especially under N2 atmosphere.
From the above compound identification we can summarize and propose the following applications for TGA-FTIR. First, more complex ligands, containing aromatic rings and maybe other functional groups, may provide more insight into the ligand to MOx interaction. Second, the presence of CO and CO2 even under N2 flow means that complete O2 removal from the TGA and the FTIR cannot be achieved under these conditions. Even though the system was equilibrated for more than an hour, traces of O2 remain, which introduce errors into our calculations.
Determination of Sublimation Enthalpy and Vapor Pressure for Inorganic and Metal-Organic Compounds by Thermogravimetric Analysis
Metal compounds and complexes are invaluable precursors for the chemical vapor deposition (CVD) of metal and non-metal thin films. In general, the precursor compounds are chosen on the basis of their relative volatility and their ability to decompose to the desired material under a suitable temperature regime. Unfortunately, many readily obtainable (commercially available) compounds are not of sufficient volatility to make them suitable for CVD applications. Thus, a prediction of the volatility of a metal-organic compounds as a function of its ligand identity and molecular structure would be desirable in order to determine the suitability of such compounds as CVD precursors. Equally important would be a method to determine the vapor pressure of a potential CVD precursor as well as its optimum temperature of sublimation.
It has been observed that for organic compounds a rough proportionality exists between a compound’s melting point and its sublimation enthalpy; however, significant deviation is observed for inorganic compounds.
Enthalpies of sublimation for metal-organic compounds have been previously determined through a variety of methods, most commonly from vapor pressure measurements using complex experimental systems such as Knudsen effusion, temperature drop microcalorimetry and, more recently, differential scanning calorimetry (DSC). However, the measured values are highly dependent on the experimental procedure utilized. For example, the reported sublimation enthalpy of Al(acac)3 (Figure $16$ a, where M = Al, n = 3) varies from 47.3 to 126 kJ/mol.
Thermogravimetric analysis offers a simple and reproducible method for the determination of the vapor pressure of a potential CVD precursor as well as its enthalpy of sublimation.
Determination of Sublimation Enthalpy
The enthalpy of sublimation is a quantitative measure of the volatility of a particular solid. This information is useful when considering the feasibility of a particular precursor for CVD applications. An ideal sublimation process involves no compound decomposition and only results in a solid-gas phase change, i.e., \ref{4}
$[M(L)_{n}]_{ (solid) } \rightarrow [M(L)_{n}]_{ (vapor) } \label{4}$
Since phase changes are thermodynamic processes following zero-order kinetics, the evaporation rate or rate of mass loss by sublimation (msub) is constant at a given temperature (T), \ref{5} . Therefore, the msub values may be determined directly from the linear mass loss of the TGA data in the isothermal regions.
$m_{sub} \ =\ \frac{\Delta [mass]}{\Delta t} \label{5}$
The thermogravimetric and differential thermal analysis of the compound under study is performed to determine the temperature of sublimation and thermal events such as melting. Figure $17$ shows a typical TG/DTA plot for a gallium chalcogenide cubane compound (Figure $18$ ).
Data Collection
In a typical experiment 5 - 10 mg of sample is used with a heating rate of ca. 5 °C/min, under either a 200 - 300 mL/min inert (N2 or Ar) gas flow or a dynamic vacuum (ca. 0.2 Torr if using a typical vacuum pump). The argon flow rate was set to 90.0 mL/min and was carefully monitored to ensure a steady flow rate during runs and an identical flow rate from one set of data to the next.
Once the temperature range is defined, the TGA is run with a preprogrammed temperature profile (Figure $19$ ). Sufficient data can be obtained if each isothermal mass loss is monitored over a period of 7 - 10 minutes before moving to the next temperature plateau. In all cases it is important to confirm that the mass loss at a given temperature is linear. If it is not, this can be due to either (a) temperature stabilization having not yet occurred, in which case longer times should be spent at each isotherm, or (b) decomposition occurring along with sublimation, in which case lower temperature ranges must be used. The slope of each mass drop is measured and used to calculate sublimation enthalpies as discussed below.
As an illustrative example, Figure $20$ displays the data for the mass loss of Cr(acac)3 (Figure $16$ a, where M = Cr, n = 3 ) at three isothermal regions under a constant argon flow. Each isothermal data set should exhibit a linear relation. As expected for an endothermal phase change, the linear slope, equal to msub, increases with increasing temperature.
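The slope of each isothermal segment can be extracted with a simple linear fit. The following is a minimal sketch, assuming the isothermal segment has been exported as arrays of time and mass; the numerical values are purely illustrative.

```python
import numpy as np

def sublimation_rate(time_min, mass_mg):
    """Fit a straight line to an isothermal TGA segment and return the
    mass-loss rate m_sub (mg/min) and the correlation coefficient, which
    should be close to 1 if the loss is truly linear (no decomposition)."""
    slope, intercept = np.polyfit(time_min, mass_mg, 1)
    r = np.corrcoef(time_min, mass_mg)[0, 1]
    return -slope, abs(r)   # mass decreases, so m_sub is the negative slope

# Hypothetical 7 min isothermal hold with a linear mass loss
t = np.linspace(0, 7, 50)      # time, min
m = 8.00 - 0.012 * t           # mass, mg (synthetic example data)
m_sub, r = sublimation_rate(t, m)
print(f"m_sub = {m_sub:.4f} mg/min, |r| = {r:.4f}")
```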
Samples of iron acetylacetonate (Figure $16$ a, where M = Fe, n = 3) may be used as a calibration standard through ΔHsub determinations before each day of use. If the measured value of the sublimation enthalpy for Fe(acac)3 is found to differ from the literature value by more than 5%, the sample is re-analyzed and the flow rates are optimized until an appropriate value is obtained. Only after such a calibration is optimized should other complexes be analyzed. It is important to note that while small amounts (< 10%) of involatile impurities will not interfere with the ΔHsub analysis, competitively volatile impurities will produce higher apparent sublimation rates.
It is important to discuss at this point the various factors that must be controlled in order to obtain meaningful (useful) msub data from TGA data.
1. The sublimation rate is independent of the amount of material used but may exhibit some dependence on the flow rate of an inert carrier gas, since this will affect the equilibrium concentration of the cubane in the vapor phase. While little variation was observed, for consistency the msub values should be derived from vacuum experiments only.
2. The surface area of the solid in a given experiment should remain approximately constant; otherwise the sublimation rate (i.e., mass/time) at different temperatures cannot be compared, since as the relative surface area of a given crystallite decreases during the experiment the apparent sublimation rate will also decrease. To minimize this problem, data were taken over a small temperature range (ca. 30 °C), and overall sublimation was kept low (ca. 25% mass loss, representing a surface area change of less than 15%). In experiments where significant surface area changes occurred, the values of msub deviated significantly from linearity on a log(msub) versus 1/T plot.
3. The compound being analyzed must not decompose to any significant degree, because the mass changes due to decomposition will cause a reduction in the apparent msub value, producing erroneous results. With a simultaneous TG/DTA system it is possible to observe exothermic events if decomposition occurs, however the clearest indication is shown by the mass loss versus time curves which are no longer linear but exhibit exponential decays characteristic of first or second order decomposition processes.
Data Analysis
The basis of analyzing isothermal TGA data involves using the Clausius-Clapeyron relation between vapor pressure (p) and temperature (T), \ref{6} , where ∆Hsub is the enthalpy of sublimation and R is the gas constant (8.314 J/K.mol).
$\frac{d\ ln(p)}{dT}\ =\ \frac{\Delta H_{sub} }{RT^{2} } \label{6}$
Since msub data are obtained from TGA data, it is necessary to utilize the Langmuir equation, \ref{7} , that relates the vapor pressure of a solid with its sublimation rate.
$p\ =\ [\frac{2\pi RT}{M_{W} }]^{0.5} m_{sub} \label{7}$
After integrating \ref{6} in log form, substituting \ref{7} , and consolidating, one obtains the useful equality, \ref{8} .
$log(m_{sub} \sqrt{T} ) = \frac{-0.0522(\Delta H_{sub} )}{T} + [ \frac{0.0522(\Delta H_{sub} )}{T_{sub}} - \frac{1}{2} log(\frac{1306}{M_{W} } ) ] \label{8}$
Hence, the linear slope of a log(msubT1/2) versus 1/T plot yields ΔHsub. An example of a typical plot and the corresponding ΔHsub value is shown in Figure $21$. In addition, the y intercept of such a plot provides a value for Tsub, the calculated sublimation temperature at atmospheric pressure.
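The linear fit described above can be carried out directly on the isothermal msub values. The following is a minimal sketch, assuming ΔHsub is expressed in J/mol so that the slope of the log(msubT1/2) versus 1/T line equals -ΔHsub/(2.303R), consistent with the -0.0522ΔHsub term in \ref{8} ; the rate values are hypothetical.

```python
import numpy as np
from scipy.stats import linregress

R = 8.314  # J/(K.mol)

def sublimation_enthalpy(T_K, m_sub):
    """Fit log10(m_sub * sqrt(T)) versus 1/T (equation 8).
    The slope is -DeltaH_sub/(2.303*R), so DeltaH_sub = -slope * 2.303 * R."""
    y = np.log10(m_sub * np.sqrt(T_K))
    x = 1.0 / T_K
    fit = linregress(x, y)
    return -fit.slope * 2.303 * R, fit.rvalue ** 2   # J/mol, R^2

# Hypothetical isothermal mass-loss rates at three temperatures
T = np.array([403.0, 413.0, 423.0])            # K
m_sub = np.array([2.1e-3, 4.4e-3, 8.8e-3])     # rate in any consistent units
dH, r2 = sublimation_enthalpy(T, m_sub)
print(f"DeltaH_sub = {dH / 1000:.0f} kJ/mol (R^2 = {r2:.3f})")
```

Note that only the intercept of the plot depends on the units chosen for msub; the slope, and hence ΔHsub, does not.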
Table $3$ lists the typical results using the TGA method for a variety of metal β-diketonates, while Table $4$ lists similar values obtained for gallium chalcogenide cubane compounds.
Compound ΔHsub (kJ/mol) ΔSsub (J/K.mol) Tsub calc. (°C) Calculated vapor pressure @ 150 °C (Torr)
Al(acac)3 93 220 150 3.261
Al(tfac)3 74 192 111 9.715
Al(hfac)3 52 152 70 29.120
Cr(acac)3 91 216 148 3.328
Cr(tfac)3 71 186 109 9.910
Cr(hfac)3 46 134 69 29.511
Fe(acac)3 112 259 161 2.781
Fe(tfac)3 96 243 121 8.340
Fe(hfac)3 60 169 81 25.021
Co(acac)3 138 311 170 1.059
Co(tfac)3 119 295 131 3.319
Co(hfac)3 73 200 90 9.132
Table $3$ Selected thermodynamic data for metal β-diketonate compounds determined from thermogravimetric analysis. Data from B. D. Fahlman and A. R. Barron, Adv. Mater. Optics Electron., 2000, 10, 223.
Compound ∆Hsub (kJ/mol) ∆Ssub (J/K. mol) Tsub calc. (°C) Calculated vapor pressure @ 150 °C (Torr)
[(Me3C)GaS]4 110 300 94 22.75
[(EtMe2C)GaS]4 124 330 102 18.89
[(Et2MeC)GaS]4 137 339 131 1.173
[(Et3C)GaS]4 149 333 175 0.018
[(Me3C)GaSe)]4 119 305 116 3.668
[(EtMe2C)GaSe]4 137 344 124 2.562
[(Et2MeC)GaSe]4 147 359 136 0.815
[(Et3C)GaSe]4 156 339 189 0.005
Table $4$ Selected thermodynamic data for gallium chalcogenide cubane compounds determined from thermogravimetric analysis. Data from E. G. Gillan, S. G. Bott, and A. R. Barron, Chem. Mater., 1997, 9, 3, 796.
A common method used to enhance precursor volatility and corresponding efficacy for CVD applications is to incorporate partially (Figure $16$ b) or fully (Figure $16$ c) fluorinated ligands. As may be seen from Table $3$ this substitution does result in a significant decrease in ΔHsub, and thus increased volatility. The observed enhancement in volatility may be rationalized either by an increased amount of intermolecular repulsion due to the additional lone pairs, or by the reduced polarizability of fluorine (relative to hydrogen), which causes fluorinated ligands to have weaker intermolecular attractive interactions.
Determination of Sublimation Entropy
The entropy of sublimation is readily calculated from the ΔHsub and the calculated Tsub data, \ref{9}
$\Delta S_{sub} \ =\ \frac{ \Delta H_{sub} }{ T_{sub} } \label{9}$
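For example, using the Table $3$ data for Al(acac)3, ΔHsub = 93 kJ/mol and Tsub = 150 °C (423 K) give ΔSsub = 93,000/423 ≈ 220 J/K.mol, consistent with the tabulated entropy.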
Table $3$ and Table $4$ show typical values for metal β-diketonate compounds and gallium chalcogenide cubane compounds, respectively. The range observed for gallium chalcogenide cubane compounds (ΔSsub = 330 ± 20 J/K.mol) is slightly larger than values reported for the metal β-diketonate compounds (ΔSsub = 130 - 330 J/K.mol) and organic compounds (100 - 200 J/K.mol), as would be expected for a transformation giving translational and internal degrees of freedom. For any particular chalcogenide, i.e., [(R)GaS]4, the lowest ΔSsub values are observed for the Me3C derivatives, and the largest ΔSsub for the Et2MeC derivatives, see Table $4$. This is in line with the relative increase in the modes of freedom for the alkyl groups in the absence of crystal packing forces.
Determination of Vapor Pressure
While the sublimation temperature is an important parameter to determine the suitability of a potential precursor compound for CVD, it is often preferable to express a compound's volatility in terms of its vapor pressure. However, while it is relatively straightforward to determine the vapor pressure of a liquid or gas, measurements of solids are difficult (e.g., use of the isoteniscopic method) and few laboratories are equipped to perform such experiments. Given that TGA apparatus are increasingly accessible, it would therefore be desirable to have a simple method for vapor pressure determination that can be accomplished on a TGA.
Substitution of \ref{5} into \ref{8} allows for the calculation of the vapor pressure (p) as a function of temperature (T). For example, Figure $22$ shows the calculated temperature dependence of the vapor pressure for [(Me3C)GaS]4. The calculated vapor pressures at 150 °C for metal β-diketonate compounds and gallium chalcogenide cubane compounds are given in Table $3$ and Table $4$, respectively.
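As an illustration of the Langmuir relation \ref{7} , the sketch below evaluates the vapor pressure in SI units. It assumes msub is expressed as a mass flux per unit exposed sample area (kg m-2 s-1), that the vaporization coefficient is unity, and the numerical values are purely illustrative rather than taken from the tables above.

```python
import numpy as np

R = 8.314                    # J/(K.mol)
TORR_PER_PA = 1.0 / 133.322  # unit conversion

def vapor_pressure_torr(T_K, m_sub_flux, M_W_kg_per_mol):
    """Langmuir relation (equation 7) in SI units:
    p [Pa] = sqrt(2*pi*R*T/M_W) * m_sub, with m_sub as a mass flux in
    kg m^-2 s^-1 and an assumed vaporization coefficient of 1."""
    p_Pa = np.sqrt(2.0 * np.pi * R * T_K / M_W_kg_per_mol) * m_sub_flux
    return p_Pa * TORR_PER_PA

# Hypothetical flux for a compound of molecular weight 350 g/mol at 150 °C
print(f"{vapor_pressure_torr(423.0, 2.0e-4, 0.350):.2e} Torr")
```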
The TGA approach shows reasonable agreement with previous measurements. For example, the value calculated for Fe(acac)3 (2.78 Torr @ 113 °C) is slightly higher than that measured directly by the isoteniscopic method (0.53 Torr @ 113 °C), while measurements using the sublimation bulb method gave a much lower value (8 x 10-3 Torr @ 113 °C). The TGA method offers a suitable alternative to conventional (direct) measurements of vapor pressure.
Differential Scanning Calorimetry (DSC)
Differential scanning calorimetry (DSC) is a technique used to measure the difference in the heat flow rate of a sample and a reference over a controlled temperature range. These measurements are used to create phase diagrams and gather thermoanalytical information such as transition temperatures and enthalpies.
History
DSC was developed in 1962 by Perkin-Elmer employees Emmett Watson and Michael O’Neill and was introduced at the Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. The equipment for this technique was available to purchase beginning in 1963 and has evolved to control temperatures more accurately and take measurements more precisely, ensuring repeatability and high sensitivity.
Theory
Phase Transitions
Phase transitions refer to the transformation from one state of matter to another. Solids, liquids, and gasses are changed to other states as the thermodynamic system is altered, thereby affecting the sample and its properties. Measuring these transitions and determining the properties of the sample is important in many industrial settings and can be used to ensure purity and determine composition (such as with polymer ratios). Phase diagrams (Figure $23$ ) can be used to clearly demonstrate the transitions in graphical form, helping visualize the transition points and different states as the thermodynamic system is changed.
Differential Thermal Analysis
Prior to DSC, differential thermal analysis (DTA) was used to gather information about transition states of materials. In DTA, the sample and reference are heated simultaneously with the same amount of heat and the temperature of each is monitored independently. The difference between the sample temperature and the reference temperature gives information about the exothermic or endothermic transition occurring in the sample. This strategy was used as the foundation for DSC, which seeks to measure the difference in energy needed to keep the temperatures the same instead of measuring the difference in temperature resulting from the same amount of energy.
Differential Scanning Calorimeter
Instead of measuring temperature changes as heat is applied as in DTA, DSC measures the amount of heat that is needed to increase the temperatures of the sample and reference across a temperature gradient. The sample and reference are kept at the same temperature as the temperature is ramped across the gradient, and the differing amounts of heat required to keep the temperatures synchronized are measured. As the sample undergoes phase transitions, more or less heat is needed, which allows for phase diagrams to be created from the data. Additionally, specific heat, glass transition temperature, crystallization temperature, melting temperature, and oxidative/thermal stability, among other properties, can be measured using DSC.
Applications
DSC is often used in industrial manufacturing to ensure sample purity and to confirm compositional analysis. It is also used in materials research, where it provides information about the properties and composition of unknown materials. DSC has also been used in the food and pharmaceutical industries, providing characterization and enabling the fine-tuning of certain properties. The stability of proteins and their folding/unfolding behavior can also be measured with DSC experiments.
Instrumentation
Equipment
The sample and reference cells (also known as pans), each enclosing their respective materials, are contained in an insulated adiabatic chamber (Figure $25$ ). The cells can be made of a variety of materials, such as aluminum, copper, gold and platinum; the choice is dictated by the necessary upper temperature limit. A variable heating element around each cell transfers heat to the sample, causing the temperature of both cells to rise in coordination with each other. A temperature monitor measures the temperatures of each cell and a microcontroller controls the variable heating elements and reports the differential power required for heating the sample versus the reference. A typical setup, including a computer for controlling software, is shown in Figure $26$.
Modes of Operations
With advancement in DSC equipment, several different modes of operations now exist that enhance the applications of DSC. Scanning mode typically refers to conventional DSC, which uses a linear increase or decrease in temperature. An example of an additional mode often found in newer DSC equipment is an isothermal scan mode, which keeps temperature constant while the differential power is measured. This allows for stability studies at constant temperatures, particularly useful in shelf life studies for pharmaceutical drugs.
Calibration
As with practically all laboratory equipment, calibration is required. Calibration substances, typically pure metals such as indium or lead, are chosen that have clearly defined transition states to ensure that the measured transitions correlate to the literature values.
Obtaining Measurements
Sample Preparation
Sample preparation mostly consists of determining the optimal weight to analyze. There needs to be enough of the sample to accurately represent the material, but the change in heat flow should typically be between 0.1 - 10 mW. The sample should be kept as thin as possible and cover as much of the base of the cell as possible. It is typically better to cut a slice of the sample rather than crush it into a thin layer. The correct reference material also needs to be determined in order to obtain useful data.
DSC Curves
DSC curves (e.g., Figure $27$ ) typically consist of heat flow plotted versus the temperature. These curves can be used to calculate the enthalpies of transitions, (ΔH), \ref{10} , by integrating the peak of the state transition, where K is the calorimetric constant and A is the area under the curve.
$\Delta H \ =\ KA \label{10}$
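The area A in \ref{10} can be obtained by numerically integrating the baseline-corrected peak. The following is a minimal sketch that assumes the signal is recorded against time, the baseline has already been subtracted, and the calorimetric constant K is unity; the synthetic peak is purely illustrative.

```python
import numpy as np

def transition_enthalpy(time_s, heat_flow_mW, K_cal=1.0):
    """Integrate a baseline-corrected DSC peak (equation 10, DeltaH = K*A).
    Trapezoidal integration of the heat flow (mW = mJ/s) over time gives
    the peak area in mJ; if the curve is recorded against temperature,
    divide by the heating rate (K/s) first."""
    area_mJ = np.sum(0.5 * (heat_flow_mW[1:] + heat_flow_mW[:-1]) * np.diff(time_s))
    return K_cal * area_mJ

# Synthetic Gaussian-shaped endotherm centered at 150 s
t = np.linspace(0, 300, 600)                      # s
hf = 5.0 * np.exp(-((t - 150.0) / 30.0) ** 2)     # mW, baseline already removed
print(f"DeltaH = {transition_enthalpy(t, hf):.0f} mJ")
```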
Sources of error
Common error sources apply, including user and balance errors and improper calibration. Incorrect choice of reference material and an improper quantity of sample are frequent errors. Additionally, contamination and the way the sample is loaded into the cell affect the DSC results.
DSC Characterization of Polymers
Differential scanning calorimetry (DSC), at the most fundamental level, is a thermal analysis technique used to track changes in the heat capacity of some substance. To identify this change in heat capacity, DSC measures heat flow as a function of temperature and time within a controlled atmosphere. The measurements provide a quantitative and qualitative look into the physical and chemical alterations of a substance related to endothermic or exothermic events.
The discussion done here will be focused on the analysis of polymers; therefore, it is important to have an understanding of polymeric properties and how heat capacity is measured within a polymer.
Overview of Polymeric Properties
A polymer is, essentially, a chemical compound whose molecular structure is a composition of many monomer units bonded together (Figure $28$ ). The physical properties of a polymer and, in turn, its thermal properties are determined by this very ordered arrangement of the various monomer units that compose a polymer. The ability to correctly and effectively interpret differential scanning calorimetry data for any one polymer stems from an understanding of a polymer’s composition. As such, some of the more essential dynamics of polymers and their structures are briefly addressed below.
An aspect of the ordered arrangement of a polymer is its degree of polymerization, or, more simply, the number of repeating units within a polymer chain. This degree of polymerization plays a role in determining the molecular weight of the polymer. The molecular weight of the polymer, in turn, plays a role in determining various thermal properties of the polymer such as the perceived melting temperature.
Related to the degree of polymerization is a polymer’s dispersity, i.e. the uniformity of size among the particles that compose a polymer. The more uniform a series of molecules, the more monodisperse the polymer; however, the more non-uniform a series of molecules, the more polydisperse the polymer. Increases in initial transition temperatures follow an increase in polydispersity. This increase is due to higher intermolecular forces and polymer flexibility in comparison to more uniform molecules.
In continuation with the study of a polymer's overall composition is the presence of cross-linking between chains. The ability for rotational motion within a polymer decreases as more chains become cross-linked, meaning initial transition temperatures will increase due to the greater energy needed to overcome this restriction. Likewise, if a polymer contains stiff functional groups, such as carbonyl groups, the flexibility of the polymer will drastically decrease, leading to higher transitional temperatures as more energy is required to overcome these restrictions. The same is true if the backbone of a polymer is composed of stiff units, like aromatic rings, as this also causes the flexibility of the polymer to decrease. However, if the backbone or internal structure of the polymer is composed of flexible groups, such as aliphatic chains, the packing efficiency decreases and the flexibility of the polymer increases; transitional temperatures will therefore be lower, as less energy is needed to induce motion in these more flexible polymers.
Lastly, the actual bond structure (i.e., single, double, triple) and chemical properties of the monomer units will affect the transitional temperatures. For example, molecules more predisposed towards strong intermolecular forces, such as molecules with greater dipole-dipole interactions, will require higher transitional temperatures to provide enough energy to break these interactions.
In terms of the relationship between heat capacity and polymers: heat capacity is understood to be the amount of energy a unit or system can hold before its temperature rises by one degree; further, in all polymers, there is an increase in heat capacity with an increase in temperature. This is because, as polymers are heated, the molecules of the polymer undergo greater levels of rotation and vibration which, in turn, contribute to an increase in the internal energy of the system and thus an increase in the heat capacity of the polymer.
In knowing the composition of a polymer, it becomes easier to not only pre-emptively hypothesize the results of any DSC analysis but also troubleshoot why DSC data does not seem to corroborate with the apparent properties of a polymer.
Note, too, that there are many variations in DSC techniques and types as they relate to characterization of polymers. These differences are discussed below.
Standard DSC (Heat Flux DSC)
The composition of a prototypical, unmodified DSC includes two pans. One is an empty reference plate and the other contains the polymer sample. Within the DSC system is also a thermoelectric disk. Calorimetric measurements are then taken by heating both the sample and empty reference plate at a controlled rate, say 10 °C/min, through the thermoelectric disk. A purge gas is admitted through an orifice in the system, which is preheated by circulation through a heating block before entering the system. Thermocouples within the thermoelectric disk then register the temperature difference between the two plates. Once a temperature difference between the two plates is measured, the DSC system will alter the applied heat to one of the pans so as to keep the temperature between the two pans constant. In Figure $29$ is a cross-section of a common heat flux DSC instrument.
The resulting plot is one in which the heat flow is understood to be a function of temperature and time. As such, the slope at any given point is proportional to the heat capacity of the sample. The plot as a whole, however, is representative of thermal events within the polymer. The orientation of peaks or stepwise movements within the plot, therefore, lends itself to interpretation as thermal events.
To interpret these events, it is important to define the thermodynamic system of the DSC instrument. For most heat flux systems, the thermodynamic system is understood to be only the sample. This means that when, for example, an exothermic event occurs, heat from the polymer is released to the outside environment and a positive change is measured on the plot. As such, all exothermic events will be positive shifts within the plot while all endothermic events will be negative shifts within the plot. However, this can be flipped within the DSC system, so be sure to pay attention to the orientation of your plot as “exo up” or “exo down.” See Figure $30$ for an example of a standard DSC plot of polymer poly(ethylene terephthalate) (PET). By understanding this relationship within the DSC system, the ability to interpret thermal events, such as the ones described below, becomes all the more approachable.
Heat Capacity (Cp)
As previously stated, a typical plot created via DSC will be a measure of heat flow vs temperature. If the polymer undergoes no thermal processes, the plot of heat flow vs temperature will have zero slope. If this is the case, then the heat capacity of the polymer is proportional to the distance between the zero-sloped line and the x-axis. However, in most instances, the heat capacity is measured from the slope of the resulting heat flow vs temperature plot. Note that any thermal alteration to a polymer will result in a change in the polymer's heat capacity; therefore, all DSC plots with a non-zero slope indicate some thermal event must have occurred.
However, it is also possible to directly measure the heat capacity of a polymer as it undergoes some phase change. To do so, a heat capacity vs temperature plot is to be created. In doing so it becomes easier to zero in on and analyze a weak thermal event in a reproducible manner. To measure heat capacity as a function of increasing temperature, it is necessary to divide all values of a standard DSC plot by the measured heating rate.
For example, say a polymer has undergone a subtle thermal event at a relatively low temperature. To confirm a thermal event is occurring, zero in on the temperature range the event was measured to have occurred at and create a heat capacity vs temperature plot. The thermal event becomes immediately identifiable by the presence of a change in the polymer’s heat capacity as shown in Figure $31$.
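Converting a heat-flow signal into an apparent heat capacity is a matter of dividing by the heating rate (and, for a specific heat, by the sample mass). The sketch below is a minimal illustration under those assumptions; the numerical values are hypothetical.

```python
import numpy as np

def specific_heat(heat_flow_mW, heating_rate_C_per_min, sample_mass_mg):
    """Apparent specific heat from a DSC heat-flow signal.
    Dividing the heat flow (mW = mJ/s) by the heating rate (converted to
    K/s) gives mJ/K; dividing by the sample mass (mg) gives J/(g K)."""
    rate_K_per_s = heating_rate_C_per_min / 60.0
    return (heat_flow_mW / rate_K_per_s) / sample_mass_mg

# Hypothetical: 10 mg sample, 10 °C/min scan, 2.5 mW differential heat flow
print(specific_heat(np.array([2.5]), 10.0, 10.0))   # ~1.5 J/(g K)
```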
Glass Transition Temperature (Tg)
As a polymer is continually heated within the DSC system, it may reach the glass transition: a temperature range over which a polymer undergoes a reversible transition between a brittle and a viscous state. The temperature at which this reversible transition can occur is understood to be the glass transition temperature (Tg); however, make note that the transition does not occur suddenly at one temperature but, instead, takes place slowly across a range of temperatures.
Once a polymer is heated to the glass transition temperature, it will enter a molten state. Upon cooling the polymer, it loses its elastic properties and instead becomes brittle, like glass, due to a decrease in chain mobility. Should the polymer continue to be heated above the glass transition temperature, it will become soft due to increased heat energy inducing different forms of transitional and segmental motion within the polymer, promoting chain mobility. This allows the polymer to be deformed or molded without breaking.
Upon reaching the glass transition range, the heat capacity of the polymer will change, typically becoming higher. In turn, this will produce a change in the DSC plot. The system will begin heating the sample pan at a different rate than the reference pan to accommodate this change in the polymer's heat capacity. Figure $32$ is an example of the glass transition as measured by DSC. The glass transition has been highlighted, and the glass transition temperature is understood to be the mid-point of the transitional range.
While the DSC instrument will capture a glass transition, the glass transition temperature cannot, in actuality, be exactly defined with a standard DSC. The glass transition is a property that is completely dependent on the extent that the polymer is heated or cooled. As such, the glass transition is dependent on the applied heating or cooling rate of the DSC system. Therefore, the glass transition of the same polymer can have different values when measured on separate occasions. For example, if the applied cooling rate is lower during a second trial, then the measured glass transition temperature will also be lower.
However, in having a general knowledge of the glass transition temperature, it becomes possible to hypothesize about the polymer's chain length and structure. For example, the chain length of a polymer will affect the number of van der Waals or chain entanglement interactions that occur. These interactions will in turn determine just how resistant the polymer is to increasing heat. Therefore, the temperature at which Tg occurs is correlated to the magnitude of chain interactions. In turn, if the glass transition of a polymer is consistently shown to occur quickly at lower temperatures, it may be possible to infer that the polymer has flexible functional groups that promote chain mobility.
Crystallization (Tc)
Should a polymer sample continue to be heated beyond the glass transition temperature range, it becomes possible to observe crystallization of the polymer sample. Crystallization is understood to be the process by which polymer chains form ordered arrangements with one another, thereby creating crystalline structures.
Essentially, before the glass transition range, the polymer does not have enough energy from the applied heat to induce mobility within the polymer chains; however, as heat is continually added, the polymer chains begin to have greater and greater mobility. The chains eventually undergo translational, rotational, and segmental motion as well as stretching, disentangling, and unfolding. Finally, a peak temperature is reached and enough heat energy has been applied to the polymer that the chains are mobile enough to move into very ordered parallel, linear arrangements. At this point, crystallization begins. The temperature at which crystallization begins is the crystallization temperature (Tc).
As the polymer undergoes crystalline arrangements, it will release heat since intramolecular bonding is occurring. Because heat is being released, the process is exothermic and the DSC system will lower the amount of heat being supplied to the sample plate in relation to the reference plate so as to maintain a constant temperature between the two plates. As a result, a positive amount of energy is released to the environment and an increase in heat flow is measured in an “exo up” DSC system, as seen in Figure $33$. The maximum point on the curve is known to be the Tc of the polymer while the area under the curve is the latent energy of crystallization, i.e., the change in the heat content of the system associated with the amount of heat energy released by the polymer as it undergoes crystallization.
The degree to which crystallization can be measured by the DSC is dependent not only on the measured conditions but also on the polymer itself. For example, in the case of a polymer with very random ordering, i.e., an amorphous polymer, crystallization will not even occur.
In knowing the crystallization temperature of the polymer, it becomes possible to hypothesize on the polymer’s chain structure, average molecular weight, tensile strength, impact strength, resistance to solvents, etc. For example, if the polymer tends to have a lower crystallization temperature and a small latent heat of crystallization, it becomes possible to assume that the polymer may already have a chain structure that is highly linear since not much energy is needed to induce linear crystalline arrangements.
In turn, in obtaining crystallization data via DSC, it becomes possible to determine the percentage of crystalline structures within the polymer, or, the degree of crystallinity. To do so, compare the latent heat of crystallization, as determined by the area under the crystallization curve, to the latent heat of a standard sample of the same polymer with a known crystallization degree.
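A minimal sketch of that comparison is shown below; the latent heat of the sample and the reference value for a fully crystalline standard are hypothetical numbers used only to illustrate the ratio.

```python
def degree_of_crystallinity(dH_sample_J_per_g, dH_reference_J_per_g):
    """Percent crystallinity from the ratio of the measured latent heat to
    that of a reference sample of the same polymer with a known (here,
    assumed 100%) degree of crystallinity."""
    return 100.0 * dH_sample_J_per_g / dH_reference_J_per_g

# Hypothetical: measured 58 J/g against a 140 J/g fully crystalline reference
print(f"{degree_of_crystallinity(58.0, 140.0):.0f}% crystalline")
```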
Knowledge of the polymer sample’s degree of crystallinity also provides an avenue for hypothesizing the composition of the polymer. For example, having a very high degree of crystallinity may suggest that the polymer contains small, brittle molecules that are very ordered.
Melting behavior (Tm)
As the heat being applied pushes the temperature of the system beyond Tc, the polymer begins to approach a thermal transition associated with melting. In the melting phase, the heat applied provides enough energy to, now, break apart the intramolecular bonds holding together the crystalline structure, undoing the polymer chains’ ordered arrangements. As this occurs, the temperature of the sample plate does not change as the applied heat is no longer being used to raise the temperature but instead to break apart the ordered arrangements.
As the sample melts, the temperature slowly increases as less and less of the applied heat is needed to break apart crystalline structures. Once all the polymer chains in the sample are able to move around freely, the temperature of the sample is said to reach the melting temperature (Tm). Upon reaching the melting temperature, the applied heat begins exclusively raising the temperature of the sample; however, the heat capacity of the polymer will have increased upon transitioning from the solid crystalline phase to the melt phase, meaning the temperature will increase more slowly than before.
Since, during the endothermic melting process of the polymer, most of the applied heat is being absorbed by the polymer, the DSC system must substantially increase the amount of heat applied to the sample plate so as to maintain the temperature between the sample plate and the reference plate. Once the melting temperature is reached, however, the applied heat of the sample plate decreases to match the applied heat of the reference plate. As such, since heat is being absorbed from the environment, the resulting “exo up” DSC plot will have a negative curve as seen in Figure $34$ where the lowest point is understood to be the melt phase temperature. The area under the curve is, in turn, understood to be the latent heat of melting, or, more precisely, the change in the heat content of the system associated with the amount of heat energy absorbed by the polymer to undergo melting.
Once again, in knowing the melting range of the polymer, insight can be gained on the polymer’s average molecular weight, composition, and other properties. For example, the greater the molecular weight or the stronger the intramolecular attraction between functional groups within crosslinked polymer chains, the more heat energy that will be needed to induce melting in the polymer.
Modulated DSC: an Overview
While standard DSC is useful in characterization of polymers across a broad temperature range in a relatively quick manner and has user-friendly software, it still has a series of limitations with the main limitation being that it is highly operator dependent. These limitations can, at times, reduce the accuracy of analysis regarding the measurements of Tg, Tc and Tm, as described in the previous section. For example, when using a synthesized polymer that is composed of multiple blends of different monomer compounds, it can become difficult to interpret the various transitions of the polymer due to overlap. In turn, some transitional events are completely dependent on what the user decides to input for the heating or cooling rate.
To resolve some of the limitations associated with standard DSC, there exists modulated DSC (MDSC). MDSC not only uses a linear heating rate like standard DSC, but also uses a sinusoidal, or modulated, heating rate. In doing so, it is as though the MDSC is performing two simultaneous experiments on the sample.
What is meant by a modulated heating rate is that the MDSC system will vary the heating rate of the sample over a small temperature range across some modulation period. However, while the rate of temperature change is sinusoidal, the temperature is still ultimately increasing across time, as indicated in Figure $35$. In turn, Figure $36$ shows the sinusoidal heating rate as a function of time overlaying the linear heating rate of standard DSC. The linear heating rate of DSC is 2 °C/min, and the modulated heating rate of MDSC varies between roughly 0.1 °C/min and 3.8 °C/min over each modulation period.
By providing two heating rates, a linear and a modulated one, MDSC is able to measure more accurately how heating rates affect the rate of heat flow within a polymer sample. As such, MDSC offers a means to eliminate the applied heating rate aspects of operator dependency.
In turn, the MDSC instrument also performs mathematical processes that separate the standard DSC plot into reversing and non-reversing components. The reversing signal is representative of properties that respond to temperature modulation and heating rate, such as the glass transition and melting. On the other hand, the non-reversing component is representative of kinetic, time-dependent processes such as decomposition, crystallization, and curing. Figure $37$ provides an example of such a plot using PET.
The mathematics behind MDSC is most simply represented by the formula dH/dt = Cp(dT/dt) + f(T,t), where dH/dt is the total change in heat flow that would be derived from a standard DSC, Cp is the heat capacity derived from the modulated heating rate, dT/dt represents both the linear and modulated heating rates, and f(T,t) represents kinetic, time-dependent events, i.e., the non-reversing signal. Combining Cp and dT/dt as Cp(dT/dt) produces the reversing signal. The non-reversing signal is, therefore, found by simply subtracting the reversing signal from the total heat flow signal, i.e., f(T,t) = dH/dt - Cp(dT/dt).
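The deconvolution can be expressed in a few lines of code. The sketch below is a simplified illustration of the relation dH/dt = Cp(dT/dt) + f(T,t) with made-up arrays in consistent units; a real MDSC instrument derives Cp from the modulated response rather than taking it as an input.

```python
import numpy as np

def split_mdsc(total_heat_flow, Cp, heating_rate):
    """Separate a heat-flow signal following dH/dt = Cp*(dT/dt) + f(T,t):
    the reversing signal is Cp*(dT/dt) and the non-reversing signal is
    whatever remains after subtracting it from the total heat flow."""
    reversing = Cp * heating_rate
    non_reversing = total_heat_flow - reversing
    return reversing, non_reversing

# Hypothetical arrays in consistent units (e.g., mW, mJ/K, K/s)
total = np.array([4.0, 4.5, 6.0])
Cp = np.array([1.2, 1.3, 1.3])
rate = np.array([3.0, 3.0, 3.0])
rev, nonrev = split_mdsc(total, Cp, rate)
print(rev, nonrev)
```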
As such, MDSC is capable of independently measuring not only total heat flow but also the heating rate and kinetic components of said heat flow, meaning MDSC can break down complex or small transitions into their many singular components with improved sensitivity, allowing for more accurate analysis. Below are some cases in which MDSC proved to be useful for analytics.
Modulated DSC: Advanced Analysis of Tg
Using a standard DSC, it can be difficult to ascertain the accuracy of measured transitions that are relatively weak, such as Tg, since these transitions can be overlapped by stronger, kinetic transitions. This is quite a problem, as missing a weak transition could cause a polymer blend to be misinterpreted as a uniform sample. To resolve this, it is useful to split the plot into its reversing component, i.e., the portion which will contain heat-dependent properties like Tg, and its non-reversing, kinetic component.
For example, shown in the Figure $38$ is the MDSC of an unknown polymer blend which, upon analysis, is composed of PET, amorphous polycarbonate (PC), and a high density polyethylene (HDPE). Looking at the reversing signal, the Tg of polycarbonate is around 140 °C and the Tg of PET is around 75 °C. As seen in the total heat flow signal, which is representative of a standard DSC plot, the Tg of PC would have been more difficult to analyze and, as such, may have been incorrectly analyzed.
Modulated DSC: Advanced Analysis of Tm
Further, there are instances in which a polymer or, more likely, a polymer blend will produce two different sets of crystalline structures. With two crystalline structures, the resulting melting peak will be poorly defined and, thus, difficult to analyze via a standard DSC.
Using MDSC, however, it becomes possible to isolate the reversing signal, which will contain the melting curve. Through isolation of the reversing signal, it becomes clear that there is an overlapping of two melting peaks such that the MDSC system reveals two melting points. For example, as seen in Figure $39$ the analysis of a poly(lactic acid) polymer (PLA) with 10 wt% of a plasticizer (P600) reveals two melting peaks in the reversing signal not visible in the total heat flow. The presence of two melting peaks could, in turn, suggest the formation of two crystalline structures within the polymer sample. Other interpretations are, of course, possible via analyzing the reversing signal.
Modulated DSC: Analysis of Polymer Aging
In many instances, polymers may be left to sit in refrigeration or stored at temperatures below their respective glass transition temperatures. By leaving a polymer under such conditions, the polymer is situated to undergo physical aging. Typically, the more flexible the chains of a polymer are, the more likely they will undergo time-related changes in storage. That is to say, the polymer will begin to undergo molecular relaxation such that the chains will form very dense regions while they conglomerate together. As the polymer ages, it will tend towards embrittlement and develop internal stresses. As such, it is very important to be aware if the polymer being studied has gone through aging while in storage.
If a polymer has undergone physical aging, it will develop a new endothermic peak when undergoing thermal analysis. This occurs because, as the polymer is being heated, the polymer chains absorb heat, increase mobility, and move to a more relaxed condition as time goes on, transforming back to pre-aged conditions. In turn an endothermic shift, in association with this heat absorbance, will occur just before the Tg step change. This peak is known as the enthalpy of relaxation (ΔHR).
Since the Tg and ΔHR are relatively close to one another energy-wise, they will tend to overlap, making it difficult to distinguish the two from one another. However, ΔHR is a kinetics dependent thermal shift while Tg is a heating dependent thermal shift; therefore, the two can be separated into a non-reversing and reversing plot via MDSC and be independently analyzed.
Figure $40$ is an example of an MDSC plot of a polymer blend of PET, PC, and HDPE in which the enthalpy of relaxation of PET is visible in the dashed non reversing signal around 75 °C. In turn, within the reversing signal, the glass transition of PET is visible around 75 °C as well.
Quasi-isothermal DSC
While MDSC is a strong step in the direction of eliminating operator error, it is possible to achieve an even higher level of precision and accuracy when analyzing a polymer. To do so, the DSC system must expose the sample to quasi-isothermal conditions. In creating quasi-isothermal conditions, the polymer sample is held at a specific temperature for extended periods of time with no applied heating rate. With the heating rate being effectively zero, the conditions are isothermal. The temperature of the sample may change, but the change will be derived solely from a kinetic transition that has occurred within the polymer. Once a kinetic transition has occurred within the polymer, it will absorb or release some heat, which will raise or lower the temperature of the system without the application of any external heat.
In creating these conditions, issues created by the variation of the applied heating rate by operators is no longer a large concern. Further, in subjecting a polymer sample to quasi-isothermal conditions, it becomes possible to get improved and more accurate measurements of heat dependent thermal events, such as events typically found in the reversing signal, as a function of time.
Quasi-isothermal DSC: Improved Glass Transition
As mentioned earlier, the glass transition is volatile in the sense that it is highly dependent on the heating and cooling rate of the DSC system as applied by the operator. A minor change in the heating or cooling rate between two experimental measurements of the same polymer sample can result in fairly different measured glass transitions, even though the sample itself has not been altered.
Remember also that the glass transition is a measure of the changing Cp of the polymer sample as it crosses certain heat energy thresholds. Therefore, it should be possible to capture a more accurate and precise glass transition under quasi-isothermal conditions, since these conditions produce highly accurate Cp measurements as a function of time.
By applying quasi-isothermal conditions, the polymer’s Cp can be measured in fixed-temperature steps within the apparent glass transition range as measured via standard DSC. In measuring the polymer across a set of quasi-isothermal steps, it becomes possible to obtain changing Cp rates that, in turn, would be nearly reflective of an exact glass transition range for a polymer.
In Figure $41$ the glass transition of polystyrene is shown to vary depending on the heating or cooling rate of the DSC; however, applying quasi-isothermal conditions and measuring the heat capacity in temperature steps produces a very accurate glass transition that can be used as a standard for comparison.
Low-Temperature Specific Heat Measurements for Magnetic Materials
Magnetic materials attract the attention of researchers and engineers because of their potential for application in magnetic and electronic devices such as navigational equipment, computers, and even high-speed transportation. Perhaps more valuable still, however, is the insight they provide into fundamental physics. Magnetic materials provide an opportunity for studying exotic quantum mechanical phenomena such as quantum criticality, superconductivity, and heavy fermionic behavior intrinsic to these materials. A battery of characterization techniques exists for measuring the physical properties of these materials, among them a method for measuring the specific heat of a material throughout a large range of temperatures. Specific heat measurements are an important means of determining the transition temperature of magnetic materials, i.e., the temperature below which magnetic ordering occurs. Additionally, the functionality of specific heat with temperature is characteristic of the behavior of electrons within the material and can be used to classify materials into different categories.
Temperature-dependence of Specific Heat
The molar specific heat of a material is defined as the amount of energy required to raise the temperature of 1 mole of the material by 1 K. This value is calculated theoretically by taking the partial derivative of the internal energy with respect to temperature. This value is not a constant, although it is typically treated as one in introductory science courses: it depends on the temperature of the material. Moreover, the temperature dependence itself also changes based on the type of material. There are three broad families of solid state materials defined by their specific heat behaviors. Each of these families is discussed in the following sections.
Insulators
Insulators have specific heat with the simplest dependence on temperature. According to the Debye theory of specific heat, which models materials as phonons (lattice vibrational modes) in a potential well, the internal energy of an insulating system is given by \ref{11} , where TD is the Debye temperature, defined as the temperature associated with the energy of the highest allowed phonon mode of the material. In the limit that T<<TD, the energy expression reduces to \ref{12} .
$U\ =\frac{9Nk_{B}T^{4} }{T^{3}_{D}} \int ^{T_{D}/T}_{0} \frac{x^{3}}{e^{x}-1} dx \label{11}$
$U\ =\frac{3 \pi ^{4} N k_{B} T^{4}}{5T^{3}_{D} } \label{12}$
For most magnetic materials, the Debye temperature is several orders of magnitude higher than the temperature at which magnetic ordering occurs, making this a valid approximation of the internal energy. The specific heat derived from this expression is given by \ref{13}
$C_{\nu }\ =\frac{\partial U}{\partial T} =\frac{12 \pi ^{4} Nk_{B} }{5T^{3}_{D}} T^{3} = \beta T^{3} \label{13}$
The behavior described by the Debye theory accurately matches experimental measurements of specific heat for insulators at low temperatures. Normal insulators, then, have a T3 dependence in the specific heat that is dominated by contributions from phonon excitations. Essentially all energy absorbed by insulating materials is stored in the vibrational modes of a solid lattice. At very low temperatures this contribution is very small, and insulators display a high sensitivity to changes in heat energy.
Metals: Fermi Liquids
While the Debye theory of specific heat accurately describes the behavior of insulators, it does not adequately describe the temperature dependence of the specific heat for metallic materials at low temperatures, where contributions from delocalized conduction electrons become significant. The predictions made by the Debye model are corrected in the Einstein-Debye model of specific heat, where an additional term describing the contributions from the electrons (as modeled by a free electron gas) is added to the phonon contribution. The internal energy of a free electron gas is given by \ref{14} , where g(Ef) is the density of states at the Fermi level, which is material dependent. The partial derivative of this expression with respect to temperature yields the specific heat of the electron gas, \ref{15} .
$U = \frac{\pi ^{2}}{6}(k_{B}T)^{2}g(E_{f})+U_{0} \label{14}$
$C_{\nu }= \frac{ \pi^{2}} {3} k^{2}_{B}g(E_{f})T= \gamma T \label{15}$
Combining this expression with the phonon contribution to specific heat gives the expression predicted by the Einstein-Debye model, \ref{16} .
$C_{\nu }= \frac{\pi^{2}}{3} k^{2}_{B} g(E_{f})T\ + \frac{12 \pi^{4}Nk_{B}}{5T^{3}_{D}}T^{3} = \gamma T\ +\ \beta T^{3} \label{16}$
This is the general expression for the specific heat of a Fermi liquid—a variation on the Fermi gas in which fermions (typically electrons) are allowed to interact with each other and form quasiparticles—weakly bound and often short-lived composites of more fundamental particles such as electron-hole pairs or the Cooper pairs of BCS superconductor theory.
Most metallic materials follow this behavior and are thus classified as Fermi liquids. This is easily confirmed by measuring the heat capacity as a function of temperature and linearizing the results by plotting C/T vs. T2. The slope of this graph equals the coefficient β, and the y-intercept is equal to γ. The ability to obtain these coefficients is important for gaining understanding of some unique physical phenomena. For example, the compound YbRh2Si2 is a heavy fermionic material, i.e., a material with charge carriers that have an "effective" mass much greater than the normal mass of an electron. The increased mass is due to coupling of magnetic moments between conduction electrons and localized magnetic ions. The coefficient γ is related to the density of states at the Fermi level, which is dependent on the carrier mass. Determination of this coefficient via specific heat measurements provides a way to determine the effective carrier mass and the coupling strength of the quasiparticles.
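A linearized fit of \ref{16} is straightforward to code. The sketch below uses synthetic data generated with known γ and β values purely for illustration.

```python
import numpy as np
from scipy.stats import linregress

def fermi_liquid_fit(T_K, C_J_per_molK):
    """Linearize C = gamma*T + beta*T^3 by plotting C/T against T^2
    (equation 16): the intercept of the fit is gamma and the slope is beta."""
    fit = linregress(T_K ** 2, C_J_per_molK / T_K)
    return fit.intercept, fit.slope   # gamma, beta

# Synthetic low-temperature data with gamma = 5 mJ/(mol K^2), beta = 0.4 mJ/(mol K^4)
T = np.linspace(1.0, 10.0, 30)
C = 5.0e-3 * T + 0.4e-3 * T ** 3
gamma, beta = fermi_liquid_fit(T, C)
print(f"gamma = {gamma * 1e3:.2f} mJ/(mol K^2), beta = {beta * 1e3:.3f} mJ/(mol K^4)")
```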
Additionally, knowledge of Fermi-liquid behavior provides insight for application development. The temperature dependence of the specific heat shows that the phonon contribution dominates at higher temperatures, where the behavior of metals and insulators is very similar. At low temperatures, the electronic term is dominant, and metals can absorb more heat without a significant change in temperature. As will be discussed briefly later, this property of metals is utilized in low-temperature refrigeration systems for heat storage at low temperatures.
Metals: non-Fermi liquids
While most metals fall under the category of Fermi liquids, there are some that show a different dependence on temperature. Naturally, these are classified as non-Fermi liquids. Often, deviation from Fermi-liquid behavior is an indicator of some of the interesting physical phenomena that currently garner the attention of many condensed matter researchers. For instance, non-Fermi liquid behavior has been observed near quantum critical points. Classically, fluctuations in physical properties such as magnetic susceptibility and resistivity occur near critical points, which include phase changes or magnetic ordering transitions. Normally, these fluctuations are suppressed at low temperatures: at absolute zero, classical systems collapse into the lowest energy state and remain stable. However, when the critical transition temperature is lowered to absolute zero by the application of pressure, doping, or magnetic field, the fluctuations are enhanced as the temperature approaches absolute zero, propagating throughout the whole of the material. As this is not classically allowed, this behavior indicates a quantum mechanical effect at play that is currently not well understood. The transition point is then called a quantum critical point. Non-Fermi liquid behavior, as identified by deviations from the expected specific heat, is therefore used to identify materials that can provide an experimental basis for development of a theory that describes the physics of quantum criticality.
Determination of magnetic transition temperatures via specific heat measurements
While analysis of the temperature dependence of specific heat is a vital tool for studying the strange physical behaviors of quantum mechanics in solid state materials, these are studied by only a small subsection of the physics community. The utility of specific heat measurements is not limited to a few niche subjects, however. Possibly the most important use for specific heat measurements is the determination of critical transition temperatures. For any sort of physical state transition, such as phase transitions, magnetic ordering, or transitions to superconducting states, a sharp increase in the specific heat occurs during the transition. This increase in specific heat is the reason why, for example, water does not change temperature as it changes from a liquid to a solid. These increases are quite obvious in plots of the specific heat vs. temperature, as seen in Figure $42$. These transition-associated peaks are called Schottky anomalies, as normal specific heat behavior is not followed near the transition temperature.
For the purposes of this chapter, the following sections will focus on specific heat measurements as they relate to magnetic ordering transitions and will describe the practical aspects of measuring the specific heat of these materials.
A practical guide to low-temperature specific heat measurements
The thermal relaxation method of measurement
Specific heat is measured using a calorimeter. The design of basic calorimeters for use over a short range of temperatures is relatively simple. They consist of a sample with a known mass and an unknown specific heat, an energy source which provides heat energy to the sample, a heat reservoir (of known mass and specific heat) that absorbs heat from the sample, insulation to provide adiabatic conditions inside the calorimeter, and probes for measuring the temperature of the sample and the reservoir. The sample is heated with a pulse to a temperature higher than the heat reservoir, which decreases as energy is absorbed by the reservoir until a thermal equilibrium is established. The total energy change is calculated using the specific heat and temperature change of the reservoir. The specific heat of the sample is calculated by dividing the total energy change by the product of the mass of the sample and the temperature change of the sample.
However, this method of measurement produces an average value of the specific heat over the range of the change in temperature of the sample, and therefore, is insufficient for producing accurate measurements of the specific heat as a function of temperature. The solution, then, is to minimize the temperature change by reducing the amount of heat added to the system; yet, this presents another obstacle to making measurement as, in general, the temperature change of the reservoir is much smaller than that of the sample. If the change in temperature of the sample is minimized, the temperature change of reservoir becomes too small to measure with precision. A more direct method of measurement, then, seems to be required.
Fortunately, such a method exists: it is known as the thermal relaxation method. This method involves measurement of the specific heat without the need for precise knowledge of temperature changes in the reservoir. In this method, solid samples are affixed to a platform. Both the specific heat of the sample and the platform itself contribute to the measured specific heat; therefore, the contribution from the platform must be subtracted. This contribution is determined by measuring the specific heat without a sample present. Both the sample and the platform are in thermal contact with a heat reservoir at low temperature as depicted in Figure $43$.
A heat pulse is delivered to the sample to produce a minimal increase in the temperature of the sample. The temperature is measured vs. time as it decays back to the temperature of the reservoir, as shown in Figure $44$.
The temperature of the sample decays according to \ref{17} , where T0 is the temperature of the heat reservoir, and ΔT is the temperature difference between the initial sample temperature and the reservoir temperature. The decay time constant τ is directly related to the specific heat of the sample by \ref{18} , where K is the thermal conductance of the thermal link between the sample and the heat reservoir. In order for this to be valid, however, the thermal conductance must be sufficiently large that the energy transfer from the heated sample to the reservoir can be treated as a single process. If the thermal conduction is poor, a two-τ behavior arises corresponding to two separate processes with different time constants: slow heat transfer from the sample to the platform, and fast transfer from the platform to the reservoir. Figure $45$ shows a relaxation curve in which the two-τ behavior plays a significant role.
$T = \Delta T e^{-t/ \tau}\ +\ T_{0} \label{17}$
$\tau \ =\ C_{p}/K \label{18}$
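Extracting Cp from a measured decay amounts to fitting \ref{17} for τ and multiplying by the known conductance of the link, per \ref{18} . The sketch below is a minimal illustration using a synthetic decay curve; the pulse size, time constant, and conductance value are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, dT, tau, T0):
    """Single-tau decay of equation 17: T(t) = dT*exp(-t/tau) + T0."""
    return dT * np.exp(-t / tau) + T0

def specific_heat_from_decay(t_s, T_K, K_link_W_per_K):
    """Fit the decay curve for tau and return C_p = tau * K (equation 18)."""
    p0 = (T_K[0] - T_K[-1], t_s[-1] / 3.0, T_K[-1])   # rough initial guesses
    (dT, tau, T0), _ = curve_fit(relaxation, t_s, T_K, p0=p0)
    return tau * K_link_W_per_K, tau

# Synthetic decay: reservoir at 2.000 K, 20 mK pulse, tau = 5 s, K = 1e-7 W/K
t = np.linspace(0.0, 30.0, 200)
T = 0.020 * np.exp(-t / 5.0) + 2.000
Cp, tau = specific_heat_from_decay(t, T, 1.0e-7)
print(f"tau = {tau:.2f} s, C_p = {Cp:.2e} J/K")
```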
The two-τ effect is generally undesirable for making measurements. It can be avoided by reducing the thermal conductance between the sample and the platform, effectively making the contribution from the heat transfer from the sample to the platform insignificant compared to the transfer from the platform to the reservoir; however, if the conductance between the sample and the platform is too low, the time required to reach thermal equilibrium becomes excessively long, translating into very long measurement times. It is necessary, then, to optimize the conductance to compensate for both of these issues. This essentially places a limitation on the temperature range over which these effects are insignificant.
In order to measure at different temperatures, the temperature of the heat reservoir is increased stepwise from the lowest temperature until the desired temperature range is covered. At each step, the temperature is allowed to equilibrate, and a data point is measured.
Instrumentation
Thermal relaxation calorimeters use advanced technology to make precise measurements of the specific heat using components made of highly specialized materials. For example, the sample platform is made of synthetic sapphire which is used as a standard material, the grease which is applied to the sample to provide even thermal contact with the platform is a special hydrocarbon-based material which can withstand millikelvin temperatures without creeping, cracking, or releasing vapor, and the resistance thermometers used for ultralow temperatures are often made of treated graphite or germanium. The culmination of years of materials science research and careful engineering has produced instrumentation with the capability for precise measurements from temperatures down to the millikelvin level. There are four main systems that function to provide the proper conditions for measurement: the reservoir temperature control, the sample temperature control, the magnetic field control, and the pressure control system. The essential components of these systems will be discussed in more detail in the following sections with special emphasis on the cooling systems that allow these extreme low temperatures to be achieved.
Cooling systems
The first of these is responsible for maintaining the low baseline temperature to which the sample temperature relaxes. This is typically accomplished with the use of liquid helium cryostats or, in more recent years, so-called “cryogen-free” pulse tube coolers.
A cryostat is simply a bath of cryogenic fluid that is kept in thermal contact with the sample. The fluid bath may be static or may be pumped through a circulation system for better cooling. The cryostat must also be thermally insulated from the external environment in order to maintain low temperatures. Insulation is provided by a metallic vacuum dewar: the vacuum virtually eliminates conductive or convective heat transfer from the environment, and the reflective metallic outer sleeve acts as a radiation shield. For the low temperatures required to observe some magnetic transitions, liquid helium is generally required. 4He liquefies at 4.2 K, and the rarer (and much more expensive) isotope, 3He, liquefies at 3.2 K. For temperatures lower than those attainable with a simple liquid bath, modern instruments employ evaporative attachments such as a 1-K pot, a 3He refrigerator, or a dilution refrigerator. The 1-K pot is so named because it can achieve temperatures down to 1 K. It consists of a small vessel filled with liquid 4He under reduced pressure. Heat is absorbed as the liquid evaporates and is carried away by the vapor. The 3He refrigerator utilizes a 1-K pot for liquefaction of 3He, and evaporation of the 3He then provides cooling to the sample. 3He refrigerators can provide temperatures as low as 200 mK. The dilution refrigerator works on a similar principle, however the working fluid is a mixture of 3He and 4He. Phase separation of the 3He from the mixture provides further heat absorption as 3He atoms "evaporate" across the phase boundary into the dilute phase. Dilution refrigerators can achieve temperatures as low as 0.002 K (That’s cold!). Evaporative refrigerators work only on a small area in thermal contact with the sample, rather than delivering cooling power to the entire volume of the cryostat bath.
Cryostat baths provide very high cooling power for very efficient cooling; however, they come with a major drawback: the cost of helium is prohibitively high. The helium vapor that boils off as it provides cooling to the sample must leave the system in order to carry the heat away and must therefore be replaced. Even when the instrument is not in use, there is some loss of helium due to the imperfect nature of the insulating dewars. In order to get the most use out of the helium, then, cryostat systems must always be in use. In addition, rather than allowing expensive helium to simply escape, recovery systems for helium exhaust must be installed in order to operate in a cost-effective manner, though these systems are not 100% efficient, and the cost of operation and maintenance of recovery systems is not small either. “Cryogen-free” coolers provide an alternative to cryostats in order to avoid the costs associated with helium usage and recovery.
Figure $46$ shows a Gifford-McMahon type pulse tube—one example of the cryogen-free coolers.
In this type of cooler, helium gas is driven through the regenerator by a compressor. As a small volume element of the gas passes through the regenerator, it drops in temperature as it deposits heat into the regenerator. The regenerator must have a high specific heat in order to effectively absorb energy from the helium gas. For higher-temperature pulse tube coolers, the regenerator is often made of copper mesh; however, for very low temperatures, helium has a higher specific heat than most metals. Regenerators for this temperature range are often made of porous rare earth ceramics with magnetic transitions in the low temperature range. The increase in specific heat near the Schottky anomaly for these materials provides the necessary capacity for heat absorption. As the gas enters the tube at a temperature TL (see Figure $46$ ) it is compressed, raising the temperature in accordance with the ideal gas law. At this point, the gas is at a temperature higher than TH and excess heat is exhausted through the heat exchanger marked X3 until the temperature is in equilibrium with TH. When the rotary valve in the compressor turns, the expansion cycle begins, and the gas cools as it expands adiabatically to a temperature below TL. It then absorbs heat from the sample through the heat exchanger X2. This step provides the cooling power in pulse tube coolers. Afterward, it travels back through the regenerator at a cold temperature, reabsorbs the heat that was initially stored during compression, and regains its original temperature through the heat exchanger X1. Figure $47$ illustrates the temperature cycle experienced by a volume element of the working gas as it moves through the pulse tube.
Pulse tube coolers are not truly “cryogen-free” as they are advertised, but they are preferable to cryostats because there is no net loss of the helium in the system. However, pulse tubes are not a perfect solution. They have very low efficiency over large changes in temperature and at very low temperatures as given by \ref{19} .
$\zeta \ =\ 1\ -\ \frac{\Delta T}{T_{H}} \label{19}$
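Evaluating \ref{19} for a hypothetical cooler that rejects heat at room temperature makes the point numerically; the temperatures below are chosen only for illustration.

```python
# Efficiency from eq. 19: zeta = 1 - dT/T_H (illustrative numbers only)
T_H = 300.0                      # heat-rejection (room) temperature, K
for T_L in (250.0, 77.0, 4.0):   # progressively colder target temperatures
    dT = T_H - T_L
    zeta = 1 - dT / T_H
    print(f"T_L = {T_L:5.1f} K  ->  zeta = {zeta:.3f}")
```

For a 4 K target the efficiency works out to only about 1%, which is why large electrical inputs and staged arrangements are needed.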
As a result, pulse tube coolers consume a lot of electricity to provide the necessary cooling and may take a long time to achieve the desired temperature. Over large temperature ranges such as the 4 – 300 K range typically used in specific heat measurements, pulse tubes can be used in stages, with one providing pre-cooling for the next, to increase the cooling power and provide a shorter cooling time, though this tends to increase the energy consumption. The cost of running a pulse tube system is still generally less than that of a cryostat, however, and unlike cryostats, pulse tube systems do not have to be used constantly in order to remain cost-effective.
Sample Conditions
While the cooling system works more or less independently, the other systems—the sample temperature control, the magnetic field control, and the pressure control systems—work together to create the proper conditions for measurement of the sample. The sample temperature control system provides the heat pulse used to increase the temperature of the sample before relaxation occurs. The components of this system are incorporated into the sapphire sample platform as shown in Figure $48$.
Figure $48$ The sample platform with important components of the sample temperature control system. Reused with permission from R. J. Schutz. Rev. Sci. Instrum., 1974, 45, 548. Copyright: AIP publishing.
The sample is affixed to the platform over the thermometer with a small amount of grease, which also provides thermal conductance between the heating element and the sample. The heat pulse is delivered to the sample by running a small current pulse through the heating element, and the response is measured by a resistance thermometer. The resistance thermometer is made of specially-treated carbon or germanium, which have standardized resistances for given temperatures. The thermometer is calibrated to these standards to provide accurate temperature readings throughout the range of temperatures used for specific heat measurements. A conductive wire provides the thermal connection between the sample platform and the heat reservoir. The conductance of this link must be chosen so that relaxation from the platform to the reservoir, rather than transfer from the sample to the platform, is the rate-limiting process; this prevents significant two-τ behavior while keeping the relaxation time manageable. Sample preparation is also governed by the temperature control system. The sample must be in good thermal contact with the platform; therefore, a sample with a flat face is preferable. The volume of the sample cannot be too large, either, or the heating element will not be able to heat the sample uniformly. A temperature gradient throughout the sample skews the measurement of the temperature made by the thermometer. Moreover, it is impossible to assign a 1:1 correspondence between the specific heat and temperature if the specific heat values do not correspond to a singular temperature. For the best measurements, heat capacity samples must be cut from large single-crystals or polycrystalline solids using a hard diamond saw to prevent contamination of the sample with foreign material.
The magnetic field control system provides magnetic fields ranging from 0 to >15 T. As was mentioned previously, strong magnetic fields can suppress the transition to magnetically ordered states to lower temperatures, which is important for studying quantum critical behaviors. The magnetic field control consists of a high-current solenoid and regulating electronics to ensure stable current and field outputs.
The pressure system controls the pressure in the sample chamber, which is physically separated from the bath by a wall that allows thermal transfer only. While the sample is installed in the chamber, the vacuum system must be able to maintain low pressures (~$10^{-5}$ torr) to ensure that no gas is present. If the vacuum system fails, water from any air present in the system can condense inside the sample chamber, including on the sample platform, which alters the thermal conductance and throws off the measurement of the specific heat. Moreover, as the temperature in the chamber drops, water can freeze and expand in the chamber, which can cause significant damage to the instrument itself.
Conclusions
Through the application of specialized materials and technology, measurements of the specific heat have become both highly accurate and very precise. As our measurement capabilities expand toward the 0 K limit, exciting prospects arise for completion of our understanding, discovery of new phenomena, and development of important applications of novel magnetic materials. Specific heat measurements, then, are a vital tool for studying magnetic materials, whether as a means of exploring the strange phenomena of quantum physics such as quantum criticality or heavy fermions, or simply as a routine method of characterizing physical transitions between different states.
Introduction
Permittivity (in the framework of electromagnetics) is a fundamental material property that describes how a material will affect, and be affected by, a time-varying electromagnetic field. The parameters of permittivity are often treated as a complex function of the applied electromagnetic field as complex numbers allow for the expression of magnitude and phase. The fundamental equation for the complex permittivity of a substance (εs) is given by \ref{1} , where ε’ and ε’’ are the real and imaginary components, respectively, ω is the radial frequency (rad/s) and can be easily converted to frequency (Hertz, Hz) using \ref{2} .
$\varepsilon _{s} = \varepsilon ' ( \omega )\ -\ i\varepsilon ''(\omega ) \label{1}$
$\omega \ =\ 2\pi f \label{2}$
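Because the complex form simply carries magnitude and phase in a single quantity, assembling εs from a measured (ε′, ε′′) pair is a one-line operation. The values below are placeholders used only to show the bookkeeping of \ref{1} and \ref{2}.

```python
import numpy as np

f = 1.0e9                  # frequency, Hz (hypothetical)
omega = 2 * np.pi * f      # eq. 2: radial frequency, rad/s

eps_real, eps_imag = 78.0, 5.0        # hypothetical eps' and eps'' values
eps_s = eps_real - 1j * eps_imag      # eq. 1: eps_s = eps' - i*eps''

print(f"omega = {omega:.3e} rad/s")
print(f"|eps_s| = {abs(eps_s):.1f}, phase = {np.degrees(np.angle(eps_s)):.2f} deg")
```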
Specifically, the real and imaginary parameters defined within the complex permittivity equation describe how a material will store electromagnetic energy and dissipate that energy as heat. The processes that influence the response of a material to a time-varying electromagnetic field are frequency dependent and are generally classified as either ionic, dipolar, vibrational, or electronic in nature. These processes are highlighted as a function of frequency in Figure $1$. Ionic processes refer to the general case of a charged ion moving back and forth in response to a time-varying electric field, whilst dipolar processes correspond to the ‘flipping’ and ‘twisting’ of molecules that have a permanent electric dipole moment, such as a water molecule in a microwave oven. Examples of vibrational processes include molecular vibrations (e.g., symmetric and asymmetric) and associated vibrational-rotation states that are infrared (IR) active. Electronic processes include optical and ultra-violet (UV) absorption and scattering phenomena seen across the UV-visible range.
The most common relationship that scientists have with permittivity is through the concept of relative permittivity: the permittivity of a material relative to vacuum permittivity. Also known as the dielectric constant, the relative permittivity (εr) is given by \ref{3} , where εs is the permittivity of the substance and ε0 is the permittivity of a vacuum (ε0 = 8.85 x 10-12 Farads/m). Although relative permittivity is in fact dynamic and a function of frequency, dielectric constants are most often quoted for low-frequency electric fields where the electric field is essentially static in nature. Table $1$ depicts the dielectric constants for a range of materials.
$\varepsilon _{r} \ =\ \varepsilon_{s} / \varepsilon_{0} \label{3}$
Table $1$: Relative permittivities of various materials under static (i.e. non time-varying) electric fields.
Material Relative Permittivity
Vacuum 1 (by definition)
Air 1.00058986
Polytetrafluoroethylene (PTFE, Teflon) 2.1
Paper 3.85
Diamond 5.5-10
Methanol 30
Water 80.1
Titanium dioxide (TiO2) 86-173
Strontium titanate (SrTiO3) 310
Barium titanate (BaTiO3) 1,200 - 10,000
Calcium copper titanate (CaCu3Ti4O12) >250,000
Dielectric constants may be useful for generic applications whereby the high-frequency response can be neglected, although applications such as radio communications, microwave design, and optical system design call for a more rigorous and comprehensive analysis. This is especially true for electrical devices such as capacitors, which are circuit elements that store and discharge electrical charge in both a static and time-varying manner. Capacitors can be thought of as two parallel plate electrodes that are separated by a finite distance and ‘sandwich’ together a piece of material with characteristic permittivity values. As can be seen in Figure $2$ , the capacitance is a function of the permittivity of the material between the plates, which in turn is dependent on frequency. Hence, for capacitors incorporated into the circuit design for radio communication applications, across the spectrum 8.3 kHz – 300 GHz, the frequency response is important as it determines the capacitor’s ability to charge and discharge as well as the thermal response from electric fields dissipating their power as heat through the material.
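As a rough illustration of how the dielectric constant enters a capacitor design, the sketch below evaluates the standard parallel-plate relation C = εrε0A/d for a few of the materials in Table $1$; the plate area and separation are invented purely for the example.

```python
eps0 = 8.85e-12          # vacuum permittivity, F/m
A = 1.0e-4               # plate area, m^2 (1 cm^2, hypothetical)
d = 1.0e-4               # plate separation, m (0.1 mm, hypothetical)

# Static dielectric constants taken from Table 1
materials = {"PTFE": 2.1, "paper": 3.85, "water": 80.1, "SrTiO3": 310}

for name, eps_r in materials.items():
    C = eps_r * eps0 * A / d          # parallel-plate capacitance, farads
    print(f"{name:>6s}: C = {C*1e12:7.1f} pF")
```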
Evaluating the electrical characteristics of materials is becoming increasingly popular – especially in the field of electronics, whereby miniaturization technologies often require the use of materials with high dielectric constants. Materials such as solids and liquids exhibit characteristic electrical responses that depend directly on the amounts and types of chemical species present in or added to the material. The examples given herein are related to aqueous suspensions whereby the electrical permittivity can be easily modulated via the addition of sodium chloride (NaCl).
Instrumentation
A common and reliable method for measuring the dielectric properties of liquid samples is to use an impedance analyzer in conjunction with a dielectric probe. The impedance analyzer directly measures the complex impedance of the sample under test, which is then converted to permittivity using the system software. There are many methods used for measuring impedance, each of which has its own inherent advantages, disadvantages, and factors associated with that particular method. Such factors include frequency range, measurement accuracy, and ease of operation. Common impedance measurements include the bridge method, resonant method, current-voltage (I-V) method, network analysis method, auto-balancing bridge method, and radiofrequency (RF) I-V method. The RF I-V method used herein has several advantages over the previously mentioned methods, such as extended frequency coverage, better accuracy, and a wider measured impedance range. The principle of the RF I-V method is based on the linear relationship of the voltage-current ratio to impedance, as given by Ohm’s law (V=IZ where V is voltage, I is current, and Z is impedance). This results in the impedance measurement sensitivity being constant regardless of measured impedance. Although a full description of this method involves circuit theory and is outside the scope of this module (see “Impedance Measurement Handbook” for full details) a brief schematic overview of the measurement principles is shown in Figure $3$.
As can be seen in Figure $3$ , the RF I-V method, which incorporates the use of a dielectric probe, essentially measures variations in voltage and current when a sample is placed on the dielectric probe. For a low-impedance sample, the impedance of the sample (Zx) is given by \ref{4} ; for a high-impedance sample, the impedance of the sample (Zx) is given by \ref{5} .
$Z_{x} \ =\ V/I \ =\frac{2R}{ \frac{V_{2}}{V_{1}}\ -\ 1} \label{4}$
$Z_{x} \ =\ V/I \ =\frac{R}{2}[\frac{V_{1}}{V_{2}} -\ 1] \label{5}$
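Once the reference resistance R and the two voltage readings are known, \ref{4} and \ref{5} can be applied directly; the snippet below does so for hypothetical readings, simply to show how the low- and high-impedance forms are used.

```python
def z_low(v1, v2, r):
    """Low-impedance form, eq. 4: Zx = 2R / (V2/V1 - 1)."""
    return 2 * r / (v2 / v1 - 1)

def z_high(v1, v2, r):
    """High-impedance form, eq. 5: Zx = (R/2) * (V1/V2 - 1)."""
    return (r / 2) * (v1 / v2 - 1)

R = 50.0                       # reference resistance, ohms (hypothetical)
print(f"low-Z case:  Zx = {z_low(0.10, 0.25, R):.1f} ohm")
print(f"high-Z case: Zx = {z_high(0.25, 0.10, R):.1f} ohm")
```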
The instrumentation and methods described herein consist of an Agilent E4991A impedance analyzer connected to an Agilent 85070E dielectric probe kit. The impedance analyzer directly measures the complex impedance of the sample under test by measuring either the frequency-dependent voltage or current across the sample. These values are then converted to permittivity values using the system software.
Applications
Electrical permittivity of deionized water and saline (0.9 % w/v NaCl)
In order to acquire the electrical permittivity of aqueous solutions, the impedance analyzer and dielectric probe must first be calibrated. In the first instance, the impedance analyzer unit is calibrated under open-circuit, short-circuit, 50 ohm load, and low loss capacitance conditions by attaching the relevant probes shown in Figure $4$. The dielectric probe is then attached to the system and re-calibrated in open-air, with an attached short circuit probe, and finally with 500 μl of highly purified deionized water (with a resistivity of 18.2 MΩ·cm at 25 °C) (Figure $5$ ). The water is then removed and the system is ready for acquiring data.
In order to maintain accurate calibration, only the purest deionized water with a resistivity of 18.2 MΩ·cm at 25 °C should be used. To perform an analysis, simply load the dielectric probe with 500 μl of the sample and click on the ‘acquire data’ tab in the software. The system will perform a scan across the frequency range 200 MHz – 3 GHz and acquire the real and imaginary parts of the complex permittivity. The period with which a data point is taken as well as the scale (i.e. log or linear) can also be altered in the software if necessary. To analyze another sample, remove the liquid and gently dry the dielectric probe with a paper towel. An open air refresh calibration should then be performed (by pressing the relevant button in the software) as this prevents errors and instrument drift from sample to sample. To analyze a normal saline (0.9 % NaCl w/v) solution, dissolve 8.99 g of NaCl in 1 litre of DI water (18.2 MΩ·cm at 25 °C) to create a 154 mM NaCl solution (equivalent to a 0.9 % NaCl w/v solution). Load 500 μl of the sample on the dielectric probe and acquire a new data set as mentioned previously.
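The quoted mass of NaCl follows directly from the target molarity; the short check below reproduces it, taking the molar mass of NaCl as 58.44 g/mol.

```python
molar_mass_nacl = 58.44   # g/mol
conc = 0.154              # mol/L (154 mM target)
volume = 1.0              # litres of deionized water

mass = molar_mass_nacl * conc * volume
print(f"NaCl required: {mass:.2f} g per litre")   # ~9.0 g, consistent with the recipe above
```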
Users should consult the “Agilent Installation and Quick Start Guide” manual for full specifics in regards to impedance analyzer and dielectric probe calibration settings.
Data Analysis
The data files extracted from the impedance analyzer and dielectric probe setup previously described can be opened using any standard data processing software such as Microsoft Excel. The data will appear in three columns, which will be labeled frequency (Hz), ε', and ε" (representing the real and imaginary components of the permittivity, respectively). Any graphing software can be used to create simple graphs of the complex permittivity versus frequency. In the example below (Figure $6$ ) we have used Prism to graph the real and imaginary permittivities versus frequency (200 MHz – 3 GHz) for the water and saline samples. For this frequency range no error correction is needed. For the analysis of frequencies below 200 MHz down to 10 MHz, which can be achieved using the impedance analyzer and dielectric probe configuration, error correction algorithms are needed to take into account electrode polarization effects that skew and distort the data. Gach et al. cover these necessary algorithms that can be used if needed.
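As an example of this simple post-processing, the sketch below reads a three-column text export (frequency, ε′, ε′′) and plots both components on a logarithmic frequency axis. The file name and column layout are assumptions about the export format and should be adjusted to match the actual data file.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed export format: three columns -> frequency (Hz), eps', eps''
freq, eps_real, eps_imag = np.loadtxt("saline_permittivity.txt", unpack=True)

fig, ax = plt.subplots()
ax.semilogx(freq, eps_real, label="real permittivity")
ax.semilogx(freq, eps_imag, label="imaginary permittivity")
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Relative permittivity")
ax.legend()
plt.show()
```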
Dynamic mechanical analysis (DMA), also known as forced oscillatory measurements and dynamic rheology, is a basic tool used to measure the viscoelastic properties of materials (particularly polymers). To do so, a DMA instrument applies an oscillating force to a material and measures its response; from such experiments, the viscosity (the tendency to flow) and the stiffness of the sample can be calculated. These viscoelastic properties can be related to temperature, time, or frequency. As a result, DMA can also provide information on the transitions of materials and characterize bulk properties that are important to material performance. DMA can be applied to determine the glass transition of polymers or the response of a material to application and removal of a load, as a few common examples. The usefulness of DMA comes from its ability to mimic operating conditions of the material, which allows researchers to predict how the material will perform.
A Brief History
Oscillatory experiments have appeared in published literature since the early 1900s and began with rudimentary experimental setups to analyze the deformation of metals. In an initial study, the material in question was hung from a support, and torsional strain was applied using a turntable. Early instruments of the 1950s from manufacturers Weissenberg and Rheovibron exclusively measured torsional stress, where force is applied in a twisting motion.
Due to its usefulness in determining polymer molecular structure and stiffness, DMA became more popular in parallel with the increasing research on polymers. The method became integral in the analysis of polymer properties by 1961. In 1966, the revolutionary torsional braid analysis was developed; because this technique used a fine glass substrate imbued with the material of analysis, scientists were no longer limited to materials that could provide their own support. Using torsional braid analysis, the transition temperatures of polymers could be determined through temperature programming. Within two decades, commercial instruments became more accessible, and the technique became less specialized. In the early 1980s, one of the first DMAs using axial geometries (linear rather than torsional force) was introduced.
Since the 1980s, DMA has become much more user-friendly, faster, and less costly due to competition between vendors. Additionally, developments in computer technology have allowed easier and more efficient data processing. Today, DMA is offered by most vendors, and the modern instrument is detailed in the Instrumentation section.
Basic Principles of DMA
DMA is based on two important concepts of stress and strain. Stress (σ) provides a measure of force (F) applied to area (A), \ref{1} .
$\sigma \ =\ F/A \label{1}$
Stress to a material causes strain (γ), the deformation of the sample. Strain can be calculated by dividing the change in sample dimensions (∆Y) by the sample’s original dimensions (Y) (\ref{2} ). This value is often given as a percentage of strain.
$\gamma \ =\ \Delta Y/Y \label{2}$
The modulus (E), a measure of stiffness, can be calculated from the slope of the stress-strain plot, Figure $1$, as displayed in \ref{3} . This modulus is dependent on temperature and applied stress. The change of this modulus as a function of a specified variable is key to DMA and the determination of viscoelastic properties. Viscoelastic materials such as polymers display both elastic properties characteristic of solid materials and viscous properties characteristic of liquids; as a result, the viscoelastic properties are often a compromise between the two extremes. Ideal elastic properties can be related to Hooke’s spring, while viscous behavior is often modeled using a dashpot, or a motion-resisting damper.
$E \ =\ \sigma /\gamma \label{3}$
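The three definitions chain together directly; the fragment below works through them for an invented tensile specimen simply to show the bookkeeping (all forces and dimensions are placeholders).

```python
# Hypothetical tensile test on a rectangular specimen
force = 50.0                    # applied force, N
area = 10e-3 * 2e-3             # cross-section, m^2 (10 mm x 2 mm)
length = 0.050                  # original gauge length, m
delta_length = 0.5e-3           # measured extension, m

stress = force / area           # eq. 1: sigma = F/A  (Pa)
strain = delta_length / length  # eq. 2: gamma = dY/Y (dimensionless)
modulus = stress / strain       # eq. 3: E = sigma/gamma (Pa)

print(f"stress  = {stress/1e6:.2f} MPa")
print(f"strain  = {strain*100:.2f} %")
print(f"modulus = {modulus/1e9:.3f} GPa")
```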
Creep-recovery
Creep-recovery testing is not a true dynamic analysis because the applied stress or strain is held constant; however, most modern DMA instruments have the ability to run this analysis. Creep-recovery tests the deformation of a material that occurs when a load is applied and then removed. In the “creep” portion of this analysis, the material is placed under immediate, constant stress until the sample equilibrates. “Recovery” then measures the stress relaxation after the stress is removed. The stress and strain are measured as functions of time. From this method of analysis, equilibrium values for viscosity, modulus, and compliance (willingness of materials to deform; inverse of modulus) can be determined; however, such calculations are beyond the scope of this review.
Creep-recovery tests are useful in testing materials under anticipated operation conditions and long test times. As an example, multiple creep-recovery cycles can be applied to a sample to determine the behavior and change in properties of a material after several cycles of stress.
Dynamic Testing
DMA instruments apply a sinusoidally oscillating stress to samples, causing a sinusoidal deformation. The relationship between the oscillating stress and strain becomes important in determining the viscoelastic properties of the material. To begin, the applied stress can be described by a sine function, where σo is the maximum stress applied, ω is the frequency of the applied stress, and t is time. The stress and the resulting strain are expressed in \ref{4} .
$\sigma \ = \ \sigma_{0} sin(\omega t + \delta);\ \gamma=\gamma_{0} sin(\omega t) \label{4}$
The strain of a system undergoing sinusoidally oscillating stress is also sinusoidal, but the phase difference between strain and stress is entirely dependent on the balance between the viscous and elastic properties of the material in question. For ideal elastic systems, the strain and stress are completely in phase, and the phase angle (δ) is equal to 0. For viscous systems, the applied stress leads the strain by 90o. The phase angle of viscoelastic materials is somewhere in between (Figure $2$ ).
In essence, the phase angle between the stress and strain tells us a great deal about the viscoelasticity of the material. For one, a small phase angle indicates that the material is highly elastic; a large phase angle indicates the material is highly viscous. Furthermore, separating the properties of modulus, viscosity, compliance, or strain into two separate terms allows the analysis of the elasticity or the viscosity of a material. The elastic response of the material is analogous to storage of energy in a spring, while the viscosity of material can be thought of as the source of energy loss.
A few key viscoelastic terms can be calculated from dynamic analysis; their equations and significance are detailed in Table $1$.
Term Equation Significance
Complex modulus (E*) E* = E’ + iE” Overall modulus representing stiffness of material; combined elastic and viscous components
Elastic modulus (E’) E’ = (σo/γo)cosδ Storage modulus; measures stored energy and represents elastic portion
Viscous modulus (E”) E” = (σo/γo)sinδ Loss modulus; contribution of viscous component on polymer that flows under stress
Loss tangent (tanδ) Tanδ = E”/E’ Damping or index of viscoelasticity; compares viscous and elastic moduli
Table $1$ Key viscoelastic terms that can be calculated with DMA.
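Given the peak stress, peak strain, and phase angle from a dynamic test, the quantities in Table $1$ follow by direct substitution; the snippet below does the arithmetic for hypothetical values.

```python
import numpy as np

sigma0 = 2.0e6            # peak stress, Pa (hypothetical)
gamma0 = 0.01             # peak strain (hypothetical)
delta = np.radians(15.0)  # phase angle between stress and strain

E_storage = (sigma0 / gamma0) * np.cos(delta)   # storage (elastic) modulus E'
E_loss    = (sigma0 / gamma0) * np.sin(delta)   # loss (viscous) modulus E''
E_complex = complex(E_storage, E_loss)          # complex modulus E* = E' + iE''
tan_delta = E_loss / E_storage                  # damping, tan(delta)

print(f"E'  = {E_storage/1e6:.1f} MPa, E'' = {E_loss/1e6:.1f} MPa")
print(f"|E*| = {abs(E_complex)/1e6:.1f} MPa, tan(delta) = {tan_delta:.3f}")
```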
Types of Dynamic Experiments
A temperature sweep is the most common DMA test used on solid materials. In this experiment, the frequency and amplitude of oscillating stress is held constant while the temperature is increased. The temperature can be raised in a stepwise fashion, where the sample temperature is increased by larger intervals (e.g., 5 oC) and allowed to equilibrate before measurements are taken. Continuous heating routines can also be used (1-2 oC/minute). Typically, the results of temperature sweeps are displayed as storage and loss moduli as well as tan delta as a function of temperature. For polymers, these results are highly indicative of polymer structure. An example of a thermal sweep of a polymer is detailed later in this module.
In time scans, the temperature of the sample is held constant, and properties are measured as functions of time, gas changes, or other parameters. This experiment is commonly used when studying curing of thermosets, materials that change chemically upon heating. Data is presented graphically using modulus as a function of time; curing profiles can be derived from this information.
Frequency scans test a range of frequencies at a constant temperature to analyze the effect of change in frequency on temperature-driven changes in material. This type of experiment is typically run on fluids or polymer melts. The results of frequency scans are displayed as modulus and viscosity as functions of log frequency.
Instrumentation
The most common instrument for DMA is the forced resonance analyzer, which is ideal for measuring material response to temperature sweeps. The analyzer controls deformation, temperature, sample geometry, and sample environment.
Figure $3$ displays the important components of the DMA, including the motor and driveshaft used to apply torsional stress as well as the linear variable differential transformer (LVDT) used to measure linear displacement. The carriage contains the sample and is typically enveloped by a furnace and heat sink.
The DMA should be ideally selected to analyze the material at hand. The DMA can be either stress or strain controlled: strain-controlled analyzers move the probe a set distance and measure the stress required to produce that deformation, while stress-controlled analyzers apply a constant force and measure the resulting deformation of the sample (Figure $4$ ). Although the two techniques are nearly equivalent when the stress-strain plot (Figure $1$ ) is linear, stress-controlled analyzers provide more accurate results.
DMA analyzers can also apply stress or strain in two manners—axial and torsional deformation (Figure $5$ ) Axial deformation applies a linear force to the sample and is typically used for solid and semisolid materials to test flex, tensile strength, and compression. Torsional analyzers apply force in a twisting motion; this type of analysis is used for liquids and polymer melts but can also be applied to solids. Although both types of analyzers have wide analysis range and can be used for similar samples, the axial instrument should not be used for fluid samples with viscosities below 500 Pa-s, and torsional analyzers cannot handle materials with high modulus.
Different fixtures can be used to hold the samples in place and should be chosen according to the type of samples analyzed. The sample geometry affects both stress and strain and must be factored into the modulus calculations through a geometry factor. The fixture systems are specific to the type of stress application. Axial analyzers have a greater number of fixture options; one of the most commonly used fixtures is the extension/tensile geometry used for thin films or fibers. In this method, the sample is held both vertically and lengthwise by top and bottom clamps, and stress is applied upwards.
For torsional analyzers, the simplest geometry is the use of parallel plates. The plates are separated by a distance determined by the viscosity of the sample. Because the movement of the sample depends on its radius from the center of the plate, the stress applied is uneven; the measured strain is an average value.
DMA of the glass transition polymers
As the temperature of a polymer increases, the material goes through a number of minor transitions (Tγ and Tβ) due to expansion; at these transitions, the modulus also undergoes changes. The glass transition of polymers (Tg) occurs with an abrupt change of physical properties over a narrow temperature range characteristic of the polymer; at some temperature within this range, the storage (elastic) modulus of the polymer drops dramatically. As the temperature rises above the glass transition point, the material loses its structure and becomes rubbery before finally melting. The idealized modulus transition is pictured in Figure $6$.
The glass transition temperature can be determined using either the storage modulus, complex modulus, or tan δ (vs temperature) depending on context and instrument; because these methods result in such a range of values (Figure $6$ ), the method of calculation should be noted. When using the storage modulus, the temperature at which E’ begins to decline is used as the Tg. Tan δ and loss modulus E” show peaks at the glass transition; either onset or peak values can be used in determining Tg. These different methods of measurement are depicted graphically in Figure $7$.
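In practice these conventions are applied to the same temperature-sweep data, and because the loss modulus and tan δ peak at different temperatures they return different Tg values. The sketch below shows one simple way to locate the two peaks; the curves are synthetic and generated only for illustration.

```python
import numpy as np

# Synthetic temperature sweep (for illustration only)
T = np.linspace(50, 200, 301)                       # temperature, deg C
E_storage = 3e9 / (1 + np.exp((T - 120) / 5))       # idealized drop of E' near Tg
E_loss = 2e8 * np.exp(-((T - 118) / 10) ** 2)       # idealized E'' peak
tan_delta = E_loss / E_storage

# Peak-based estimates of the glass transition temperature
Tg_from_loss = T[np.argmax(E_loss)]
Tg_from_tan = T[np.argmax(tan_delta)]
print(f"Tg from E'' peak:        {Tg_from_loss:.1f} C")
print(f"Tg from tan(delta) peak: {Tg_from_tan:.1f} C")
```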
Advantages and limitations of DMA
Dynamic mechanical analysis is an essential analytical technique for determining the viscoelastic properties of polymers. Unlike many comparable methods, DMA can provide information on major and minor transitions of materials; it is also more sensitive to changes after the glass transition temperature of polymers. Due to its use of oscillating stress, this method is able to quickly scan and calculate the modulus for a range of temperatures. As a result, it is the only technique that can determine the basic structure of a polymer system while providing data on the modulus as a function of temperature. Finally, the environment of DMA tests can be controlled to mimic real-world operating conditions, so this analytical method is able to accurately predict the performance of materials in use.
DMA does possess limitations that lead to calculation inaccuracies. The modulus value is very dependent on sample dimensions, which means large inaccuracies are introduced if dimensional measurements of samples are slightly inaccurate. Additionally, overcoming the inertia of the instrument used to apply oscillating stress converts mechanical energy to heat and changes the temperature of the sample. Since maintaining exact temperatures is important in temperature scans, this also introduces inaccuracies. Because data processing of DMA is largely automated, the final source of measurement uncertainty comes from computer error.
From sediment to sample
Sample sediments are typically sent in a large plastic bag inside a brown paper bag labeled with the company or organization name, drill site name and number, and the depth the sediment was taken (in meters).
The first step in determining a lithology is to prepare a sample from your bulk sediment. To do this, you will need to crush some of the bulk rocks of your sediment into finer grains (Figure \(1\) ). You will need a hard surface, a hammer or mallet, and your sediment. An improvised container such as the cardboard one shown in Figure \(2\) may be useful in containing fragments that try to escape the hard surface during vigorous hammering. Remove the plastic sediment bag from the brown mailer bag. Empty approximately 10-20 g of bulk sediment onto the hard surface. Repeatedly strike the larger rock sized portions of the sediment until the larger units are broken into grains that are approximately the size of a grain of rice.
Some samples will give off oily or noxious odors when crushed. This is because of trapped hydrocarbons or sulfurous compounds and is normal. The next step in the process, washing, will take care of these impurities and the smell.
Once the sample has been appropriately crushed on the macro scale, a micro uniformity in grain size can be achieved through the use of a pulverizing micro mill machine such as the Planetary Mills Pulverisette 7 in Figure \(2\).
To use the mill, load your crushed sample into the milling cup (Figure \(3\) ) along with milling stones of 15 mm diameter. Set your rotational speed and time using the machine interface. A speed of 500-600 rpm and a mill time of 3-5 minutes are suggested. Using higher speeds or longer times can result in loss of sample as dust. Load the milling cup into the mill and press start; make sure to lower the mill hood. Once the mill has completed its cycle, retrieve the sample and dump it into a plastic cup labelled with the drill site name and depth in order to prepare it for washing. Be sure to wash and dry the mill cup and mill stones between samples if multiple samples are being tested.
Washing the Sample
If your sample is dirty, as in contaminated with hydrocarbons such as crude oil, it will need to be washed. To wash your sample you will need your sample cup, a washbasin, a spoon, a 150-300 µm sieve, household dish detergent, and a porcelain ramekin if a drying oven is available (Figure \(4\) ).
Take your sample cup to the wash basin and fill the cup halfway with water, adding a squirt of dish detergent. Vigorously stir the cup with the spoon for 20 seconds, ensuring each grain is coated with the detergent water. Pour your sample into the sieve and turn on the faucet. Run water over the sample to allow the detergent and dust particles to wash through the sieve. Continue to wash the sample this way until all the detergent is washed from the sample. Once clean, empty the sieve onto a surface to leave to dry overnight, or into a ramekin if a drying oven is available. Place ramekin into drying oven set to at least 100 °C for a minimum of 2 hours to allow thorough drying (Figure \(5\) ). Once dry, the sample is ready to be picked.
Picking the Sample
Picking the sample is arguably the most important step in determining the lithology (Figure \(6\) ).
During this step you will create a sample uniformity to eliminate random minerals, macro contaminates such as wood, and dropstones that dropped into your sediment depth when the sediment was drilled. You will also be able to get a general judgment as to the lithology after picking, though further analysis is needed if chemical composition is desired. Remove sample from drying oven. Take a piece of weighing paper and weigh out 5-10 g of sample. Use a light microscope to determine whether most of the sample is either silt, clay, silty-clay, or sand.
• Clay grains will have a gray coloration with large flat sub-surfaces and less angulation. Clay will easily deform under pressure from forceps.
• Silt grains will be darker than clay and will have specks that shine when the grain is rotated. Texture is long pieces with jagged edges. Silt is harder in consistency.
• Silty clay is a heterogeneous mixture (roughly half and half) of the above.
• Sand is defined as larger grain size, lighter and varied coloration, and many crystalline substructures. Sand is hard to deform with the forceps.
Pelleting the Sample
To prepare your sample for X-ray fluorescence (XRF) analysis you will need to prepare a sample pellet. To pellet your sample you will need a mortar and pestle, pellet binder such as Cerox, a spatula to remove binder, a micro scale, a pellet press with housing, and a pellet tin cup. Measure out and pour 2-4 g of sample into your mortar. Measure out and add 50% of your sample weight of pellet binder. For example, if your sample weight was 2 g, add 1 g of binder. Grind the sample into a fine, uniform powder, ensuring that all of the binder is thoroughly mixed with the sample (Figure \(7\) ).
Place the tin sample cup into the press housing. Pour the sample into the tin, and then gently tap the housing against a hard surface two to three times to ensure the sample settles into the tin. Place the top press disk into the channel. Place the press housing into the press, oriented directly under the pressing arm. Crank the lever on the press until the pressure gauge reads 15 tons (Figure \(8\) ). Wait for one minute, then twist the pressure release valve and remove the press housing from the press. Reverse the press and apply the removal cap to the bottom of the press. Place the housing into the press bottom side up and manually apply pressure by turning the crank on top of the press until the sample pops out of the housing. Retrieve the pelleted sample (Figure \(9\) ). The pelleted sample is now ready for X-ray fluorescence analysis (XRF).
XRF Analysis
Place the sample pellet into the XRF (Figure \(10\) and Figure \(11\) ) and close the XRF hood. Obtain the spectrum using the associated computer.
The XRF spectrum is a plot of energy and intensity. The software equipped with the XRF will be pre-programmed to recognize the characteristic energies associated with the X-ray emissions of the elements. The XRF functions by shooting a beam of high energy photons that are absorbed by the atoms of the sample. The inner shell electrons of sample atoms are ejected. This leaves the atom in an excited state, with a vacancy in the inner shell. Outer shell electrons then fall into the vacancy, emitting photons with energy equal to the energy difference between these two energy levels. Each element has a unique set of energy levels, therefore each element emits a pattern of X-rays characteristic of that element. The intensity of these characteristic X-rays increases with the concentration of the corresponding element leading to higher counts and higher peaks on the spectrum (Figure \(12\) ).
• 3.1: Principles of Gas Chromatography
Nowadays, gas chromatography is a mature technique, widely used worldwide for the analysis of almost every type of organic compound, even those that are not volatile in their original state but can be converted to volatile derivatives.
• 3.2: High Performance Liquid chromatography
High-performance liquid chromatography (HPLC) is a technique in analytical chemistry used to separate the components in a mixture, and to identify and quantify each component. It was initially discovered as an analytical technique in the early twentieth century and was first used to separate colored compounds. The word chromatography means color writing.
• 3.3: Basic Principles of Supercritical Fluid Chromatography and Supercritical Fluid Extraction
The discovery of supercritical fluids led to novel analytical applications in the fields of chromatography and extraction known as supercritical fluid chromatography (SFC) and supercritical fluid extraction (SFE). Supercritical fluid chromatography is accepted as a column chromatography method along with gas chromatography (GC) and high-performance liquid chromatography (HPLC).
• 3.4: Supercritical Fluid Chromatography
A popular and powerful tool in the chemical world, chromatography separates mixtures based on chemical properties – even some that were previously thought inseparable. It combines a multitude of pieces, concepts, and chemicals to form an instrument suited to specific separation. One form of chromatography that is often overlooked is that of supercritical fluid chromatography.
• 3.5: Ion Chromatography
Ion Chromatography is a method of separating ions based on their distinct retention rates in a given solid phase packing material. Given different retention rates for two anions or two cations, the elution time of each ion will differ, allowing for detection and separation of one ion before the other.
• 3.6: Capillary Electrophoresis
Capillary electrophoresis (CE) encompasses a family of electrokinetic separation techniques that uses an applied electric field to separate out analytes based on their charge and size. The basic principle is hinged upon that of electrophoresis, which is the motion of particles relative to a fluid (electrolyte) under the influence of an electric field.
Thumbnail: A gas chromatography oven, open to show a capillary column. (CC BY-SA 4.0; Polimerek)
03: Principles of Gas Chromatography
Archer J.P. Martin (Figure $1$ ) and Anthony T. James (Figure $2$ ) introduced liquid-gas partition chromatography in 1950 at the meeting of the Biochemical Society held in London, a few months before submitting three fundamental papers to the Biochemical Journal. It was this work that provided the foundation for the development of gas chromatography. In fact, Martin envisioned gas chromatography almost ten years before, while working with R. L. M. Synge (Figure $3$ ) on partition chromatography. Martin and Synge, who were awarded the Nobel Prize in Chemistry in 1952, suggested that separation of volatile compounds could be achieved by using a vapor as the mobile phase instead of a liquid.
Gas chromatography quickly gained general acceptance because it was introduced at the time when improved analytical controls were required in the petrochemical industries, and new techniques were needed in order to overcome the limitations of old laboratory methods. Nowadays, gas chromatography is a mature technique, widely used worldwide for the analysis of almost every type of organic compound, even those that are not volatile in their original state but can be converted to volatile derivatives.
The Chromatographic Process
Gas chromatography is a separation technique in which the components of a sample partition between two phases:
1. The stationary phase.
2. The mobile gas phase.
According to the state of the stationary phase, gas chromatography can be classified into gas-solid chromatography (GSC), where the stationary phase is a solid, and gas-liquid chromatography (GLC), which uses a liquid as the stationary phase. GLC is far more widely used than GSC.
During a GC separation, the sample is vaporized and carried by the mobile gas phase (i.e., the carrier gas) through the column. Separation of the different components is achieved based on their relative vapor pressure and affinities for the stationary phase. The affinity of a substance towards the stationary phase can be described in chemical terms as an equilibrium constant called the distribution constant Kc, also known as the partition coefficient, \ref{1} , where [A]s is the concentration of compound A in the stationary phase and [A]m is the concentration of compound A in the mobile phase.
$K_{c} = [A]_{s}/[A]_{m} \label{1}$
The distribution constant (Kc) controls the movement of the different compounds through the column; therefore, differences in the distribution constant allow for the chromatographic separation. Figure $4$ shows a schematic representation of the chromatographic process. Kc is temperature dependent, and also depends on the chemical nature of the stationary phase. Thus, either the temperature or the choice of stationary phase can be used to improve the separation of different compounds through the column.
A Typical Chromatogram
Figure $5$ shows a chromatogram of the analysis of residual methanol in biodiesel, which is one of the required properties that must be measured to ensure the quality of the product at the time and place of delivery.
Chromatogram (Figure $5$ a) shows a standard solution of methanol with 2-propanol as the internal standard. From the figure it can be seen that methanol has a higher affinity for the mobile phase (lower Kc) than 2-propanol (iso-propanol), and therefore elutes first. Chromatograms (Figure $5$ b and c) show two samples of biodiesel, one in which methanol was detected (Figure $5$ b) and another in which no methanol was detected (Figure $5$ c). The internal standard was added to both samples for quantitation purposes.
Instrument Overview
Components of a Gas Chromatograph System
Figure $6$ shows a schematic diagram of the components of a typical gas chromatograph, while Figure $7$ shows a photograph of a typical gas chromatograph coupled to a mass spectrometer (GC/MS).
Carrier Gas
The role of the carrier gas -GC mobile phase- is to carry the sample molecules along the column while they are not dissolved in or adsorbed on the stationary phase. The carrier gas is inert and does not interact with the sample, and thus GC separation's selectivity can be attributed to the stationary phase alone. However, the choice of carrier gas is important to maintain high efficiency. The effect of different carrier gases on column efficiency is represented by the van Deemter (packed columns) and the Golay equation (capillary columns). The van Deemter equation, \ref{2} , describes the three main effects that contribute to band broadening in packed columns and, as a consequence, to a reduced efficiency in the separation process.
$HETP\ =\ A+\frac{B}{u} + Cu \label{2}$
These three factors are:
1. the eddy diffusion (the A-term), which results from the fact that in packed columns spaces between particles along the column are not uniform. Therefore, some molecules take longer pathways than others, and there are also variations in the velocity of the mobile phase.
2. the longitudinal molecular diffusion (the B-term) which is a consequence of having regions with different analyte concentrations.
3. the mass transfer in the stationary liquid phase (the C-term)
The broadening is described in terms of the height equivalent to a theoretical plate, HETP, as a function of the average linear gas velocity, u. A small HETP value indicates a narrow peak and a higher efficiency.
Since capillary columns do not have any packing, the Golay equation, \ref{3} , does not have an A-term. The Golay equation has 2 C-terms, one for mass transfer in then stationary phase (Cs) and one for mass transfer in the mobile phase (CM).
$HETP\ =\ \frac{B}{u} \ +\ (C_{s}\ +\ C_{M})u \label{3}$
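Because both expressions are simple functions of the linear velocity, the optimum velocity (the minimum of the curve) is easy to locate; the sketch below does this for a van Deemter curve using invented A, B, and C coefficients rather than values for any real column.

```python
import numpy as np

# Hypothetical van Deemter coefficients for a packed column
A = 0.10   # eddy diffusion term, cm
B = 0.40   # longitudinal diffusion term, cm^2/s
C = 0.01   # mass-transfer term, s

u = np.linspace(0.5, 50, 500)        # average linear gas velocity, cm/s
hetp = A + B / u + C * u             # eq. 2: HETP as a function of u

u_opt = np.sqrt(B / C)               # analytical minimum of A + B/u + C*u
print(f"optimum velocity ~ {u_opt:.1f} cm/s, "
      f"minimum HETP ~ {A + 2 * np.sqrt(B * C):.3f} cm")
print(f"numerical check: u = {u[np.argmin(hetp)]:.1f} cm/s")
```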
High purity hydrogen, helium and nitrogen are commonly used for gas chromatography. Also, depending on the type of detector used, different gases are preferred.
Injector
This is the place where the sample is volatilized and quantitatively introduced into the carrier gas stream. Usually a syringe is used for injecting the sample into the injection port. Samples can be injected manually or automatically with mechanical devices that are often placed on top of the gas chromatograph: the auto-samplers.
Column
The gas chromatographic column may be considered the heart of the GC system, where the separation of sample components takes place. Columns are classified as either packed or capillary columns. A general comparison of packed and capillary columns is shown in Table $1$. Images of packed columns are shown in Figure $8$ and Figure $9$.
Column Type Packed Column Capillary Column
History First type of GC column used Modern technology. Today most GC applications are developed using capillary columns
Composition Packed with silica particles onto which the stationary phase is coated. Not packed with particulate material. Made of chemically treated silica covered with thin, uniform liquid phase films.
Efficiency Low High
Outside diameter 2-4 mm 0.4 mm
Column length 2-4 meters 15-60 meters
Advantages Lower cost, larger samples Faster, better for complex mixtures
Table $1$ A summary of the differences between a packed and a capillary column.
Since most common applications employed nowadays use capillary columns, we will focus on this type of columns. To define a capillary column, four parameters must be specified:
1. The stationary phase is the parameter that will determine the final resolution obtained, and will influence other selection parameters. Changing the stationary phase is the most powerful way to alter selectivity in GC analysis.
2. The length is related to the overall efficiency of the column and to overall analysis time. A longer column will increase the peak efficiency and the quality of the separation, but it will also increase analysis time. One of the classical trade-offs in gas chromatography (GC) separations lies between speed of analysis and peak resolution.
3. The column internal diameter (ID) can influence column efficiency (and therefore resolution) and also column capacity. By decreasing the column internal diameter, better separations can be achieved, but column overload and peak broadening may become an issue.
4. The sample capacity of the column will also depend on film thickness. Moreover, the retention of sample components will be affected by the thickness of the film, and therefore its retention time. A shorter run time and higher resolution can be achieved using thin films, however these films offer lower capacity.
Detector
The detector senses a physicochemical property of the analyte and provides a response which is amplified and converted into an electronic signal to produce a chromatogram. Most of the detectors used in GC were invented specifically for this technique, except for the thermal conductivity detector (TCD) and the mass spectrometer. In total, approximately 60 detectors have been used in GC. Detectors that exhibit an enhanced response to certain analyte types are known as "selective detectors".
During the last 10 years there has been an increasing use of GC in combination with mass spectrometry (MS). The mass spectrometer has become a standard detector that allows for lower detection limits and does not require the separation of all components present in the sample. Mass spectrometry is one of the types of detection that provides the most information with only micrograms of sample. Qualitative identification of unknown compounds as well as quantitative analysis of samples is possible using GC-MS. When GC is coupled to a mass spectrometer, the compounds that elute from the GC column are ionized by using electrons (EI, electron ionization) or a chemical reagent (CI, chemical ionization). Charged fragments are focused and accelerated into a mass analyzer: typically a quadrupole mass analyzer. Fragments with different mass to charge ratios will generate different signals, so any compound that produces ions within the mass range of the mass analyzer will be detected. Detection limits of 1-10 ng or even lower values (e.g., 10 pg) can be achieved by selecting the appropriate scanning mode.
Derivatization
Gas chromatography is primarily used for the analysis of thermally stable volatile compounds. However, when dealing with non-volatile samples, chemical reactions can be performed on the sample to increase the volatility of the compounds. Compounds that contain functional groups such as OH, NH, CO2H, and SH are difficult to analyze by GC because they are not sufficiently volatile, can be too strongly attracted to the stationary phase or are thermally unstable. Most common derivatization reactions used for GC can be divided into three types:
1. Silylation.
2. Acylation.
3. Alkylation & Esterification.
Samples are derivatized before being analyzed to:
• Increase volatility and decrease polarity of the compound
• Reduce thermal degradation
• Increase sensitivity by incorporating functional groups that lead to higher detector signals
• Improve separation and reduce tailing
Advantages and Disadvantages
GC is the premier analytical technique for the separation of volatile compounds. Several features such as speed of analysis, ease of operation, excellent quantitative results, and moderate costs had helped GC to become one of the most popular techniques worldwide.
Advantages of GC
• Due to its high efficiency, GC allows the separation of the components of complex mixtures in a reasonable time.
• Accurate quantitation (usually sharp reproducible peaks are obtained)
• Mature technique with many applications notes available for users.
• Multiple detectors with high sensitivity (ppb) are available, which can also be used in series with a mass spectrometer since MS is a non-destructive technique.
Disadvantages of GC
• Limited to thermally stable and volatile compounds.
• Most GC detectors are destructive, except for MS.
Gas Chromatography Versus High Performance Liquid Chromatography (HPLC)
Unlike gas chromatography, which is unsuitable for nonvolatile and thermally fragile molecules, liquid chromatography can safely separate a very wide range of organic compounds, from small-molecule drug metabolites to peptides and proteins.
GC: Sample must be volatile or derivatized prior to GC analysis. HPLC: Volatility is not important; however, solubility in the mobile phase becomes critical for the analysis.
GC: Most analytes have a molecular weight (MW) below 500 Da (due to volatility issues). HPLC: There is no upper molecular weight limit as long as the sample can be dissolved in the appropriate mobile phase.
GC: Can be coupled to MS; several mass spectral libraries are available if using electron ionization (e.g., http://chemdata.nist.gov/). HPLC: Methods must be adapted before using an MS detector (non-volatile buffers cannot be used).
GC: Can be coupled to several detectors depending on the application. HPLC: For some detectors the solvent can be an issue, and when changing detectors some methods will require prior modification.
Table $2$ Relative advantages and disadvantages of GC versus HPLC.
High-performance liquid chromatography (HPLC) is a technique in analytical chemistry used to separate the components in a mixture, and to identify and quantify each component. It was initially developed as an analytical technique in the early twentieth century and was first used to separate colored compounds. The word chromatography means color writing. It was the botanist M. S. Tswett (Figure $1$ ) who invented this method around 1900 to study leaf pigments (mainly chlorophyll). He separated the pigments based on their interaction with a stationary phase. In 1906 Tswett published two fundamental papers describing the various aspects of liquid-adsorption chromatography in detail. He also pointed out that, in spite of its name, other substances could also be separated by chromatography. Modern high performance liquid chromatography has developed from this separation; the separation efficiency, versatility, and speed have been improved significantly.
The molecular species subjected to separation exist in a sample that is made of analytes and matrix. The analytes are the molecular species of interest, and the matrix is the rest of the components in the sample. For chromatographic separation, the sample is introduced into a flowing mobile phase that passes over a stationary phase. The mobile phase is a moving liquid, and is characterized by its composition, solubility, UV transparency, viscosity, and miscibility with other solvents. The stationary phase is a stationary medium, which can be a stagnant bulk liquid, a liquid layer on a solid phase, or an interfacial layer between liquid and solid. In HPLC, the stationary phase is typically in the form of a column packed with very small porous particles, and the liquid mobile phase is moved through the column by a pump. The development of HPLC has largely been the development of new columns, which requires new particles, new stationary phases (particle coatings), and improved procedures for packing the column. A picture of a modern HPLC is shown in Figure $2$.
Instrumentation
The major components of an HPLC are shown in Figure $3$. The role of the pump is to force the liquid mobile phase through the column at a specific flow rate (milliliters per minute). The injector serves to introduce the liquid sample into the flow stream of the mobile phase. The column is the most central and important component of the HPLC, and the column’s stationary phase separates the sample components of interest using various physical and chemical parameters. The detector detects the individual molecules that elute from the column. The computer usually functions as the data system; it not only controls all the modules of the HPLC instrument but also takes the signal from the detector and uses it to determine the retention times of the sample components and to perform quantitative analysis.
Columns
Different separation mechanisms are used based on different properties of the stationary phase of the column. The major types include normal phase chromatography, reverse phase chromatography, ion exchange, size exclusion chromatography, and affinity chromatography.
Normal-phase Chromatography
In this method the columns are packed with polar, inorganic particles and a nonpolar mobile phase is used to run through the stationary phase (Table $1$ ). Normal phase chromatography is mainly used for purification of crude samples, separation of very polar samples, or analytical separations by thin layer chromatography. One problem with this method is that water is a strong solvent in normal-phase chromatography: traces of water in the mobile phase can markedly affect sample retention, and after changing the mobile phase, column equilibration is very slow.
Normal phase: polar stationary phase, non-polar mobile phase.
Reverse phase: non-polar stationary phase, polar mobile phase.
Table $1$ Mobile phase and stationary phase used for normal phase and reverse-phase chromatography
Reverse-phase Chromatography
In reverse-phase (RP) chromatography the stationary phase has a hydrophobic character, while the mobile phase has a polar character. This is the reverse of normal-phase chromatography (Table $2$ ). The interactions in RP-HPLC are considered to be hydrophobic forces, which arise from the disturbance of the dipolar structure of the solvent. The separation is typically based on the partition of the analyte between the stationary phase and the mobile phase. The solute molecules are in equilibrium between the hydrophobic stationary phase and the partially polar mobile phase. More hydrophobic molecules have longer retention times, while ionized organic compounds, inorganic ions, and polar metal species show little or no retention.
Ion Exchange Chromatography
The ion exchange mechanism is based on electrostatic interactions between hydrated ions from a sample and oppositely charged functional groups on the stationary phase. Two types of mechanisms are used for the separation: in one mechanism, the elution uses a mobile phase that contains competing ions that replace the analyte ions and push them off the column; in the other, a complexing reagent is added to the mobile phase to change the sample species from their initial form. This modification of the sample species leads to their elution. In addition to the exchange of ions, ion-exchange stationary phases are able to retain specific neutral molecules. This process is related to retention based on the formation of complexes: specific ions such as transition metals can be retained on a cation-exchange resin and can still accept lone-pair electrons from donor ligands. Thus neutral ligand molecules can be retained on resins treated with transition metal ions.
The modern ion exchange is capable of quantitative applications at rather low solute concentrations, and can be used in the analysis of aqueous samples for common inorganic anions (range 10 μg/L to 10 mg/L). Metal cations and inorganic anions are all separated predominantly by ionic interactions with the ion exchange resin. One of the largest industrial users of ion exchange is the food and beverage sector to determine the nitrogen-, sulfur-, and phosphorous- containing species as well as the halide ions. Also, ion exchange can be used to determine the dissolved inorganic and organic ions in natural and treated waters.
Size Exclusion Chromatography
Size exclusion chromatography is a chromatographic method that separates molecules in solution based on their size (hydrodynamic volume). This column is often used for the separation of macromolecules and of macromolecules from small molecules. After the analyte is injected into the column, molecules smaller than the pore size of the stationary phase enter the porous particles during the separation and flow through the intricate channels of the stationary phase. Thus smaller components have a longer path to traverse and elute from the column later than the larger ones. Since the molecular volume is related to molecular weight, it is expected that retention volume will depend to some degree on the molecular weight of the polymeric materials. The relation between the retention time and the molecular weight is shown in Figure $4$.
Usually the type of HPLC separation method to use depends on the chemical nature and physicochemical parameters of the samples. Figure $5$ shows a flow chart of preliminary selection for the separation method according to the properties of the analyte.
Detectors
Detectors that are commonly used for liquid chromatography include ultraviolet-visible absorbance detectors, refractive index detectors, fluorescence detectors, and mass spectrometry. Regardless of the class, an LC detector should ideally have a sensitivity of about 10^-12 to 10^-11 g/mL and a linear dynamic range of five or six orders of magnitude. The principal characteristics of the detectors to be evaluated include dynamic range, response index or linearity, linear dynamic range, detector response, and detector sensitivity.
Among these detectors, the most economical and popular methods are UV and refractive index (RI) detectors. They have rather broad selectivity and reasonable detection limits most of the time. The RI detector was the first detector available for commercial use. This method is particularly useful in HPLC separation according to size, and the measurement is directly proportional to the concentration of polymer and practically independent of the molecular weight. The sensitivity of RI is 10^-6 g/mL, the linear dynamic range is from 10^-6 to 10^-4 g/mL, and the response index is between 0.97 and 1.03.
UV detectors respond only to those substances that absorb UV light at the wavelength of the source light. A great many compounds absorb light in the UV range (180-350 nm), including substances having one or more double bonds and substances having unshared electrons. The relationship between the intensity of UV light transmitted through the cell and the solute concentration is given by Beer’s law, \ref{1} and \ref{2} .
$I_{T} \ =\ I_{0} e^{-kcl} \label{1}$
$ln(I_{T})\ =\ ln(I_{0})\ -\ kcl \label{2}$
Where I0 is the intensity of the light entering the cell, IT is the light transmitted through the cell, l is the path length of the cell, c is the concentration of the solute, and k is the molar absorption coefficient of the solute. UV detectors include the fixed wavelength UV detector and the multi-wavelength UV detector. The fixed wavelength UV detector has a sensitivity of 5 x 10^-8 g/mL, a linear dynamic range between 5 x 10^-8 and 5 x 10^-4 g/mL, and a response index between 0.98 and 1.02. The multi-wavelength UV detector has a sensitivity of 10^-7 g/mL, a linear dynamic range between 5 x 10^-7 and 5 x 10^-4 g/mL, and a response index from 0.97 to 1.03. UV detectors can be used effectively for reverse-phase separations and ion exchange chromatography. UV detectors have high sensitivity, are economically affordable, and are easy to operate. Thus the UV detector is the most common choice of detector for HPLC.
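To illustrate how the Beer's law expression above (\ref{1}) links a detector reading to solute concentration, a minimal Python sketch is given below. All numerical values (intensities, path length, molar absorption coefficient) are hypothetical and chosen only to show the arithmetic; they are not taken from any real instrument.

```python
import math

# Hypothetical UV flow-cell reading (illustrative values only)
I0 = 100.0   # intensity of light entering the cell (arbitrary units)
IT = 82.0    # intensity transmitted through the cell (same units)
l = 1.0      # path length of the cell, cm
k = 1.2e4    # assumed molar absorption coefficient of the solute, L mol^-1 cm^-1

# Beer's law: I_T = I_0 * exp(-k*c*l), so c = ln(I_0/I_T) / (k*l)
c = math.log(I0 / IT) / (k * l)
print(f"Estimated solute concentration: {c:.2e} mol/L")
```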
Another method, mass spectrometry, has certain advantages over other techniques. Mass spectra can be obtained rapidly; only a small amount (sub-μg) of sample is required for analysis, and the data provided by the spectra are very informative of the molecular structure. Mass spectrometry also has strong advantages of specificity and sensitivity compared with other detectors. The combination of HPLC-MS is oriented towards the specific detection and potential identification of chemicals in the presence of other chemicals. However, it is difficult to interface the liquid chromatography to a mass spectrometer, because all the solvents need to be removed first. Commonly used interfaces include electrospray ionization, atmospheric pressure photoionization, and thermospray ionization.
Parameters related to HPLC separation
Flow Rate
The flow rate indicates how fast the mobile phase travels across the column, and is often used for calculation of the consumption of the mobile phase in a given time interval. There are two measures: the volumetric flow rate U and the linear flow rate u. The two flow rates are related by \ref{3} , where A is the cross-sectional area of the channel open to flow, given by \ref{4} .
$U = Au \label{3}$
$A\ =\ (1/4) \pi \varepsilon d^{2} \label{4}$
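The short sketch below evaluates Equations \ref{3} and \ref{4} for a hypothetical column. The symbols ε and d are not defined in the text; here they are interpreted, as is conventional, as the porosity of the packed bed and the inner diameter of the column, and all numerical values are assumptions made only for the example.

```python
import math

# Hypothetical column and flow conditions (illustrative values only)
d = 0.46    # assumed column inner diameter, cm
eps = 0.7   # assumed total porosity of the packed bed
U = 1.0     # volumetric flow rate, mL/min (1 mL = 1 cm^3)

# Equation 4: cross-sectional area open to flow, A = (1/4) * pi * eps * d^2
A = 0.25 * math.pi * eps * d**2   # cm^2

# Equation 3: U = A * u, so the linear flow rate is u = U / A
u = U / A                         # cm/min
print(f"A = {A:.3f} cm^2, u = {u:.1f} cm/min")
```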
Retention Time
The retention time (tR) can be defined as the time from the injection of the sample to the time of compound elution, and it is taken at the apex of the peak that belongs to the specific molecular species. The retention time is determined by several factors including the structure of the specific molecule, the flow rate of the mobile phase, and the column dimensions. The dead time t0 is defined as the time for a non-retained molecular species to elute from the column.
Retention Volume
Retention volume (VR) is defined as the volume of the mobile phase flowing from the injection time until the corresponding retention time of a molecular species; the two are related by \ref{5} . The retention volume related to the dead time is known as the dead volume V0.
$V_{R} \ =\ U t_{R} \label{5}$
Migration Rate
The migration rate can be defined as the velocity at which the species moves through the column. The migration rate (uR) is inversely proportional to the retention time. Because only the fraction of molecules present in the mobile phase is moving at any instant, the migration rate is given by \ref{6} .
$u_{R} \ =\ u*V_{mo}/(V_{mo}+V_{st}) \label{6}$
Capacity Factor
Capacity factor (k) is the ratio of reduced retention time and the dead time, \ref{7} .
$k \ =\ (t_{R} - t_{0})/t_{0} \ =\ (V_{R} - V_{0})/V_{0} \label{7}$
Equilibrium Constant and Phase Ratio
In the separation, the molecules running through the column can also be considered as being in a continuous equilibrium between the mobile phase and the stationary phase. This equilibrium could be governed by an equilibrium constant K, defined as \ref{8} , in which Cmo is the molar concentration of the molecules in the mobile phase, and Cst is the molar concentration of the molecules in the stationary phase. The equilibrium constant K can also be written as \ref{9} .
$K\ =\ C_{st}/C_{mo} \label{8}$
$K\ =\ k(V_{0}/V_{st}) \label{9}$
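A short numerical example tying Equations \ref{5}, \ref{7}, and \ref{9} together is sketched below in Python. All inputs (retention time, dead time, flow rate, and phase volumes) are hypothetical values chosen only to demonstrate the calculations.

```python
# Hypothetical HPLC run (illustrative numbers only)
t_R = 6.5    # retention time of the analyte, min
t_0 = 1.3    # dead time of a non-retained species, min
U = 1.0      # volumetric flow rate, mL/min
V_st = 0.4   # assumed volume of stationary phase, mL

# Equation 5: retention volume V_R = U * t_R (and dead volume V_0 = U * t_0)
V_R = U * t_R
V_0 = U * t_0

# Equation 7: capacity factor k = (t_R - t_0) / t_0 = (V_R - V_0) / V_0
k = (t_R - t_0) / t_0

# Equation 9: equilibrium constant K = k * (V_0 / V_st)
K = k * (V_0 / V_st)

print(f"V_R = {V_R:.1f} mL, k = {k:.2f}, K = {K:.2f}")
```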
Advantage of HPLC
The most important aspect of HPLC is its high separation capacity, which enables the batch analysis of multiple components. Even if the sample consists of a mixture, HPLC allows the target components to be separated, detected, and quantified. Also, under appropriate conditions, it is possible to attain a high level of reproducibility with a coefficient of variation not exceeding 1%. It also offers high sensitivity with low sample consumption. HPLC has one advantage over GC in that analysis is possible for any sample that can be stably dissolved in the eluent and does not need to be vaporized. For this reason, HPLC is used much more frequently than GC in the fields of biochemistry and pharmaceuticals.
The discovery of supercritical fluids led to novel analytical applications in the fields of chromatography and extraction known as supercritical fluid chromatography (SFC) and supercritical fluid extraction (SFE). Supercritical fluid chromatography is accepted as a column chromatography method along with gas chromatography (GC) and high-performance liquid chromatography (HPLC). Due to the properties of supercritical fluids, SFC combines advantages of both GC and HPLC in one method. In addition, supercritical fluid extraction is an advanced analytical technique.
Definition and Formation of Supercritical Fluids
A supercritical fluid is the phase of a material above its critical temperature and critical pressure. Critical temperature is the temperature at which a gas cannot become liquid as long as there is no extra pressure; and critical pressure is the minimum amount of pressure needed to liquefy a gas at its critical temperature. Supercritical fluids combine useful properties of the gas and liquid phases, as they can behave like both a gas and a liquid in different respects. A supercritical fluid shows a gas-like characteristic when it fills a container and takes the shape of the container. The motion of the molecules is quite similar to that of gas molecules. On the other hand, a supercritical fluid behaves like a liquid because its density is near that of a liquid and, thus, a supercritical fluid shows a similarity to the dissolving effect of a liquid.
The characteristic properties of a supercritical fluid are density, diffusivity, and viscosity. The supercritical values for these properties fall between those of liquids and gases. Table \(1\) lists numerical values of these properties for a gas, a supercritical fluid, and a liquid.
Property | Gas | Supercritical fluid | Liquid
Density (g/cm3) | 0.6 x 10^-3 to 2.0 x 10^-3 | 0.2 to 0.5 | 0.6 to 2.0
Diffusivity (cm2/s) | 0.1 to 0.4 | 10^-3 to 10^-4 | 0.2 x 10^-5 to 2.0 x 10^-5
Viscosity (g/(cm·s)) | 1 x 10^-4 to 3 x 10^-4 | 1 x 10^-4 to 3 x 10^-4 | 0.2 x 10^-2 to 3.0 x 10^-2
Table \(1\) Supercritical fluid properties compared to liquids and gases
The formation of a supercritical fluid is the result of a dynamic equilibrium. When a material is heated to its specific critical temperature in a closed system, at constant pressure, a dynamic equilibrium is generated. This equilibrium includes the same number of molecules coming out of liquid phase to gas phase by gaining energy and going in to liquid phase from gas phase by losing energy. At this particular point, the phase curve between liquid and gas phases disappears and supercritical material appears.
In order to understand the definition of SF better, a simple phase diagram can be used. Figure \(1\) displays an ideal phase diagram. For a pure material, a phase diagram shows the fields where the material is in the form of solid, liquid, and gas in terms of different temperature and pressure values. Curves, where two phases (solid-gas, solid-liquid and liquid-gas) exist together, define the boundaries of the phase regions. These curves, for example, include sublimation for the solid-gas boundary, melting for the solid-liquid boundary, and vaporization for the liquid-gas boundary. Other than these binary existence curves, there is a point where all three phases are present together in equilibrium; the triple point (TP).
There is another characteristic point in the phase diagram, the critical point (CP). This point is obtained at critical temperature (Tc) and critical pressure (Pc). After the CP, no matter how much pressure or temperature is increased, the material cannot transform from gas to liquid or from liquid to gas phase. This form is the supercritical fluid form. Increasing temperature cannot result in turning to gas, and increasing pressure cannot result in turning to liquid at this point. In the phase diagram, the field above Tc and Pc values is defined as the supercritical region.
In theory, the supercritical region can be reached in two ways:
• Increasing the pressure above the Pc value of the material while keeping the temperature stable and then increasing the temperature above Tc value at a stable pressure value.
• Increasing the temperature first above Tc value and then increasing the pressure above Pc value.
The critical point is characteristic for each material, resulting from the characteristic Tc and Pc values for each substance.
Physical Properties of Supercritical Fluids
As mentioned above, SF shares some common features with both gases and liquids. This enables us to take advantage of a correct combination of the properties.
Density
Density characteristic of a supercritical fluid is between that of a gas and a liquid, but closer to that of a liquid. In the supercritical region, density of a supercritical fluid increases with increased pressure (at constant temperature). When pressure is constant, density of the material decreases with increasing temperature. The dissolving effect of a supercritical fluid is dependent on its density value. Supercritical fluids are also better carriers than gases thanks to their higher density. Therefore, density is an essential parameter for analytical techniques using supercritical fluids as solvents.
Diffusivity
Diffusivity of a supercritical fluid can be about 100 times that of a liquid and 1/1,000 to 1/10,000 that of a gas. Because supercritical fluids have more diffusivity than a liquid, it stands to reason a solute can show better diffusivity in a supercritical fluid than in a liquid. Diffusivity increases with temperature and decreases with pressure. Increasing pressure causes supercritical fluid molecules to become closer to each other and decreases diffusivity in the material. The greater diffusivity gives supercritical fluids the chance to be faster carriers for analytical applications. Hence, supercritical fluids play an important role for chromatography and extraction methods.
Viscosity
Viscosity for a supercritical fluid is almost the same as that of a gas, being approximately 1/10 of that of a liquid. Thus, supercritical fluids are less resistant than liquids towards components flowing through. The viscosity of supercritical fluids is also distinguished from that of liquids in that temperature has little effect on liquid viscosity, whereas it can dramatically influence supercritical fluid viscosity.
These properties of viscosity, diffusivity, and density are related to each other. The change in temperature and pressure can affect all of them in different combinations. For instance, increasing pressure causes a rise for viscosity and rising viscosity results in declining diffusivity.
Super Fluid Chromatography (SFC)
Just as supercritical fluids combine the benefits of liquids and gases, SFC brings the advantages and strong aspects of HPLC and GC together. SFC can be more advantageous than HPLC and GC for compounds which decompose at the high temperatures used in GC and which lack functional groups detectable by HPLC detection systems.
There are three major qualities for column chromatographies:
• Selectivity.
• Efficiency.
• Sensitivity.
Generally, HPLC has better selectivity than SFC owing to changeable mobile phases (especially during a particular experimental run) and a wide range of stationary phases. Although SFC does not have the selectivity of HPLC, it has good quality in terms of sensitivity and efficiency. SFC enables change of some properties during the chromatographic process. This tuning ability allows the optimization of the analysis. Also, SFC has a broader range of detectors than HPLC. SFC surpasses GC for the analysis of easily decomposable substances; these materials can be used with SFC due to its ability to work at lower temperatures than GC.
Instrumentation for SFC
As it can be seen in Figure \(2\) SFC has a similar setup to an HPLC instrument. They use similar stationary phases with similar column types. However, there are some differences. Temperature is critical for supercritical fluids, so there should be a heat control tool in the system similar to that of GC. Also, there should be a pressure control mechanism, a restrictor, because pressure is another essential parameter in order for supercritical fluid materials to be kept at the required level. A microprocessor mechanism is placed in the instrument for SFC. This unit collects data for pressure, oven temperature, and detector performance to control the related pieces of the instrument.
Stationary Phase
SFC columns are similar to HPLC columns in terms of coating materials. Open-tubular columns and packed columns are the two most common types used in SFC. Open-tubular ones are preferred and they have similarities to HPLC fused-silica columns. This type of column contains an internal coating of a cross-linked siloxane material as a stationary phase. The thickness of the coating can be 0.05-1.0 μm. The length of the column can range from 10 to 20 m.
Mobile Phases
There is a wide variety of materials used as mobile phase in SFC. The mobile phase can be selected from the solvent groups of inorganic solvents, hydrocarbons, alcohols, ethers, halides; or can be acetone, acetonitrile, pyridine, etc. The most common supercritical fluid which is used in SFC is carbon dioxide because its critical temperature and pressure are easy to reach. Additionally, carbon dioxide is low-cost, easy to obtain, inert towards UV, non-poisonous and a good solvent for non-polar molecules. Other than carbon dioxide, ethane, n-butane, N2O, dichlorodifluoromethane, diethyl ether, ammonia, tetrahydrofuran can be used. Table \(2\) shows select solvents and their Tc and Pc values.
Carbon dioxide (CO2): Tc = 31.1 °C, Pc = 72 bar
Nitrous oxide (N2O): Tc = 36.5 °C, Pc = 70.6 bar
Ammonia (NH3): Tc = 132.5 °C, Pc = 109.8 bar
Ethane (C2H6): Tc = 32.3 °C, Pc = 47.6 bar
n-Butane (C4H10): Tc = 152 °C, Pc = 70.6 bar
Diethyl ether (Et2O): Tc = 193.6 °C, Pc = 63.8 bar
Tetrahydrofuran (THF, C4H8O): Tc = 267 °C, Pc = 50.5 bar
Dichlorodifluoromethane (CCl2F2): Tc = 111.7 °C, Pc = 109.8 bar
Table \(2\) Properties of some solvents as mobile phase at the critical point.
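As a simple illustration of how the critical constants in Table \(2\) are used, the Python sketch below checks whether a chosen temperature and pressure place a solvent above both its Tc and Pc (i.e., in the supercritical region of the phase diagram). The operating conditions in the example are hypothetical.

```python
# Critical constants taken from Table 2: (Tc in deg C, Pc in bar)
CRITICAL_POINTS = {
    "CO2":  (31.1, 72.0),
    "N2O":  (36.5, 70.6),
    "NH3":  (132.5, 109.8),
    "C2H6": (32.3, 47.6),
}

def is_supercritical(solvent, T_celsius, P_bar):
    """Return True if the state point lies above both Tc and Pc for the solvent."""
    Tc, Pc = CRITICAL_POINTS[solvent]
    return T_celsius > Tc and P_bar > Pc

# Example: hypothetical SFC operating conditions for CO2
print(is_supercritical("CO2", T_celsius=40.0, P_bar=100.0))  # True: supercritical
print(is_supercritical("CO2", T_celsius=25.0, P_bar=100.0))  # False: below Tc
```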
Detectors
One of the biggest advantage of SFC over HPLC is the range of detectors. Flame ionization detector (FID), which is normally present in GC setup, can also be applied to SFC. Such a detector can contribute to the quality of analyses of SFC since FID is a highly sensitive detector. SFC can also be coupled with a mass spectrometer, an UV-visible spectrometer, or an IR spectrometer more easily than can be done with an HPLC. Some other detectors which are used with HPLC can be attached to SFC such as fluorescence emission spectrometer or thermionic detectors.
Advantages of working with SFC
The physical properties of supercritical fluids, intermediate between those of liquids and gases, enable the SFC technique to combine the best aspects of HPLC and GC; for example, the lower viscosity of supercritical fluids makes SFC a faster method than HPLC. Lower viscosity leads to high flow speed for the mobile phase.
Thanks to the critical pressure of supercritical fluids, some fragile materials that are sensitive to high temperature can be analyzed through SFC. These materials can be compounds which decompose at high temperatures or materials which have low vapor pressure/volatility such as polymers and large biological molecules. High pressure conditions provide a chance to work with lower temperature than normally needed. Hence, the temperature-sensitive components can be analyzed via SFC. In addition, the diffusion of the components flowing through a supercritical fluid is higher than observed in HPLC due to the higher diffusivity of supercritical fluids over traditional liquids mobile phases. This results in better distribution into the mobile phase and better separation.
Applications of SFC
The applications of SFC range from food to environmental to pharmaceutical industries. In this manner, pesticides, herbicides, polymers, explosives and fossil fuels are all classes of compounds that can be analyzed. SFC can be used to analyze a wide variety of drug compounds such as antibiotics, prostaglandins, steroids, taxol, vitamins, barbiturates, non-steroidal anti-inflammatory agents, etc. Chiral separations can be performed for many pharmaceutical compounds. SFC is dominantly used for non-polar compounds because of the low efficiency of carbon dioxide, which is the most common supercritical fluid mobile phase, for dissolving polar solutes. SFC is used in the petroleum industry for the determination of total aromatic content analysis as well as other hydrocarbon separations.
Supercritical Fluid Extraction (SFE)
The unique physical properties of supercritical fluids, with density, diffusivity, and viscosity values between those of liquids and gases, enable supercritical fluid extraction to be used for extraction processes that cannot be performed by liquids, due to their high density and low diffusivity, or by gases, due to their inadequate density for extracting and carrying the components out.
Complicated mixtures containing many components should be subject to an extraction process before they are separated via chromatography. An ideal extraction procedure should be fast, simple, and inexpensive. In addition, sample loss or decomposition should not be experienced at the end of the extraction. Following extraction, there should be a quantitative collection of each component. Ideally, the amount of unwanted materials coming from the extraction should be kept to a minimum and be easily disposable; the waste should not be harmful for environment. Unfortunately, traditional extraction methods often do not meet these requirements. In this regard, SFE has several advantages in comparison with traditional techniques.
The extraction speed is dependent on the viscosity and diffusivity of the mobile phase. With a low viscosity and high diffusivity, the component which is to be extracted can pass through the mobile phase easily. The higher diffusivity and lower viscosity of supercritical fluids, as compared to regular extraction liquids, help the components to be extracted faster than other techniques. Thus, an extraction process can take just 10-60 minutes with SFE, while it would take hours or even days with classical methods.
The dissolving efficiency of a supercritical fluid can be altered by temperature and pressure. In contrast, liquids are not affected by temperature and pressure changes as much. Therefore, SFE has the potential to be optimized to provide a better dissolving capacity.
In classical methods, heating is required to get rid of the extraction liquid. However, this step causes temperature-sensitive materials to decompose. For SFE, when the critical pressure is removed, a supercritical fluid transforms to the gas phase. Because supercritical fluid solvents are chemically inert, harmless, and inexpensive, they can be released to the atmosphere without leaving any waste. Through this, extracted components can be obtained much more easily and sample loss is minimized.
Instrumentation of SFE
The necessary apparatus for a SFE setup is simple. Figure \(3\) depicts the basic elements of a SFE instrument, which is composed of a reservoir of supercritical fluid, a pressure tuning injection unit, two pumps (to take the components in the mobile phase in and to send them out of the extraction cell), and a collection chamber.
There are two principle modes to run the instrument:
• Static extraction.
• Dynamic extraction.
In dynamic extraction, the second pump sending the materials out to the collection chamber is always open during the extraction process. Thus, the mobile phase reaches the extraction cell and extracts components in order to take them out consistently.
In the static extraction experiment, there are two distinct steps in the process:
1. The mobile phase fills the extraction cell and interacts with the sample.
2. The second pump is opened and the extracted substances are taken out at once.
In order to choose the mobile phase for SFE, parameters taken into consideration include the polarity and solubility of the samples in the mobile phase. Carbon dioxide is the most common mobile phase for SFE. It has a capability to dissolve non-polar materials like alkanes. For semi-polar compounds (such as polycyclic aromatic hydrocarbons, aldehydes, esters, alcohols, etc.) carbon dioxide can be used as a single component mobile phase. However, for compounds which have polar characteristic, supercritical carbon dioxide must be modified by addition of polar solvents like methanol (CH3OH). These extra solvents can be introduced into the system through a separate injection pump.
Extraction Modes
There are two modes in terms of collecting and detecting the components:
• Off-line extraction.
• On-line extraction.
Off-line extraction is done by taking the mobile phase out with the extracted components and directing them towards the collection chamber. At this point, supercritical fluid phase is evaporated and released to atmosphere and the components are captured in a solution or a convenient adsorption surface. Then the extracted fragments are processed and prepared for a separation method. This extra manipulation step between extractor and chromatography instrument can cause errors. The on-line method is more sensitive because it directly transfers all extracted materials to a separation unit, mostly a chromatography instrument, without taking them out of the mobile phase. In this extraction/detection type, there is no extra sample preparation after extraction for separation process. This minimizes the errors coming from manipulation steps. Additionally, sample loss does not occur and sensitivity increases.
Applications of SFE
SFE can be applied to a broad range of materials such as polymers, oils and lipids, carbohydrates, pesticides, organic pollutants, volatile toxins, polyaromatic hydrocarbons, biomolecules, foods, flavors, pharmaceutical metabolites, explosives, and organometallics. Common industrial applications include the pharmaceutical and biochemical industry, the polymer industry, industrial synthesis and extraction, natural product chemistry, and the food industry.
Examples of materials analyzed in environmental applications: oils and fats, pesticides, alkanes, organic pollutants, volatile toxins, herbicides, nicotine, phenanthrene, fatty acids, and aromatic surfactants, in samples ranging from clay to petroleum waste, and from soil to river sediments. In food analyses: caffeine, peroxides, oils, acids, cholesterol, etc. are extracted from samples such as coffee, olive oil, lemon, cereals, wheat, potatoes and dog feed. Through industrial applications, the extracted materials vary from additives to different oligomers, and from petroleum fractions to stabilizers. Samples analyzed are plastics, PVC, paper, wood etc. Drug metabolites, enzymes, and steroids are extracted from plasma, urine, serum or animal tissues in biochemical applications.
Summary
Supercritical fluid chromatography and supercritical fluid extraction are techniques that take advantage of the unique properties of supercritical fluids. As such, they provide advantages over other related methods in both chromatography and extraction. Sometimes they are used as alternative analytical techniques, while other times they are used as complementary partners for binary systems. Both SFC and SFE demonstrate their versatility through the wide array of applications in many distinct domains in an advantageous way.
A popular and powerful tool in the chemical world, chromatography separates mixtures based on chemical properties – even some that were previously thought inseparable. It combines a multitude of pieces, concepts, and chemicals to form an instrument suited to specific separation. One form of chromatography that is often overlooked is that of supercritical fluid chromatography.
History
Supercritical fluid chromatography (SFC) begins its history in 1962 under the name “high pressure gas chromatography”. It started off slow and was quickly overshadowed by the development of high performance liquid chromatography (HPLC) and the already developed gas chromatography. SFC was not a popular method of chromatography until the late 1980s, when more publications began exemplifying its uses and techniques.
SFC was first reported by Klesper et al. They succeeded in separating thermally labile porphyrin mixtures on a polyethylene glycol stationary phase with two mobile phase units: dichlorodifluoromethane (CCl2F2) and monochlorodifluoromethane (CHClF2), as shown in Figure \(1\). Their results proved that a supercritical fluid, with its low viscosity but high diffusivity, functions well as a mobile phase.
After Klesper’s paper detailing his separation procedure, subsequent scientists aimed to find the perfect mobile phase and the possible uses for SFC. Using gases such as He, N2, CO2, and NH3, they examined purines, nucleotides, steroids, sugars, terpenes, amino acids, proteins, and many more substances for their retention behavior. They discovered that CO2 was an ideal supercritical fluid due to its low critical temperature of 31 °C and relatively low critical pressure of 72.8 atm. Extra advantages of CO2 included it being cheap, non-flammable, and non-toxic. CO2 is now the standard mobile phase for SFC.
In the development of SFC over the years, the technique underwent multiple trial-and-error phases. Open tubular capillary column SFC had the advantage of independently and cooperatively changing all three parameters (pressure, temperature, and modifier content) to a certain extent. Like any chromatography method, however, it had its drawbacks. Changing the pressure, the most important parameter, often required changing the flow velocity due to the constant diameter of the capillaries. Additionally, CO2, the ideal mobile phase, is non-polar, and its polarity could not be altered easily or with a gradient.
Over the years, many uses were discovered for SFC. It was identified as a useful tool in the separation of chiral compounds, drugs, natural products, and organometallics (see below for more detail). Most SFC systems currently involve a silica (or silica + modifier) packed column with a CO2 (or CO2 + modifier) mobile phase. Mass spectrometry is the most common tool used to analyze the separated samples.
Supercritical Fluids
What is a Supercritical Fluid?
As mentioned previously, the advantage to supercritical fluids is the combination of the useful properties from two phases: liquids and gases. Supercritical fluids are gas-like in the ways of expanding to fill a given volume, and the motions of the particles are close to that of a gas. On the side of liquid properties, supercritical fluids have densities near that of liquids and thus dissolve and interact with other particles, as you would expect of a liquid. To visualize phase changes in relation to pressure and temperature, phase diagrams are used as shown in Figure \(2\)
Figure \(2\) shows the stark differences between two phases in relation to the surrounding conditions. There exist two ambiguous regions. One of these is the point at which all three lines intersect: the triple point. This is the temperature and pressure at which all three states can exist in a dynamic equilibrium. The second ambiguous point comes at the end of the liquid/gas line, where it just ends. At this temperature and pressure, the pure substance has reached a point where it will no longer exist as just one phase or the other: it exists as a hybrid phase – a liquid and gas dynamic equilibrium.
Unique Properties of Supercritical Fluids
As a result of the dynamic liquid-gas equilibrium, supercritical fluids possess three unique qualities: increased density (on the scale of a liquid), increased diffusivity (similar to that of a gas), and lowered viscosity (on the scale of a gas). Table \(1\) shows the similarities in each of these properties. Remember, each of these explains a part of why SFC is an advantageous method of chemical separation.
Phase | Density (g/mL) | Diffusivity (cm2/s) | Dynamic Viscosity (g/(cm·s))
Gas | 1 x 10^-3 | 1 x 10^-1 | 1 x 10^-2
Liquid | 1.0 | 5 x 10^-6 | 1 x 10^-4
Supercritical Fluid | 3 x 10^-1 | 1 x 10^-3 | 1 x 10^-2
Table \(1\): Typical properties of gas, liquid, and supercritical fluid of typical organic compounds (order of magnitude).
Applying the Properties of Supercritical Fluids to Chromatography
How are these properties useful? An ideal mobile phase and solvent will do three things well: interact with other particles, carry the sample through the column, and quickly (but accurately) elute it.
Density, as a concept, is simple: the denser something is, the more likely it is to interact with particles it moves through. Density increases with pressure (given constant temperature) and changes markedly as a substance enters the supercritical fluid zone. Supercritical fluids are characterized by densities comparable to those of liquids, meaning they have a better dissolving effect and act as a better carrier gas. High densities among supercritical fluids are imperative for both their effect as solvents and their effect as carrier gases.
Diffusivity refers to how fast the substance can spread among a volume. With increased pressure comes decreased diffusivity (an inverse relationship) but with increased temperature comes increased diffusivity (a direct relationship related to their kinetic energy). Because supercritical fluids have diffusivity values between a gas and liquid, they carry the advantage of a liquid’s density, but the diffusivity closer to that of a gas. Because of this, they can quickly carry and elute a sample, making for an efficient mobile phase.
Finally, dynamic viscosity can be viewed as the resistance to other components flowing through, or intercalating themselves, in the supercritical fluid. Dynamic viscosity is hardly affected by temperature or pressure for liquids, whereas it can be greatly affected for supercritical fluids. With the ability to alter dynamic viscosity through temperature and pressure, the operator can determine how resistant their supercritical fluid should be.
Supercritical Properties of CO2
Because of its widespread use in SFC, it’s important to discuss what makes CO2 an ideal supercritical fluid. One of the biggest limitations to most mobile phases in SFC is getting them to reach the critical point. This means extremely high temperatures and pressures, which is not easily attainable. The best gases for this are ones that can achieve a critical point at relatively low temperatures and pressures.
As seen from Figure \(3\), CO2 has a critical temperature of approximately 31 °C and a critical pressure of around 73 atm. These are both relatively low numbers and are thus ideal for SFC. Of course, with every upside there exists a downside. In this case, CO2 lacks polarity, which makes it difficult to use its mobile phase properties to elute polar samples. This is readily fixed with a modifier, which will be discussed later.
The Instrument
SFC has a similar instrument setup to most other chromatography machines, notably HPLC. The functions of the parts are very similar, but it is important to understand them for the purposes of understanding the technique. Figure \(4\) shows a schematic representation of a typical apparatus.
Columns
There are two main types of columns used with SFC: open tubular and packed, as seen below. The columns themselves are nearly identical to HPLC columns in terms of material and coatings. Open tubular columns are the most commonly used and are coated with a cross-linked silica material (powdered quartz, SiO2) for a stationary phase. Column lengths vary, but usually fall between 10 and 20 meters, and the columns are coated with less than 1 µm of silica stationary phase. Figure \(5\) demonstrates the differences in the packing of the two columns.
Injector
Injectors act as the main site for the insertion of samples. There are many different kinds of injectors that depend on a multitude of factors. For packed columns, the sample must be small and the exact amount depends on the column diameter. For open tubular columns, larger volumes can be used. In both cases, there are specific injectors that are used depending on how the sample needs to be placed in the instrument. A loop injector is used mainly for preliminary testing. The sample is fed into a chamber that is then flushed with the supercritical fluid and pushed down the column. It uses a low-pressure pump before proceeding with the full elution at higher pressures. An inline injector allows for easy control of sample volume. A high-pressure pump forces the (specifically measured) sample into a stream of eluent, which proceeds to carry the sample through the column. This method allows for specific dilutions and greater flexibility. For samples requiring no dilution or immediate interaction with the eluent, an in-column injector is useful. This allows the sample to be transferred directly into the packed column and the mobile phase to then pass through the column.
Pump
The existence of a supercritical fluid, as discussed previously, depends on high temperatures and high pressures. The pump is responsible for delivering the high pressures. Pressurizing the gas (or liquid) causes the substance to become dense enough to exhibit the desired supercritical fluid behavior. Because pressure couples with heat to create the supercritical fluid, the two controls are usually very close together on the instrument.
Oven
The oven, as referenced before, exists to heat the mobile phase to its desired temperature. In the case of SFC, the desired temperature is always the critical temperature of the supercritical fluid. These ovens are precisely controlled and standard across SFC, HPLC, and GC.
Detector
So far, there has been one largely overlooked component of the SFC machine: the detector. Technically not a part of the chromatographic separation process, the detector still plays an important role: identifying the components of the solution. While the SFC aims to separate components with good resolution (high purity, no other components mixed in), the detector aims to define what each of these components is made of.
The two detectors most often found on SFC instruments are either flame ionization detectors (FID) or mass spectrometers (MS):
• FIDs operate through ionizing the sample in a hydrogen-powered flame. By doing so, they produce charged particles, which hit electrodes, and the particles are subsequently quantified and identified.
• MS operates through creating an ionized spray of the sample, and then separating the ions based on a mass/charge ratio. The mass/charge ratio is plotted against ion abundance and creates a “fingerprint” for the chemical identified. This chemical fingerprint is then matched against a database to isolate which compound it was. This can be done for each unique elution, rendering the SFC even more useful than if it were standing alone.
Sample
Generally speaking, samples need little preparation. The only major requirement is that it dissolves in a solvent less polar than methanol: it must have a dielectric constant lower than 33, since CO2 has a low polarity and cannot easily elute polar samples. To combat this, modifiers are added to the mobile phase.
Stationary Phase
The stationary phase is a neutral compound that acts as a source of “friction” for certain molecules in the sample as they slide through the column. Silica attracts polar molecules and thus the molecules attach strongly, holding until enough of the mobile phase has passed through to attract them away. The combination of the properties in the stationary phase and the mobile phase help determine the resolution and speed of the experiment.
Mobile Phase
The mobile phase (the supercritical fluid) pushes the sample through the column and elutes separate, pure, samples. This is where the supercritical fluid’s properties of high density, high diffusivity, and low viscosity come into play. With these three properties, the mobile phase is able to adequately interact with the sample, quickly push through it, and strongly plow through the sample to separate it out. The mobile phase also partly determines how it separates out: it will first carry out similar molecules, ones with similar polarities, and follow gradually with molecules with larger polarities.
Modifiers
Modifiers are added to the mobile phase to adjust its properties. As mentioned a few times previously, CO2 supercritical fluid lacks polarity. In order to add polarity to the fluid (without causing reactivity), a polar modifier will often be added. Modifiers usually raise the critical pressure and temperature of the mobile phase a little, but in return add polarity to the phase and result in a fully resolved sample. Unfortunately, with too much modifier, higher temperatures and pressures are needed and reactivity increases (which is dangerous and bad for the operator). Modifiers, such as ethanol or methanol, are used in small amounts as needed in the mobile phase in order to create a more polar fluid.
Advantages of Supercritical Fluid Chromatography
Clearly, SFC possesses some extraordinary potential as far as chromatography techniques go. It has some incredible capabilities that allow efficient and accurate resolution of mixtures. Below is a summary of its advantages and disadvantages stacked against other conventional (competing) chromatography methods.
Advantages over HPLC
• Because supercritical fluids have low viscosities the analysis is faster, there is a much lower pressure drop across the column, and open tubular columns can be used.
• Shorter column lengths are needed (10-20 m for SFC versus 15-60 m for HPLC) due to the high diffusivity of the supercritical fluid. More interactions can occur in a shorter span of time/distance.
• Resolving power is much greater (5x) than HPLC due to the high diffusivity of the supercritical fluid. More interactions result in better separation of the components in a shorter amount of time.
Advantages over GC
• Able to analyze many solutes with no derivatization since there is no need to convert most polar groups into nonpolar ones.
• Can analyze thermally labile compounds more easily with high resolution since it can provide faster analysis at lower temperatures.
• Can analyze solutes with high molecular weight due to the greater solubilizing power of the supercritical fluid.
General Disadvantages
• Cannot analyze extremely polar solutes due to relatively nonpolar mobile phase, CO2.
Applications
While the use of SFC has been mainly organic-oriented, there are still a few ways that inorganic compound mixtures are separated using the method. The two main ones, separation of chiral compounds (mainly metal-ligand complexes) and organometallics are discussed here.
Chiral Compounds
For chiral molecules, the procedures and choice of column in SFC are very similar to those used in HPLC. Packed with cellulose type chiral stationary phase (or some other chiral stationary phase), the sample flows through the chiral compound and only molecules with a matching chirality will stick to the column. By running a pure CO2 supercritical fluid mobile phase, the non-sticking enantiomer will elute first, followed eventually (but slowly) with the other one.
In the field of inorganic chemistry, a racemic mixture of Co(acac)3, both isomers shown in Figure \(6\) has been resolved using a cellulose-based chiral stationary phase. The SFC method was one of the best and most efficient instruments in analyzing the chiral compound. While SFC easily separates coordinate covalent compounds, it is not necessary to use such an extensive instrument to separate mixtures of it since there are many simpler techniques.
Organometallics
Many d-block organometallics are highly reactive and easily decompose in air. SFC offers a way to chromatograph mixtures of large, unusual organometallic compounds. Large cobalt and rhodium based organometallic compound mixtures have been separated using SFC (Figure \(7\) ) without exposing the compounds to air.
By using a stationary phase of siloxanes, oxygen-linked silicon particles with different substituents attached, the organometallics were resolved based on size and charge. Thanks to the non-polar, highly diffusive, and low-viscosity properties of a 100% CO2 supercritical fluid, the mixture was resolved and analyzed with a flame ionization detector. It was determined that the method was sensitive enough to detect impurities of 1%. Because the efficiency of SFC is so impressive, the potential for it in the organometallic field is huge. Identifying impurities down to 1% shows promise for not only preliminary data in experiments, but quality control as well.
Conclusion
While it may have its drawbacks, SFC remains an untapped resource in the ways of chromatography. The advantages to using supercritical fluids as mobile phases demonstrate how resolution can be increased without sacrificing time or increasing column length. Nonetheless, it is still a well-utilized resource in the organic, biomedical, and pharmaceutical industries. SFC shows promise as a reliable way of separating and analyzing mixtures.
Ion Chromatography is a method of separating ions based on their distinct retention rates in a given solid phase packing material. Given different retention rates for two anions or two cations, the elution time of each ion will differ, allowing for detection and separation of one ion before the other. Detection methods are separated between electrochemical methods and spectroscopic methods. This guide will cover the principles of retention rates for anions and cations, as well as describing the various types of solid-state packing materials and eluents that can be used.
Principles of Ion Chromatography
Retention Models in Anion Chromatography
The retention model for anionic chromatography can be split into two distinct models, one for describing eluents with a single anion, and the other for describing eluents with complexing agents present. Given an eluent anion or an analyte anion, two phases are observed, the stationary phase (denoted by S) and the mobile phase (denoted by M). As such, there is equilibrium between the two phases for both the eluent anions and the analyte anions that can be described by Equation \ref{1}.
$y*[A^{x-}_{M}]\ +\ x*[E^{y-}_{S}]\ \Leftrightarrow \ y*[A^{x-}_{S}]\ +\ x*[E^{y-}_{M}] \label{1}$
This yields an equilibrium constant as given in Equation \ref{2} .
$K_{A,E} = \frac{ [A^{x-}_{S}]^{y} [E^{y-}_{M}]^{x} \gamma ^{y} _{A^{x-}_{S} } \gamma ^{x} _{E^{y-}_{M}} }{ [A^{x-}_{M}] ^{y} [E^{y-}_{S}]^{x} \gamma ^{y} _{A^{x-}_{M}} \gamma ^{x} _{E^{y-}_{S}}} \label{2}$
Given the activity of the two ions cannot be found in the stationary or mobile phases, the activity coefficients are set to 1. Two new quantities are then introduced. The first is the distribution coefficient, DA, which is the ratio of analyte concentrations in the stationary phase to the mobile phase, Equation \ref{3} . The second is the retention factor, k1A, which is the distribution coefficient times the ratio of volume between the two phases, Equation \ref{4} .
$D_{A} \ =\ \frac{[A_{S}]}{[A_{M}]} \label{3}$
$k_{A}^{1} \ = \ D_{A} * \frac{V_{S}}{V_{M}} \label{4}$
Substituting the two quantities from Equation \ref{3} and Equation \ref{4} into Equation \ref{2} , the equilibrium constant can be written as Equation \ref{5}
$K_{A,E} \ = (k_{A}^{1} \frac{V_{M}}{V_{S}})^{y} * (\frac{[E_{M}^{y-} ]}{[E^{y-}_{S}]})^{x} \label{5}$
Given there is usually a large difference in concentrations between the eluent and the analyte (the eluent typically being present in an excess of several orders of magnitude), Equation \ref{5} can be re-written under the assumption that all of the solid phase packing material’s functional groups are taken up by Ey-. As such, the stationary Ey- can be substituted with the exchange capacity divided by the charge of Ey-. This yields Equation \ref{6}
$K_{A,E} \ = (k_{A}^{1} \frac{V_{M}}{V_{S}})^{y} * (\frac{Q}{y})^{-x} [E_{M}^{y-}]^{x} \label{6}$
Solving Equation \ref{6} for the retention factor, Equation \ref{7} is developed.
$k_{A}^{1} \ =\ \frac{V_{S}}{V_{M}} K_{A,E}^{1/y} (\frac{Q}{y})^{x/y} [E_{M}^{y-}]^{-x/y} \label{7}$
Equation \ref{7} shows the relationship between the retention factor and parameters like the eluent concentration and the exchange capacity, which allows parameters of the ion chromatography to be manipulated and the retention factor to be determined. Equation \ref{7} only works for a single analyte present, but a relationship for the selectivity between two analytes [A] and [B] can easily be determined.
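To show how Equation \ref{7} is used in practice, the Python sketch below computes the retention factor of a hypothetical divalent analyte eluted with a monovalent eluent at several eluent concentrations. Every numerical parameter (equilibrium constant, exchange capacity, phase ratio, concentrations) is an assumption made purely for illustration; the point is the dependence of the retention factor on the eluent concentration through the exponent x/y.

```python
# Hypothetical anion-exchange system (all parameter values are assumptions)
K_AE = 5.0      # ion-exchange equilibrium constant
Q = 0.05        # exchange capacity
V_ratio = 0.1   # phase volume ratio V_S / V_M
x, y = 2, 1     # charges of the analyte (x) and eluent (y) anions

def retention_factor(E_M):
    """Equation 7: k' = (V_S/V_M) * K^(1/y) * (Q/y)^(x/y) * [E_M]^(-x/y)."""
    return V_ratio * K_AE**(1 / y) * (Q / y)**(x / y) * E_M**(-x / y)

# Doubling the eluent concentration lowers k' by a factor of 2^(x/y) (here 4)
for E in (0.005, 0.010, 0.020):   # eluent concentration, mol/L
    print(f"[E] = {E:.3f} M  ->  k' = {retention_factor(E):6.2f}")
```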
First the exchange equilibrium between the two analytes is written as Equation \ref{8}
$z*[A^{x-}_{M}] \ +\ x*[B^{z-}_{S}] \Leftrightarrow z* [A^{x-}_{S}] \ +\ x*[B^{z-}_{M}] \label{8}$
The corresponding equilibrium constant can be written as Equation \ref{9} (ignoring activity):
$K_{A,B} \ = \frac{[A^{x-}_{S}]^{z} [B^{z-}_{M}]^{x}}{[A^{x-}_{M}]^{z} [B^{z-}_{S}]^{x}} \label{9}$
The selectivity can then be determined to be Equation \ref{10}
$\alpha _{A,B} \ = \frac{[A^{x-}_{S}][B^{z-}_{M}]}{[A^{x-}_{M}][B^{z-}_{S}]} \label{10}$
Equation \ref{10} can then be simplified into a logarithmic form as the following two equations:
$\log \alpha _{A,B} = \frac{1}{z} log K_{A,B} \ + \frac{x-z}{z} log \frac{ k_{A}^{1} V_{M}}{V_{S}} \label{11}$
$\log \alpha _{A,B} = \frac{1}{x} log K_{A,B} \ + \frac{x-z}{z} log \frac{ k_{A}^{1} V_{M}}{V_{S}} \label{12}$
When the two charges are the same, it can be seen that the selectivity depends only on the equilibrium constant and the charge. When the two charges are different, the selectivity also depends on the retention factor, so the retention factors of the two analytes are dependent upon each other.
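The point about equally charged analytes can be made concrete with the short sketch below, which evaluates Equation \ref{11} for x = z. The values of the exchange constant, the retention factor, and the phase ratio are hypothetical; with x = z the second term vanishes, so the computed selectivity equals KA,B regardless of the retention factor or phase ratio.

```python
import math

# Two analytes of equal charge (x = z = 1); all values are hypothetical
x, z = 1, 1
K_AB = 2.5      # equilibrium constant for the A/B exchange
k_A = 4.0       # retention factor of analyte A
V_ratio = 5.0   # V_M / V_S

# Equation 11: log(alpha) = (1/z) log K_AB + ((x - z)/z) log(k_A * V_M/V_S)
log_alpha = (1 / z) * math.log10(K_AB) + ((x - z) / z) * math.log10(k_A * V_ratio)
print(f"alpha_A,B = {10**log_alpha:.2f}")  # equals K_AB when x = z
```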
In situations where the eluent contains more than one anion, three models are used to account for the multiple anions in the eluent. The first is the dominant equilibrium model, in which one anion is so dominant in concentration that the other eluent anions are ignored. The dominant equilibrium model works best for multivalent analytes. The second is the effective charge model, where an effective charge of the eluent anions is found, and a relationship similar to Equation \ref{7} is found with the effective charge. The effective charge model works best with monovalent analytes. The third is the multiple eluent species model, where Equation \ref{13} describes the retention factor:
$\log k_{A}^{1} \ =\ C_{3} - (\frac{X_{1}}{a} + \frac{X_{2}}{b} + \frac{X_{3}}{c}) -\ \log C_{P} \label{13}$
C3 is a constant that includes the phase volume ratio between the stationary and mobile phases, the equilibrium constant, and the exchange capacity. CP is the total concentration of the eluent species. X1, X2, and X3 correspond to the shares of a particular eluent anion in the retention of the analyte.
Retention Models of Cation Chromatography
For eluents with a single cation and analytes that are alkaline earth metals, heavy metals or transition metals, a complexing agent is used to bind with the metal during chromatography. This introduces the quantity αM to the retention factor calculations, where αM is the ratio of free metal ion to the total concentration of metal. Following a similar derivation to the single anion case, Equation \ref{14} is found.
$K_{A,E} = \ (\frac{ k_{A}^{1}}{ \alpha _{M} \phi } )^{y} * (\frac{Q}{\gamma })^{-x} [E ^{y+} _{M} ]^{x} \label{14}$
Solving for the retention coefficient, Equation \ref{15} is found.
$k_{A}^{1} = \alpha _{M} \phi \ K_{A,E} ^{\frac{1}{y} } (\frac{Q}{\gamma })^{\frac{x}{y} } [E_{M}^{y+}]^{- \frac{x}{y} } \label{15}$
From this expression, the retention factor of the cation can be determined from the eluent concentration and the ratio of free metal ions to the total concentration of the metal, which itself depends on the equilibrium of the metal ion with the complexing agent.
Solid Phase Packing Materials
The solid phase packing material used in the chromatography column is important to the exchange capacity of the anion or cation. There are many types of packing material, but all share a functional group that can bind either the anion or the cation complex. The functional group is mounted on a polymer surface or sphere, allowing large surface area for interaction.
Packing Material for Anion Chromatography
The primary functional group used for anion chromatography is the ammonium group. Amine groups are mounted on the polymer surface, and the pH is lowered to produce ammonium groups. As such, the exchange capacity is dependent on the pH of the eluent. To reduce the pH dependency, the protons on the ammonium are successively replaced with alkyl groups until all of the protons are replaced; the functional group then remains positively charged, but is pH independent. The two packing materials used in almost all anion chromatography are trimethylamine (NMe3, Figure $1$ ) and dimethylethanolamine (Figure $2$ ).
Packing Material for Cation Chromatography
Cation chromatography allows for the use of both organic polymer based and silica gel based packing material. In the silica gel based packing material, the most common packing material is a polymer-coated silica gel. The silicate is coated in polymer, which is held together by cross-linking of the polymer. Polybutadiene maleic acid (Figure $3$ ) is then used to create a weakly acidic material, allowing the analyte to diffuse through the polymer and exchange. Silica gel based packing material is limited by the pH dependent solubility of the silica gel and the pH dependent linking of the silica gel and the functionalized polymer. However, silica gel based packing material is suitable for separation of alkali metals and alkaline earth metals.
Organic polymer based packing material is not limited by pH in the way that the silica gel materials are, but it is not suitable for separation of alkali metals and alkaline earth metals. The most common functional group is the sulfonic acid group (Figure $4$ ), attached with a spacer between the polymer and the sulfonic acid group.
Detection Methods
Spectroscopic Detection Methods
Photometric detection in the UV region of the spectrum is a common method of detection in ion chromatography. Photometric methods limit the eluent possibilities, as the analyte must have a unique absorbance wavelength to be detectable. Cations that do not have a unique absorbance wavelength, i.e., where the eluent and other contaminants have similar UV-visible spectra, can be complexed to form UV-visible detectable compounds. This allows detection of the cation without interference from eluents.
Coupling the chromatography with various types of spectroscopy, such as mass spectrometry or IR spectroscopy, can be a useful method of detection. Inductively coupled plasma atomic emission spectroscopy is a commonly used method.
Direct Conductivity Methods
Direct conductivity methods take advantage of the change in conductivity that an analyte produces in the eluent, which can be modeled by Equation \ref{16} where equivalent conductivity is defined as Equation \ref{17} .
$\Delta K \ =\frac{(\Lambda _{A} \ -\ \Lambda _{g} ) * C_{s}}{1000} \label{16}$
$\Lambda \ =\frac{L}{A*R} * \frac{1}{C} \label{17}$
where L is the distance between two electrodes of area A, R is the resistance the ion creates, and C is the concentration of the ion. The conductivity can be plotted over time, and the peaks that appear represent different ions coming through the column, as described by Equation \ref{18}
$K_{peak} \ =\ (\Lambda _{A} \ -\ \Lambda _{g})*C_{A} \label{18}$
The values of the equivalent conductivity of the analyte and of common eluent ions can be found in Table $1$; a short numerical sketch using these values follows the table.
Table $1$ Equivalent conductivities of common cations and anions.
Cations $\Lambda ^{+} (S\ cm^{2} eq^{-1} )$ Anions $\Lambda ^{-} (S\ cm^{2} eq^{-1} )$
$H^{+}$ 350 $OH^{-}$ 198
$Li ^{+}$ 39 $F^{-}$ 54
$Na^{+}$ 50 $Cl^{-}$ 76
$K^{+}$ 74 $Br^{-}$ 78
$NH_{4}^{+}$ 73 $I^{-}$ 77
$1/2 Mg^{2+}$ 53 $NO^{-}_{2}$ 72
$1/2 Ca^{2+}$ 60 $NO^{-}_{3}$ 71
$1/2Sr^{2+}$ 59 $HCO_{3}^{-}$ 45
$1/2 Ba^{2+}$ 64 $1/2 CO_{3}^{2-}$ 72
$1/2 Zn^{2+}$ 52 $H_{2}PO_{4}^{-}$ 33
$1/2 Hg^{2+}$ 53 $1/2 HPO_{4}^{2-}$ 57
$1/2 Cu^{2+}$ 55 $1/3 PO_{4}^{3-}$ 69
$1/2 Pb ^{2+}$ 71 $1/2 SO_{4}^{2-}$ 80
$1/2 Co ^{2+}$ 53 $CN^{-}$ 82
$1/3 Fe^{3+}$ 70 $SCN^{-}$ 66
$N(Et)^{4+}$ 33 Acetate 41
1/2 Phthalate 38
Propionate 36
Benzoate 32
Salicylate 30
1/2 Oxalate 74
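As a rough sketch of how Equation \ref{18} can be applied with the equivalent conductivities listed in Table $1$ (the analyte concentration and the choice of hydrogen carbonate as the eluent ion are assumptions made only for illustration):

```python
# Sketch of Equation (18): K_peak = (Lambda_analyte - Lambda_eluent) * C_A.
# Equivalent conductivities (S cm^2 eq^-1) are taken from Table 1; the analyte
# concentration and the eluent ion are assumed for illustration.

equiv_conductivity = {
    "Cl-": 76,
    "NO3-": 71,
    "1/2 SO4 2-": 80,
    "HCO3-": 45,   # hydrogen carbonate, a common eluent ion
}

eluent = "HCO3-"
C_A = 1.0e-4       # analyte concentration (eq/L), assumed

for analyte in ("Cl-", "NO3-", "1/2 SO4 2-"):
    K_peak = (equiv_conductivity[analyte] - equiv_conductivity[eluent]) * C_A
    print(f"{analyte:>10s}: K_peak = {K_peak:.2e} (relative conductivity units)")
```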
Eluents
The choice of eluent depends on many factors, namely, pH, buffer capacity, the concentration of the eluent, and the nature of the eluent’s reaction with the column and the packing material.
Eluents in Anion Chromatography
In non-suppressed anion chromatography, where the eluent and analyte are not altered between the column and the detector, there is a wide range of eluents to be used. In the non-suppressed case, the only issue that could arise is if the eluent impaired the detection ability (for instance, absorbing in a similar region of the UV spectrum as the analyte). As such, there are a number of commonly used eluents. Aromatic carboxylic acids are used in conductivity detection because of their low self-conductivity. Aliphatic carboxylic acids are used for UV/visible detection because they are UV transparent. Inorganic acids can only be used in photometric detection.
In suppressed anion chromatography, where the eluent and analyte are treated between the column and detection, fewer eluents can be used. The suppressor modifies the eluent and the analyte, reducing the self-conductivity of the eluent and possibly increasing the self-conductivity of the analyte. Only alkali hydroxides and carbonates, borates, hydrogen carbonates, and amino acids can be used as eluents.
Eluents in Cation Chromatography
The primary eluents used in cation chromatography of alkali metals and ammoniums are mineral acids such as HNO3. When the cation is multivalent, organic bases such as ethylenediamine (Figure $5$ ) serve as the main eluents. If both alkali metals and alkaline earth metals are present, hydrochloric acid or 2,3-diaminopropionic acid (Figure $6$ ) is used in combination with a pH variation. If the chromatography is unsuppressed, the direct conductivity measurement of the analyte will show up as a negative peak due to the high conductivity of the H+ in the eluent, but simple inversion of the data can be used to rectify this discrepancy.
If transition metals or H+ are the analytes in question, complexing carboxylic acids are used to suppress the charge of the analyte and to create photometrically detectable complexes, forgoing the need for direct conductivity as the detection method.
Capillary electrophoresis (CE) encompasses a family of electrokinetic separation techniques that uses an applied electric field to separate out analytes based on their charge and size. The basic principle is hinged upon that of electrophoresis, which is the motion of particles relative to a fluid (electrolyte) under the influence of an electric field. The founding father of electrophoresis, Arne W. K. Tiselius (Figure $1 a$ ), first used electrophoresis to separate proteins, and he went on to win a Nobel Prize in Chemistry in 1948 for his work on both electrophoresis and adsorption analysis. However, it was Stellan Hjerten (Figure $1 b$ ) who worked under Arne W. K. Tiselius, who pioneered work in CE in 1967, although CE was not well recognized until 1980 when James W. Jorgenson (Figure $1 c$ ) and Krynn D. Lukacs published a series of papers describing this new technique.
Instrument Overview
The main components of CE are shown in Figure $2$. The electric circuit of the CE is the heart of the instrument.
Injection Methods
The samples that are studied in CE are mainly liquid samples. A typical capillary column has an inner diameter of 50 μm and a length of 25 cm. Because the column can only contain a minimal amount of running buffer, only small sample volumes can be tested (nL to μL). The samples are introduced mainly by two injection methods: hydrodynamic and electrokinetic injection. The two methods are summarized in Table $1$. A disadvantage of electrokinetic injection is that the composition of the injected sample may not be the same as the composition of the original sample. This is because the injection method depends on the electrophoretic and electroosmotic mobility of the species in the sample. However, both injection methods depend on the temperature and the viscosity of the solution. Hence, it is important to control both parameters when a reproducible volume of sample injection is desired. It is advisable to use internal standards instead of external standards when performing quantitative analysis on the samples, as it is hard to control both the temperature and viscosity of the solution.
Injection Methods Working Principle
Hydrodynamic Injection The sample vial is enclosed in a chamber with one end of the fixed capillary column immersed in it. Pressure is then applied to the chamber for a fixed period so that the sample can enter the capillary. After the sample has been introduced, the capillary is withdrawn and then re-immersed into the source reservoir, and separation takes place.
Electrokinetic Injection The sample is enclosed in a chamber with one end of the capillary column immersed in it and an electrode present. The electric field is applied, and the sample enters the capillary. After the sample has been introduced, the capillary is withdrawn and then re-immersed into the source reservoir, and separation takes place.
Table $1$ The working principle of the two injection methods used in CE.
Column
After the samples have been injected, the capillary column is used as the main medium to separate the components. The capillary column used in CE shares the same characteristics as the capillary column used in gas chromatography (GC); however, the most critical components of the CE column are:
• the inner diameter of the capillary,
• the total length of the capillary,
• the length of the column from the injector to the detector.
Solvent Buffer
The solvent buffer carries the sample through the column. It is crucial to employ a good buffer as a successful CE experiment is hinged upon this. CE is based on the separation of charges in an electric field. Therefore, the buffer should either sustain the pre-existing charge on the analyte or enable the analyte to obtain a charge, and it is important to consider the pH of the buffer before using it.
Applied Voltage (kV)
The applied voltage is important in the separation of the analytes as it drives the movement of the analyte. It is important that it is not too high as it may become a safety concern.
Detectors
Analytes that have been separated after applying the voltage can be detected by many detection methods. The most common method is UV-visible absorbance. The detection takes place across the capillary, with a small portion of the capillary acting as the detection cell. The on-tube detection cell is usually made optically transparent by scraping off the polyimide coating and coating it with another optically transparent material so that the capillary does not break easily. For species that do not have a chromophore, a chromophore can be added to the buffer solution. When the analyte passes by, there is a decrease in signal, and this decrease corresponds to the amount of analyte present. Other common detection techniques employable in CE are fluorescence and mass spectrometry (MS).
Theory
In CE, the sample is introduced into the capillary by the above-mentioned methods. A high voltage is then applied, causing the ions of the sample to migrate towards the electrode in the destination reservoir, in this case the cathode. The migration and separation of sample components are determined by two factors: electrophoretic mobility and electroosmotic mobility.
Electrophoretic Mobility
The electrophoretic mobility, $μ_{ep}$, is inherently dependent on the properties of the solute and the medium in which the solute is moving. Essentially, it is a constant value that can be calculated as given by \ref{1}, where $q$ is the solute's charge, $η$ is the buffer viscosity and $r$ is the solute radius.
$\mu _{ep} = \dfrac{q}{6\pi \eta r} \label{1}$
The electrophoretic velocity, $v_{ep}$, is dependent on the electrophoretic mobility and the applied electric field, $E$ (\ref{2}).
$\nu _{ep} = \mu _{ep} E \label{2}$
Thus, when solutes have a larger charge-to-size ratio the electrophoretic mobility and velocity will increase. Cations and anions move in opposite directions, corresponding to the sign of their electrophoretic mobility, which is a result of their charge. Neutral species, which have no charge, have no electrophoretic mobility.
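A minimal sketch of Equation \ref{1} and Equation \ref{2} for a hypothetical small cation is shown below; the charge, hydrated radius, buffer viscosity, voltage, and capillary length are all assumed values of a typical order of magnitude:

```python
import math

# Sketch of Equations (1) and (2): electrophoretic mobility and velocity.
# All input values are assumed, typical-order numbers for a small singly charged ion.

e   = 1.602e-19          # elementary charge (C)
q   = 1 * e              # solute charge: singly charged cation, assumed
eta = 1.0e-3             # buffer viscosity (Pa s), roughly that of water, assumed
r   = 2.0e-10            # hydrated solute radius (m), assumed

mu_ep = q / (6 * math.pi * eta * r)   # Equation (1), m^2 V^-1 s^-1

V     = 25e3             # applied voltage (V), assumed
L_cap = 0.50             # capillary length (m), assumed
E     = V / L_cap        # field strength (V/m)
v_ep  = mu_ep * E        # Equation (2), m/s

print(f"mu_ep = {mu_ep:.2e} m^2 V^-1 s^-1")
print(f"v_ep  = {v_ep:.2e} m/s")
```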
Electroosmotic Mobility
The second factor that controls the migration of the solute is the electroosmotic flow. With zero charge, it is expected that the neutral species should remain stationary. However, under normal conditions, the buffer solution moves towards the cathode as well. The cause of the electroosmotic flow is the electric double layer that develops at the silica solution interface.
At pH more than 3, the abundant silanol (-OH) groups present on the inner surface of the silica capillary, de-protonate to form negatively charged silanate ions (-SiO-). The cations present in the buffer solution will be attracted to the silanate ions and some of them will bind strongly to it forming a fixed layer. The formation of the fixed layer only partially neutralizes the negative charge on the capillary walls. Hence, more cations than anions will be present in the layer adjacent to the fixed layer, forming the diffuse layer. The combination of the fixed layer and diffuse layer is known as the double layer as shown in Figure $3$. The cations present in the diffuse layer will migrate towards the cathode, as these cations are solvated the solution will also flow with it, producing the electroosmotic flow. The anions present in the diffuse layer are solvated and will move towards the anode. However, as there are more cations than anions the cations will push the anions together with it in the direction of the cathode. Hence, the electroosmotic flow moves in the direction of the cathode.
The electroosmotic mobility, μeof, is described by \ref{3} where ξ is the zeta potential, ε is the buffer dielectric constant and η is the buffer viscosity. The electroosmotic velocity, veof, is the rate at which the buffer moves through the capillary is given by \ref{4} .
$\mu _{eof} \ =\ \frac{\zeta \varepsilon }{4\pi \eta } \label{3}$
$\nu _{eof}\ =\ \mu _{eof}E \label{4}$
Zeta Potential
The zeta potential, ξ, also known as the electrokinetic potential, is the electric potential at the interface of the double layer. Hence, in our case, it is the potential of the diffuse layer that is at a finite distance from the capillary wall. The zeta potential is mainly affected by, and directly proportional to, two factors:
1. The thickness of the double layer. A higher concentration of cations, possibly due to an increase in the buffer's ionic strength, leads to a decrease in the thickness of the double layer. As the thickness of the double layer decreases, the zeta potential decreases, which results in a decrease of the electroosmotic flow.
2. The charge on the capillary walls. A greater density of the silanate ions corresponds to a larger zeta potential. The formation of silanate ions is pH dependent. Hence, at pH less than 2 there is a decrease in the zeta potential and the electroosmotic flow as the silanol exists in its protonated form. However, as the pH increases, there are more silanate ions formed causing an increase in zeta potential and hence, the electroosmotic flow.
Order of Elution
Electroosmotic flow of the buffer is generally greater than the electrophoretic flow of the analytes. Hence, even the anions would move to the cathode as illustrated in Figure $4$. Small, highly charged cations would be the first to elute before larger cations with lower charge. This is followed by the neutral species which elutes as one band in the middle. The larger anions with low charge elute next and lastly, the highly charged small anion would have the longest elution time. This is clearly portrayed in the electropherogram in Figure $5$.
Optimizing the CE Experiment
There are several components that can be varied to optimize the electropherogram obtained from CE. Hence, for any given setup certain parameters should be known:
• the total length of the capillary (L),
• the length the solutes travel from the start to the detector (l),
• the applied voltage (V).
Reduction in Migration Time, tmn
To shorten the analysis time, a higher voltage can be used or a shorter capillary tube can be used. However, it is important to note that the voltage cannot be arbitrarily high as it will lead to joule heating. Another possibility is to increase μeof by increasing pH or decreasing the ionic strength of the buffer, \ref{5} .
$t_{mn} \ =\ \frac{l\ L}{(\mu _{ep} \ +\ \mu_{eof}) V } \label{5}$
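A short numerical sketch of Equation \ref{5} is given below; the capillary dimensions, mobilities, and voltage are assumed, order-of-magnitude values rather than data from the text:

```python
# Sketch of Equation (5): t_mn = (l * L) / ((mu_ep + mu_eof) * V).
# All values are assumed, order-of-magnitude numbers.

L_tot  = 0.50      # total capillary length (m), assumed
l_det  = 0.40      # length from the injector to the detector (m), assumed
V      = 25e3      # applied voltage (V), assumed
mu_ep  = 3.0e-8    # electrophoretic mobility (m^2 V^-1 s^-1), assumed
mu_eof = 6.0e-8    # electroosmotic mobility (m^2 V^-1 s^-1), assumed

t_mn = (l_det * L_tot) / ((mu_ep + mu_eof) * V)
print(f"t_mn = {t_mn:.0f} s (about {t_mn/60:.1f} min)")
```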
Efficiency
In chromatography, the efficiency is given by the number of theoretical plates, N. In CE, there exists a similar parameter, \ref{6}, where D is the solute's diffusion coefficient. Efficiency increases with an increase in applied voltage: the solute spends less time in the capillary, so there is less time for the solute to diffuse. Generally, for CE, N will be very large.
$N\ =\frac{l^{2}}{2Dt_{mn}} = \frac{\mu _{tot} V l}{2DL} \label{6}$
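A companion sketch of Equation \ref{6}, again using assumed values, illustrates why N in CE is typically very large:

```python
# Sketch of Equation (6): N = mu_tot * V * l / (2 * D * L).
# All values are assumed for illustration.

mu_tot = 9.0e-8    # total mobility, mu_ep + mu_eof (m^2 V^-1 s^-1), assumed
V      = 25e3      # applied voltage (V), assumed
l_det  = 0.40      # length to the detector (m), assumed
L_tot  = 0.50      # total capillary length (m), assumed
D      = 1.0e-9    # solute diffusion coefficient (m^2 s^-1), assumed

N = (mu_tot * V * l_det) / (2 * D * L_tot)
print(f"N = {N:.1e} theoretical plates")   # of order 10^5 - 10^6, i.e. very large
```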
Resolution Between Two Peaks
The resolution between two peaks, R, is defined by \ref{7} where Δv is the difference in velocity of two solutes and ṽ is the average velocity of two solutes.
$R= \frac{\sqrt{N} }{4} \times \frac{\Delta v}{ \tilde{\nu } } \label{7}$
Substituting the equation by N gives \ref{8}
$R\ = 0.177(\mu _{ep,1} \ -\ \mu _{ep,2}) \sqrt{ \frac{V}{D(\mu _{av} + \mu _{eof})} } \label{8}$
Therefore, increasing the applied voltage, V, will increase the resolution. However, it is not very effective, as a 4-fold increase in applied voltage would only give a 2-fold increase in resolution. In addition, an increase in N, the number of theoretical plates, would result in better resolution.
Selectivity
In chromatography, selectivity, α, is defined as the ratio of the two retention factors of the solute. This is the same for CE, \ref{9} , where t2 and t1 are the retention times for the two solutes such that, α is more than 1.
$\alpha =\frac{t_{2}}{t_{1}} \label{9}$
Selectivity can be improved by adjusting the pH of the buffer solution. The purpose is to change the charge of the species being eluted.
Comparison Between CE and HPLC
CE, unlike high-performance liquid chromatography (HPLC), accommodates many samples and tends to have better resolution and efficiency. A comparison between the two methods is given in Table $2$.
CE HPLC
Wider selection of analyte to be analyzed Limited by the solubility of the sample
Higher efficiency, no stationary mass transfer term as there is no stationary phase Efficiency is lowered due to the stationary mass transfer term (equilibration between the stationary and mobile phase)
Electroosmotic flow profile in the capillary is flat as a result no band broadening. Better peak resolution and sharper peaks Rounded laminar flow profile that is common in pressure driven systems such as HPLC. Resulting in broader peaks and lower resolution
Can be coupled to most detectors depending on application Some detectors require the solvent to be changed and prior modification of the sample before analysis
Greater peak capacity as it uses a very large number of theoretical plates, N The peak capacity is lowered as N is not as large
High voltages are used when carrying out the experiment No need for high voltage
Table $2$ Advantages and disadvantages of CE versus HPLC.
Micellar Electrokinetic Chromatography
CE allows the separation of charged particles, and it is mainly compared to ion chromatography. However, no separation takes place for neutral species in CE. Thus, a modified CE technique named micellar electrokinetic chromatography (MEKC) can be used to separate neutrals based on their size and their affinity to the micelle. In MEKC, a surfactant is added to the buffer solution at a concentration at which micelles will form. An example of a surfactant is sodium dodecyl sulfate (SDS), as seen in Figure $6$.
Neutral molecules are in dynamic equilibrium between the bulk solution and the interior of the micelle. In the absence of the micelle the neutral species would reach the detector at t0, but in the presence of the micelle it reaches the detector at tmc, where tmc is greater than t0. The longer the neutral molecule remains in the micelle, the longer its migration time. Thus small, non-polar neutral species that favor interaction with the interior of the micelle take a longer time to reach the detector than large, polar species. Anionic, cationic and zwitterionic surfactants can be added to change the partition coefficient of the neutral species. Cationic surfactants result in positive micelles that move in the direction of the electroosmotic flow, which enables them to move faster towards the cathode. However, due to the fast migration, it is possible that insufficient time is given for the neutral species to interact with the micelle, resulting in poor separation. Thus, all factors must be considered before choosing the right surfactant. The mechanism of separation in MEKC and liquid chromatography is the same: both depend on the partition coefficient of the species between the mobile phase and the stationary phase. The main difference lies in the pseudo-stationary phase in MEKC, the micelles. The micelle, which can be considered the stationary phase in MEKC, moves at a slower rate than the mobile ions.
Case Study: The Use of CE in Separation of Quantum Dots
Quantum dots (QDs) are semiconductor nanocrystals that lie in the size range of 1-10 nm, and they have different electrophoretic mobilities due to their varying sizes and surface charges. CE can be used to separate and characterize such species, and a method to characterize and separate CdSe QDs in aqueous medium has been developed. The QDs were synthesized with an outer layer of trioctylphosphine (TOP, Figure $7 a$) and trioctylphosphine oxide (TOPO, Figure $7 b$), making the surface of the QD hydrophobic. The background electrolyte solution used was SDS, in order to make the QDs soluble in water and form a QD-TOPO/TOP-SDS complex. Different sizes of CdSe were used and the separation was with respect to the charge-to-mass ratio of the complexes. It was concluded from the study that the QDs with the larger CdSe core (i.e., the larger charge-to-mass ratio) eluted last. The electropherogram from the study is shown in Figure $8$, from which it is visible that good separation had taken place by using CE. Laser-induced fluorescence detection was used, the buffer system was SDS, and the pH of the system was fixed at 6.5. The pH is highly important in this case as the stability of the system and the separation depend on it.
• 4.1: Magnetism
The magnetic moment of a material is the incomplete cancellation of the atomic magnetic moments in that material. Electron spin and orbital motion both have magnetic moments associated with them but in most atoms the electronic moments are oriented usually randomly so that overall in the material they cancel each other out; this is called diamagnetism.
• 4.2: IR Spectroscopy
Infrared spectroscopy is based on molecular vibrations caused by the oscillation of molecular dipoles. Bonds have characteristic vibrations depending on the atoms in the bond, the number of bonds and the orientation of those bonds with respect to the rest of the molecule. Thus, different molecules have specific spectra that can be collected for use in distinguishing products or identifying an unknown substance (to an extent.)
• 4.3: Raman Spectroscopy
Raman spectroscopy is a powerful tool for determining chemical species. As with other spectroscopic techniques, Raman spectroscopy detects certain interactions of light with matter. In particular, this technique exploits the existence of Stokes and Anti-Stokes scattering to examine molecular structure.
• 4.4: UV-Visible Spectroscopy
Ultraviolet-visible (UV-vis) spectroscopy is used to obtain the absorbance spectra of a compound in solution or as a solid. What is actually being observed spectroscopically is the absorbance of light energy or electromagnetic radiation, which excites electrons from the ground state to the first singlet excited state of the compound or material. The UV-vis region of energy for the electromagnetic spectrum covers 1.5 - 6.2 eV which relates to a wavelength range of 800 - 200 nm.
• 4.5: Photoluminescence, Phosphorescence, and Fluorescence Spectroscopy
Photoluminescence spectroscopy is a contactless, nondestructive method of probing the electronic structure of materials. Light is directed onto a sample, where it is absorbed and imparts excess energy into the material in a process called photo-excitation. One way this excess energy can be dissipated by the sample is through the emission of light, or luminescence. In the case of photo-excitation, this luminescence is called photoluminescence.
• 4.6: Mössbauer Spectroscopy
In 1957 Rudolf Mössbauer achieved the first experimental observation of the resonant absorption and recoil-free emission of nuclear γ-rays in solids during his graduate work at the Institute for Physics of the Max Planck Institute for Medical Research in Heidelberg, Germany. Mössbauer received the 1961 Nobel Prize in Physics for his research in resonant absorption of γ-radiation and the discovery of recoil-free emission, a phenomenon that is named after him. The Mössbauer effect is the basis of Mössbauer spectroscopy.
• 4.7: NMR Spectroscopy
Nuclear magnetic resonance spectroscopy (NMR) is a widely used and powerful method that takes advantage of the magnetic properties of certain nuclei. The basic principle behind NMR is that some nuclei exist in specific nuclear spin states when exposed to an external magnetic field.
• 4.8: EPR Spectroscopy
Electron paramagnetic resonance spectroscopy (EPR) is a powerful tool for investigating paramagnetic species, including organic radicals, inorganic radicals, and triplet states. The basic principles behind EPR are very similar to the more ubiquitous nuclear magnetic resonance spectroscopy (NMR), except that EPR focuses on the interaction of an external magnetic field with the unpaired electron(s) in a molecule, rather than the nuclei of individual atoms.
• 4.9: X-ray Photoelectron Spectroscopy
X-ray photoelectron spectroscopy (XPS), also called electron spectroscopy for chemical analysis (ESCA), is a method used to determine the elemental composition of a material’s surface. It can be further applied to determine the chemical or electronic state of these elements.
• 4.10: ESI-QTOF-MS Coupled to HPLC and its Application for Food Safety
Mass spectrometry (MS) is a detection technique by measuring mass-to-charge ratio of ionic species. The procedure consists of different steps. First, a sample is injected in the instrument and then evaporated. Second, species in the sample are charged by certain ionized methods, such as electron ionization (EI), electrospray ionization (ESI), chemical ionization (CI), matrix-assisted laser desorption/ionization (MALDI).
• 4.11: Mass Spectrometry
Mass spectrometry (MS) is a powerful characterization technique used for the identification of a wide variety of chemical compounds. At its simplest, MS is merely a tool for determining the molecular weight of the chemical species in a sample. However, with the high resolution obtainable from modern machines, it is possible to distinguish isomers, isotopes, and even compounds with nominally identical molecular weights. Libraries of mass spectra have been compiled which allow rapid identification
04: Chemical Speciation
Magnetics
Magnetic Moments
The magnetic moment of a material is the incomplete cancellation of the atomic magnetic moments in that material. Electron spin and orbital motion both have magnetic moments associated with them (Figure $1$), but in most atoms the electronic moments are usually oriented randomly so that overall in the material they cancel each other out (Figure $2$); this is called diamagnetism.
If the cancellation of the moments is incomplete then the atom has a net magnetic moment. There are many subclasses of magnetic ordering, such as para-, superpara-, ferro-, antiferro- or ferrimagnetism, which can be displayed in a material and which usually depend upon the strength and type of magnetic interactions and on external parameters such as temperature, crystal structure, atomic content, and the magnetic environment in which a material is placed.
The magnetic moments of atoms, molecules or formula units are often quoted in terms of the Bohr magneton, which is equal to the magnetic moment due to electron spin, \ref{1}.
$\mu _{B} \ =\ \frac{eh}{4\pi m} \ = \ 9.274 \times 10^{-24} J/T \label{1}$
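Equation \ref{1} can be checked directly from the physical constants (no assumptions beyond the constants themselves):

```python
import math

# Bohr magneton from Equation (1): mu_B = e*h / (4*pi*m_e).
e   = 1.602176634e-19     # elementary charge (C)
h   = 6.62607015e-34      # Planck constant (J s)
m_e = 9.1093837015e-31    # electron rest mass (kg)

mu_B = e * h / (4 * math.pi * m_e)
print(f"mu_B = {mu_B:.4e} J/T")   # ~9.274e-24 J/T
```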
Magnetization
The magnetism of a material, the extent to which a material is magnetic, is not a static quantity, but varies with the environment in which a material is placed. It is similar to the temperature of a material. For example, if a material is placed in an oven it will heat up to a temperature similar to that of the oven. However, the speed of heating of that material, and also that of cooling, are determined by the atomic structure of the material. The magnetization of a material is similar. When a material is placed in a magnetic field it may become magnetized to an extent and retain that magnetization after it is removed from the field. The extent of magnetization, the type of magnetization, and the length of time that a material remains magnetized depend again on the atomic makeup of the material.
Measuring a material's magnetism can be done on a micro or macro scale. Magnetism is measured over two parameters: direction and strength. Thus, magnetization is a vector quantity. The simplest form of a magnetometer is a compass. It measures the direction of a magnetic field. However, more sophisticated instruments have been developed which give a greater insight into a material's magnetism.
So what exactly are you reading when you observe the output from a magnetometer?
The magnetism of a sample is called the magnetic moment of that sample and will be called that from now on. The single value of magnetic moment for the sample is a combination of the magnetic moments of the atoms within the sample (Figure $3$ ), together with the type and level of magnetic ordering and the physical dimensions of the sample itself.
The "intensity of magnetization", M, is a measure of the magnetization of a body. It is defined as the magnetic moment per unit volume or
$M \ =\ m/V \label{2}$
with units of A m-1 (emu cm-3 in cgs notation).
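A small sketch of Equation \ref{2} for a hypothetical sample follows; the measured moment and sample volume are assumed values, and the cgs-to-SI conversion is included for reference:

```python
# Intensity of magnetization from Equation (2): M = m / V.
# The measured moment and sample volume are assumed values.

m_emu = 2.5e-3           # magnetic moment (emu), assumed
V_cm3 = 1.0e-3           # sample volume (cm^3), assumed

M_cgs = m_emu / V_cm3    # emu cm^-3
M_SI  = M_cgs * 1.0e3    # 1 emu cm^-3 = 10^3 A m^-1

print(f"M = {M_cgs:.2f} emu cm^-3 = {M_SI:.0f} A m^-1")
```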
A material contains many atoms and their arrangement affects the magnetization of that material. In Figure $4$ (a) a magnetic moment m is contained in unit volume. This has a magnetization of m A m-1. Figure $4$ (b) shows two such units, with the moments aligned parallel. The vector sum of moments is 2m in this case, but as both the moment and volume are doubled M remains the same. In Figure $4$ (c) the moments are aligned antiparallel. The vector sum of moments is now 0 and hence the magnetization is 0 A m-1.
Scenarios (b) and (c) are a simple representation of ferro- and antiferromagnetic ordering. Hence we would expect a large magnetization in a ferromagnetic material such as pure iron and a small magnetization in an antiferromagnet such as α-Fe2O3.
Magnetic Response
When a material is placed in a magnetic field it is affected in two ways:
1. Through its susceptibility.
2. Through its permeability
Magnetic Susceptibility
The concept of magnetic moment is the starting point when discussing the behavior of magnetic materials within a field. If you place a bar magnet in a field it will experience a torque or moment tending to align its axis in the direction of the field. A compass needle behaves in the same way. This torque increases with the strength of the poles and their distance apart. So the value of magnetic moment tells you, in effect, 'how big a magnet' you have.
If you place a material in a weak magnetic field, the magnetic field may not overcome the binding energies that keep the material in a non-magnetic state. This is because it is energetically more favorable for the material to stay exactly the same. However, if the strength of the magnetic field is increased, the torque acting on the smaller moments in the material increases, and it may become energetically preferable for the material to become magnetic. The reasons that the material becomes magnetic depend on factors such as the crystal structure, the temperature of the material, and the strength of the field that it is in. A simple explanation is that as the field strength increases it becomes more favorable for the small moments to align themselves along the path of the magnetic field than to remain opposed to it. For this to occur the material must rearrange its magnetic makeup at the atomic level to lower the energy of the system and restore a balance.
It is important to remember that the magnetic susceptibility describes how a material changes at the atomic level when it is placed in a magnetic field of a given strength; the moment that we are measuring with our magnetometer is the total moment of that sample. The susceptibility is given by \ref{3}
$\chi \ =\ \frac{M}{H} \label{3}$
where χ = susceptibility, M = variation of magnetization, and H = applied field.
Magnetic Permeability
Magnetic permeability is the ability of a material to conduct magnetic flux. In the same way that materials conduct or resist electricity, materials also conduct or resist a magnetic flux or the flow of magnetic lines of force (Figure $6$ ).
Ferromagnetic materials are usually highly permeable to magnetic fields. Just as electrical conductivity is defined as the ratio of the current density to the electric field strength, so the magnetic permeability, μ, of a particular material is defined as the ratio of flux density to magnetic field strength. However, unlike electrical conductivity, magnetic permeability is nonlinear.
$\mu \ =\ B/H \label{4}$
Permeability, where μ is written without a subscript, is known as absolute permeability. Instead a variant is used called relative permeability.
$\mu \ =\ \mu _{0} \times \mu _{r} \label{5}$
Relative permeability is a variation upon 'straight' or absolute permeability, μ, but is more useful as it makes clearer how the presence of a particular material affects the relationship between flux density and field strength. The term 'relative' arises because this permeability is defined in relation to the permeability of a vacuum, μ0.
$\mu _{r} \ =\ \mu / \mu_{0} \label{6}$
For example, if you use a material for which μr = 3 then you know that the flux density will be three times as great as it would be if we just applied the same field strength to a vacuum.
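The same relationship can be sketched numerically; the field strength and relative permeability below are assumed values used only to illustrate Equation \ref{4} together with Equations \ref{5} and \ref{6}:

```python
import math

# Flux density from B = mu_0 * mu_r * H (Equations 4-6).
# The field strength and relative permeability are assumed values.

mu_0 = 4 * math.pi * 1e-7    # permeability of a vacuum (H m^-1), ~1.257e-6
mu_r = 3.0                   # relative permeability of the material, assumed
H    = 1000.0                # applied field strength (A m^-1), assumed

B_vacuum   = mu_0 * H        # flux density with no material present
B_material = mu_0 * mu_r * H # flux density inside the material

print(f"B (vacuum)   = {B_vacuum:.3e} T")
print(f"B (material) = {B_material:.3e} T, i.e. {mu_r:.0f} times larger")
```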
Initial Permeability
Initial permeability describes the relative permeability of a material at low values of B (below 0.1 T). The maximum value for μ in a material is frequently a factor of between 2 and 5 or more above its initial value.
Low flux has the advantage that every ferrite can be measured at that density without risk of saturation. This consistency means that comparison between different ferrites is easy. Also, if you measure the inductance with a normal component bridge then you are doing so with respect to the initial permeability.
Permeability of a Vacuum in the SI
The permeability of a vacuum has a finite value - about 1.257 × 10-6 H m-1 - and is denoted by the symbol μ0. Note that this value is constant with field strength and temperature. Contrast this with the situation in ferromagnetic materials where μ is strongly dependent upon both. Also, for practical purposes, most non-ferromagnetic substances (such as wood, plastic, glass, bone, copper aluminum, air and water) have permeability almost equal to μ0; that is, their relative permeability is 1.0.
The permeability, μ, is the variation of magnetic induction with applied field,
$\mu \ =\ B/H \label{7}$
Background Contributions
A single measurement of a sample's magnetization is relatively easy to obtain, especially with modern technology. Often it is simply a case of loading the sample into the magnetometer in the correct manner and performing a single measurement. This value is, however, the sum total of the sample, any substrate or backing and the sample mount. A sample substrate can produce a substantial contribution to the sample total.
For substrates that are diamagnetic, there is no contribution under zero applied field, so the substrate has no effect on the measurement of magnetization. Under applied fields its contribution is linear and temperature independent. The diamagnetic contribution can be calculated from knowledge of the volume and properties of the substrate and subtracted as a constant linear term to produce the signal from the sample alone. The diamagnetic background can also be seen clearly at high fields where the sample has reached saturation: the sample saturates but the linear background from the substrate continues to increase with field. The gradient of this background can be recorded and subtracted from the readings if the substrate properties are not known accurately.
Hysteresis
When a material exhibits hysteresis, it means that the material responds to a force and has a history of that force contained within it. Consider if you press on something until it depresses. When you release that pressure, if the material remains depressed and doesn’t spring back then it is said to exhibit some type of hysteresis. It remembers a history of what happened to it, and may exhibit that history in some way. Consider a piece of iron that is brought into a magnetic field, it retains some magnetization, even after the external magnetic field is removed. Once magnetized, the iron will stay magnetized indefinitely. To demagnetize the iron, it is necessary to apply a magnetic field in the opposite direction. This is the basis of memory in a hard disk drive.
The response of a material to an applied field and its magnetic hysteresis is an essential tool of magnetometry. Paramagnetic and diamagnetic materials can easily be recognized, soft and hard ferromagnetic materials give different types of hysteresis curves and from these curves values such as saturation magnetization, remnant magnetization and coercivity are readily observed. More detailed curves can give indications of the type of magnetic interactions within the sample.
Diamagnetism and Paramagnetizm
The intensity of magnetization depends upon both the magnetic moments in the sample and the way that they are oriented with respect to each other, known as the magnetic ordering.
Diamagnetic materials, which have no atomic magnetic moments, have no magnetization in zero field. When a field is applied a small, negative moment is induced on the diamagnetic atoms proportional to the applied field strength. As the field is reduced the induced moment is reduced.
In a paramagnet the atoms have a net magnetic moment but are oriented randomly throughout the sample due to thermal agitation, giving zero magnetization. As a field is applied the moments tend towards alignment along the field, giving a net magnetization which increases with applied field as the moments become more ordered. As the field is reduced the moments become disordered again by their thermal agitation. The figure shows the linear response M v H where μH << kT.
Ferromagnetism
The hysteresis curves for a ferromagnetic material are more complex than those for diamagnets or paramagnets. Below diagram shows the main features of such a curve for a simple ferromagnet.
In the virgin material (point 0) there is no magnetization. The process of magnetization, leading from point 0 to saturation at M = Ms, is outlined below. Although the material is ordered ferromagnetically it consists of a number of ordered domains arranged randomly giving no net magnetization. This is shown below in (a) with two domains whose individual saturation moments, Ms, lie antiparallel to each other.
As the magnetic field, H, is applied, (b), those domains which are more energetically favorable increase in size at the expense of those whose moment lies more antiparallel to H. There is now a net magnetization; M. Eventually a field is reached where all of the material is a single domain with a moment aligned parallel, or close to parallel, with H. The magnetization is now M = MsCosΘ where Θ is the angle between Ms along the easy magnetic axis and H. Finally Ms is rotated parallel to H and the ferromagnet is saturated with a magnetization M = Ms.
The process of domain wall motion affects the shape of the virgin curve. There are two qualitatively different modes of behavior known as nucleation and pinning, shown in Figure $10$ as curves 1 and 2, respectively.
In a nucleation-type magnet saturation is reached quickly at a field much lower than the coercive field. This shows that the domain walls are easily moved and are not pinned significantly. Once the domain structure has been removed the formation of reversed domains becomes difficult, giving high coercivity. In a pinning-type magnet fields close to the coercive field are necessary to reach saturation magnetization. Here the domain walls are substantially pinned and this mechanism also gives high coercivity.
Remnance
As the applied field is reduced to 0 after the sample has reached saturation the sample can still possess a remnant magnetization, Mr. The magnitude of this remnant magnetization is a product of the saturation magnetization, the number and orientation of easy axes and the type of anisotropy symmetry. If the axis of anisotropy or magnetic easy axis is perfectly aligned with the field then Mr = Ms, and if perpendicular Mr= 0.
At saturation the angular distribution of domain magnetizations is closely aligned to H. As the field is removed they turn to the nearest easy magnetic axis. In a cubic crystal with a positive anisotropy constant, K1, the easy directions are <100>. At remnance the domain magnetizations will lie along one of the three <100> directions. The maximum deviation from H occurs when H is along the <111> axis, giving a cone of distribution of 55° around the axis. Averaging the saturation magnetization over this angle gives a remnant magnetization of 0.832 Ms.
Coercivity
The coercive field, Hc, is the field at which the remnant magnetization is reduced to zero. This can vary from a few A m-1 for soft magnets to 10^7 A m-1 for hard magnets. It is the point of magnetization reversal in the sample, where the barrier between the two states of magnetization is reduced to zero by the applied field, allowing the system to make a Barkhausen jump to a lower energy. It is a general indicator of the energy gradients in the sample which oppose large changes of magnetization.
The reversal of magnetization can come about as a rotation of the magnetization in a large volume or through the movement of domain walls under the pressure of the applied field. In general materials with few or no domains have a high coercivity whilst those with many domains have a low coercivity. However, domain wall pinning by physical defects such as vacancies, dislocations and grain boundaries can increase the coercivity.
The loop illustrated in Figure $10$ is indicative of a simple bi-stable system. There are two energy minima: one with magnetization in the positive direction, and another in the negative direction. The depth of these minima is influenced by the material and its geometry and is a further parameter in the strength of the coercive field. Another is the angle, ΘH, between the anisotropy axis and the applied field. The figure above shows how the shape of the hysteresis loop and the magnitude of Hc vary with ΘH. This effect shows the importance of how samples with strong anisotropy are mounted in a magnetometer when comparing loops.
Temperature Dependence
A hysteresis curve gives information about a magnetic system by varying the applied field but important information can also be gleaned by varying the temperature. As well as indicating transition temperatures, all of the main groups of magnetic ordering have characteristic temperature/magnetization curves. These are summarized in Figure $11$ and Figure $12$.
At all temperatures a diamagnet displays only any magnetization induced by the applied field and a small, negative susceptibility.
The curve shown for a paramagnet (Figure $11$ ) is for one obeying the Curie law,
$\chi \ =\ \frac{C}{T} \label{8}$
and so intercepts the axis at T = 0. This is a subset of the Curie-Weiss law,
$\chi \ =\ \frac{C}{T- \Theta } \label{9}$
where θ is a specific temperature for a particular substance (equal to 0 for paramagnets).
Above TN and TC both antiferromagnets and ferromagnets behave as paramagnets, with 1/χ linearly proportional to temperature. They can be distinguished by their intercept on the temperature axis, T = Θ. Ferromagnets have a large, positive Θ, indicative of their strong interactions. For paramagnets Θ = 0, and antiferromagnets have a negative Θ.
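A minimal sketch of how the intercept Θ of a linear fit of 1/χ against temperature (the Curie-Weiss law, \ref{9}) distinguishes these ordering types is shown below; the "data" are synthetic points generated from an assumed C and Θ:

```python
# Fit 1/chi = (T - Theta)/C (Curie-Weiss law, Equation 9) to synthetic data and
# classify the magnetic ordering from the sign of the intercept Theta.
# The C and Theta used to generate the points are assumptions for illustration.

def classify(theta, tol=1.0):
    if theta > tol:
        return "ferromagnetic interactions (large positive Theta)"
    if theta < -tol:
        return "antiferromagnetic interactions (negative Theta)"
    return "simple paramagnet (Theta ~ 0)"

C_true, theta_true = 2.0, -35.0           # assumed Curie constant and Weiss temperature
T = [150, 200, 250, 300, 350]             # temperatures (K) in the paramagnetic regime
inv_chi = [(t - theta_true) / C_true for t in T]

# Least-squares line 1/chi = a*T + b, so that C = 1/a and Theta = -b/a
n = len(T)
mean_T = sum(T) / n
mean_y = sum(inv_chi) / n
a = sum((t - mean_T) * (y - mean_y) for t, y in zip(T, inv_chi)) / \
    sum((t - mean_T) ** 2 for t in T)
b = mean_y - a * mean_T

C_fit, theta_fit = 1.0 / a, -b / a
print(f"C = {C_fit:.2f}, Theta = {theta_fit:.1f} K -> {classify(theta_fit)}")
```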
The net magnetic moment per atom can be calculated from the gradient of the straight line graph of 1/χ versus temperature for a paramagnetic ion, rearranging Curie's law to give \ref{10} .
$\mu \ = \sqrt{ \frac{3Ak}{Nx} } \label{10}$
where A is the atomic mass, k is Boltzmann's constant, N is the number of atoms per unit volume and x is the gradient.
Ferromagnets below TC display spontaneous magnetization. Their susceptibility above TC in the paramagnetic region is given by the Curie-Weiss law (\ref{9}). In the ferromagnetic phase, with T less than TC, the magnetization M(T) can be simplified to a power law; for example the magnetization as a function of temperature can be given by \ref{11} .
$M(T) \approx (T_{C} \ -\ T) ^{\beta} \label{11}$
where the term β is typically in the region of 0.33 for magnetic ordering in three dimensions.
The susceptibility of an antiferromagnet increases to a maximum at TN as temperature is reduced, then decreases again below TN. In the presence of crystal anisotropy in the system this change in susceptibility depends on the orientation of the spin axes: χ (parallel)decreases with temperature whilst χ (perpendicular) is constant. These can be expressed as \ref{12} .
$\chi \perp = \frac{C}{2 \Theta} \label{12}$
where C is the Curie constant and Θ is the total change in angle of the two sublattice magnetizations away from the spin axis, and \ref{13}
$\chi \parallel \ =\ \frac{2n_{g} \mu ^{2} _{H} B'(J,a' _{0} ) }{2kT\ +\ n_{g} \mu ^{2} _{H} \gamma \rho B'(J,a' _{0}) } \label{13}$
where ng is the number of magnetic atoms per gram, B' is the derivative of the Brillouin function with respect to its argument a', evaluated at a'0, μH is the magnetic moment per atom and γ is the molecular field coefficient.
Theory of a Superconducting Quantum Interference Device (SQUID)
One of the most sensitive forms of magnetometry is SQUID magnetometry. This technique uses a combination of superconducting materials and Josephson junctions to measure magnetic fields with resolutions up to ~10^-14 kG or greater. In the proceeding pages we will describe how a SQUID actually works.
Electron-pair Waves
In superconductors the resistanceless current is carried by pairs of electrons, known as Cooper pairs. A Cooper pair is a pair of electrons. Each electron has a quantized wavelength. Within a Cooper pair each electron's wave couples with its opposite number over large distances. This phenomenon is a result of the very low temperatures at which many materials will superconduct.
What exactly is superconductance? When a material is at very low temperatures, its crystal lattice behaves differently than when it is at higher temperatures. Usually at higher temperatures a material will have large vibrations, called phonons, in the crystal lattice. These vibrations scatter electrons as they pass through the lattice (Figure $13$ ), and this is the basis for bad conductance.
With a superconductor the material is designed to have very small vibrations, these vibrations are lessened even more by cooling the material to extremely low temperatures. With no vibrations there is no scattering of the electrons and this allows the material to superconduct.
The origin of a Cooper pair is that as an electron passes through the crystal lattice at superconducting temperatures, its negative charge pulls on the positive charge of the nuclei in the lattice through coulombic interactions, producing a ripple. An electron traveling in the opposite direction is attracted by this ripple. This is the origin of the coupling in a Cooper pair (Figure $14$ ).
A passing electron attracts the lattice, causing a slight ripple toward its path. Another electron passing in the opposite direction is attracted to that displacement (Figure $15$ ).
Each pair can be treated as a single particle with a whole spin, not half a spin such as is usually the case with electrons. This is important, as electrons are classed in a group of matter called fermions and are governed by the Pauli exclusion principle, which states that a particle with a spin of one half cannot occupy the same quantum state as another particle with a spin of one half. This means that a Cooper pair is in fact a boson, the opposite of a fermion, and this allows the Cooper pairs to condense into one wave packet. Each Cooper pair has a mass and charge twice that of a single electron, whose velocity is that of the center of mass of the pair. This coupling can only happen in extremely cold conditions; at higher temperatures thermal vibrations become greater than the force that an electron can exert on the lattice, and thus scattering occurs.
Each pair can be represented by a wavefunction of the form
$\psi _{P} \ =\ \psi \ e^{i(P \cdot r)/ \hbar }$
where P is the net momentum of the pair whose center of mass is at r. However, all the Cooper pairs in a superconductor can be described by a single wavefunction, yet again because the paired electrons behave as bosons: in the absence of a current all the pairs have the same phase - they are said to be "phase coherent".
This electron-pair wave retains its phase coherence over long distances, and essentially produces a standing wave over the device circuit. In a SQUID there are two paths which form a circle and are made with the same standing wave (Figure $17$ ). The wave is split in two, sent off along different paths, and then recombined to record an interference pattern by adding the difference between the two.
This allows measurement at any phase differences between the two components, which if there is no interference will be exactly the same, but if there is a difference in their path lengths or in some interaction that the waves encounters such as a magnetic field it will correspond in a phase difference at the end of each path length.
A good example to use is of two water waves emanating from the same point. They will stay in phase if they travel the same distance, but will fall out of phase if one of them has to deviate around an obstruction such as a rock. Measuring the phase difference between the two waves then provides information about the obstruction.
Phase and Coherence
Another implication of this long range coherence is the ability to calculate phase and amplitude at any point on the wave's path from the knowledge of its phase and amplitude at any single point, combined with its wavelength and frequency. The wavefunction of the electron-pair wave in the above eqn. can be rewritten in the form of a one-dimensional wave as
$\psi_{P} \ =\ \psi \ \sin 2 \pi (\frac{ x }{ \lambda } \ - \nu t ) \label{14}$
If we take the wave frequency, ν, as being related to the kinetic energy of the Cooper pair, and the wavelength, λ, as being related to the momentum of the pair by the relation λ = h/p, then it is possible to evaluate the phase difference between two points in a current-carrying superconductor.
If a resistanceless current flows between points X and Y on a superconductor there will be a phase difference between these points that is constant in time.
Effect of a Magnetic Field
The parameters of a standing wave are dependent on a current passing through the circuit; they are also strongly affected by an applied magnetic field. In the presence of a magnetic field the momentum, p, of a particle with charge q becomes mV + qA, where A is the magnetic vector potential. For electron-pairs in an applied field their momentum P is now equal to 2mV + 2eA.
In an applied magnetic field the phase difference between points X and Y is now a combination of that due to the supercurrent and that due to the applied field.
The Fluxoid
One effect of the long range phase coherence is the quantization of magnetic flux in a superconducting ring. This can either be a ring, or a superconductor surrounding a non-superconducting region. Such an arrangement can be seen in Figure $18$ where region N has a flux density B within it due to supercurrents flowing around it in the superconducting region S.
In the closed path XYZ encircling the non-superconducting region there will be a phase difference of the electron-pair wave between any two points, such as X and Y, on the curve due to the field and the circulating current.
If the superelectrons are represented by a single wave then at any point on XYZX it can only have one value of phase and amplitude. Due to the long range coherence the phase is single valued, also called quantized, meaning that around the circumference of the ring Δφ must equal 2πn, where n is any integer. Because the wave can only have a single value, the fluxoid can only exist in quantized units. This quantum is termed the fluxon, φ0, given by \ref{15} .
$\Phi_{0} = \dfrac{h}{2e} = 2.07 \times 10^{-15} Wb \label{15}$
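Equation \ref{15} can be checked from the constants, and it is instructive to estimate how many flux quanta thread a small loop in a modest field; the loop diameter and field below are assumed values:

```python
import math

# Flux quantum from Equation (15), Phi_0 = h / (2e), and the number of quanta
# through a small loop. The loop diameter and field are assumed values.

h = 6.62607015e-34       # Planck constant (J s)
e = 1.602176634e-19      # elementary charge (C)

phi_0 = h / (2 * e)      # ~2.07e-15 Wb
print(f"Phi_0 = {phi_0:.3e} Wb")

d = 100e-6               # loop diameter (m), assumed: 100 micrometers
B = 1.0e-6               # applied flux density (T), assumed: 10 mG
area = math.pi * (d / 2) ** 2
flux = B * area
print(f"Flux through the loop = {flux:.2e} Wb, i.e. {flux / phi_0:.1f} flux quanta")
```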
Josephson Tunneling
If two superconducting regions are kept totally isolated from each other the phases of the electron-pairs in the two regions will be unrelated. If the two regions are brought together then as they come close electron-pairs will be able to tunnel across the gap and the two electron-pair waves will become coupled. As the separation decreases, the strength of the coupling increases. The tunneling of the electron-pairs across the gap carries with it a superconducting current as predicted by B.D. Josephson and is called "Josephson tunneling" with the junction between the two superconductors called a "Josephson junction" (Figure $16$ ).
The Josephson tunneling junction is a special case of a more general type of weak link between two superconductors. Other forms include constrictions and point contacts but the general form is of a region between two superconductors which has a much lower critical current and through which a magnetic field can penetrate.
Superconducting Quantum Interference Device (SQUID)
A superconducting quantum interference device (SQUID) uses the properties of electron-pair wave coherence and Josephson junctions to detect very small magnetic fields. The central element of a SQUID is a ring of superconducting material with one or more weak links called Josephson junctions. An example is shown below, with weak-links at points W and X whose critical current, ic, is much less than the critical current of the main ring. This produces a very low current density, making the momentum of the electron-pairs small. The wavelength of the electron-pairs is thus very long, leading to little difference in phase between any parts of the ring.
If a magnetic field, Ba, is applied perpendicular to the plane of the ring (Figure $21$ ), a phase difference is produced in the electron-pair wave along the paths XYW and WZX. One of the features of a superconducting loop is that the magnetic flux, Φ, passing through it, which is the product of the magnetic field and the area of the loop, is quantized in units of Φ0 = h/(2e), where h is Planck’s constant, 2e is the charge of the Cooper pair of electrons, and Φ0 has a value of 2 × 10–15 tesla m2. If there are no obstacles in the loop, then the superconducting current will compensate for the presence of an arbitrary magnetic field so that the total flux through the loop (due to the external field plus the field generated by the current) is a multiple of Φ0.
Josephson predicted that a superconducting current can be sustained in the loop, even if its path is interrupted by an insulating barrier or a normal metal. The SQUID has two such barriers or ‘Josephson junctions’. Both junctions introduce the same phase difference when the magnetic flux through the loop is 0, Φ0, 2Φ0 and so on, which results in constructive interference, and they introduce opposite phase difference when the flux is Φ0/2, 3Φ0/2 and so on, which leads to destructive interference. This interference causes the critical current density, which is the maximum current that the device can carry without dissipation, to vary. The critical current is so sensitive to the magnetic flux through the superconducting loop that even tiny magnetic moments can be measured. The critical current is usually obtained by measuring the voltage drop across the junction as a function of the total current through the device. Commercial SQUIDs transform the modulation in the critical current to a voltage modulation, which is much easier to measure.
An applied magnetic field produces a phase change around a ring, which in this case is equal to
$\Delta \Phi (B) \ =\ 2 \pi \frac{ \Phi _{a} }{ \Phi _{0} } \label{16}$
where Φa is the flux produced in the ring by the applied magnetic field. The magnitude of the critical measuring current is dependent upon the critical current of the weak-links and the requirement that the total phase change around the ring be an integral multiple of 2π. For the whole ring to be superconducting the following condition must be met:
$\alpha \ +\ \beta \ +\ 2 \pi \frac{ \Phi _{a} }{ \Phi _{0} } \ =\ 2 \pi n \label{17}$
where α and β are the phase changes produced by currents across the weak-links and 2πΦa/Φ0 is the phase change due to the applied magnetic field.
When the measuring current is applied α and β are no longer equal, although their sum must remain constant. The phase changes can be written as \ref{18}
$\alpha \ =\ \pi [ n \ -\ \frac{ \Phi _{a} }{ \Phi _{0} } ] \ -\ \delta , \ \ \ \ \beta \ =\ \pi [ n \ -\ \frac{ \Phi _{a} }{ \Phi _{0} } ] \ +\ \delta \label{18}$
where δ is related to the measuring current I. Using the relation between current and phase from the above Eqn. and rearranging to eliminate i we obtain an expression for I, \ref{19}
$I \ =\ 2i_{c} \ cos ( \pi \frac{ \Phi _{a} }{ \Phi _{0} } ) \ sin\ \delta \label{19}$
As sinδ cannot be greater than unity we can obtain the critical measuring current, Ic from the above \ref{20}
$I_{c} \ =\ 2i_{c} | cos \pi \frac{ \Phi _{a} }{ \Phi _{0} } | \label{20}$
which gives a periodic dependence on the magnitude of the magnetic field, with a maximum when this field is an integer number of fluxons and a minimum at half-integer values, as shown in the figure below.
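To illustrate this periodicity, the following Python sketch evaluates \ref{20} over a few flux quanta for an assumed weak-link critical current ic; the value chosen for ic is purely illustrative.

```python
import numpy as np

# Illustrative sketch of the periodic modulation of the critical measuring current,
# I_c = 2 i_c |cos(pi * Phi_a / Phi_0)|, from the equation above.
i_c = 5e-6                           # assumed weak-link critical current (A), illustrative only
phi_ratio = np.linspace(0, 3, 301)   # applied flux in units of Phi_0

I_c = 2 * i_c * np.abs(np.cos(np.pi * phi_ratio))

# Maxima occur at integer flux quanta, minima at half-integer values
for r in (0.0, 0.5, 1.0, 1.5, 2.0):
    idx = np.argmin(np.abs(phi_ratio - r))
    print(f"Phi_a/Phi_0 = {r:3.1f}  ->  I_c = {I_c[idx]*1e6:5.2f} uA")
```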
Practical Guide to Using a Superconducting Quantum Interference Device
SQUIDs offer the ability to measure at sensitivities unachievable by other magnetic sensing methodologies. However, their sensitivity requires proper attention to cryogenics and environmental noise. SQUIDs should only be used when no other sensor is adequate for the task. There are many exotic uses for SQUIDs; however, here we are concerned only with the laboratory applications of SQUIDs.
In most physical and chemical laboratories a device called an MPMS (Figure $23$ )
is used to measure the magnetic moment of a sample by reading the output of the SQUID detector. In an MPMS the sample moves upward through electronic pickup coils called gradiometers. One upward movement is one whole scan. Multiple scans are used and added together to improve measurement resolution. After the raw voltages are collected, the magnetic moment of the sample is computed.
The MPMS measures the moment of a sample by moving it through a liquid Helium cooled, superconducting sensing coil. Many different measurements can be carried out using an MPMS however we will discuss just a few.
Using a Magnetic Property Measurement System (MPMS)
DC Magnetization
DC magnetization is the magnetic moment per unit volume (M) of a sample. If the sample doesn’t have a permanent magnetic moment, a field is applied to induce one. The sample is then stepped through a superconducting detection array and the SQUID’s output voltage is processed and the sample moment computed. Systems can be configured to measure hysteresis loops, relaxation times, magnetic field, and temperature dependence of the magnetic moment.
A DC field can be used to magnetize samples. Typically, the field is fixed and the sample is moved into the detection coil’s region of sensitivity. The change in detected magnetization is directly proportional to the magnetic moment of the sample. Commonly referred to as SQUID magnetometers, these systems are properly called SQUID susceptometers (Figure $24$ ).
They have a homogeneous superconducting magnet to create a very uniform field over the entire sample measuring region and the superconducting pickup loops. The magnet induces a moment allowing a measurement of magnetic susceptibility. The superconducting detection loop array is rigidly mounted in the center of the magnet. This array is configured as a gradient coil to reject external noise sources. The detection coil geometry determines what mathematical algorithm is used to calculate the net magnetization.
An important feature of SQUIDs is that the induced current is independent of the rate of flux change. This provides uniform response at all frequencies i.e., true dc response and allows the sample to be moved slowly without degrading performance. As the sample passes through a coil, it changes the flux in that coil by an amount proportional to the magnetic moment M of the sample. The peak-to-peak signal from a complete cycle is thus proportional to twice M. The SQUID sensor shielded inside a niobium can is located where the fringe fields generated by the magnet are less than 10 mT. The detection coil circuitry is typically constructed using NbTi (Figure $25$ ). This allows measurements in applied fields of 9 T while maintaining sensitivities of 10−8 emu. Thermal insulation not shown is placed between the detection coils and the sample tube to allow the sample temperature to be varied.
The use of a variable temperature insert can allow measurements to be made over a wide range 1.8–400 K. Typically, the sample temperature is controlled by helium gas flowing slowly past the sample. The temperature of this gas is regulated using a heater located below the sample measuring region and a thermometer located above the sample region. This arrangement ensures that the entire region has reached thermal equilibrium prior to data acquisition. The helium gas is obtained from normal evaporation in the Dewar, and its flow rate is controlled by a precision regulating valve.
Procedures when using an MPMS
Calibration
The magnetic moment calibration for the SQUID is determined by measuring a palladium standard over a range of magnetic fields and then by adjusting to obtain the correct moment for the standard. The palladium standard samples are effectively point sources with an accuracy of approximately 0.1%.
Sample mounting considerations
The type, size and geometry of a sample are usually sufficient to determine the method used to mount it. For most MPMS measurements, however, a plastic straw is used. This is due to the straw having minimal magnetic susceptibility.
However there are a few important considerations for the sample holder design when mounting a sample for measurement in a magnetometer. The sample holder can be a major contributor to the background signal. Its contribution can be minimized by choosing materials with low magnetic susceptibility and by keeping the mass to a minimum such as a plastic straw mentioned above.
The materials used to hold a sample must perform well over the temperature range to be used. In a MPMS, the geometric arrangement of the background and sample is critical when their magnetic susceptibilities will be of similar magnitude. Thus, the sample holder should optimize the sample’s positioning in the magnetometer. A sample should be mounted rigidly in order to avoid excess sample motion during measurement. A sample holder should also allow easy access for mounting the sample, and its background contribution should be easy to measure. This advisory introduces some mounting methods and discusses some of the more important considerations when mounting samples for the MPMS magnetometer. Keep in mind that these are only recommendations, not guaranteed procedures. The researcher is responsible for assuring that the methods and materials used will meet experimental requirements.
Sample Mounts
Platform Mounting
For many types of samples, mounting to a platform is the most convenient method. The platform’s mass and susceptibility should be as small as possible in order to minimize its background contribution and signal distortion.
Plastic Disc
A plastic disc about 2 mm thick with an outside diameter equivalent to the pliable plastic tube’s diameter (a clear drinking straw is suitable) is inserted and twisted into place. The platform should be fairly rigid. Mount samples onto this platform with glue. Place a second disc, with a diameter slightly less than the inside diameter of the tube and with the same mass, on top of the sample to help provide the desired symmetry. Pour powdered samples onto the platform and place a second disc on top so that the powder cannot shift or align with the field. Make sure the sample tube is capped and ventilated.
Crossed Threads
Make one of the lowest mass sample platforms by threading a cross of white cotton thread (colored dyes can be magnetic). Using a needle made of a nonmagnetic metal, or at least one that has been carefully cleaned, thread some white cotton sewing thread through the tube walls and tie a secure knot so that the thread platform is rigid. Glue a sample to this platform or use the platform as a support for a sample in a container. Use an additional thread cross on top to hold the container in place.
Gelatin Capsule
Gelatin capsules can be very useful for containing and mounting samples. Many aspects of using gelatin capsules have been mentioned in the section, Containing the Sample. It is best if the sample is mounted near the capsule’s center, or if it completely fills the capsule. Use extra capsule parts to produce mirror symmetry. The thread cross is an excellent way of holding a capsule in place.
Thread Mounting
Another method of sample mounting is attaching the sample to a thread that runs through the sample tube. The thread can be attached to the sample holder at the ends of the sample tube with tape, for example. This method can be very useful with flat samples, such as those on substrates, particularly when the field is in the plane of the film. Be sure to close the sample tube with caps.
• Mounting with a disc platform.
• Mounting on crossed threads.
• Long thread mounting.
Steps for Inserting the Sample
1. Cut off a small section of a clear plastic drinking straw. The section must be small enough to fit inside the straw.
2. Weigh and measure the sample.
3. Use plastic tweezers to place the sample inside the small straw segment. It is important to use plastic tweezers not metallic ones as these will contaminate the sample.
4. Place the small straw segment inside the larger one. It should be approximately in the middle of the large drinking straw.
5. Attach the straw to the sample rod which is used to insert the sample into the SQUID machine.
6. Insert the sample rod with the attached straw into the vertical insertion hole on top of the SQUID.
Center the Sample
The sample must be centered in the SQUID pickup coils to ensure that all coils sense the magnetic moment of the sample. If the sample is not centered, the coils read only part of the magnetic moment.
During a centering measurement the MPMS scans the entire length of the sample's vertical travel path, and the MPMS reads the maximum number of data points. During centering there are a number of terms which need to be understood.
1. The scan length is the length of a scan of a particular sample, which should usually be set as long as possible.
2. A sample is centered when it is in the middle of a scan length. The data points are individual voltage readings plotting response curves in centering scan data files.
3. Autotracking is the adjustment of a sample position to keep a sample centered in SQUID coils. Autotracking compensates for thermal expansion and contraction in a sample rod.
As soon as a centering measurement is initiated, the sample transport moves upward, carrying the sample through the pickup coils. While the sample moves through the coils, the MPMS measures the SQUID’s response to the magnetic moment of the sample and saves all the data from the centering measurement.
After a centering plot is performed, the plot is examined to determine whether the sample is centered in the SQUID pickup coils. The sample is centered when the peak of the large, middle curve is within 5 cm of the half-way point of the scan length.
The shape of the plot is a function of the geometry of the coils. The coils are wound in a way which strongly rejects interference from nearby magnetic sources and lets the MPMS function without a superconducting shield around the pickup coils.
Geometric Considerations
To minimize background noise and stray field effects, the MPMS magnetometer pick-up coil takes the form of a second-order gradiometer. An important feature of this gradiometer is that moving a long, homogeneous sample through it produces no signal as long as the sample extends well beyond the ends of the coil during measurement.
As a sample holder is moved through the gradiometer pickup coil, changes in thickness, mass, density, or magnetic susceptibility produce a signal. Ideally, only the sample to be measured produces this change. A homogeneous sample that extends well beyond the pick-up coils does not produce a signal, yet a small sample does produce a signal. There must be a crossover between these two limits. The sample length (along the field direction) should not exceed 10 mm. In order to obtain the most accurate measurements, it is important to keep the sample susceptibility constant over its length; otherwise distortions in the SQUID signal (deviations from a dipole signal) can result. It is also important to keep the sample close to the magnetometer centerline to get the most accurate measurements. When the sample holder background contribution is similar in magnitude to the sample signal, the relative positions of the sample and the materials producing the background are important. If there is a spatial offset between the two along the magnet axis, the signal produced by the combined sample and background can be highly distorted and will not be characteristic of the dipole moment being measured.
Even if the signal looks good at one temperature, a problem can occur if either of the contributions are temperature dependent.
Careful sample positioning and a sample holder with a center, or plane, of symmetry at the sample (i.e. materials distributed symmetrically about the sample, or along the principal axis for a symmetry plane) helps eliminate problems associated with spatial offsets.
Containing the Sample
Keep the sample space of the MPMS magnetometer clean and free of contamination with foreign materials. Avoid accidental sample loss into the sample space by properly containing the sample in an appropriate sample holder. In all cases it is important to close the sample holder tube with caps in order to contain a sample that might become unmounted. This helps avoid sample loss and subsequent damage during the otherwise unnecessary recovery procedure. Position caps well out of the sample-measuring region and introduce proper venting.
Sample Preparation Workspace
Work area cleanliness and avoiding sample contamination are very important concerns. There are many possible sources of contamination in a laboratory. Use diamond tools when cutting hard materials. Avoid carbide tools because of potential contamination by the cobalt binder found in many carbide materials. The best tools for preparing samples and sample holders are made of plastic, titanium, brass, and beryllium copper (which also has a small amount of cobalt). Tools labeled non-magnetic can actually be made of steel and often be made "magnetic" from exposure to magnetic fields. However, the main concern from these "non-magnetic" tools is contamination by the iron and other ferrous metals in the tool. It is important to have a clean white-papered workspace and a set of tools dedicated to mounting your own samples. In many cases, the materials and tools used can be washed in dilute acid to remove ferrous metal impurities. Follow any acid washes with careful rinsing with deionized water.
Powdered samples pose a special contamination threat, and special precautions must be taken to contain them. If the sample is highly magnetic, it is often advantageous to embed it in a low susceptibility epoxy matrix like Duco cement. This is usually done by mixing a small amount of diluted glue with the powder in a suitable container such as a gelatin capsule. Potting the sample in this way can keep the sample from shifting or aligning with the magnetic field. In the case of weaker magnetic samples, measure the mass of the glue after drying and making a background measurement. If the powdered sample is not potted, seal it into a container, and watch it carefully as it is cycled in the airlock chamber.
Pressure Equalization
The sample space of the MPMS has a helium atmosphere maintained at low pressure of a few torr. An airlock chamber is provided to avoid contamination of the sample space with air when introducing samples into the sample space. By pushing the purge button, the airlock is cycled between vacuum and helium gas three times, then pumped down to its working pressure. During the cycling, it is possible for samples to be displaced in their holders, sealed capsules to explode, and sample holders to be deformed. Many of these problems can be avoided if the sample holder is properly ventilated. This requires placing holes in the sample holder, out of the measuring region that will allow any closed spaces to be opened to the interlock chamber.
Air-sensitive Samples and Liquid Samples
When working with highly air-sensitive samples or liquid samples it is best to first seal the sample into a glass tube. NMR and EPR tubes make good sample holders since they are usually made of a high-quality, low-susceptibility glass or fused silica. When the sample has a high susceptibility, the tube with the sample can be placed onto a platform like those described earlier. When dealing with a low susceptibility sample, it is useful to rest the bottom of the sample tube on a length of the same type of glass tubing. By producing near mirror symmetry, this method gives a nearly constant background with position and provides an easy method for background measurement (i.e., measure the empty tube first, then measure with a sample). Be sure that the tube ends are well out of the measuring region.
When going to low temperatures, check to make sure that the sample tube will not break due to differential thermal expansion. Samples that will go above room temperature should be sealed with a reduced pressure in the tube and be checked by taking the sample to the maximum experimental temperature prior to loading it into the magnetometer. These checks are especially important when the sample may be corrosive, reactive, or valuable.
Oxygen Contamination
This application note describes potential sources for oxygen contamination in the sample chamber and discusses its possible effects. Molecular oxygen, which undergoes an antiferromagnetic transition at about 43 K, is strongly paramagnetic above this temperature. The MPMS system can easily detect the presence of a small amount of condensed oxygen on the sample, which when in the sample chamber can interfere significantly with sensitive magnetic measurements. Oxygen contamination in the sample chamber is usually the result of leaks in the system due to faulty seals, improper operation of the airlock valve, outgassing from the sample, or cold samples being loaded.
IR Sample Preparation: A Practical Guide
Infrared spectroscopy is based on molecular vibrations caused by the oscillation of molecular dipoles. Bonds have characteristic vibrations depending on the atoms in the bond, the number of bonds and the orientation of those bonds with respect to the rest of the molecule. Thus, different molecules have specific spectra that can be collected for use in distinguishing products or identifying an unknown substance (to an extent.)
Spectra are collected in one of three general ways. Nujol mulls and pressed pellets are typically used for collecting spectra of solids, while thin-film cells are used for solution-phase IR spectroscopy. Spectra of gases can also be obtained but will not be discussed in this guide.
Infrared Optical Materials and Handling
While it is all well and wonderful that substances can be characterized in this fashion one still has to be able to hold the substances inside of the instrument and properly prepare the samples. In an infrared spectrometer (Figure $1$ )
the sample to be analyzed is held in front of an infrared beam. In order to do this, the sample must be contained in something; consequently, the very container the sample is in will absorb some of the infrared beam.
This is made somewhat complicated by the fact that all materials have some sort of vibration associated with them. Thus, if the sample holder has an optical window made of something that absorbs near where your sample does, the sample might not be distinguishable from the optical window of the sample holder. The range that is not blocked by a strong absorbance is known as a window (not to be confused with the optical materials of the cell.)
Windows are an important factor to consider when choosing the method to perform an analysis: as seen in Table $1$, there are a number of different materials, each with its own characteristic absorption spectrum and chemical properties. Keep these factors in mind when performing analyses and precious sample will be saved. For most organic compounds NaCl works well, though it is susceptible to attack from moisture. For metal coordination complexes KBr or CsI typically work well due to their large windows. If money is not a problem then diamond or sapphire can be used for plates.
Material Transparent Ranges (cm-1) Solubility Notes
NaCl 40,000 - 625 H2O Easy to polish, hygroscopic
Silica glass 55,000-3,000 HF Attacked by HF
Quartz 40,000-2,500 HF Attacked by HF
Sapphire 20,000-1,780 - Strong
Diamond 40,000-2,500 and 1,800-200 - Very strong, expensive, hard, useless for pellets
CaF2 70,000-1,110 Acids Attacked by acids, avoid ammonium salts
BaF2 65,000-700 - Avoid ammonium salts
ZnSe 10,000 - 550 Acids Brittle, attacked by acids
AgCl 25,000-400 - Soft, sensitive to light.
KCl 40,000-500 H2O, Et2O, acetone Hygroscopic, soft, easily polished, commonly used in making pellets.
KBr 40,000-400 H2O, EtOH Hygroscopic, soft, easily polished, commonly used in making pellets.
CsBr 10,000-250 H2O, EtOH, acetone Hygroscopic soft
CsI 10,000-200 H2O, EtOH, MeOH, acetone Hygroscopic, soft.
Teflon 5,000-1,200; 1,200-900 - Inert, disposable
Polyethylene 4,000-3,000; 2,800-1,460; 1,380 - 730; 720- 30 - Inert, disposable
Table $1$ Various IR-transparent materials and their solubilities and other notes. M. R. Derrick, D. Stulik, and J. M. Landry, in Scientific Tools in Conservation: Infrared Spectroscopy in Conservation Science. Getty Conservation Institute (1999).
Proper handling of these plates will ensure they have a long, useful life. Here follows a few simple pointers on how to handle plates:
• Avoid contact with solvents that the plates are soluble in.
• Keep the plates in a desiccator; the less water the better, even if the plates are insoluble in water.
• Handle with gloves, clean gloves.
• Avoid wiping the plates to prevent scratching.
That said, these simple guidelines will likely prevent most damage that can occur to a plate from simply handling it; other faults, such as dropping the plate from a sufficient height, can result in more serious damage.
Preparation of Nujol Mulls
A common method of preparing solid samples for IR analysis is mulling. The principle here is that by grinding the particles to below the wavelength of the incident radiation that will be passing through, scattering is limited. To suspend those tiny particles, an oil, often referred to as Nujol, is used. IR-transparent salt plates are used to hold the sample in front of the beam in order to acquire data. To prepare a sample for IR analysis using a salt plate, first decide what segment of the frequency band should be studied, and refer to Table $1$ for the materials best suited for the sample. Figure $2$ shows the materials needed for preparing a mull.
Preparing the mull is performed by taking a small portion of sample and adding approximately 10% of the sample volume worth of the oil and grinding this in an agate mortar and pestle as demonstrated in Figure $3$. The resulting mull should be transparent with no visible particles.
Another method involves dissolving the solid in a solvent and allowing it to dry in the agate pestle. If using this method ensure that all of the solvent has evaporated since the solvent bands will appear in the spectrum. Some gentle heating may assist this process. This method creates very fine particles that are of a relatively consistent size. After addition of the oil further mixing (or grinding) may be necessary.
Plates should be stored in a desiccator to prevent erosion by atmospheric moisture and should appear roughly transparent (some materials, such as silicon, will not). Gently rinse the plates with hexanes to wash any residual material off of them. Removing the plates from the desiccator and cleaning them should follow the preparation of the mull in order to maintain the integrity of the salt plates. Of course, if the plate is not soluble in water then this is still a good idea, just to prevent the threat of mechanical trauma or a stray jet of acetone from a wash bottle.
Once the mull has been prepared, add a drop to one IR plate (Figure $4$ ), place the second plate on top of the drop and give it a quarter turn in order to evenly coat the plate surface as seen in Figure $5$. Place it into the spectrometer and acquire the desired data.
Always handle with gloves and preferably away from any sinks, faucets, or other sources of running or spraying water.
Spectra acquired by this method will have strong C-H absorption bands in the ranges 3,000 – 2,800 cm-1 and 1,500 – 1,300 cm-1, which may obscure sample signals.
Cleaning the plate, as previously mentioned, can easily be performed by rinsing with hexanes or chloroform and leaving the plates to dry in the hood. Place the salt plates back into the desiccator as soon as reasonably possible to prevent damage. It is highly advisable to polish the plates after use; no scratches, fogging, or pits should be visible on the face of the plate. Chips, so long as they don’t cross the center of the plate, are survivable but not desired. The samples of damaged salt plates in Figure $6$ show common problems associated with use or potentially mishandling. Clouding, and to an extent scratches, can be polished out with an iron rouge. Areas where the crystal lattice is disturbed below the surface are impossible to fix and chips cannot be reattached.
Figure $6$ A series of plates indicating various forms of physical damage with a comparison to a good plate (Copyright: Colorado University-Boulder).
Preparation of Pellets
An alternate method is along the same lines as the Nujol mull, except that instead of the suspending medium being mineral oil, the suspending medium is a salt. The solid is ground into a fine powder with an agate mortar and pestle along with an amount of the suspending salt. Preparing pellets with diamond as the suspending agent is somewhat ill-advised considering the great hardness of the substance. Generally speaking, an amount of KBr or CsI is used for this method since they are both soft salts. Two approaches can be used to prepare pellets; one is somewhat more expensive but both usually yield decent results.
The first method is the use of a press. The salt is placed into a cylindrical holder and pressed together with a ram such as the one seen in (Figure $7$ ). Afterwards, the pellet, in the holder, is placed into the instrument and spectra acquired.
An alternate, and cheaper method requires the use of a large hex nut with a 0.5 inch inner diameter, two bolts, and two wrenches such as the kit seen in Figure $8$. Step-by-step instructions for loading and using the press follows:
1. Screw one of the bolts into the nut about half way.
2. Place the salt pellet mixture into the other opening of the nut and level by tapping the assembly on a countertop.
3. Screw in the second bolt and place the assembly on its side with the bolts parallel to the countertop. Place one of the wrenches on the bolt on the right side with the handle aiming towards yourself.
4. Take the second wrench and place it on the other bolt so that it attaches with an angle from the table of about 45 degrees.
5. The second bolt is tightened using body weight and left to rest for several minutes. Afterwards, the bolts are removed, and the sample placed into the instrument.
Some pellet presses also have a vacuum barb such as the one seen in Figure $8$. If your pellet press has one of these, consider using it as it will help remove air from the salt pellet as it is pressed. This ensures a more uniform pellet and removes absorbances in the collected spectrum due to air trapped in the pellet.
Preparation of Solution Cells
Solution cells (Figure $9$ ) are a handy way of acquiring infrared spectra of compounds in solution and are particularly useful for monitoring reactions.
A thin-film cell consists of two salt plates with a very thin space in between them (Figure $10$ ). Two channels allow liquid to be injected and then subsequently removed. The windows on these cells can be made from a variety of IR optical materials. One particularly useful one for water-based solutions is CaF2 as it is not soluble in water.
Cleaning these cells can be performed by removing the solution, flushing with fresh solvent and gently removing the solvent by syringe. Do not blow air or nitrogen through the ports as this can cause mechanical deformation in the salt window if the pressure is high enough.
Deuterated Solvent Effects
One of the other aspects of solution-phase IR is that the solvent utilized in the cell has its own characteristic absorption spectrum. In some cases this can be alleviated by replacing the solvent with its deuterated sibling. The benefit here is that C-H bonds become C-D bonds, which have lower vibrational frequencies. Compiled in Figure $11$ is a set of common solvents.
This effect has numerous benefits and is often applied to determining what vibrations correspond to what bond in a given molecular sample. This is often accomplished by using isotopically labeled “heavy” reagents such as ones that contain 2H, 15N, 18O, or 13C.
Basic Troubleshooting
There are numerous problems that can arise from improperly prepared samples, this section will go through some of the common problems and how to correct them. For this demonstration, spectra of ferrocene will be used. The molecular structure and a photograph of the brightly colored organometallic compound are shown in Figure $12$ and Figure $13$.
Figure $14$ illustrates what a good sample of ferrocene looks like prepared in a KBr pellet. The peaks are well defined and sharp. No peak is flattened at 0% transmittance and Christiansen scattering is not evident in the baseline.
Figure $15$ illustrates a sample with some peaks with intensities that are saturated and lose resolution making peak-picking difficult. In order to correct for this problem, scrape some of the sample off of the salt plate with a rubber spatula and reseat the opposite plate. By applying a thinner layer of sample one can improve the resolution of strongly absorbing vibrations.
Figure $16$ illustrates a sample in which too much mineral oil was added to the mull so that the C-H bonds are far more intense than the actual sample. This can be remedied by removing the sample from the plate, grinding more sample and adding a smaller amount of the mull to the plate. Another possible way of doing this is if the sample is insoluble in hexanes, add a little to the mull and wick away the hexane-oil mixture to leave a dry solid sample. Apply a small portion of oil and replate.
Figure $17$ illustrates the result of particles being too large and scattering light. To remedy this, remove the mull and grind further or else use the solvent deposition technique described earlier.
Characteristic IR Vibrational Modes for Hydrocarbon Compounds
Functional group Mode Wavenumber range (cm-1)
CH3 Asymmetric stretch 2962±10
CH3 Symmetric stretch 2872±10
CH3 Asymmetric bend 1460±10
CH3 Symmetric bend (umbrella mode) 1375±10
CH2 Asymmetric stretch 2926±10
CH2 Symmetric stretch 2855±10
CH2 Scissors 1455±10
CH2 Rock 720±10
CH Stretch ~2900 (weak)
CH Bend ~1350 (weak)
Table $2$ Stretching and bending bands for alkanes.
Table $3$ The stretching bands for alkenes.
Substitution C-H stretch (cm-1) C=C stretch (cm-1) Out of plane bend (cm-1)
Vinyl 3090-3075 1660-1630 900±5, 910±5
Vinylidine 3090-3075 1660-1630 890±5
Cis 3050-3000 1660-1630 690±10
Trans 3050-3000 1680-1665 965±5
Tri-substituted 3050-3000 1680-1665 815±25
Tetra-substituted - 1680-1665 -
Substitution C-H stretch (cm-1) C=C stretch (cm-1) C-H wag (cm-1)
Mono-substituted 3350-3250 2140-2100 700-600
Di-substituted - 2260-2190 -
Table $4$ The stretching bands for alkynes.
Substitution Out of plane C-H bending (cm-1) Ring bend (cm-1)
Mono 770-710 690±10
Ortho 810-750 -
Meta 770-735 690±10
Para 860-790 -
Table $5$ Bands for mono- and di-substituted benzene rings.
Vibration Wavenumber (cm-1)
CH3 symmetric stretch 2925±5
CH3 bend overtone 2865±5
Table $6$ Bands for methyl groups bonded to benzene rings.
Fourier Transform Infrared Spectroscopy of Metal Ligand Complexes
The infrared (IR) range of the electromagnetic spectrum is usually divided into three regions:
• The far-infrared, with a wavenumber range of 400 – 10 cm−1 and lower energy, can be used for rotational spectroscopy.
• The mid-infrared is suitable for a detection of the fundamental vibrations and associated rotational-vibrational structure with the frequency range approximately 4000 – 400 cm−1.
• The near-Infrared with higher energy and wave number range 14000 – 4000 cm−1, can excite overtone or higher harmonic vibrations.
According to classical light–matter interaction theory, if a molecule is to interact with an electromagnetic field and absorb a photon of a certain frequency, the transient dipole of a molecular functional group must oscillate at that frequency. Correspondingly, this transition dipole moment must be a non-zero value. However, some vibrations are IR inactive: for example, the stretching motion of a homonuclear diatomic molecule such as N2 does not affect the molecule's dipole moment.
Mechanistic Description of the Vibrations of Polyatomic Molecules
A molecule can vibrate in many ways, and each way is called a "vibrational mode". If a molecule has N atoms, linear molecules have 3N-5 vibrational modes whereas nonlinear molecules have 3N-6 vibrational modes. Take H2O for example; a single molecule of H2O has an O-H bending mode (Figure $18$ a), an antisymmetric stretching mode (Figure $18$ b), and a symmetric stretching mode (Figure $18$ c).
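A minimal sketch of the 3N-5 / 3N-6 mode count described above; the molecules used in the examples are simply common illustrations.

```python
# Minimal sketch of the 3N-5 (linear) / 3N-6 (nonlinear) rule for vibrational modes.
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=False))  # H2O (bent): 3 modes, as described above
print(vibrational_modes(3, linear=True))   # CO2 (linear): 4 modes
print(vibrational_modes(2, linear=True))   # any diatomic: 1 mode
```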
If a diatomic molecule undergoes harmonic vibration, its energy levels are given by \ref{1} , where n = 0, 1, 2, .... The motion of the atoms can be determined by the force equation, \ref{2} , where k is the force constant. The vibration frequency can be described by \ref{3} , in which m is actually the reduced mass (mred or μ), which is determined from the masses m1 and m2 of the two atoms, \ref{4} .
$E_{n} \ =\ ( n \ +\ \frac{1}{2} ) h \nu \label{1}$
$F \ =\ -kx \label{2}$
$\omega \ =\ (k/m)^{1/2} \label{3}$
$m_{red} \ =\ \mu \ =\ \frac{m_{1} m_{2}}{m_{1}\ +\ m_{2} } \label{4}$
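As a worked example of \ref{3} and \ref{4} , the sketch below estimates the stretching wavenumber of carbon monoxide from its reduced mass and an assumed literature-style force constant of about 1857 N/m; the physical constants and the force constant are assumptions supplied for illustration, not values taken from this text.

```python
import math

# Estimate the harmonic stretching wavenumber of a diatomic from omega = (k/mu)^(1/2),
# using CO as an example. The force constant k is an assumed literature-style value.
amu = 1.66054e-27      # kg per atomic mass unit
c = 2.9979e10          # speed of light, cm/s

m1, m2 = 12.000 * amu, 15.995 * amu     # approximate 12C and 16O masses
mu = m1 * m2 / (m1 + m2)                # reduced mass, Eq. (4)
k = 1857.0                              # assumed CO force constant, N/m

omega = math.sqrt(k / mu)               # angular frequency, rad/s, Eq. (3)
wavenumber = omega / (2 * math.pi * c)  # convert to cm^-1
print(f"Estimated CO stretch: {wavenumber:.0f} cm^-1")  # ~2140 cm^-1, close to free CO at 2143 cm^-1
```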
Principle of Absorption Bands
In an IR spectrum, absorption information is generally presented in the form of both wavenumber and absorption intensity or percent transmittance. The spectrum generally shows wavenumber (cm-1) as the x-axis and absorption intensity or percent transmittance as the y-axis.
Transmittance, "T", is the ratio of radiant power transmitted by the sample (I) to the radiant power incident on the sample (I0). Absorbance (A) is the logarithm to the base 10 of the reciprocal of the transmittance (T). The absorption intensity of a molecular vibration can be determined by the Lambert-Beer Law, \ref{5} . In this equation, the transmittance spectrum ranges from 0 to 100%, and it can provide clear contrast between intensities of strong and weak bands. Absorbance ranges from infinity to zero. The absorption of molecules can be determined by several components. In the absorption equation, ε is called the molar extinction coefficient, which is related to the behavior of the molecule itself, mainly the transition dipole moment, c is the concentration of the sample, and l is the sample path length. The line width can be determined by the interaction with surroundings.
$A\ =\ log(1/T) \ =\ -log(I/I_{0} )\ =\ \varepsilon c l \label{5}$
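A minimal numerical illustration of \ref{5} , converting a measured percent transmittance to absorbance and then to a concentration; the extinction coefficient and path length used here are assumed example values.

```python
import math

# Convert percent transmittance to absorbance, A = -log10(I/I0), then apply A = epsilon*c*l.
percent_T = 25.0                 # example measured transmittance, %
A = -math.log10(percent_T / 100.0)
print(f"A = {A:.3f}")            # 25 %T -> A ~ 0.602

epsilon = 150.0                  # assumed molar extinction coefficient, L mol^-1 cm^-1
path_length = 0.05               # assumed solution-cell path length, cm
c = A / (epsilon * path_length)  # concentration from the Lambert-Beer law
print(f"c = {c:.3f} mol/L")
```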
The Infrared Spectrometer
As shown in Figure $19$, there are four main parts to a Fourier transform infrared (FTIR) spectrometer:
• Light source. Infrared energy is emitted from a glowing black-body source as continuous radiation.
• Interferometer. It contains the interferometer, the beam splitter, the fixed mirror and the moving mirror. The beam splitter takes the incoming infrared beam and divides it into two optical beams. One beam reflects off the fixed mirror. The other beam reflects off of the moving mirror, which moves a very short distance. After the divided beams are reflected from the two mirrors, they meet each other again at the beam splitter. Therefore, an interference pattern is generated by the changes in the relative position of the moving mirror to the fixed mirror. The resulting beam then passes through the sample and is eventually focused on the detector.
• Sample compartment. It is the place where the beam is transmitted through the sample. In the sample compartment, specific frequencies of energy are absorbed.
• Detector. The beam finally passes to the detector for final measurement. The two most popular detectors for a FTIR spectrometer are deuterated triglycine sulfate (pyroelectric detector) and mercury cadmium telluride (photon or quantum detector). The measured signal is sent to the computer where the Fourier transformation takes place.
A Typical Application: the detection of metal ligand complexes
Some General Absorption peaks for common types of functional groups
It is well known that all molecular species have distinct absorption regions in the IR spectrum. Table $7$ shows the absorption frequencies of common types of functional groups. For systematic evaluation, the IR spectrum is commonly divided into several sub-regions.
• In the region of 4000 - 2000 cm–1, the appearance of absorption bands usually comes from stretching vibrations between hydrogen and other atoms. The O-H and N-H stretching frequencies range from 3700 - 3000 cm–1. If a hydrogen bond forms between O-H and another group, it generally causes peak broadening and a shift to lower frequencies. The C-H stretching bands occur in the region of 3300 - 2800 cm–1. The acetylenic C-H exhibits strong absorption at around 3300 cm–1. Alkene and aromatic C-H stretch vibrations absorb at 3200 - 3000 cm–1. Generally, the asymmetric vibrational stretch frequency of alkene C-H is around 3150 cm-1, and the symmetric vibrational stretch frequency is between 3100 cm-1 and 3000 cm-1. The saturated aliphatic C-H stretching bands range from 3000 - 2850 cm–1, with absorption intensities that are proportional to the number of C-H bonds. Aldehydes often show two sharp C-H stretching absorption bands at 2900 - 2700 cm–1. However, in aqueous solution the C-H vibrational stretch is much weaker than in non-polar solution. This means that a strongly polar solvent can greatly reduce the transition dipole moment of the C-H vibration.
• Furthermore, the stretching vibrations frequencies between hydrogen and other heteroatoms are between 2600 - 2000cm-1, for example, S-H at 2600 - 2550 cm–1, P-H at 2440 - 2275 cm–1, Si-H at 2250 - 2100 cm–1.
• The absorption bands in the 2300 - 1850 cm–1 region usually arise only from triple bonds, such as C≡C at 2260 - 2100 cm–1, C≡N at 2260 - 2000 cm–1, diazonium salts –N≡N at approximately 2260 cm–1, and allenes C=C=C at 2000 - 1900 cm–1. The peaks of these groups all have strong absorption intensities. The 1950 - 1450 cm–1 region covers the vibrational stretching of double-bonded functional groups.
• Most carbonyl C=O stretching bands range from 1870 - 1550 cm–1, and the peak intensities are medium to strong. Conjugation, ring size, hydrogen bonding, and steric and electronic effects can lead to significant shifts in absorption frequencies. Furthermore, if the carbonyl is linked to an electron-withdrawing group, as in acid chlorides and acid anhydrides, it gives rise to IR bands at 1850 - 1750 cm–1. Ketones usually display stretching bands at 1715 cm-1.
• Non-conjugated aliphatic C=C and C=N have absorption bands at 1690 - 1620 cm–1. In addition, around 1430 and 1370 cm-1 there are two characteristic peaks corresponding to C-H bending.
• The region from 1300 - 910 cm–1 always includes contributions from skeletal C-O and C-C vibrational stretches, giving additional molecular structural information that correlates with the higher frequency regions. For example, ethyl acetate not only shows its carbonyl stretch at 1750 - 1735 cm–1, but also exhibits characteristic absorption peaks at 1300 - 1000 cm–1 from the skeletal vibrations of the C-O and C-C stretches.
Group Frequency (cm-1) Strength Appearance
C-H stretch 2850-3400 Strong in nonpolar solvent, weak in polar solvent
O-H stretch, N-H stretch 3200-3700 Broad in solvent
C≡N stretch, R-N=C=S stretch 2050-2300 Medium or strong
C≡O stretch (bound with metal) around 2000 Medium or strong
C≡C stretch 2100-2260 Weak
C=O stretch ca 1715 (ketone), ca 1650 (amides) Strong
C=C stretch 1450-1700 Weak to strong
C-H bend 1260-1470 Strong
C-O stretch 1040-1300 Medium or strong
Table $7$ The typical frequencies of functional groups.
General Introduction of Metal Ligand Complex
The metal electrons fill into the molecular orbitals of the ligands (CN, CO, etc.) to form a complex compound. As shown in Figure $20$, a simple molecular orbital diagram for CO can be used to explain the binding mechanism.
CO and the metal can bind in three ways:
• Donation of a pair of electrons from the C-O σ* orbital into an empty metal orbital (Figure $21$ a).
• Donation from a metal d orbital into the C-O π* orbital to form a M-to-CO π-back bond (Figure $21$ b).
• Under some conditions a pair of carbon π electrons can donate into an empty metal d-orbital.
Some Factors that Influence the Band Shifts and Strength
Herein, we mainly consider two properties: the ligand stretching frequency and the absorption intensity. Take the ligand CO for example again. The frequency shift of the carbonyl peaks in the IR mainly depends on the bonding mode of the CO (terminal or bridging) and the electron density on the metal. The intensity and number of the carbonyl bands depend on several factors: the number of CO ligands, the geometry of the metal-ligand complex, and Fermi resonance.
Effect on Electron Density on Metal
As shown in Table $8$, as the charge on the metal center becomes more negative, the CO stretching frequency decreases. For example, [Ag(CO)]+ shows a higher CO frequency than free CO, which indicates a strengthening of the CO bond. σ donation removes electron density from the nonbonding HOMO of CO. From the molecular orbital diagram, it is clear that the HOMO has a small amount of anti-bonding character, so removal of an electron actually increases (slightly) the CO bond strength. Therefore, the effect of charge and electronegativity depends on the amount of metal-to-CO π-back bonding and determines the CO IR stretching frequency.
dx Complex CO stretch frequency (cm-1)
free CO 2143
d10 [Ag(CO)]+ 2204
d10 Ni(CO)4 2060
d10 [Co(CO)4]- 1890
d6 [Mn(CO)6]+ 2090
d6 Cr(CO)6 2000
d6 [V(CO)6]- 1860
Table $8$ Different types of ligands frequencies of different electron density on a metal center.
If the electron density on a metal center increases, the π-back bonding to the CO ligand(s) will also increase, as shown in Table $9$. This means more electron density enters the empty carbonyl π* orbital and weakens the C-O bond. At the same time, the M-CO bond strength increases and becomes more double-bond-like (M=C=O).
Ligand Donation Effect
In some cases, as shown in Table $9$, different ligands bind to the same metal center in otherwise identical complexes. For example, if ligands with different electron-donating abilities bind to the Mo(CO)3 fragment in the same way, as shown in Figure $22$, the CO vibrational frequencies depend on the ligand donation effect. Compared with the PPh3 complex, the CO stretching frequencies of the complex bearing PF3 groups (2090, 2055 cm-1) are higher. This indicates that the absolute amount of electron density on the metal has a certain effect on the ability of the ligands to donate electron density to the metal center; hence it may be explained by the ligand donation effect. Ligands that are trans to a carbonyl can have a large effect on the ability of the CO ligand to effectively π-backbond to the metal. For example, two trans π-backbonding ligands will partially compete for the same d-orbital electron density, weakening each other’s net M-L π-backbonding. If the trans ligand is a π-donating ligand, the free metal-to-CO π-backbonding can increase the M-CO bond strength (more M=C=O character). It is well known that pyridine and amines are not strong π-donors; however, they are even worse π-backbonding ligands. So the CO can π-back donate without any competition. Therefore, the ligand donation effect naturally reduces the CO IR stretching frequencies in these metal carbonyl complexes.
Metal Ligand Complex CO Stretch Frequency (cm-1)
Mo(CO)3(PF3)3 2090, 2055
Mo(CO)3[P(OMe)3]3 1977, 1888
Mo(CO)3(PPh3)3 1934, 1835
Mo(CO)3(NCCH3)3 1915, 1783
Mo(CO)3(pyridine)3 1888, 1746
Table $9$ The effect of different types of ligands on the frequency of the carbonyl ligand
Geometry Effects
In some cases, a metal-ligand complex can form not only terminal but also bridging geometries. As shown in Figure $23$, in the compound Fe2(CO)7(dipy), CO can act as a bridging ligand. Evidence for a bridging mode of coordination can be easily obtained through IR spectroscopy. All the metal atoms bridged by a carbonyl can donate electron density into the π* orbital of the CO and weaken the CO bond, lowering the vibration frequency of the CO. In this example, the terminal CO frequency is around 2080 cm-1, and in the bridging mode it shifts to around 1850 cm-1.
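As a rough illustration of how these ranges might be used, the sketch below flags an observed CO stretch as likely terminal or likely bridging. The cut-off values are approximations inferred only from the ~2080 cm-1 and ~1850 cm-1 figures quoted above; real assignments require the full spectrum and structural context.

```python
# Rough helper for flagging terminal vs. bridging carbonyls from an observed CO stretch.
# The cut-offs below are illustrative approximations based on the values quoted in the text.
def classify_co_stretch(wavenumber_cm1: float) -> str:
    if wavenumber_cm1 >= 1900:
        return "likely terminal CO"
    elif wavenumber_cm1 >= 1700:
        return "likely bridging CO"
    return "outside the usual metal-carbonyl range"

for band in (2080, 1850, 1600):
    print(band, "->", classify_co_stretch(band))
```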
Pump-probe Detection of Molecular Functional Group Vibrational Lifetime
The dynamics of molecular functional groups play an important role during chemical processes: chemical bond formation and breaking, energy transfer and other dynamics happen within the picosecond domain. It is very difficult to study such fast processes directly; for decades scientists could only learn about them from theoretical calculations, lacking experimental methods.
However, the development of ultrashort pulsed lasers has enabled the experimental study of molecular functional group dynamics. With ultrafast laser technologies, a series of measuring methods has been developed, among which the pump-probe technique is widely used to study molecular functional group dynamics. Here we concentrate on how to use a pump-probe experiment to measure functional group vibrational lifetimes. The principle, experimental setup and data analysis will be introduced.
Principles of the Pump-probe Technique
For every functional group within a molecule, such as the C≡N triple bond in phenyl selenocyanate (C6H5SeCN) or the C-D single bond in deuterated chloroform (DCCl3), there is an individual infrared vibrational mode and associated energy levels. For a typical 3-level system (Figure $24$ ), both the 0 to 1 and the 1 to 2 transitions are near the probe pulse frequency (they don't necessarily need to have exactly the same frequency).
In a pump-probe experiment, we use the geometry shown in Figure $25$. Two synchronized laser beams are used, one of which is called the pump beam (Epu) while the other is the probe beam (Epr). There is a delay in time between the two pulses. When the laser pulses hit the sample, the intensity of the ultrafast laser (fs or ps) is strong enough to generate a third-order polarization and produce a third-order optical response signal, which is used to give dynamics information about molecular functional groups. For the total response signal we have \ref{6} , where µ10 and µ21 are the transition dipole moments, E0, E1, and E2 are the energies of the three levels, and t3 is the time delay between the pump and probe beams. The delay t3 is varied and the response signal intensity is measured. The functional group vibrational lifetime is determined from the data.
$S \ =\ 4 \mu _{10} ^{4}\ e^{ -i(E_{1} - E_{0} ) t_{3} /h \ -\ \Gamma t_{3}} \label{6}$
Typical Experimental Set-up
The optical layout of a typical pump-probe setup is schematically displayed in Figure $26$. In the setup, the output of the oscillator (500 mW at 77 MHz repetition rate, 40 nm bandwidth centered at 800 nm) is split into two beams (1:4 power ratio). Of this, 20% of the power is used to seed a femtosecond (fs) amplifier whose output is 40 fs pulses centered at 800 nm with a power of ~3.4 W at 1 kHz repetition rate. The rest (80%) of the seed goes through a bandpass filter centered at 797.5 nm with a width of 0.40 nm to seed a picosecond (ps) amplifier. The power of the stretched seed before entering the ps amplifier cavity is only ~3 mW. The output of the ps amplifier is 1 ps pulses centered at 800 nm with a bandwidth of ~0.6 nm. The power of the ps amplifier output is ~3 W. The fs amplifier is then used to pump an optical parametric amplifier (OPA) which produces ~100 fs IR pulses with a bandwidth of ~200 cm-1, tunable from 900 to 4000 cm-1. The power of the fs IR pulses is 7 ~ 40 mW, depending on the frequency. The ps amplifier is used to pump a ps OPA which produces ~900 fs IR pulses with a bandwidth of ~21 cm-1, tunable from 900 - 4000 cm-1. The power of the ps IR pulses is 10 ~ 40 mW, depending on the frequency.
In a typical pump-probe setup, the ps IR beam is collimated and used as the pump beam. Approximately 1% of the fs IR OPA output is used as the probe beam, whose intensity is further modified by a polarizer placed before the sample. Another polarizer is placed after the sample and before the spectrograph to select different polarizations of the signal. The signal is then sent into a spectrograph to resolve frequency, and detected with a mercury cadmium telluride (MCT) dual array detector. Using a pump pulse (picosecond, narrow band) and a probe pulse (femtosecond, wide band), scanning the delay time and reading the data from the spectrometer gives the lifetime of the functional group. The wide-band probe and spectrometer described here are for collecting multiple groups of pump-probe combinations.
Data Analysis
For a typical pump-probe curve, as shown in Figure $27$, the lifetime t is defined as the time at which the intensity has decayed to half of its value at time zero.
Table $10$ shows the pump-probe data of the C≡N triple bond in a series of cyano compounds: n-propyl cyanide (C3H7CN), ethyl thiocyanate (C2H5SCN), and ethyl selenocyanate (C2H5SeCN), for which the νC≡N for each compound (measured in CCl4 solution) is 2252 cm-1, 2156 cm-1, and ~2155 cm-1, respectively.
Delay (ps) C3H7CN C2H5SCN C2H5SeCN
0 -0.00695 -0.10918 -0.06901
0.1 -0.0074 -0.10797 -0.07093
0.2 -0.00761 -0.1071 -0.07247
0.3 -0.00768 -0.10545 -0.07346
0.4 -0.0076 -0.10487 -0.07429
0.5 -0.00778 -0.10287 -0.07282
0.6 -0.00782 -0.10286 -0.07235
0.7 -0.00803 -0.10222 -0.07089
0.8 -0.00764 -0.10182 -0.07073
0.9 -0.00776 -0.10143 -0.06861
1 -0.00781 -0.10099 -0.06867
1.1 -0.00745 -0.10013 -0.06796
1.2 -0.00702 -0.10066 -0.06773
1.3 -0.00703 -0.0989 -0.0676
1.4 -0.00676 -0.0995 -0.06638
1.5 -0.00681 -0.09757 -0.06691
1.6 -0.00639 -0.09758 -0.06696
1.7 -0.00644 -0.09717 -0.06583
1.8 -0.00619 -0.09741 -0.06598
1.9 -0.00613 -0.09723 -0.06507
2 -0.0066 -0.0962 -0.06477
2.5 -0.00574 -0.09546 -0.0639
3 -0.0052 -0.09453 -0.06382
3.5 -0.0482 -0.09353 -0.06389
4 -0.0042 -0.09294 -0.06287
4.5 -0.00387 -0.09224 -0.06197
5 -0.00351 -0.09009 -0.06189
5.5 -0.00362 -0.09084 -0.06188
6 -0.00352 -0.08938 -0.06021
6.5 -0.00269 -0.08843 -0.06028
7 -0.00225 -0.08788 -0.05961
7.5 -0.00231 -0.08694 -0.06065
8 -0.00206 -0.08598 -0.05963
8.5 -0.00233 -0.08552 -0.05993
9 -0.00177 -0.08503 -0.05902
9.5 -0.00186 -0.08508 -0.05878
10 -0.00167 -0.0842 -0.0591
11 -0.00143 -0.08295 -0.05734
Table $10$ Pump-probe intensity data for C≡N stretching frequency in n-propyl cyanide, ethyl thiocyanate, and ethyl selenocyanate as a function of delay (ps).
A plot of intensity versus time for the data from Table $10$ is shown in Figure $28$. From these curves the C≡N stretch lifetimes can be determined for C3H7CN, C2H5SCN, and C2H5SeCN as ~5.5 ps, ~84 ps, and ~282 ps, respectively.
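As a sketch of how such lifetimes can be extracted, the example below fits a single-exponential decay to a pump-probe trace using SciPy; the delay and signal values are synthetic, generated only to illustrate the fitting step, and are not the data of Table $10$ .

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a single-exponential decay to pump-probe signal magnitudes to extract a lifetime.
# The delay/signal values below are synthetic, generated only to illustrate the fit.
def decay(t, amplitude, lifetime, offset):
    return amplitude * np.exp(-t / lifetime) + offset

delays = np.linspace(0, 10, 21)                      # delay times, ps
true = decay(delays, 0.10, 5.5, 0.01)                # synthetic "signal" with a 5.5 ps lifetime
rng = np.random.default_rng(0)
signal = true + rng.normal(0, 0.001, delays.size)    # add a little noise

popt, _ = curve_fit(decay, delays, signal, p0=(0.1, 3.0, 0.0))
print(f"Fitted lifetime: {popt[1]:.2f} ps")          # should recover ~5.5 ps
```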
As shown above, the pump-probe method can be used to detect C≡N vibrational lifetimes in different chemicals. One measurement takes only several seconds to collect all the data and determine the lifetime, showing that the pump-probe method is a powerful way to measure functional group vibrational lifetimes.
Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy
Attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) is a physical method of compositional analysis that builds upon traditional transmission FTIR spectroscopy to minimize sample preparation and optimize reproducibility. Condensed phase samples of relatively low refractive index are placed in close contact with a crystal of high refractive index and the infrared (IR) absorption spectrum of the sample can be collected. Based on total internal reflection, the absorption spectra of ATR resemble those of transmission FTIR. To learn more about transmission IR spectroscopy (FTIR) please refer to the section further up this page titled Fourier Transform Infrared Spectroscopy of Metal Ligand Complexes.
First publicly proposed in 1959 by Jacques Fahrenfort from the Royal Dutch Shell laboratories in Amsterdam, ATR IR spectroscopy was described as a technique to effectively measure weakly absorbing condensed phase materials. In Fahrenfort's first article describing the technique, published in 1961, he used a hemicylindrical ATR crystal (see Experimental Conditions) to produce single-reflection ATR (Figure $29$ ). ATR IR spectroscopy was slow to become accepted as a method of characterization due to concerns about its quantitative effectiveness and reproducibility. The main concern being the sample and ATR crystal contact necessary to achieve decent spectral contrast. In the late 1980’s FTIR spectrometers began improving due to an increased dynamic range, signal to noise ratio, and faster computers. As a result ATR-FTIR also started gaining traction as an efficient spectroscopic technique. These days ATR accessories are often manufactured to work in conjunction with most FTIR spectrometers, as can be seen in Figure $30$.
Total Internal Reflection
For additional information on light waves and their properties please refer to the module on Vertical Scanning Interferometry (VSI) in chapter 10.1.
When considering light propagating across an interface between two materials with different indices of refraction, the angles of incidence and refraction are related by Snell’s law. When light travels from a medium of higher refractive index (n2) into one of lower refractive index (n1), there exists a critical angle of incidence, \ref{7}, at which the refracted ray travels along the interface; for angles of incidence above this value none of the incident light is transmitted.
$\varphi _{c} \ =\ sin^{-1} \left( \frac{n_{1}}{n_{2}} \right) \label{7}$
The reflectance of the interface is total, and whenever light is incident from a higher refractive index medium onto a lower refractive index medium, the reflection is deemed internal (as opposed to external in the opposite scenario). Total internal reflection involves no losses, i.e., no light is transmitted (Figure $31$ ).
Supercritical internal reflection refers to angles of incidence above the critical angle of incidence allowing total internal reflectance. It is in this angular regime where only incident and reflected waves will be present. The transmitted wave is confined to the interface where its amplitude is at a maximum and will damp exponentially into the lower refractive index medium as a function of distance. This wave is referred to as the evanescent wave and it extends only a very short distance beyond the interface.
To apply total internal reflection to the experimental setup in ATR, consider n2 to be the internal reflectance element or ATR crystal (the blue trapezoid in Figure $32$ )
where n2 is the material with the higher index of refraction. This should be a material that is fully transparent to the incident infrared radiation to give a real value for the refractive index. The ATR crystal must also have a high index of refraction to allow total internal reflection with many samples that have an index of refraction n1, where n1<n2.
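As a quick numerical check (with assumed representative values: n2 ≈ 2.4 for a ZnSe crystal and n1 ≈ 1.5 for a typical organic sample), the critical angle defined by \ref{7} falls well below the 45° angle of incidence used in most commercial accessories:

```python
# Illustrative check of the total-internal-reflection condition for a typical ATR pairing:
# a ZnSe crystal (n2 ~ 2.4) against an organic sample (n1 ~ 1.5). These values are
# assumptions for the example, not measurements from this text.
import math

n_sample, n_crystal = 1.5, 2.4
critical_angle = math.degrees(math.asin(n_sample / n_crystal))
print(f"critical angle: {critical_angle:.1f} degrees")   # ~38.7 deg, so 45 deg incidence gives TIR
```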
We can consider the sample to be absorbing in the infrared. Electromagnetic energy will pass through the crystal/sample interface and propagate into the sample via the evanescent wave. This energy loss must be compensated with the incident IR light. Thus, total reflectance is no longer occurring and the reflection inside the crystal is attenuated. If a sample does not absorb, the reflectance at the interface shows no attenuation. Therefore if the IR light at a particular frequency does not reach the detector, the sample must have absorbed it.
The penetration depth of the evanescent wave within the sample is on the order of 1µm. The expression of the penetration depth is given in \ref{8} and is dependent upon the wavelength and angle of incident light as well as the refractive indices of the ATR crystal and sample. The effective path length is the product of the depth of penetration of the evanescent wave and the number of points that the IR light reflects at the interface between the crystal and sample. This path length is equivalent to the path length of a sample in a traditional transmission FTIR setup.
$d_{p} = \frac{ \lambda }{2 \pi n_{2} (sin^{2} \theta \ -\ ( \frac{n_{1}}{n_{2}} )^{2} )^{1/2}} \label{8}$
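As a rough illustration of \ref{8} (with assumed values: a ZnSe crystal, n2 = 2.4; an organic sample, n1 = 1.5; a 45° angle of incidence; and IR light at 1000 cm-1, i.e., λ = 10 µm), the penetration depth and the effective path length of a hypothetical 10-reflection crystal can be estimated as follows:

```python
# Minimal sketch of equation (8); all numerical inputs are assumed illustrative values.
import math

def penetration_depth(wavelength_um, n_sample, n_crystal, angle_deg):
    theta = math.radians(angle_deg)
    return wavelength_um / (2 * math.pi * n_crystal *
                            math.sqrt(math.sin(theta)**2 - (n_sample / n_crystal)**2))

dp = penetration_depth(10.0, 1.5, 2.4, 45)    # ~2 um at 1000 cm-1
effective_path = dp * 10                      # e.g., a crystal giving 10 reflections
print(f"dp = {dp:.2f} um, effective path length = {effective_path:.1f} um")
```

The micrometer-scale result is consistent with the penetration depths quoted in the text.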
Experimental Conditions
Refractive Indices of ATR Crystal and Sample
Typically an ATR attachment can be used with a traditional FTIR where the beam of incident IR light enters a horizontally positioned crystal with a high refractive index, in the range of 1.5 to 4, as can be seen in Table $11$. Most samples analyzed will consist of organic compounds, inorganic compounds, and polymers, which have refractive indices below 2, values that can readily be found in a database.
Material Refractive Index (RI) Spectral Range (cm-1)
Zinc Selenide (ZnSe) 2.4 20,000 - 650
Germanium (Ge) 4 5,500 - 870
Sapphire (Al2O3) 1.74 50,000 - 2,000
Diamond (C) 2.4 45,000 - 2,500, 1650 - 200
Table $11$ A summary of popular ATR crystals. Data obtained from F. M. Mirabella, Internal reflection spectroscopy: Theory and applications, 15, Marcel Dekker, Inc., New York (1993).
Single and Multiple Reflection Crystals
Multiple reflection ATR was initially more popular than single reflection ATR because of the weak absorbances associated with single reflection ATR. More reflections increased the evanescent wave interaction with the sample, which was believed to increase the signal to noise ratio of the spectrum. When IR spectrometers developed better spectral contrast, single reflection ATR became more popular. The number of reflections and spectral contrast increases with the length of the crystal and decreases with the angle of incidence as well as thickness. Within multiple reflection crystals some of the light is transmitted and some is reflected as the light exits the crystal, resulting in some of the light going back through the crystal for a round trip. Therefore, light exiting the ATR crystal contains components that experienced different numbers of reflections at the crystal-sample interface.
Angle of Incidence
It was more common in earlier instruments to allow selection of the incident angle, sometimes offering selection between 30°, 45°, and 60°. In all cases for total internal reflection to hold, the angle of incidence must exceed the critical angle and ideally complement the angle of the crystal edge so that the light enters at a normal angle of incidence. These days 45° is the standard angle on most ATR-FTIR setups.
ATR Crystal Shape
For the most part ATR crystals will have a trapezoidal shape as shown in Figure $31$. This shape facilitates sample preparation and handling on the crystal surface by enabling the optical setup to be placed below the crystal. However, different crystal shapes (Figure $33$ ) may be used for particular purposes, whether it is to achieve multiple reflections or reduce the spot size. For example, a hemispherical crystal may be used in a microsampling experiment in which the beam diameter can be reduced at no expense to the light intensity. This allows appropriate measurement of a small sample without compromising the quality of the resulting spectral features.
Crystal-sample contact
Because the path length of the evanescent wave is confined to the interface between the ATR crystal and sample, the sample should make firm contact with the ATR crystal (Figure $34$ ). The sample sits atop the crystal and intimate contact can be ensured by applying pressure above the sample. However, one must be mindful of the ATR crystal hardness. Too much pressure may distort the crystal and affect the reproducibility of the resulting spectrum.
The wavelength effect expressed in \ref{8} shows an increase in penetration depth at increased wavelength. In terms of wavenumbers the relationship becomes inverse. At 4000 cm-1 penetration of the sample is 10x less than penetration at 400 cm-1, meaning the intensity of the peaks may appear higher at lower wavenumbers in the absorbance spectrum compared to the spectral features in a transmission FTIR spectrum (if an automated correction to the ATR setup is not already in place).
Selecting an ATR Crystal
ATR functions effectively on the condition that the refractive index of the crystal is of a higher refractive index than the sample. Several crystals are available for use and it is important to select an appropriate option for any given experiment (Table $11$ ).
When selecting a material, it is important to consider reactivity, temperature, toxicity, solubility, and hardness.
The first ATR crystals in use were KRS-5, a mixture of thallium bromide and iodide, and silver halides. These materials are not listed in the table because they are not in use any longer. While cost-effective, they are not practical due to their light sensitivity, softness, and relatively low refractive indices. In addition KRS-5 is terribly toxic and dissolves on contact with many solvents, including water.
At present diamond is a favorable option for its hardness, inertness and wide spectral range, but may not be a financially viable option for some experiments. ZnSe and germanium are the most common crystal materials. ZnSe is reasonably priced, has significant mechanical strength and a long endurance. However, the surface will become etched with exposure to chemicals on either extreme of the pH scale. With a strong acid ZnSe will react to form toxic hydrogen selenide gas. ZnSe is also prone to oxidation and care must be taken to avoid the formation of an IR absorbing layer of SeO2. Germanium has a higher refractive index, which reduces the depth of penetration to 1 µm and may be preferable to ZnSe in applications involving intense sample absorptions or for use with samples that produce strong background absorptions. Sapphire is physically robust with a wide spectral range, but has a relatively low refractive index in terms of ATR crystals, meaning it may not be able to test as many samples as another crystal might.
Sample Versatility
Solids
The versatility of ATR is reflected in the various forms and phases that a sample can assume. Solid samples need not be compressed into a pellet, dispersed into a mull or dissolved in solution. A ground solid sample is simply pressed to the surface of the ATR crystal. For hard samples that may present a challenge to grind into a fine solid, the total area in contact with the crystal may be compromised unless small ATR crystals with exceptional durability are used (e.g., 2 mm diamond). Loss of contact with the crystal would result in decreased signal intensity because the evanescent wave may not penetrate the sample effectively. The inherently short path length of ATR due to the short penetration depth (0.5-5 µm) enables surface-modified solid samples to be readily characterized with ATR.
Powdered samples are often tedious to prepare for analysis with transmission spectroscopy because they typically require being made into a KBr pellet and ensuring the powdered sample is ground sufficiently to reduce scattering. However, powdered samples require no sample preparation when taking the ATR spectra. This is advantageous in terms of time and effort, but also means the sample can easily be recovered after analysis.
Liquids
The advantage of using ATR to analyze liquid samples becomes apparent when short effective path lengths are required. The spectral reproducibility of liquid samples is certain as long as the entire length of the crystal is in contact with the liquid sample, ensuring the evanescent wave is interacting with the sample at the points of reflection, and the thickness of the liquid sample exceeds the penetration depth. A small path length may be necessary for aqueous solutions in order to reduce the absorbance of water.
Sample Preparation
ATR-FTIR has been used in fields spanning forensic analysis to pharmaceutical applications and even art preservation. Due to its ease of use and accessibility ATR can be used to determine the purity of a compound. With only a minimal amount of sample, a researcher is able to collect a quick analysis of the sample and determine whether it has been adequately purified or requires further processing. As can be seen in Figure $35$, the sample size is minute and requires no preparation. The sample is placed in close contact with the ATR crystal by turning a knob that will apply pressure to the sample (Figure $36$ ).
ATR has an added advantage in that it inherently encloses the optical path of the IR beam. In a transmission FTIR, atmospheric compounds are constantly exposed to the IR beam and can present significant interference with the sample measurement. Of course the transmission FTIR can be purged in a dry environment, but sample measurement may become cumbersome. In an ATR measurement, however, light from the spectrometer is constantly in contact with the sample and exposure to the environment is reduced to a minimum.
Application to Inorganic Chemistry
One exciting application of ATR is in the study of classical works of art. In the study of fragments of a piece of artwork, where samples are scarce and one-of-a-kind, ATR is a suitable method of characterization because it requires only a small sample size. Determining the compounds present in art enables proper preservation and historical insight into the pieces.
In a study examining several paint samples from various origins, a micro-ATR was employed for analysis. This study used a silicon crystal with a refractive index of 2.4 and a reduced beam size. Going beyond a simple surface analysis, this study explored the localization of various organic and inorganic compounds in the samples by performing a stratigraphic analysis. The researchers did so by embedding the samples in both KBr and polyester resins; the two embedding techniques were compared to observe cross-sections of the samples. The mapping of the samples took approximately 1-3 hours, which may seem quite laborious to some, but considering the precious nature of the sample, the wait time was acceptable to the researchers.
The optical microscope picture ( Figure $37$ ) shows a sample of a blue painted area from the robe of a 14th century Italian polychrome statue of a Madonna. The spectra shown in Figure $38$ were acquired from the different layers pictured in the box marked in Figure $37$. All spectra were collected from the cross-sectioned sample and the false-color map on each spectrum indicates the location of each of these compounds within the embedded sample. The spectra correspond to the inorganic compounds listed in Table $12$, which also highlights characteristic vibrational bands.
Compound Selected Spectral Bands Assignment
Cu3(CO3)2(OH)2 (Azurite) 1493 CO32- asymmetric stretch
Silicate based blue-pigments 1035 Si-O stretching
2PbCO3 $\cdot$ Pb(OH)2 (White lead) 1399 CO32- asymmetric stretch
A natural ferruginous aluminum silicate red pigment (Bole) 3697 OH stretching
CaSO4 $\cdot$ 2H2O (Gypsum) 1109 SO42- asymmetric stretch
Table $12$ The inorganic compounds identified in the paint sample shown in Figure $37$, with characteristic vibrational bands. Data from R. Mazzeo, E. Joseph, S. Prati, and A. Millemaggi. Anal. Chim. Acta, 2007, 599, 107.
The deep blue layer 3 corresponds to azurite and the light blue paint layer 2 to a mixture of silicate based blue pigments and white lead. Although beyond the ATR crystal’s spatial resolution limit of 20 µm, the absorption of bole was detected by the characteristic triple absorption bands of 3697, 3651, and 3619 cm-1 as seen in spectrum d of Figure $38$. The white layer 0 was identified as gypsum.
To identify the binding material, the KBr embedded sample proved to be more effective than the polyester resin. This was due in part to the overwhelming IR absorbance of gypsum in the same spectral range (1700-1600 cm-1) as a characteristic stretch of the binding as well as some contaminant absorption due to the polyester embedding resin.
To spatially locate specific pigments and binding media, ATR mapping was performed on the area highlighted with a box in Figure $37$. The false color images alongside each spectrum in Figure $38$ indicate the relative presence of the compound corresponding to each spectrum in the boxed area. ATR mapping was achieved by taking 108 spectra across the 220x160 µm area and selecting for each identified compound by its characteristic vibrational band.
Raman and Surface-Enhanced Raman Spectroscopy
What is Raman Spectroscopy
Raman spectroscopy is a powerful tool for determining chemical species. As with other spectroscopic techniques, Raman spectroscopy detects certain interactions of light with matter. In particular, this technique exploits the existence of Stokes and Anti-Stokes scattering to examine molecular structure. When radiation in the near infrared (NIR) or visible range interacts with a molecule, several types of scattering can occur. Three of these can be seen in the energy diagram in Figure \(1\).
In all three types of scattering, an incident photon of energy hν raises the molecule from a vibrational state to one of the infinite number of virtual states located between the ground and first electronic states. The type of scattering observed is dependent on how the molecule relaxes after excitation.
Rayleigh Scattering
1. The molecule is excited to any virtual state.
2. The molecule relaxes back to its original state.
3. The photon is scattered elastically, leaving with its original energy.
Stokes Scattering
1. The molecule is excited to any virtual state.
2. The molecule relaxes back to a higher vibrational state than it had originally.
3. The photon leaves with energy hν-ΔE and has been scattered inelastically.
Anti-Stokes Scattering
1. The molecule begins in a vibrationally excited state.
2. The molecule is excited to any virtual state.
3. The molecule relaxes back to a lower vibrational state than it had originally.
4. The photon leaves with energy hν+ΔE, and has been scattered superelastically.
Rayleigh scattering is by far the most common transition, due to the fact that no change has to occur in the vibrational state of the molecule. The anti-Stokes transition is the least common, as it requires the molecule to be in a vibrationally excited state before the photon is incident upon it. Due to the lack of intensity of the anti-Stokes signal and filtering requirements that eliminate photons with incident energy and higher, generally only Stokes scattering is used in Raman measurements. The relative intensities of Rayleigh, Stokes and anti-Stokes scattering can be seen in Figure \(2\).
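The weakness of the anti-Stokes signal can be rationalized with the standard Boltzmann population argument (a general result, not something specific to this text): the anti-Stokes-to-Stokes intensity ratio scales with the thermal population of the vibrationally excited state. A rough estimate for a typical 1600 cm-1 mode at room temperature:

```python
# Rough illustration: I(anti-Stokes)/I(Stokes) ~ ((v0+vm)/(v0-vm))**4 * exp(-h*c*vm/(kB*T)).
# The 532 nm laser and the 1600 cm-1 mode are assumed example values.
import math

h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # J*s, cm/s, J/K (c in cm/s keeps vm in cm-1)
T = 298.0                                    # K
v0 = 1.0e7 / 532.0                           # 532 nm laser expressed in wavenumbers
vm = 1600.0                                  # typical vibrational mode, cm-1

ratio = ((v0 + vm) / (v0 - vm))**4 * math.exp(-h * c * vm / (kB * T))
print(f"I(anti-Stokes)/I(Stokes) ~ {ratio:.1e}")   # roughly 1e-3 at room temperature
```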
Raman spectroscopy observes the change in energy between the incident and scattered photons associated with the Stokes and anti-Stokes transitions. This is typically measured as the change in the wavenumber (cm-1), from the incident light source. Because Raman measures the change in wavenumber, measurements can be taken using a source at any wavelength; however, near infrared and visible radiation are commonly used. Photons with ultraviolet wavelengths could work as well, but tend to cause photodecomposition of the sample.
Comparison between Raman and Infrared Spectroscopy
Raman spectroscopy sounds very much like infrared (IR) spectroscopy; however, IR examines the wavenumber at which a functional group has a vibrational mode, while Raman observes the shift in vibration from an incident source. The Raman frequency shift is identical to the IR peak frequency for a given molecule or functional group. As mentioned above, this shift is independent of the excitation wavelength, giving versatility to the design and applicability of Raman instruments.
The cause of the vibration is also mechanistically different between IR and Raman. This is because the two operate on different sets of selection rules. IR absorption requires a dipole moment or change in charge distribution to be associated with the vibrational mode. Only then can photons of the same energy as the vibrational state of the molecule interact. A schematic of this can be seen in Figure \(3\).
Raman signals, on the other hand, arise from scattering and occur because of a molecule’s polarizability, illustrated in Figure \(4\). Many molecules that are inactive or weak in the IR will have intense Raman signals. As a result, the two techniques are often complementary.
What does Raman Spectroscopy Measure?
Raman activity depends on the polarizability of a bond. This is a measure of the deformability of a bond in an electric field. This factor essentially depends on how easy it is for the electrons in the bond to be displaced, inducing a temporary dipole. When there is a large concentration of loosely held electrons in a bond, the polarizability is also large, and the group or molecule will have an intense Raman signal. Because of this, Raman is typically more sensitive to the molecular framework of a molecule rather than a specific functional group as in IR. This should not be confused with the polarity of a molecule, which is a measure of the separation of electric charge within a molecule. Polar molecules often have very weak Raman signals due to the fact that electronegative atoms hold electrons so closely.
Raman spectroscopy can provide information about both inorganic and organic chemical species. Atoms with many loosely bound electrons, such as metals in coordination compounds, tend to be Raman active. Raman can provide information on the metal ligand bond, leading to knowledge of the composition, structure, and stability of these complexes. This can be particularly useful in metal compounds that have low vibrational absorption frequencies in the IR. Raman is also very useful for determining functional groups and fingerprints of organic molecules. Often, Raman vibrations are highly characteristic to a specific molecule, due to vibrations of a molecule as a whole, not in localized groups. The groups that do appear in Raman spectra have vibrations that are largely localized within the group, and often have multiple bonds involved.
What is Surface-Enhanced Raman Spectroscopy
Raman measurements provide useful characterization of many materials. However, the Raman signal is inherently weak (less than 0.001% of the source intensity), restricting the usefulness of this analytical tool. Placing the molecule of interest near a metal surface can dramatically increase the Raman signal. This is the basis of surface-enhanced Raman spectroscopy (SERS). There are several factors leading to the increase in Raman signal intensity near a metal surface:
1. The distance to the metal surface.
• Signal enhancement drops off with distance from the surface.
• The molecule of interest must be close to the surface for signal enhancement to occur.
2. Details about the metal surface: morphology and roughness.
• This determines how close and how many molecules can be near a particular surface area.
3. The properties of the metal.
• Greatest enhancement occurs when the excitation wavelength is near the plasma frequency of the metal.
4. The relative orientation of the molecule to the normal of the surface.
• The polarizability of the bonds within the molecule can be affected by the electrons in the surface of the metal.
Surface-Enhanced Raman Spectroscopy for the Study of Surface Chemistry
The ever-rising interest in nanotechnology involves the synthesis and application of materials with a very high surface area to volume ratio. This places increasing importance on understanding the chemistry occurring at a surface, particularly the surface of a nanoparticle. Slight modifications of the nanoparticle or its surrounding environment can greatly affect many properties including the solubility, biological toxicity, and reactivity of the nanomaterial. Noble metal nanomaterials are of particular interest due to their unique optical properties and biological inertness.
One tool employed to understand the surface chemistry of noble metal nanomaterials, particularly those composed of gold or silver, is surface-enhanced Raman spectroscopy (SERS). Replacing a metal surface with a metal nanoparticle increases the available surface area for the adsorption of molecules. Compared to a flat metal surface, a similar sample size using nanoparticles will have a dramatically stronger signal, since signal intensity is directly related to the concentration of the molecule of interest. Due to the shape and size of the structure, the electrons in the nanoparticle oscillate collectively when exposed to incident electromagnetic radiation. This is called the localized surface plasmon resonance (LSPR) of the nanoparticle. The LSPR of the nanoparticles boosts the Raman signal intensity dramatically for molecules of interest near the surface of the nanoparticle. In order to maximize this effect, a nanoparticle should be selected with its resonant wavelength falling in the middle of the incident and scattered wavelengths.
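As an illustrative calculation (assuming a 633 nm laser and a Raman band at 1600 cm-1, neither of which is prescribed by the text), the scattered Stokes wavelength and the midpoint at which the nanoparticle resonance would ideally sit can be estimated as follows:

```python
# Hypothetical example of choosing a target LSPR wavelength for SERS. The scattered
# (Stokes) wavelength follows from subtracting the Raman shift in wavenumbers.
laser_nm = 633.0
shift_cm = 1600.0

laser_cm = 1.0e7 / laser_nm                 # laser line in wavenumbers
stokes_nm = 1.0e7 / (laser_cm - shift_cm)   # scattered wavelength (~704 nm)
lspr_target_nm = (laser_nm + stokes_nm) / 2

print(f"Stokes line: {stokes_nm:.0f} nm, ideal LSPR: ~{lspr_target_nm:.0f} nm")
```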
The overall intensity enhancement of SERS can be as large as a factor of 106, with the surface plasmon resonance responsible for roughly four orders of magnitude of this signal increase. The other two orders of magnitude have been attributed to chemical enhancement mechanisms arising from charge interactions between the metal particle and the adsorbate or from resonances in the adsorbate alone, as discussed above.
Why is SERS Useful for Studying Surface Chemistry?
Traditionally, SERS uses nanoparticles made of conductive materials, such as gold, to learn more about a particular molecule. However, of interest in many growing fields that incorporate nanotechnology is the structure and functionalization of a nanoparticle stabilized by some surfactant or capping agent. In this case, SERS can provide valuable information regarding the stability and surface structure of the nanoparticle. Another use of nanoparticles in SERS is to provide information about a ligand’s structure and the nature of ligand binding. In many applications it is important to know whether a molecule is bound to the surface of the nanoparticle or simply electrostatically interacting with it.
Sample Preparation and Instrumental Details
The standard Raman instrument is composed of three major components. First, the instrument must have an illumination system. This is usually composed of one or more lasers. The major restriction for the illumination system is that the incident frequency of light must not be absorbed by the sample or solvent. The next major component is the sample illumination system. This can vary widely based on the specifics of the instrument, including whether the system is a standard macro-Raman or has micro-Raman capabilities. The sample illumination system will determine the phase of material under investigation. The final necessary piece of a Raman system is the spectrometer. This is usually placed 90° away from the incident illumination and may include a series of filters or a monochromator. An example of a macro-Raman and micro-Raman setup can be seen in Figure \(5\) and Figure \(6\). A macro-Raman spectrometer has a spatial resolution anywhere from 100 μm to one millimeter while a micro-Raman spectrometer uses a microscope to magnify its spatial resolution.
Characterization of Single-Walled Carbon Nanotubes by Raman Spectroscopy
Carbon nanotubes (CNTs) have proven to be a unique system for the application of Raman spectroscopy, and at the same time Raman spectroscopy has provided an exceedingly powerful tool useful in the study of the vibrational properties and electronic structures of CNTs. Raman spectroscopy has been successfully applied for studying CNTs at the single nanotube level.
The large van der Waals interactions between the CNTs lead to an agglomeration of the tubes in the form of bundles or ropes. This problem can be solved by wrapping the tubes in a surfactant or functionalizing the SWNTs by attaching appropriate chemical moieties to the sidewalls of the tube. Functionalization causes a local change in the hybridization from sp2 to sp3 of the side-wall carbon atoms, and Raman spectroscopy can be used to determine this change. In addition, information on length, diameter, electronic type (metallic or semiconducting), and whether nanotubes are separated or in bundles can be obtained by the use of Raman spectroscopy. Recent progress in understanding the Raman spectra of single walled carbon nanotubes (SWNT) has stimulated Raman studies of more complicated multi-wall carbon nanotubes (MWNT), but unfortunately quantitative determination of the latter is not possible at the present state of the art.
Characterizing SWNT's
Raman spectroscopy is a single resonance process, i.e., the signals are greatly enhanced if either the incoming laser energy (Elaser) or the scattered radiation matches an allowed electronic transition in the sample. For this process to occur, the phonon modes are assumed to occur at the center of the Brillouin zone (q = 0). Owing to their one dimensional nature, the π-electronic density of states of a perfect, infinite SWNT forms sharp singularities which are known as van Hove singularities (vHs), which are energetically symmetrical with respect to the Fermi level (Ef) of the individual SWNTs. The allowed optical transitions occur between matching vHs of the valence and conduction band of the SWNTs, i.e., from the first valence band vHs to the first conduction band vHs (E11) or from the second vHs of the valence band to the second vHs of the conduction band (E22). Since the quantum state of an electron (k) remains the same during the transition, it is referred to as the k-selection rule.
The electronic properties, and therefore the individual transition energies in SWNTs are given by their structure, i.e., by their chiral vector that determines the way SWNT is rolled up to form a cylinder. Figure \(7\) shows a SWNT having vector R making an angle θ, known as the chiral angle, with the so-called zigzag or r1 direction.
Raman spectroscopy of an ensemble of many SWNTs having different chiral vectors is sensitive to the subset of tubes where the condition of allowed transition is fulfilled. A ‘Kataura-Plot’ gives the allowed electronic transition energies of individual SWNTs as a function of diameter d, hence information on which tubes are resonant for a given excitation wavelength can be inferred. Since electronic transition energies vary roughly as 1/d, the question whether a given laser energy probes predominantly semiconducting or metallic tubes depends on the mean diameter and diameter distribution in the SWNT ensemble. However, the transition energies that apply to an isolated SWNT do not necessarily hold for an ensemble of interacting SWNTs owing to the mutual van der Waals interactions.
Figure \(8\) shows a typical Raman spectrum from 100 to 3000 cm-1 taken of SWNTs produced by catalytic decomposition of carbon monoxide (HiPco-process). The two dominant Raman features are the radial breathing mode (RBM) at low frequencies and tangential (G-band) multifeature at higher frequencies. Other weak features, such as the disorder induced D-band and the G’ band (an overtone mode) are also shown.
Modes in the Raman Spectra of SWNTs
Radial Breathing Modes (RBMs)
Out of all Raman modes observed in the spectra of SWNTs, the radial breathing modes are unique to SWNTs. They appear between 150 cm-1 < ωRBM < 300 cm-1 from the elastically scattered laser line. It corresponds to the vibration of the C atoms in the radial direction, as if the tube is breathing (Figure \(9\)). An important point about these modes is the fact that the energy (or wavenumber) of these vibrational modes depends on the diameter (d) of the SWNTs, and not on the way the SWNT is rolled up to form a cylinder, i.e., they do not depend on the θ of the tube.
These features are very useful for characterizing nanotube diameters through the relation ωRBM = A/d + B, where A and B are constants and their variations are often attributed to environmental effects, i.e., whether the SWNTs are present as individual tubes wrapped in a surfactant, isolated on a substrate surface, or in the form of bundles. For typical SWNT bundles in the diameter range d = 1.5 ± 0.2 nm, A = 234 cm-1 nm and B = 10 cm-1 (where B is an upshift coming from tube-tube interactions). For isolated SWNTs on an oxidized Si substrate, A = 248 cm-1 nm and B = 0. As can be seen from Figure \(10\), the relation ωRBM = A/d + B holds true for the usual diameter range, i.e., when d lies between 1 and 2 nm. However, for d less than 1 nm, nanotube lattice distortions lead to a chirality dependence of ωRBM, and for large diameter tubes (d more than 2 nm) the intensity of the RBM feature is weak and hardly observable.
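A minimal sketch of this diameter assignment, using the bundle parameters quoted above (A = 234 cm-1 nm, B = 10 cm-1); the isolated-tube constants should be substituted where appropriate:

```python
# Convert an observed RBM frequency to a tube diameter via w_RBM = A/d + B.
def rbm_to_diameter(w_rbm_cm, A=234.0, B=10.0):
    """Return the SWNT diameter in nm from the RBM frequency (cm-1)."""
    return A / (w_rbm_cm - B)

for w in (170.0, 200.0, 265.0):                     # example RBM frequencies, cm-1
    print(f"w_RBM = {w:.0f} cm-1  ->  d = {rbm_to_diameter(w):.2f} nm")
```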
Hence, a single Raman measurement gives an idea of the tubes that are in resonance with the laser line, but does not give a complete characterization of the diameter distribution of the sample. However, by taking Raman spectra using many laser lines, a good characterization of the diameter distributions in the sample can be obtained. Also, natural line widths observed for isolated SWNTs are ΓRBM = 3 cm-1, but as the tube diameter is increased, broadening is observed, which is denoted by ΓRBM. It has been observed that for d > 2 nm, ΓRBM > 20 cm-1. For SWNT bundles, the line width does not reflect ΓRBM, but rather reflects an ensemble of tubes in resonance with the energy of the laser.
Variation of RBM Intensities Upon Functionalization
Functionalization of SWNTs leads to variations of relative intensities of RBM compared to the starting material (unfunctionalized SWNTs). Owing to the diameter dependence of the RBM frequency and the resonant nature of the Raman scattering process, chemical reactions that are sensitive to the diameter as well as the electronic structure, i.e., metallic or semiconducting character, of the SWNTs can be sorted out. The difference in Raman spectra is usually inferred by thermal defunctionalization, where the functional groups are removed by annealing. The basis of using annealing for defunctionalizing SWNTs is based on the fact that annealing restores the Raman intensities, in contrast to other treatments where a complete disintegration of the SWNTs occurs. Figure \(11\) shows the Raman spectra of the pristine, functionalized and annealed SWNTs. It can be observed that the absolute intensities of the radial breathing modes are drastically reduced after functionalization. This decrease can be attributed to vHs, which themselves are a consequence of translational symmetry of the SWNTs. Since the translational symmetry of the SWNTs is broken as a result of irregular distribution of the sp3-sites due to the functionalization, these vHs are broadened and strongly reduced in intensity. As a result, the resonant Raman cross section of all modes is strongly reduced as well.
For an ensemble of functionalized SWNTs, a decrease in high wavenumber RBM intensities has been observed, which leads to an inference that destruction of small diameter SWNTs takes place. Also, after prolonged treatment with nitric acid and subsequent annealing in oxygen or vacuum, diameter enlargement of SWNTs is observed from the disappearance of RBMs from small diameter SWNTs and the appearance of new RBMs characteristic of SWNTs with larger diameters. In addition, laser irradiation seems to damage preferentially small diameter SWNTs. In all cases, the decrease of RBM intensities is either attributed to the complete disintegration of SWNTs or reduction in resonance enhancement of selectively functionalized SWNTs. However, change in RBM intensities can also have other reasons. One reason is doping induced bleaching of electronic transitions in SWNTs. When a dopant is added, a previously occupied electronic state can be filled or emptied, as a result of which Ef in the SWNTs is shifted. If this shift is large enough and the conduction band vHs corresponding to the respective Eii transition that is excited by the laser light gets occupied (n-type doping) or the valence band vHs is emptied (p-type doping), the resonant enhancement is lost as the electronic transitions are quenched.
Sample morphology has also seen to affect the RBMs. The same unfunctionalized sample in different aggregation states gives rise to different spectra. This is because the transition energy, Eii depends on the aggregation state of the SWNTs.
Tangential Modes (G-Band)
The tangential modes are the most intensive high-energy modes of SWNTs and form the so-called G-band, which is typically observed at around 1600 cm-1. For this mode, the atomic displacements occur along the circumferential direction (Figure \(12\)). Spectra in this frequency range can be used for SWNT characterization, independent of the RBM observation. This multi-peak feature can, for example, also be used for diameter characterization, although the information provided is less accurate than the RBM feature, and it gives information about the metallic character of the SWNTs in resonance with the laser line.
The tangential modes are useful in distinguishing semiconducting from metallic SWNTs. The difference is evident in the G- feature (Figure \(13\) and \(14\)) which broadens and becomes asymmetric for metallic SWNTs in comparison with the Lorentzian lineshape for semiconducting tubes, and this broadening is related to the presence of free electrons in nanotubes with metallic character. This broadened G-feature is usually fit using a Breit-Wigner-Fano (BWF) line that accounts for the coupling of a discrete phonon with a continuum related to conduction electrons. This BWF line is observed in many graphite-like materials with metallic character, such as n-doped graphite intercalation compounds (GIC), n-doped fullerenes, as well as metallic SWNTs. The intensity of this G- mode depends on the size and number of metallic SWNTs in a bundle (Figure \(15\)).
Change of G-band Line Shape on Functionalization
Chemical treatments are found to affect the line shape of the tangential line modes. Selective functionalization of SWNTs or a change in the ratio of metallic to semiconducting SWNTs due to selective etching is responsible for such a change. According to Figure \(16\), it can be seen that an increase or decrease of the BWF line shape is observed depending on the laser wavelength. At λexc = 633 nm, the preferentially functionalized small diameter SWNTs are semiconducting, therefore the G-band shows a decrease in the BWF asymmetry. However, the situation is reversed at 514 nm, where small metallic tubes are probed. BWF resonance intensity of small bundles increases with bundle thickness, so care should be taken that the effect ascribed directly to functionalization of the SWNTs is not caused by the exfoliation of the previously bundled SWNTs.
Disorder-Induced D-band
This is one of the most discussed modes for the characterization of functionalized SWNTs and is observed at 1300-1400 cm-1. The D-band is observed not only for functionalized SWNTs but also for unfunctionalized SWNTs. From a large number of Raman spectra from isolated SWNTs, about 50% exhibit observable D-band signals with weak intensity (Figure \(14\)).
A large D-peak compared with the G-peak usually means a bad resonance condition, which indicates the presence of amorphous carbon.
The appearance of D-peak can be interpreted due to the breakdown of the k-selection rule. It also depends on the laser energy and diameter of the SWNTs. This behavior is interpreted as a double resonance effect, where not only one of the direct, k-conserving electronic transitions, but also the emission of phonon is a resonant process. In contrast to single resonant Raman scattering, where only phonons around the center of the Brillouin zone (q = 0) are excited, the phonons that provoke the D-band exhibit a non-negligible q vector. This explains the double resonance theory for D-band in Raman spectroscopy. In few cases, the overtone of the D-band known as the G’-band (or D*-band) is observed at 2600-2800 cm-1, and it does not require defect scattering as the two phonons with q and –q are excited. This mode is therefore observed independent of the defect concentration.
The presence of D-band cannot be correlated to the presence of various defects (such as hetero-atoms, vacancies, heptagon-pentagon pairs, kinks, or even the presence of impurities, etc). Following are the two main characteristics of the D-band found in carbon nanotubes:
1. Small linewidths: ΓD values for SWNTs range from 40 cm-1 down to 7 cm-1.
2. Lower frequencies: D-band frequency is usually lower than the frequency of sp2-based carbons, and this downshift of frequency shows 1/d dependence.
D-band Intensity as a Measure of Functionalization vs. Defect Density
Since the D-peak appears due to the presence of defects, an increase in the intensity of the band is taken as a fingerprint for successful functionalization. However, whether the D-band intensity is a measure of the degree of functionalization or not is still uncertain, so it is not necessarily correct to correlate D-peak intensity or D-peak area to the degree of functionalization. From Figure \(17\), it can be observed that for lower degrees of functionalization, the intensity of the D-band scales linearly with defect density. As the degree of functionalization is further increased, both D and G-band areas decrease, which is explained by the loss of resonance enhancement due to functionalization. Also, normalization of the D-peak intensity to the G-band in order to correct for changes in resonance intensities also leads to a decrease for higher densities of functional groups.
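In practice the D:G comparison is usually made by integrating the two band regions of a baseline-corrected spectrum. The sketch below uses synthetic placeholder data and assumed integration windows, purely to illustrate the bookkeeping rather than any particular experiment:

```python
# Minimal sketch of a D:G area ratio from a (synthetic) baseline-corrected spectrum.
import numpy as np

wavenumber = np.linspace(1100, 1800, 700)                      # placeholder axis, cm-1
intensity = (0.3 * np.exp(-((wavenumber - 1320) / 25)**2)      # synthetic D-band
             + np.exp(-((wavenumber - 1590) / 20)**2))          # synthetic G-band

def band_area(lo, hi):
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return np.trapz(intensity[mask], wavenumber[mask])

d_to_g = band_area(1250, 1400) / band_area(1500, 1650)          # assumed windows
print(f"D:G area ratio = {d_to_g:.2f}")
```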
Limitations of Raman Spectroscopy
Though Raman spectroscopy provides an exceedingly important tool for the characterization of SWNTs, it suffers from a few serious limitations. One of the main limitations of Raman spectroscopy is that it does not provide any information about the extent of functionalization in the SWNTs. The presence of the D-band indicates disorder, i.e., sidewall functionalization; however, it cannot differentiate between the number of substituents and their distribution. Following are the two main limitations of Raman Spectroscopy:
Quantification of Substituents
This can be illustrated by the following examples. Purified HiPco tubes may be fluorinated at 150 °C to give F-SWNTs with a C:F ratio of approximately 2.4:1. The Raman spectrum (using 780 nm excitation) of F-SWNTs shows, in addition to the tangential mode at ~1587 cm-1, an intense broad D (disorder) mode at ~1295 cm-1 consistent with the side wall functionalization. Irrespective of the arrangements of the fluorine substituents, thermolysis of F-SWNTs results in the loss of fluorine and the re-formation of unfunctionalized SWNTs along with their cleavage into shorter length tubes. As can be seen from Figure \(18\), the intensity of the D-band decreases as the thermolysis temperature increases. This is consistent with the loss of F-substituents. The G-band shows a concomitant sharpening and increase in intensity.
As discussed above, the presence of a significant D mode has been the primary method for determining the presence of sidewall functionalization. It has been commonly accepted that the relative intensity of the D mode versus the tangential G mode is a quantitative measure of the level of substitution. However, as discussed below, the G:D ratio is also dependent on the distribution of substituents. Using Raman spectroscopy in combination with XPS analysis of F-SWNTs that have been subjected to thermolysis at different temperatures, a measure of the accuracy of Raman as a quantitative tool for determining substituent concentration can be obtained. As can be seen from Figure \(19\), there is essentially no change in the G:D band ratio despite a doubling of the amount of functional groups. Thus, at low levels of functionalization the use of Raman spectroscopy to quantify the presence of fluorine substituents is clearly suspect.
On the basis of the above data it can be concluded that Raman spectroscopy does not provide an accurate quantification of small differences at low levels of functionalization, whereas when a comparison between samples with high levels of functionalization or large differences in degree of functionalization is required, Raman spectroscopy provides a good quantification.
Number vs Distribution
Fluorinated nanotubes may be readily functionalized by reaction with the appropriate amine in the presence of base according to the scheme shown in Figure \(20\).
When the Raman spectra of the functionalized SWNTs are taken (Figure \(21\)), it is found that the relative intensity of the disorder D-band at ~1290 cm-1 versus the tangential G-band (1500 - 1600 cm-1) is much higher for thiophene-SWNT than thiol-SWNT. If the relative intensity of the D mode is the measure of the level of substitution, it can be concluded that there are more thiophene groups present per carbon than thiol groups. However, from the TGA weight loss data the SWNT-C:substituent ratios are calculated to be 19:1 and 17.5:1. Thus, contrary to the Raman data, the TGA suggests that the number of substituents per C (in the SWNT) is actually similar for both substituents.
This result would suggest that Raman spectroscopy is potentially unsuccessful in correctly providing the information about the number of substituents on the SWNTs. Subsequent imaging of the functionalized SWNTs by STM showed that the distribution of the functional groups was the difference between the thiol and thiophene functionalized SWNTs (Figure \(22\)). Thus, the relative ratio of the D- and G-bands is a measure of both the concentration and the distribution of functional groups on SWNTs.
Multi-walled carbon nanotubes (MWNTs)
Most of the characteristic differences that distinguish the Raman spectra of SWNTs from the spectra of graphite are not so evident for MWNTs. This is because the outer diameter of MWNTs is very large and the diameters of the constituent tubes within them range from small to very large. For example, the RBM Raman feature associated with a small diameter inner tube (less than 2 nm) can sometimes be observed when a good resonance condition is established, but since the RBM signal from large diameter tubes is usually too weak to be observable and the ensemble average of inner tube diameters broadens the signal, a good signal is not observed. However, when hydrogen gas in the arc discharge method is used, a thin innermost nanotube within a MWNT of diameter 1 nm can be obtained which gives strong RBM peaks in the Raman spectra.
Whereas the G+ - G- splitting is large for small diameter SWNTs, the corresponding splitting of the G-band in MWNTs is both small in intensity and smeared out due to the effect of the diameter distribution. Therefore the G-band feature predominantly exhibits a weakly asymmetric characteristic lineshape, with a peak appearing close to the graphite frequency of 1582 cm-1. However, for isolated MWNTs prepared in the presence of hydrogen gas using the arc discharge method, it is possible to observe multiple G-band splitting effects even more clearly than for the SWNTs, and this is because environmental effects become relatively small for the innermost nanotube in a MWNT relative to the interactions occurring between SWNTs and different environments. The Raman spectroscopy of MWNTs has not been well investigated up to now. The new directions in this field are yet to be explored.
Ultraviolet-visible (UV-vis) spectroscopy is used to obtain the absorbance spectra of a compound in solution or as a solid. What is actually being observed spectroscopically is the absorbance of light energy or electromagnetic radiation, which excites electrons from the ground state to the first singlet excited state of the compound or material. The UV-vis region of energy for the electromagnetic spectrum covers 1.5 - 6.2 eV which relates to a wavelength range of 800 - 200 nm. The Beer-Lambert Law, Equation \ref{1} , is the principle behind absorbance spectroscopy. For a single wavelength, A is absorbance (unitless, usually seen as arb. units or arbitrary units), ε is the molar absorptivity of the compound or molecule in solution (M-1cm-1), b is the path length of the cuvette or sample holder (usually 1 cm), and c is the concentration of the solution (M).
$A\ =\ \varepsilon b c \label{1}$
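Applying the Beer-Lambert law is a one-line rearrangement; the sketch below solves for concentration using an assumed molar absorptivity (the value shown is illustrative, not that of any particular compound in this section):

```python
# Solve the Beer-Lambert law, A = epsilon * b * c, for concentration.
def concentration_from_absorbance(A, epsilon, b=1.0):
    """c = A / (epsilon * b); epsilon in M-1 cm-1, b in cm, returns molarity."""
    return A / (epsilon * b)

c = concentration_from_absorbance(A=0.45, epsilon=90000.0)   # assumed strongly absorbing dye
print(f"c = {c:.2e} M")                                       # 5.0e-06 M
```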
All of these instruments have a light source (usually a deuterium or tungsten lamp), a sample holder and a detector, but some have a filter for selecting one wavelength at a time. The single beam instrument (Figure $1$) has a filter or a monochromator between the source and the sample to analyze one wavelength at a time. The double beam instrument (Figure $2$) has a single source and a monochromator and then there is a splitter and a series of mirrors to get the beam to a reference sample and the sample to be analyzed; this allows for more accurate readings. In contrast, the simultaneous instrument (Figure $3$) does not have a monochromator between the sample and the source; instead, it has a diode array detector that allows the instrument to simultaneously detect the absorbance at all wavelengths. The simultaneous instrument is usually much faster and more efficient, but all of these types of spectrometers work well.
What Information can be Obtained from UV-vis Spectra?
UV-vis spectroscopic data can give qualitative and quantitative information about a given compound or molecule. Irrespective of whether quantitative or qualitative information is required, it is important to use a reference cell to zero the instrument for the solvent the compound is in. For quantitative information on the compound, calibrating the instrument using known concentrations of the compound in question in a solution with the same solvent as the unknown sample would be required. If the information needed is just proof that a compound is in the sample being analyzed, a calibration curve will not be necessary; however, if a degradation study or reaction is being performed and the concentration of the compound in solution is required, a calibration curve is needed.
To make a calibration curve, at least three concentrations of the compound will be needed, but five concentrations would be most ideal for a more accurate curve. The concentrations should start at just above the estimated concentration of the unknown sample and should go down to about an order of magnitude lower than the highest concentration. The calibration solutions should be spaced relatively equally apart, and they should be made as accurately as possible using digital pipettes and volumetric flasks instead of graduated cylinders and beakers. An example of absorbance spectra of calibration solutions of Rose Bengal (4,5,6,7-tetrachloro-2',4',5',7'-tetraiodofluorescein, Figure $4$ ) can be seen in Figure $5$. To make a calibration curve, the value for the absorbance of each of the spectral curves at the highest absorbing wavelength is plotted in a graph similar to that in Figure $6$ of absorbance versus concentration. The correlation coefficient of an acceptable calibration is 0.9 or better. If the correlation coefficient is lower than that, try making the solutions again as the problem may be human error. However, if after making the solutions a few times the calibration is still poor, something may be wrong with the instrument; for example, the lamps may be going bad.
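A hedged sketch of this workflow (with made-up calibration values, not real Rose Bengal data): fit absorbance against concentration, check the correlation coefficient, and back-calculate an unknown concentration from the fit:

```python
# Minimal calibration-curve sketch; all concentrations and absorbances are illustrative.
import numpy as np
from scipy.stats import linregress

conc = np.array([2e-6, 4e-6, 6e-6, 8e-6, 1e-5])        # M, five calibration standards
absorbance = np.array([0.18, 0.37, 0.54, 0.73, 0.91])  # at the wavelength of maximum absorbance

fit = linregress(conc, absorbance)
print(f"slope = {fit.slope:.3e} M-1, r = {fit.rvalue:.4f}")

if abs(fit.rvalue) >= 0.9:                             # acceptable calibration (see text)
    unknown_abs = 0.62
    unknown_conc = (unknown_abs - fit.intercept) / fit.slope
    print(f"unknown concentration ~ {unknown_conc:.2e} M")
```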
Limitations of UV-vis Spectroscopy
Sample
UV-vis spectroscopy works well on liquids and solutions, but if the sample is more of a suspension of solid particles in liquid, the sample will scatter the light more than absorb the light and the data will be very skewed. Most UV-vis instruments can analyze solid samples or suspensions with a diffraction apparatus (Figure $7$), but this is not common. UV-vis instruments generally analyze liquids and solutions most efficiently.
Calibration and Reference
A blank reference will be needed at the very beginning of the analysis of the solvent to be used (water, hexanes, etc), and if concentration analysis needs to be performed, calibration solutions need to be made accurately. If the solutions are not made accurately enough, the actual concentration of the sample in question will not be accurately determined.
Choice of Solvent or Container
Every solvent has a UV-vis absorbance cutoff wavelength. The solvent cutoff is the wavelength below which the solvent itself absorbs all of the light. So when choosing a solvent be aware of its absorbance cutoff and where the compound under investigation is thought to absorb. If they are close, chose a different solvent. Table $1$ provides an example of solvent cutoffs.
Table $1$: UV absorbance cutoffs of various common solvents
Solvent UV Absorbance Cutoff (nm)
Acetone 329
Benzene 278
Dimethylformamide 267
Ethanol 205
Toluene 285
Water 180
The material the cuvette (the sample holder) is made from will also have a UV-vis absorbance cutoff. Glass will absorb all of the light higher in energy starting at about 300 nm, so if the sample absorbs in the UV, a quartz cuvette will be more practical as the absorbance cutoff is around 160 nm for quartz (Table $2$).
Table $2$: Three different types of cuvettes commonly used, with different usable wavelengths.
Material Wavelength Range (nm)
Glass 380-780
Plastic 380-780
Fused Quartz < 380
Concentration of Solution
To obtain reliable data, the peak of absorbance of a given compound needs to be at least three times higher in intensity than the background noise of the instrument. Obviously using higher concentrations of the compound in solution can combat this. Also, if the sample is very small and diluting it would not give an acceptable signal, there are cuvettes that hold smaller sample sizes than the 2.5 mL of a standard cuvette. Some cuvettes are made to hold only 100 μL, which would allow for a small sample to be analyzed without having to dilute it to a larger volume, lowering the signal to noise ratio.
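One common way to check the three-times-the-noise criterion is to estimate the noise from a featureless baseline region of the measured spectrum; the sketch below uses synthetic placeholder data and an assumed baseline window, purely for illustration:

```python
# Illustrative signal-to-noise check for the "peak >= 3x noise" rule of thumb.
import numpy as np

wavelength = np.linspace(400, 700, 301)                       # nm, placeholder axis
spectrum = 0.002 * np.random.randn(301)                       # simulated instrument noise
spectrum += 0.03 * np.exp(-((wavelength - 550) / 15)**2)      # a weak absorbance band

noise = np.std(spectrum[(wavelength > 650) & (wavelength < 700)])   # assumed baseline region
peak = spectrum.max()
print(f"peak/noise = {peak / noise:.1f}  (should be >= 3 for reliable data)")
```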
Photoluminescence spectroscopy is a contactless, nondestructive method of probing the electronic structure of materials. Light is directed onto a sample, where it is absorbed and imparts excess energy into the material in a process called photo-excitation. One way this excess energy can be dissipated by the sample is through the emission of light, or luminescence. In the case of photo-excitation, this luminescence is called photoluminescence.
Photo-excitation causes electrons within a material to move into permissible excited states. When these electrons return to their equilibrium states, the excess energy is released and may include the emission of light (a radiative process) or may not (a nonradiative process). The energy of the emitted light (photoluminescence) relates to the difference in energy levels between the two electron states involved in the transition between the excited state and the equilibrium state. The quantity of the emitted light is related to the relative contribution of the radiative process.
In most photoluminescent systems chromophore aggregation generally quenches light emission via aggregation-caused quenching (ACQ). This means that it is necessary to use and study fluorophores in dilute solutions or as isolated molecules. This in turn results in poor sensitivity of devices employing fluorescence, e.g., biosensors and bioassays. However, there have recently been examples reported in which luminogen aggregation played a constructive, instead of destructive role in the light-emitting process. This aggregation-induced emission (AIE) is of great potential significance in particular with regard to solid state devices. Photoluminescence spectroscopy provides a good method for the study of luminescent properties of a fluorophore.
Forms of Photoluminescence
• Resonant Radiation: In resonant radiation, a photon of a particular wavelength is absorbed and an equivalent photon is immediately emitted; no significant internal energy transitions of the chemical substrate between absorption and emission are involved, and the process occurs on the order of 10 nanoseconds.
• Fluorescence: When the chemical substrate undergoes internal energy transitions before relaxing to its ground state by emitting photons, some of the absorbed energy is dissipated so that the emitted light photons are of lower energy than those absorbed. One of the most familiar such phenomena is fluorescence, which has a short lifetime (10-8 to 10-4 s).
• Phosphorescence: Phosphorescence is a radiative transition, in which the absorbed energy undergoes intersystem crossing into a state with a different spin multiplicity. The lifetime of phosphorescence is usually from 10-4 to 10-2 s, much longer than that of fluorescence. Therefore, phosphorescence is even rarer than fluorescence, since a molecule in the triplet state has a good chance of undergoing intersystem crossing to the ground state before phosphorescence can occur.
Relation between Absorption and Emission Spectra
Fluorescence and phosphorescence come at lower energy than absorption (the excitation energy). As shown in Figure $1$, in absorption, wavelength λ0 corresponds to a transition from the ground vibrational level of S0 to the lowest vibrational level of S1. After absorption, the vibrationally excited S1 molecule relaxes back to the lowest vibrational level of S1 prior to emitting any radiation. The highest energy transition comes at wavelength λ0, with a series of peaks following at longer wavelength. The absorption and emission spectra will have an approximate mirror image relation if the spacings between vibrational levels are roughly equal and if the transition probabilities are similar. The λ0 transitions in Figure $2$ do not exactly overlap. As shown in Figure $8$, a molecule absorbing radiation is initially in its electronic ground state, S0. This molecule possesses a certain geometry and solvation. As the electronic transition is faster than the vibrational motion of atoms or the translational motion of solvent molecules, when radiation is first absorbed, the excited S1 molecule still possesses its S0 geometry and solvation. Shortly after excitation, the geometry and solvation change to their most favorable values for the S1 state. This rearrangement lowers the energy of the excited molecule. When an S1 molecule fluoresces, it returns to the S0 state with S1 geometry and solvation. This unstable configuration must have a higher energy than that of an S0 molecule with S0 geometry and solvation. The net effect in Figure $1$ is that the λ0 emission energy is less than the λ0 excitation energy.
Instrumentation
A schematic of an emission experiment is given in Figure $3$. An excitation wavelength is selected by one monochromator, and luminescence is observed through a second monochromator, usually positioned at 90° to the incident light to minimize the intensity of scattered light reaching the detector. If the excitation wavelength is fixed and the emitted radiation is scanned, an emission spectrum is produced.
Relationship to UV-vis Spectroscopy
Ultraviolet-visible (UV-vis) spectroscopy or ultraviolet-visible spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in the ultraviolet-visible spectral region. The absorption or reflectance in the visible range directly affects the perceived color of the chemicals involved. A UV-vis spectrum is a graph of absorbance versus wavelength and measures transitions from the ground state to the excited state, while photoluminescence deals with transitions from the excited state to the ground state.
An excitation spectrum is a graph of emission intensity versus excitation wavelength. An excitation spectrum looks very much like an absorption spectrum. The greater the absorbance is at the excitation wavelength, the more molecules are promoted to the excited state and the more emission will be observed.
By running a UV-vis absorption spectrum, the wavelength at which the molecule absorbs the most energy, and is thus excited to the greatest extent, can be obtained. Using this value as the excitation wavelength provides a more intense emission at a red-shifted wavelength, which is usually within twice the excitation wavelength.
Applications
Detection of ACQ or AIE properties
Aggregation-caused quenching (ACQ) of light emission is a general phenomenon for many aromatic compounds: fluorescence is weakened as the solution concentration increases and especially in the condensed phase. This effect is strongest in the solid state, and it has prevented many lead luminogens identified by laboratory solution-screening processes from finding real-world applications in an engineering-robust form.
Aggregation-induced emission (AIE), on the other hand, is a novel phenomenon in which aggregation plays a constructive, instead of destructive, role in the light-emitting process, exactly the opposite of the ACQ effect.
A Case Study
From the photoluminescence spectra of hexaphenylsilole (HPS, Figure $4$) shown in Figure $5$, it can be seen that as the water (bad solvent) fraction increases, the emission intensity of HPS increases. For the BODIPY derivative (Figure $6$), the spectra in Figure $7$ show that the PL intensity peaks at 0% water content, a result of intramolecular rotation or twisting, known as twisted intramolecular charge transfer (TICT).
The emission color of an AIE luminogen is scarcely affected by solvent polarity, whereas that of a TICT luminogen typically shifts bathochromically with increasing solvent polarity. Figure $8$, however, shows different patterns of emission under different excitation wavelengths. At an excitation wavelength of 372 nm, which corresponds to the BODIPY group, the emission intensity increases as the water fraction increases. However, it decreases at an excitation wavelength of 530 nm, which corresponds to the TPE group. The presence of two emissions in this compound is due to the presence of two independent groups with AIE and ACQ properties, respectively.
Detection of Luminescence with Respect to Molarity
Figure $9$ shows the photoluminescence spectra of a BODIPY-TPE derivative at different concentrations. At an excitation wavelength of 329 nm, the emission intensity decreases as the molarity increases. Compounds whose PL emission intensity is enhanced at low concentration can be good chemo-sensors for the detection of compounds present in low quantities.
Other Applications
Apart from the detection of light emission patterns, photoluminescence spectroscopy is of great significance in other fields of analysis, especially semiconductors.
Band Gap Determination
The band gap is the energy difference between states in the conduction and valence bands and determines the energy of the radiative transition in semiconductors. The spectral distribution of PL from a semiconductor can be analyzed to nondestructively determine the electronic band gap. This provides a means to quantify the elemental composition of a compound semiconductor and is a vitally important material parameter influencing solar cell device efficiency.
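As a simple numerical illustration of this idea, the sketch below converts a band-edge PL peak wavelength into a band-gap estimate using E = hc/λ (≈ 1239.84 eV·nm); the peak wavelengths used are hypothetical, not measured data from this text.

```python
# Estimate a semiconductor band gap from the wavelength of its band-edge
# photoluminescence peak, using E = hc/lambda (~1239.84 eV*nm).
# The peak wavelengths below are illustrative values only.

HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def band_gap_ev(peak_wavelength_nm):
    """Return the band-gap estimate (eV) for a PL peak at the given wavelength (nm)."""
    return HC_EV_NM / peak_wavelength_nm

for sample, peak_nm in [("hypothetical sample A", 870.0), ("hypothetical sample B", 520.0)]:
    print(f"{sample}: PL peak {peak_nm} nm -> Eg ~ {band_gap_ev(peak_nm):.2f} eV")
```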
Impurity Levels and Defect Detection
Radiative transitions in semiconductors involve localized defect levels. The photoluminescence energy associated with these levels can be used to identify specific defects, and the amount of photoluminescence can be used to determine their concentration. The PL spectrum at low sample temperatures often reveals spectral peaks associated with impurities contained within the host material. Fourier transform photoluminescence microspectroscopy, which is of high sensitivity, provides the potential to identify extremely low concentrations of intentional and unintentional impurities that can strongly affect material quality and device performance.
Recombination Mechanisms
The return to equilibrium, known as “recombination”, can involve both radiative and nonradiative processes. The quantity of PL emitted from a material is directly related to the relative amounts of radiative and nonradiative recombination. Nonradiative rates are typically associated with impurities, and the amount of photoluminescence and its dependence on the level of photo-excitation and temperature are directly related to the dominant recombination process. Thus, analysis of photoluminescence can qualitatively monitor changes in material quality as a function of growth and processing conditions and help in understanding the underlying physics of the recombination mechanism.
Surface and Structure and Excited States
Widely used conventional methods such as XRD, IR, and Raman spectroscopy are very often not sensitive enough for supported oxide catalysts with low metal oxide concentrations. Photoluminescence, however, is very sensitive to surface effects or adsorbed species of semiconductor particles and thus can be used as a probe of electron-hole surface processes.
Limitations of Photoluminescence Spectroscopy
Very low concentrations of optical centers can be detected using photoluminescence, but it is not generally a quantitative technique. The main scientific limitation of photoluminescence is that many optical centers may have multiple excited states, which are not populated at low temperature.
The disappearance of the luminescence signal is another limitation of photoluminescence spectroscopy. For example, in the characterization of the photoluminescence centers of silicon, no sharp-line photoluminescence from the 969 meV centers was observed when they had captured self-interstitials.
Fluorescence Characterization and DNA Detection
Luminescence is a process involving the emission of light from any substance, and occurs from electronically excited states of that substance. Normally, luminescence is divided into two categories, fluorescence and phosphorescence, depending on the nature of the excited state.
Fluorescence is the emission of electromagnetic radiation (light) by a substance that has absorbed radiation of a different wavelength. Phosphorescence is a specific type of photoluminescence related to fluorescence. Unlike fluorescence, a phosphorescent material does not immediately re-emit the radiation it absorbs.
The process of fluorescence absorption and emission is easily illustrated by the Jablonski diagram. A classic Jablonski diagram is shown in Figure $10$, where Sn represents the nth electronic state. There are different vibrational and rotational states within every electronic state. After light absorption, a fluorophore is excited from the ground state to a higher electronic and vibrational state (rotational states are not considered here for simplicity). By internal conversion of energy, these excited molecules relax to the lowest vibrational level of S1 (Figure $10$) and then return to the ground state by emitting fluorescence. In fact, the excited molecules usually return to higher vibrational levels of S0 and then relax to the lowest vibrational level of S0 by thermal processes. It is also possible for some molecules to undergo intersystem crossing to the T2 state (Figure $10$). After internal conversion and relaxation to T1, these molecules can emit phosphorescence and return to the ground state.
The Stokes shift, the excited-state lifetime, and the quantum yield are the three most important characteristics of fluorescence emission. The Stokes shift is the difference between the positions of the band maxima of the absorption and emission spectra of the same electronic transition. According to the mechanism discussed above, the emission must occur at lower energy, i.e., longer wavelength, than the absorbed light. The quantum yield is a measure of the efficiency of fluorescence, defined as the ratio of emitted photons to absorbed photons. The excited-state lifetime is a measure of the decay time of the fluorescence.
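To make these quantities concrete, the short sketch below computes a Stokes shift (in nm and cm-1) and a quantum yield from hypothetical absorption and emission maxima and photon counts; all numbers are illustrative.

```python
# Illustrative calculation of Stokes shift and fluorescence quantum yield.
# Wavelengths and photon counts are hypothetical, chosen only as an example.

abs_max_nm = 480.0    # absorption band maximum (nm), assumed
em_max_nm = 525.0     # emission band maximum (nm), assumed

stokes_shift_nm = em_max_nm - abs_max_nm
# The same shift expressed in wavenumbers (cm^-1); 1e7 converts nm to cm^-1
stokes_shift_cm1 = 1e7 / abs_max_nm - 1e7 / em_max_nm

photons_absorbed = 1.0e6   # assumed
photons_emitted = 3.2e5    # assumed
quantum_yield = photons_emitted / photons_absorbed

print(f"Stokes shift: {stokes_shift_nm:.0f} nm ({stokes_shift_cm1:.0f} cm^-1)")
print(f"Quantum yield: {quantum_yield:.2f}")
```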
Instrumentation of Fluorescence Spectroscopy
Spectrofluorometers
Most spectrofluorometers can record both excitation and emission spectra. An emission spectrum is the wavelength distribution of an emission measured at a single constant excitation wavelength. In comparison, an excitation spectrum is measured at a single emission wavelength by scanning the excitation wavelength.
Light Sources
Specific light sources are chosen depending on the application.
Arc and Incandescent Xenon Lamps
The high-pressure xenon (Xe) arc lamp is currently the most versatile light source for steady-state fluorometers. It provides a steady output from 250 - 700 nm (Figure $11$), with only a few sharp lines near 450 and 800 nm. The reason xenon arc lamps emit a continuum is the recombination of electrons with ionized Xe atoms; these ions are produced by collisions between Xe atoms and electrons. The sharp lines near 450 nm are due to excited Xe atoms that are not ionized.
During a fluorescence experiment, some distortion of the excitation spectra can be observed, especially for absorbances in the visible and ultraviolet regions. Any distortion displayed in the peaks is the result of the wavelength-dependent output of Xe lamps; therefore, mathematical and physical corrections need to be applied.
High Pressure Mercury Lamps
Compared with xenon lamps, Hg lamps have higher intensities. As shown in Figure $11$, the intensity of Hg lamps is concentrated in a series of lines, so they are a potentially better excitation light source if the lines match the absorption of a particular fluorophore.
Xe-Hg Arc Lamps
High-pressure xenon-mercury lamps have also been produced. They have much higher intensity in the ultraviolet region than normal Xe lamps. Also, the introduction of Xe to Hg lamps broadens the sharp-line output of the Hg lamp. Although the output wavelengths are still dominated by the Hg lines, these lines are broadened and better matched to various fluorophores. The Xe-Hg lamp output depends on the operating temperature.
Low Pressure Hg and Hg-Ar Lamps
Due to their very sharp line spectra, low-pressure Hg and Hg-Ar lamps are primarily useful for calibration purposes. The combination of Hg and Ar extends the output range to 200 - 1000 nm.
Other Light Source
There are many other light sources for experimental and industrial applications, such as pulsed xenon lamps, quartz-tungsten halogen (QTH) lamps, and LED light sources.
Monochromators
Most of the light sources used provide only polychromatic or white light, whereas experiments require nearly monochromatic light with a bandwidth on the order of 10 nm. Monochromators help to achieve this aim. Prisms and diffraction gratings are the two main kinds of monochromator used, although diffraction gratings are the most useful, especially in spectrofluorometers.
Dispersion, efficiency, stray light level, and resolution are important parameters for monochromators. Dispersion is mainly determined by the slit width and is expressed in nm/mm. A low stray light level is preferred; stray light is defined as light transmitted by the monochromator at wavelengths outside the chosen range. Also, a high efficiency is required to increase the ability to detect low light levels. Resolution depends on the slit width. There are normally two slits, an entrance and an exit slit, in a fluorometer. The light intensity that passes through the slits is proportional to the square of the slit width. Larger slits give larger signal levels but lower resolution, and vice versa. Therefore, it is important to balance signal intensity and resolution when choosing the slit width.
Optical filters
Optical filters are used in addition to monochromators because the light passing through a monochromator is rarely ideal; optical filters are needed to further purify the light. If the basic excitation and emission properties of the particular system under study are known, then selection using optical filters can be better than using monochromators alone. Two kinds of optical filter are commonly employed: colored filters and thin-film filters.
Colored Filters
Colored filters are the most traditional filters, used before thin-film filters were developed. They can be divided into two categories: bandpass (monochromatic) filters and long-pass filters. The first kind passes only a small range of light (about 10 - 25 nm) centered at a particular chosen wavelength. In contrast, long-pass filters transmit all wavelengths above a particular wavelength. When using these filters, special attention must be paid to the possibility of emission from the filter itself, because many filters are made of luminescent materials that are easily excited by UV light. To avoid this problem, it is better to position the filter further away from the sample.
Thin-film Filters
The transmission curves of colored-glass filters are not suitable for some applications, and as such they are gradually being replaced by thin-film filters. Almost any desired transmission curve can be obtained using a thin-film filter.
Detectors
The standard detector used in many spectrofluorometers is the InGaAs array, which provides rapid and robust spectral characterization in the near-IR. Liquid-nitrogen cooling is applied to decrease the background noise. Normally, detectors are connected to a controller that transfers a digital signal to and from the computer.
Fluorophores
At present, a wide range of fluorophores have been developed as fluorescence probes for bio-systems. They are widely used for clinical diagnosis, bio-tracking and labeling. The advance of fluorometers has been accompanied by developments in fluorophore chemistry. Thousands of fluorophores have been synthesized, but herein four categories of fluorophores will be discussed with regard to their spectral properties and applications.
Intrinsic or Natural Fluorophores
Tryptophan (trp), tyrosine (tyr), and phenylalanine (phe) are three natural amino acids with strong fluorescence (Figure $12$). In tryptophan, the indole group absorbs excitation light in the UV region and emits fluorescence.
Green fluorescent protein (GFP) is another natural fluorophore. GFP is composed of 238 amino acids (Figure $13$), and it exhibits a characteristic bright green fluorescence when excited. It is mainly extracted from the bioluminescent jellyfish Aequorea victoria, and is employed as a signal reporter in molecular biology.
Extrinsic Fluorophores
Most biomolecules are nonfluorescent; therefore, it is necessary to attach fluorophores to enable labeling or tracking of the biomolecules. DNA, for example, is a biomolecule without fluorescence. The rhodamine (Figure $14$) and BODIPY (Figure $15$) families are two kinds of well-developed organic fluorophores. They have been extensively employed in the design of molecular probes due to their excellent photophysical properties.
Red and Near-infrared (NIR) dyes
With the development of fluorophores, red and near-infrared (NIR) dyes have attracted increasing attention since they can improve the sensitivity of fluorescence detection. In biological systems, autofluorescence lowers the signal-to-noise ratio (S/N) and limits the sensitivity. As the excitation wavelength becomes longer, autofluorescence decreases accordingly, and therefore the signal-to-noise ratio increases. Cyanines are one such group of long-wavelength dyes, e.g., Cy-3, Cy-5 and Cy-7 (Figure $16$), which have emission at 555, 655 and 755 nm respectively.
Long-lifetime Fluorophores
Almost all of the fluorophores mentioned above are organic fluorophores with relatively short lifetimes of 1-10 ns. However, there are also a few long-lifetime organic fluorophores, such as pyrene and coronene, with lifetimes near 400 ns and 200 ns respectively (Figure $17$). A long lifetime is an important property for a fluorophore: with its help, the autofluorescence in a biological system can be adequately removed, improving detectability over background.
Although their emission is, strictly speaking, phosphorescence, transition metal complexes form a significant class of long-lifetime fluorophores. Ruthenium (II), iridium (III), rhenium (I), and osmium (II) are the most popular transition metals; they can combine with one to three diimine ligands to form luminescent metal complexes. For example, iridium forms a cationic complex with two phenylpyridine ligands and one diimine ligand (Figure $18$). This complex has an excellent quantum yield and a relatively long lifetime.
Applications
With advances in fluorometers and fluorophores, fluorescence has become a dominant technology in the medical field, for example in clinical diagnosis and flow cytometry. Herein, the application of fluorescence to DNA and RNA detection is discussed.
The low concentration of DNA and RNA sequences in cells means that probes of high sensitivity are required, while the existence of various DNA and RNA species with similar structures demands high selectivity. Hence, fluorophores were introduced as the signaling group in probes, because fluorescence spectroscopy is the most sensitive technique currently available.
The general design of a DNA or RNA probe involves using an antisense hybridization oligonucleotide to monitor the target DNA sequence. When the oligonucleotide binds to the target DNA, the signaling groups (the fluorophores) emit the designed fluorescence. Based on fluorescence spectroscopy, this signal can be detected, which helps to locate the target DNA sequence. The selectivity inherent in the hybridization between two complementary DNA/RNA sequences gives this kind of DNA probe extremely high selectivity. A molecular beacon is one kind of DNA probe. This simple but novel design was reported by Tyagi and Kramer in 1996 (Figure $19$) and has gradually developed into one of the most common DNA/RNA probes.
Generally speaking, a molecular beacon is composed of three parts: an oligonucleotide, a fluorophore, and a quencher at different ends. In the absence of the target DNA, the molecular beacon is folded like a hairpin due to the interaction between the two complementary sequences at opposite ends of the oligonucleotide. In this state, the fluorescence is quenched by the nearby quencher. In the presence of the target, however, the probe region of the molecular beacon hybridizes to the target DNA, opening the folded beacon and separating the fluorophore and quencher. The fluorescent signal can then be detected, which indicates the existence of a particular DNA.
Fluorescence Correlation Spectroscopy
Fluorescence correlation spectroscopy (FCS) is an experimental technique that measures fluctuations in fluorescence intensity caused by the Brownian motion of particles. Fluorescence is a form of luminescence that involves the emission of light by a substance that has absorbed light or other electromagnetic radiation. Brownian motion is the random motion of particles suspended in a fluid that results from collisions with other molecules or atoms in the fluid. The initial experimental data are presented as intensity over time, but statistical analysis of the fluctuations makes it possible to determine various physical and photophysical properties of molecules and systems. When combined with analysis models, FCS can be used to find diffusion coefficients, hydrodynamic radii, average concentrations, kinetic chemical reaction rates, and singlet-triplet state dynamics. Singlet and triplet states are related to electron spin. Electrons can have a spin of +1/2 or -1/2. For a system in the singlet state, all spins are paired and the total spin for the system is (-1/2) + (1/2) = 0. When a system is in the triplet state, there exist two unpaired electrons with a total spin of 1.
History
The first scientists to be credited with the application of fluorescence to signal-correlation techniques were Douglas Magde, Elliot L. Elson, and Watt W. Webb; therefore they are commonly referred to as the inventors of FCS. The technique was originally used to measure the diffusion and binding of ethidium bromide (Figure $20$) onto double-stranded DNA.
Initially, the technique required high concentrations of fluorescent molecules and was very insensitive. Starting in 1993, large improvements in technology and the development of confocal microscopy and two-photon microscopy allowed for great improvements in the signal-to-noise ratio and the ability to do single-molecule detection. Recently, the applications of FCS have been extended to include the use of Förster resonance energy transfer (FRET), the cross-correlation between two fluorescent channels instead of autocorrelation, and the use of laser scanning. Today, FCS is mostly used in biology and biophysics.
Instrumentation
A basic FCS setup (Figure $21$) consists of a laser line that is reflected into a microscope objective by a dichroic mirror. The laser beam is focused on a sample that contains very dilute amounts of fluorescent particles so that only a few particles pass through the observed space at any given time. When particles cross the focal volume (the observed space) they fluoresce. This light is collected by the objective and passes through the dichroic mirror (the collected light is red-shifted relative to the excitation light), reaching the detector. It is essential to use a detector with high quantum efficiency (the percentage of photons hitting the detector that produce charge carriers). Common types of detectors are the photomultiplier tube (rarely used due to low quantum yield), the avalanche photodiode, and the superconducting nanowire single-photon detector. The detector produces an electronic signal that can be stored as intensity over time or can be immediately autocorrelated. It is common to use two detectors and cross-correlate their outputs, leading to a cross-correlation function that is similar to the autocorrelation function but is free from after-pulsing (when a photon produces two electronic pulses). As mentioned earlier, when combined with analysis models, FCS data can be used to find diffusion coefficients, hydrodynamic radii, average concentrations, kinetic chemical reaction rates, and singlet-triplet dynamics.
Analysis
When particles pass through the observed volume and fluoresce, they can be described mathematically as point spread functions, with the point source of the light being the center of the particle. A point spread function (PSF) is commonly described as an ellipsoid with dimensions in the hundreds-of-nanometers range (although this is not always the case, depending on the particle). With respect to confocal microscopy, the PSF is approximated well by a Gaussian, \ref{1}, where I0 is the peak intensity, r and z are the radial and axial positions, and wxy and wz are the radial and axial radii (with wz > wxy).
$PSF(r,z) \ =\ I_{0}\ e^{-2r^{2}/\omega^{2}_{xy}}\ e^{-2z^{2}/\omega^{2}_{z}} \label{1}$
This Gaussian is assumed with the auto-correlation with changes being applied to the equation when necessary (like the case of a triplet state, chemical relaxation, etc.). For a Gaussian PSF, the autocorrelation function is given by \ref{2}, where \ref{3} is the stochastic displacement in space of a fluorophore after time T.
$G(\tau )\ =\ \frac{1}{\langle N \rangle } \left\langle \exp \left(- \frac{\Delta X(\tau)^{2} \ +\ \Delta Y(\tau )^{2}}{w^{2}_{xy}}\ -\ \frac{\Delta Z(\tau )^{2}}{w^{2}_{z}}\right) \right\rangle \label{2}$
$\Delta \vec{R} (\tau )\ =\ (\Delta X(\tau ),\ \Delta Y(\tau ),\ \Delta Z(\tau )) \label{3}$
The expression is valid if the average number of particles, N, is low and if dark states can be ignored. Because of this, FCS observes a small number of molecules (nanomolar and picomolar concentrations) in a small volume (~1 μm3) and does not require physical separation processes, as the information is determined optically. After applying the chosen autocorrelation function, it becomes much easier to analyze the data and extract the desired information (Figure $22$).
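The autocorrelation step itself is straightforward to sketch numerically. The example below builds a synthetic intensity trace and computes its normalized fluctuation autocorrelation; it is a minimal illustration of the statistical treatment only, not a full FCS analysis with a fitted diffusion model.

```python
import numpy as np

# Minimal sketch of the autocorrelation step in FCS analysis.
# A synthetic intensity trace stands in for detector data; a real analysis
# would then fit G(tau) with a diffusion model to extract tau_D and <N>.

rng = np.random.default_rng(0)
n_points = 100_000
# Fake fluorescence intensity: slow fluctuations (particles drifting through
# the focal volume) plus fast shot noise.
slow = np.convolve(rng.normal(size=n_points), np.ones(200) / 200, mode="same")
intensity = 1000 + 50 * slow + rng.normal(scale=5, size=n_points)

def autocorrelation(trace, max_lag):
    """Normalized fluctuation autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2."""
    mean = trace.mean()
    d = trace - mean
    return np.array(
        [np.mean(d[: len(d) - lag] * d[lag:]) for lag in range(1, max_lag)]
    ) / mean**2

g = autocorrelation(intensity, max_lag=500)
print("G(tau) at the first few lags:", np.round(g[:5], 6))
```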
Application
FCS is often seen in the context of microscopy, being used in confocal microscopy and two-photon excitation microscopy. In both techniques, light is focused on a sample and fluorescence intensity fluctuations are measured and analyzed using temporal autocorrelation. The magnitude of the fluorescence intensity and the amount of fluctuation are related to the number of individual particles; there is an optimum measurement time while the particles are entering or exiting the observation volume. When too many particles occupy the observed space, the overall fluctuations are small relative to the total signal and are difficult to resolve. On the other hand, if the time between molecules passing through the observed space is too long, running an experiment could take an unreasonable amount of time. One application of FCS is the analysis of the concentration of fluorescent molecules in solution. Here, FCS is used to analyze a very small space containing a small number of molecules, and the motion of the fluorescent particles is observed. The fluorescence intensity fluctuates based on the number of particles present; therefore analysis can give the average number of particles present, the average diffusion time, the concentration, and the particle size. This is useful because it can be done in vivo, allowing for the practical study of various parts of the cell. FCS is also a common technique in photophysics, as it can be used to study triplet-state formation and photobleaching. Triplet-state formation refers to the transition between a singlet and a triplet state, while photobleaching is when a fluorophore is photochemically altered such that it permanently loses its ability to fluoresce. By far the most popular application of FCS is its use in studying molecular binding and unbinding. Often, it is not a particular molecule that is of interest but, rather, the interaction of that molecule in a system. By dye-labeling a particular molecule in a system, FCS can be used to determine the kinetics of binding and unbinding (particularly useful in the study of assays).
Main Advantages and Limitations
Table $1$: Advantages and limitations of FCS.

Advantages:
• Can be used in vivo
• Very sensitive
• The same instrumentation can perform various kinds of experiments
• Has been used in various studies; extensive work has been done to establish the technique
• A large amount of information can be extracted

Limitations:
• Can be noisy depending on the system
• Does not work if the concentration of dye is too high
• Raw data does not say much; analysis models must be applied
• If the system deviates substantially from the ideal, analysis models can be difficult to apply (making corrections hard to calculate)
• It may require more calculations to approximate the PSF, depending on the particular shape
Molecular Phosphorescence Spectroscopy
When a material that has been irradiated emits light, it can do so either via incandescence, in which all atoms in the material emit light, or via luminescence, in which only certain atoms emit light (Figure $23$). There are two types of luminescence: fluorescence and phosphorescence. Phosphorescence occurs when excited electrons of a different multiplicity from those in their ground state return to their ground state via emission of a photon (Figure $24$). It is a longer-lasting and less common type of luminescence, as it is a spin-forbidden process, but it finds applications across numerous different fields. This module covers the physical basis of phosphorescence, as well as the instrumentation, sample preparation, limitations, and practical applications of molecular phosphorescence spectroscopy.
Phosphorescence
Phosphorescence is the emission of energy in the form of a photon after an electron has been excited due to radiation. In order to understand the cause of this emission, it is first important to consider the molecular electronic state of the sample. In the singlet molecular electronic state, all electron spins are paired, meaning that their spins are antiparallel to one another. When one paired electron is excited to a higher-energy state, it can either occupy an excited singlet state or an excited triplet state. In an excited singlet state, the excited electron remains paired with the electron in the ground state. In the excited triplet state, however, the electron becomes unpaired with the electron in the ground state and adopts a parallel spin. When this spin conversion happens, the electron in the excited triplet state is said to be of a different multiplicity from the electron in the ground state. Phosphorescence occurs when electrons from the excited triplet state return to the ground singlet state, \ref{4} - \ref{6}, where E represents an electron in the singlet ground state, E* represents the electron in the singlet excited state, and T* represents the electron in the triplet excited state.
$E\ +\ hv \rightarrow E* \label{4}$
$E* \rightarrow T* \label{5}$
$T* \rightarrow \ E\ +\ hv' \label{6}$
Electrons in the triplet excited state are spin-prohibited from returning to the singlet state because their spins are parallel to those in the ground state. In order to return to the ground state, they must undergo a spin conversion, which is not very probable, especially considering that there are many other means of releasing excess energy. Because of the need for an internal spin conversion, phosphorescence lifetimes are much longer than those of other kinds of luminescence, lasting from 10-4 to 104 seconds.
Historically, phosphorescence and fluorescence were distinguished by how long luminescence persisted after the radiation source was removed. Fluorescence was defined as short-lived luminescence (< 10-5 s) because of the ease of transition between the excited and ground singlet states, whereas phosphorescence was defined as longer-lived luminescence. However, basing the difference between the two forms of luminescence purely on time proved to be a very unreliable metric. Fluorescence is now defined as occurring when decaying electrons have the same multiplicity as those of their ground state.
Sample Preparation
Because phosphorescence is unlikely and produces relatively weak emissions, samples using molecular phosphorescence spectroscopy must be very carefully prepared in order to maximize the observed phosphorescence. The most common method of phosphorescence sample preparation is to dissolve the sample in a solvent that will form a clear and colorless solid when cooled to 77 K, the temperature of liquid nitrogen. Cryogenic conditions are usually used because, at low temperatures, there is little background interference from processes other than phosphorescence that contribute to loss of absorbed energy. Additionally, there is little interference from the solvent itself under cryogenic conditions. The solvent choice is especially important; in order to form a clear, colorless solid, the solvent must be of ultra-high purity. The polarity of the phosphorescent sample motivates the solvent choice. Common solvents include ethanol for polar samples and EPA (a mixture of diethyl ether, isopentane, and ethanol in a 5:5:2 ratio) for non-polar samples. Once a disk has been formed from the sample and solvent, it can be analyzed using a phosphoroscope.
Room Temperature Phosphorescence
While using a rigid medium is still the predominant choice for measuring phosphorescence, there have been recent advances in room-temperature spectroscopy, which allows samples to be measured at warmer temperatures. As with sample preparation in a rigid medium, the most important aspect is to maximize the recorded phosphorescence by avoiding other forms of emission. Current methods for allowing good room-temperature detection of phosphorescence include adsorbing the sample onto an external support and placing the sample into a molecular enclosure, both of which protect the triplet state involved in phosphorescence.
Instrumentation and Measurement
Phosphorescence is recorded by two distinct methods, the distinguishing feature between them being whether the light source is steady or pulsed. When the light source is steady, a phosphoroscope, an attachment to a fluorescence spectrometer, is used. The phosphoroscope was devised by Alexandre-Edmond Becquerel, a pioneer in the field of luminescence, in 1857 (Figure $25$).
There are two different kinds of phosphoroscope: rotating-disk phosphoroscopes and rotating-can phosphoroscopes. A rotating-disk phosphoroscope (Figure $26$) comprises two rotating disks with holes, in the middle of which the sample to be tested is placed. After a light beam penetrates one of the disks, the sample is electronically excited by the light energy and can phosphoresce; a photomultiplier records the intensity of the phosphorescence. Changing the speed of the disks' rotation allows a decay curve to be created, which tells the user how long the phosphorescence lasts.
The second type of phosphoroscope, the rotating can phosphoroscope, employs a rotating cylinder with a window to allow passage of light, Figure $27$. The sample is placed on the outside edge of the can and, when light from the source is allowed to pass through the window, the sample is electronically excited and phosphoresces, and the intensity is again detected via photomultiplier. One major advantage of the rotating can phosphoroscope over the rotating disk phosphoroscope is that, at high speeds, it can minimize other types of interferences such as fluorescence and Raman and Rayleigh scattering, the inelastic and elastic scattering of photons, respectively.
The more modern, advanced measurement of phosphorescence uses pulsed-source, time-resolved spectrometry and can be performed on a luminescence spectrometer. A luminescence spectrometer has modes for both fluorescence and phosphorescence, and the spectrometer can measure the emission intensity with respect to either the wavelength of the emitted light or time (Figure $28$).
The spectrometer employs a gated photomultiplier to measure the intensity of the phosphorescence. After the initial burst of radiation from the light source, the gate blocks further light, and the photomultiplier measures both the peak intensity of phosphorescence as well as the decay, as shown in Figure $29$.
The lifetime of the phosphorescence can be calculated from the slope of the decay of the sample signal after the peak intensity. The lifetime depends on many factors, including the wavelength of the incident radiation as well as properties arising from the sample and the solvent used. Although background fluorescence as well as Raman and Rayleigh scattering are still present in pulsed-source, time-resolved spectrometry, they are easily detected and removed from intensity-versus-time plots, allowing for the pure measurement of phosphorescence.
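The lifetime extraction described here amounts to fitting an exponential decay. A minimal sketch, using synthetic decay data and assuming a single-exponential decay, is shown below.

```python
import numpy as np

# Sketch of extracting a phosphorescence lifetime from a decay curve by
# fitting ln(intensity) vs. time. The decay data here are synthetic and a
# single-exponential decay I(t) = I0*exp(-t/tau) is assumed.

true_tau = 2.0e-3                        # assumed lifetime, 2 ms
t = np.linspace(0, 10e-3, 200)           # time axis after the excitation pulse (s)
rng = np.random.default_rng(1)
intensity = 1.0e4 * np.exp(-t / true_tau) * rng.normal(1.0, 0.02, t.size)

# Linear fit of ln(I) vs t: the slope equals -1/tau
slope, intercept = np.polyfit(t, np.log(intensity), 1)
tau_fit = -1.0 / slope
print(f"Fitted lifetime: {tau_fit * 1e3:.2f} ms (true value {true_tau * 1e3:.1f} ms)")
```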
Limitations
The biggest single limitation of molecular phosphorescence spectroscopy is the need for cryogenic conditions. This is a direct result of the unfavorable transition from an excited triplet state to a ground singlet state, which is unlikely and therefore produces low-intensity, difficult-to-detect, long-lasting emission. Because cooling phosphorescent samples reduces the chance of competing deactivation processes, it is vital for current forms of phosphorescence spectroscopy, but this makes it somewhat impractical in settings outside of a specialized laboratory. However, the emergence and development of room-temperature spectroscopy methods give rise to a whole new set of applications and make phosphorescence spectroscopy a more viable method.
Practical Applications
Currently, phosphorescent materials have a variety of uses, and molecular phosphorescence spectrometry is applicable across many industries. Phosphorescent materials find use in radar screens, glow-in-the-dark toys, and in pigments, some of which are used to make highway signs visible to drivers. Molecular phosphorescence spectroscopy is currently in use in the pharmaceutical industry, where its high selectivity and lack of need for extensive separation or purification steps make it useful. It also shows potential in forensic analysis because of the low sample volume requirement. | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.05%3A_Photoluminescence_Phosphorescence_and_Fluorescence_Spectroscopy.txt |
In 1957, Rudolf Mössbauer achieved the first experimental observation of the resonant absorption and recoil-free emission of nuclear γ-rays in solids during his graduate work at the Institute for Physics of the Max Planck Institute for Medical Research in Heidelberg, Germany. Mössbauer received the 1961 Nobel Prize in Physics for his research in resonant absorption of γ-radiation and the discovery of recoil-free emission, a phenomenon that is named after him. The Mössbauer effect is the basis of Mössbauer spectroscopy.
The Mössbauer effect can be described very simply by looking at the energy involved in the absorption or emission of a γ-ray by a nucleus. When a free nucleus absorbs or emits a γ-ray, the nucleus must recoil to conserve momentum, so in terms of energy:
$E_{ \gamma - ray} \ = \ E_{\text{nuclear transition}}\ -\ E_{\text{recoil}} \label{1}$
When the nucleus is in a solid matrix, the recoil energy goes to zero because the effective mass of the nucleus is very large and momentum can be conserved with negligible movement of the nucleus. So, for nuclei in a solid matrix:
$E_{\gamma - ray} \ =\ E_{\text{nuclear transition}} \label{2}$
This is the Mössbauer effect, which results in the resonant absorption and emission of γ-rays and gives us a means to probe the hyperfine interactions between an atom's nucleus and its surroundings.
A Mössbauer spectrometer system consists of a γ-ray source that is oscillated toward and away from the sample by a “Mössbauer drive”, a collimator to filter the γ-rays, the sample, and a detector.
Figure $2$ shows the two basic setups for a Mössbauer spectrometer. The Mössbauer drive oscillates the source so that the incident γ-rays hitting the absorber have a range of energies due to the Doppler effect. The energy scale of a Mössbauer spectrum (x-axis) is therefore generally given in terms of the velocity of the source in mm/s. The source shown (57Co) is used to probe 57Fe in iron-containing samples because 57Co decays to 57Fe, emitting a γ-ray of the right energy to be absorbed by 57Fe. To analyze other Mössbauer isotopes, other suitable sources are used. Fe is the most common element examined with Mössbauer spectroscopy because its 57Fe isotope is abundant enough (2.2%), has a low-energy γ-ray, and has a long-lived excited nuclear state, which are the requirements for an observable Mössbauer spectrum. Other elements that have isotopes with the required parameters for Mössbauer probing are listed in Table $1$.
Table $1$ Elements with known Mössbauer isotopes and those most commonly examined with Mössbauer spectroscopy.
Most commonly examined elements: Fe, Ru, W, Ir, Au, Sn, Sb, Te, I, Eu, Gd, Dy, Er, Yb, Np
Elements that exhibit the Mössbauer effect: K, Ni, Zn, Ge, Kr, Tc, Ag, Xe, Cs, Ba, La, Hf, Ta, Re, Os, Pt, Hg, Ce, Pr, Nd, Sm, Tb, Ho, Tm, Lu, Th, Pa, U, Pu, Am
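The mm/s velocity scale mentioned above maps onto extremely small γ-ray energy shifts through the Doppler effect, ΔE = (v/c)·Eγ. The sketch below makes that conversion explicit for the 14.4 keV 57Fe transition; the drive velocities chosen are illustrative.

```python
# Convert the Mossbauer drive velocity (mm/s) into the Doppler energy shift
# of the emitted gamma-ray: dE = (v/c) * E_gamma.
# Uses the 14.4 keV 57Fe transition; the velocities are illustrative.

C_MM_PER_S = 2.998e11       # speed of light in mm/s
E_GAMMA_EV = 14.4e3         # 57Fe Mossbauer gamma-ray energy in eV

def doppler_shift_neV(velocity_mm_s):
    """Energy shift (in neV) produced by a source moving at velocity_mm_s."""
    return (velocity_mm_s / C_MM_PER_S) * E_GAMMA_EV * 1e9

for v in (0.1, 1.0, 10.0):
    print(f"v = {v:5.1f} mm/s  ->  dE = {doppler_shift_neV(v):8.1f} neV")
```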
Mössbauer Spectra
The primary characteristics looked at in Mössbauer spectra are isomer shift (IS), quadrupole splitting (QS), and magnetic splitting (MS or hyperfine splitting). These characteristics are effects caused by interactions of the absorbing nucleus with its environment.
The isomer shift arises from slightly different nuclear energy levels in the source and absorber due to differences in the s-electron environment of the source and absorber. The oxidation state of an absorber nucleus is one characteristic that can be determined from the IS of a spectrum. For example, due to greater d-electron screening, Fe2+ has less s-electron density than Fe3+ at its nucleus, which results in a greater positive IS for Fe2+.
For absorbers with a nuclear angular momentum quantum number I > 1/2, the non-spherical charge distribution results in quadrupole splitting of the energy states. For example, Fe, with a transition from I = 1/2 to 3/2, will exhibit doublets of individual peaks in the Mössbauer spectrum due to quadrupole splitting of the nuclear states, as shown in red in Figure $2$.
In the presence of a magnetic field, the interaction between the nuclear spin moments and the magnetic field removes all the degeneracy of the energy levels, resulting in the splitting of energy levels with nuclear spin I into 2I + 1 sublevels. Using Fe as an example again, magnetic splitting will result in a sextet, as shown in green in Figure $2$. Notice that although 8 possible transitions are shown, only 6 occur: due to the selection rule |ΔmI| = 0, 1, the transitions represented as black arrows do not occur.
Synthesis of Magnetite Nanoparticles
Numerous schemes have been devised to synthesize magnetite nanoparticles (nMag). The different methods of nMag synthesis can be generally grouped as aqueous or non-aqueous according to the solvents used. Two of the most widely used and explored methods for nMag synthesis are the aqueous co-precipitation method and the non-aqueous thermal decomposition method.
The co-precipitation method of nMag synthesis consists of precipitation of Fe3O4 (nMag) by addition of a strong base to a solution of Fe2+ and Fe3+ salts in water. This method is very simple and inexpensive and produces highly crystalline nMag. The general size of nMag produced by co-precipitation is in the 15 to 50 nm range and can be controlled by the reaction conditions; however, a large size distribution of nanoparticles is produced by this method. Aggregation of particles is also observed with aqueous methods.
The thermal decomposition method consists of the high-temperature thermal decomposition of an iron-oleate complex derived from an iron precursor in the presence of a surfactant in a high-boiling-point organic solvent under an inert atmosphere. Among the many variations of this synthetic method, many different solvents and surfactants are used. However, in almost every method nMag is formed through the thermal decomposition of an iron-oleate complex to give highly crystalline nMag in the 5 to 40 nm range with a very small size distribution. The size of nMag produced is a function of reaction temperature, the iron-to-surfactant ratio, and the reaction time, and various methods achieve good size control by manipulation of these parameters. The nMag synthesized by organic methods is soluble in organic solvents because the nMag is stabilized by a surfactant surface coating, with the polar head group of the surfactant attached to the particle surface and the hydrophobic tail extending away from it (Figure $3$). An example of a thermal decomposition method is shown in Figure $3$.
Mössbauer Analysis of Iron Oxide Nanoparticles
Spectra and Formula Calculations
Due to the potential applications of magnetite nanoparticles (Fe3O4, nMag), many methods have been devised for their synthesis. However, stoichiometric Fe3O4 is not always achieved by different synthetic methods. B-site vacancies introduced into the cubic inverse spinel crystal structure of nMag result in nonstoichiometric iron oxide of the formula (Fe3+)A(Fe(1-3x)2+Fe(1+2x)3+Øx)BO4, where Ø represents a B-site vacancy. The magnetic susceptibility, which is key to most nMag applications, decreases with increased B-site vacancy; hence the extent of B-site vacancy is important. The very high sensitivity of the Mössbauer spectrum to the oxidation state and site occupancy of Fe3+ in cubic inverse spinel iron oxides makes Mössbauer spectroscopy valuable for addressing whether or not the product of a synthetic method is actually nMag and for determining the extent of B-site vacancy.
As with most analyses, using multiple instrumental methods in conjunction is often helpful. This is exemplified by the use of XRD along with Mössbauer spectroscopy in the following analysis. Figure $4$ shows the XRD results and Mössbauer spectra of “magnetite” samples prepared by Fe2+/Fe3+ co-precipitation (Mt025), hematite reduction by hydrogen (MtH2), and hematite reduction with coal (MtC). The XRD analysis shows MtH2 and Mt025 exhibiting only magnetite peaks, while MtC shows the presence of magnetite, maghemite, and hematite. This information becomes very useful when fitting peaks to the Mössbauer spectra because it gives a chemical basis for the peak-fitting parameters and helps to fit the peaks correctly.
Because the iron occupies two local environments, the A-site and B-site, and two species (Fe2+ and Fe3+) occupy the B-site, one might expect the spectrum to be a combination of three subspectra; however, delocalization of electrons, or electron hopping, between Fe2+ and Fe3+ in the B-site causes the nuclei to sense an average valence in the B-site, and thus the spectra are fitted with two curves accordingly. This is most easily seen in the Mt025 spectrum. The two fitted curves correspond to Fe3+ in the A-site and mixed-valence Fe2.5+ in the B-site. The isomer shifts of the fitted curves can be used to determine which curve corresponds to which valence. The isomer shift of the top fitted curve is reported to be 0.661 and that of the bottom fitted curve 0.274, relative to αFe; thus the top fitted curve corresponds to the less s-electron-dense Fe2.5+. The magnetic splitting is quite apparent: in each of the spectra, six peaks are present due to magnetic splitting of the nuclear energy states, as explained previously. Quadrupole splitting is not so apparent, but it is actually present in the spectra. The three peaks to the left of the center of a spectrum should be spaced the same as those to the right if magnetic splitting alone were present, since the energy spacing between sublevels is equal. This is not the case in the above spectra, because the higher-energy I = 3/2 sublevels are split unevenly due to combined magnetic and quadrupole interactions.
Once the peaks have been fitted appropriately, determination of the extent of B-site vacancy in (Fe3+)A(Fe(1-3x)2+Fe(1+2x)3+Øx)BO4 is a relatively simple matter. All one has to do to determine the number of vacancies (x) is solve the equation:
$\frac{RA_{B}}{RA_{A}} = \frac{2-6x}{1+5x} \label{3}$
where RAB and RAA are the relative areas of the fitted curves for the B-site and A-site respectively, defined as:

$RA_{A\ or\ B}\ =\ \frac{\text{Area of the A or B site curve}}{\text{Area of both curves}} \label{4}$
The reasoning for this equation is as follows. Taking into account that the mixed-valence Fe2.5+ curve is a result of the paired interaction between Fe2+ and Fe3+, the nonstoichiometric chemical formula is (Fe3+)A(Fe(1-3x)2+Fe(1+2x)3+Øx)BO4. The relative intensity (or relative area) of the Fe-A and Fe-B curves is very sensitive to stoichiometry because vacancies in the B-site reduce the Fe-B curve intensity and increase the Fe-A curve intensity. This is due to the unpaired Fe3+ (5x per formula unit) adding to the intensity of the Fe-A curve rather than the Fe-B curve. Since the relative area is directly proportional to the number of Fe nuclei contributing to the spectrum, the ratio of the relative areas is equal to the stoichiometric ratio of Fe2.5+ to Fe3+, which yields the above formula.
Example Calculation:

For MtH2, RAB/RAA = 1.89. Substituting this ratio into the relation

$\frac{RA_{B}}{RA_{A}} = \frac{2-6x}{1+5x} \label{5}$

and solving for x yields

$x=\frac{2-\frac{RA_{B}}{RA_{A}}}{5 \frac{RA_{B}}{RA_{A}}\ +\ 6}\ =\ 0.007 \label{6}$

Plugging x into the nonstoichiometric iron oxide formula yields (Fe3+)A(Fe0.9792+Fe1.0143+)BO4, which is very close to stoichiometric.
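The same arithmetic can be scripted for all three samples at once; the short sketch below uses the reported area ratios and reproduces the x values and formulas collected in the table that follows.

```python
# Solve for the B-site vacancy x from the measured relative-area ratio
# r = RA_B / RA_A using r = (2 - 6x)/(1 + 5x), i.e. x = (2 - r)/(5r + 6),
# then write out (Fe3+)_A(Fe2+_(1-3x) Fe3+_(1+2x) vac_x)_B O4.
# The area ratios are the values reported for MtH2, MtC and Mt025.

samples = {"MtH2": 1.89, "MtC": 1.66, "Mt025": 1.60}

for name, ratio in samples.items():
    x = (2 - ratio) / (5 * ratio + 6)
    fe2 = 1 - 3 * x     # Fe2+ per formula unit on the B site
    fe3_b = 1 + 2 * x   # Fe3+ per formula unit on the B site
    print(f"{name}: x = {x:.3f}  ->  (Fe3+)_A(Fe2+_{fe2:.3f} Fe3+_{fe3_b:.3f})_B O4")
```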
Table $2$: Parameters and nonstoichiometric formulas for MtC, Mt025, and MtH2
Sample RAB/RAA X Chemical Formula
MtH2 1.89 0.007 (Fe3+)A(Fe0.9792+Fe1.0143+)BO4
MtC 1.66 0.024 (Fe3+)A(Fe0.9292+Fe1.0483+)BO4
Mt 025 1.60 0.029 (Fe3+)A(Fe0.9142+Fe1.0573+)BO4
Chemical Formulas of Nonstoichiometric Iron Oxide Nanoparticles from Mössbauer Spectroscopy
Chemical Formula Determination
Magnetite (Fe3O4) nanoparticles (n-Mag) are nanometer sized, superparamagnetic, have high saturation magnetization, high magnetic susceptibility, and low toxicity. These properties could be utilized for many possible applications; hence, n-Mag has attracted much attention in the scientific community. Some of the potential applications include drug delivery, hyperthermia agents, MRI contrast agents, cell labeling, and cell separation to name a few.
The crystal structure of n-Mag is cubic inverse spinel, with Fe3+ cations occupying the interstitial tetrahedral sites (A) and Fe3+ along with Fe2+ occupying the interstitial octahedral sites (B) of an FCC lattice of O2-. Including the site occupation and charge of Fe, the n-Mag chemical formula can be written (Fe3+)A(Fe2+Fe3+)BO4. Non-stoichiometric iron oxide results from B-site vacancies in the crystal structure. To maintain charge balance and take into account the degree of B-site vacancy, the iron oxide formula is written (Fe3+)A(Fe(1-3x)2+Fe(1+2x)3+Øx)BO4, where Ø represents a B-site vacancy. The extent of B-site vacancy has a significant effect on the magnetic properties of iron oxide, and in the synthesis of n-Mag stoichiometric iron oxide is not guaranteed; therefore, B-site vacancy warrants attention in iron oxide characterization, and it can be addressed using Mössbauer spectroscopy.
Nuclear magnetic resonance (NMR) spectroscopy is a widely used and powerful method that takes advantage of the magnetic properties of certain nuclei. The basic principle behind NMR is that some nuclei exist in specific nuclear spin states when exposed to an external magnetic field. NMR observes transitions between these spin states that are specific to the particular nuclei in question, as well as to each nucleus's chemical environment. However, this only applies to nuclei whose spin, I, is not equal to 0, so nuclei with I = 0 are ‘invisible’ to NMR spectroscopy. These properties have led to NMR being used to identify molecular structures, monitor reactions, and study metabolism in cells; it is used in medicine, biochemistry, physics, industry, and almost every imaginable branch of science.
Theory
The chemical theory that underlies NMR spectroscopy depends on the intrinsic spin of the nucleus involved, described by the quantum number S. Nuclei with a non-zero spin are always associated with a non-zero magnetic moment, as described by Equation \ref{1}, where μ is the magnetic moment, $S$ is the spin, and γ is the gyromagnetic ratio, which is non-zero for NMR-active nuclei. It is this magnetic moment that allows NMR to be used; therefore nuclei whose quantum spin is zero cannot be measured using NMR. Almost all isotopes that have both an even number of protons and an even number of neutrons have no magnetic moment, and cannot be measured using NMR.
$\mu =\ \gamma \cdot S \label{1}$
In the presence of an external magnetic field (B), a nucleus with spin I = 1/2 has two spin states, +1/2 and -1/2. The difference in energy between these two states at a specific external magnetic field (Bx) is given by Equation \ref{2} and is shown in Figure $1$, where E is the energy, I is the spin of the nucleus, and μ is the magnetic moment of the specific nucleus being analyzed. The energy difference shown is always extremely small, so strong magnetic fields are required for NMR in order to further separate the two energy states. At the applied magnetic fields used for NMR, most magnetic resonance frequencies fall in the radio frequency range.
$E\ =\ \mu \cdot B_{x} / I \label{2}$
The reason NMR can differentiate between different elements and isotopes is that each specific nuclide will only absorb at a very specific frequency. This specificity means that NMR can generally detect one isotope at a time, and this results in different types of NMR, such as 1H NMR, 13C NMR, and 31P NMR, to name only a few.
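As a concrete illustration of these nuclide-specific frequencies, the sketch below computes approximate Larmor frequencies ν = (γ/2π)B for 1H and 13C at two representative field strengths; the gyromagnetic ratios used are approximate literature values, not values taken from this text.

```python
# Larmor (resonance) frequencies nu = (gamma / 2*pi) * B for two common
# NMR nuclei at representative field strengths. The gyromagnetic ratios are
# approximate literature values, in MHz per tesla.

GAMMA_OVER_2PI_MHZ_PER_T = {
    "1H": 42.58,    # approximate
    "13C": 10.71,   # approximate
}

for field_T in (7.05, 11.74):  # roughly "300 MHz" and "500 MHz" magnets for 1H
    for nucleus, gamma in GAMMA_OVER_2PI_MHZ_PER_T.items():
        print(f"B = {field_T:5.2f} T, {nucleus:>3}: nu = {gamma * field_T:7.1f} MHz")
```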
The absorbed frequency of a given type of nucleus is not always constant, since electrons surrounding the nucleus can cause an effect called nuclear shielding, where the magnetic field at the nucleus is changed (usually lowered) because of the surrounding electron environment. This differentiation of a particular nucleus based upon its electronic (chemical) environment allows NMR to be used to identify structure. Since nuclei of the same type in different electron environments will be more or less shielded than one another, the difference in their environment (as observed by a difference in the surrounding magnetic field) is defined as the chemical shift.
Instrumentation
An example of an NMR spectrometer is given in Figure $2$. NMR spectroscopy works by varying the machine's emitted frequency over a small range while the sample is inside a constant magnetic field. Most of the magnets used in NMR machines to create the magnetic field range from 6 to 24 T. The sample is placed within the magnet and surrounded by superconducting coils, and is then subjected to a frequency from the radio-wave source. A detector then interprets the results and sends them to the main console.
Interpreting NMR spectra
Chemical Shift
The different local chemical environments surrounding particular nuclei cause them to resonate at slightly different frequencies. This is a result of a nucleus being more or less shielded than another, and it is called the chemical shift (δ). One factor that affects chemical shift is a change in electron density around a nucleus, such as by a bond to an electronegative group. Hydrogen bonding also changes the electron density in 1H NMR, causing a larger shift. These frequency shifts are minuscule in comparison to the fundamental NMR frequency differences, on a scale of Hz as compared to MHz. For this reason, chemical shifts (δ) are reported in units of ppm on an NMR spectrum, \ref{3}, where Href = the resonance frequency of the reference, Hsub = the resonance frequency of the substance, and Hmachine = the operating frequency of the spectrometer.
$\delta \ =\ (\frac{H_{sub}-H_{ref}}{H_{machine}})\ \times 10^{6} \label{3}$
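A quick numerical illustration of this definition is given below; the spectrometer and peak frequencies are invented purely for the example.

```python
# Chemical shift in ppm from resonance frequencies, following Equation (3).
# The frequencies below are invented for illustration only.

def chemical_shift_ppm(f_substance_hz, f_reference_hz, spectrometer_hz):
    """delta (ppm) = (f_substance - f_reference) / f_spectrometer * 1e6."""
    return (f_substance_hz - f_reference_hz) / spectrometer_hz * 1e6

spectrometer = 400e6            # a 400 MHz instrument (assumed)
f_tms = 400.000000e6            # reference (TMS) resonance, assumed
f_peak = f_tms + 1450.0         # a peak 1450 Hz downfield of TMS, assumed

print(f"delta = {chemical_shift_ppm(f_peak, f_tms, spectrometer):.2f} ppm")  # ~3.6 ppm
```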
Since the chemical shift (δ in ppm) is reported as a relative difference from some reference frequency, a reference compound is required. In 1H and 13C NMR, for example, tetramethylsilane (TMS, Si(CH3)4) is used as the reference. Chemical shifts can be used to identify structural properties in a molecule based on our understanding of different chemical environments. Some examples of where different chemical environments fall on a 1H NMR spectrum are given in Table $1$.
Table $1$ Representative chemical shifts for organic groups in the 1H NMR.
Functional Group Chemical Shift Range (ppm)
Alkyl (e.g. methyl-CH3) ~ 1
Alkyl adjacent to oxygen (-CH2-O) 3 - 4
Alkene (=CH2) ~ 6
Alkyne (C-H) ~ 3
Aromatic 7 - 8
In Figure $3$, a 1H NMR spectrum of ethanol, we can see a clear example of chemical shift. There are three sets of peaks representing the six hydrogens of ethanol (C2H6O). The presence of three sets of peaks means that there are three different chemical environments in which the hydrogens can be found: the three hydrogens on the terminal methyl (CH3) carbon, the two hydrogens on the methylene (CH2) carbon adjacent to the oxygen, and the single hydrogen on the oxygen of the alcohol (OH) group. Once we cover spin-spin coupling, we will have the tools available to match these groups of hydrogens to their respective peaks.
Spin-spin Coupling
Another useful property that allows NMR spectra to give structural information is called spin-spin coupling, which is caused by spin coupling between NMR-active nuclei that are not chemically identical. Different spin states interact through the chemical bonds in a molecule to give rise to this coupling, which occurs when the nucleus being examined is disturbed or influenced by a nearby nuclear spin. In NMR spectra, this effect appears as peak splitting that can give direct information concerning the connectivity of atoms in a molecule. Nuclei that share the same chemical shift do not split each other's peaks in an NMR spectrum.
In general, neighboring NMR active nuclei three or fewer bonds away lead to this splitting. The splitting is described by the relationship where n neighboring nuclei result in n + 1 peaks, with the area distribution given by Pascal’s triangle (Figure $4$). For example, a doublet has two peaks with intensity ratios of 1:1, while a quartet has four peaks with relative intensities of 1:3:3:1. However, being adjacent to a strongly electronegative group such as oxygen can prevent spin-spin coupling. The magnitude of the observed spin splitting depends on many factors and is given by the coupling constant J, which is in units of Hz.
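The n + 1 intensity pattern is simply a row of Pascal’s triangle, i.e., a set of binomial coefficients. A minimal Python sketch of this rule (illustrative only):

```python
from math import comb

def multiplet_intensities(n_neighbors):
    """Relative line intensities for a nucleus split by n equivalent
    spin-1/2 neighbors: the (n+1) binomial coefficients of Pascal's triangle."""
    return [comb(n_neighbors, k) for k in range(n_neighbors + 1)]

print(multiplet_intensities(1))  # [1, 1]        doublet
print(multiplet_intensities(2))  # [1, 2, 1]     triplet
print(multiplet_intensities(3))  # [1, 3, 3, 1]  quartet
```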
Referring again to Figure $3$, we have a good example of how spin-spin coupling manifests itself in an NMR spectrum. In the spectrum we have three sets of peaks: a quartet, a triplet, and a singlet. If we start with the terminal carbon’s hydrogens in ethanol, using the n + 1 rule we see that they have two hydrogens within three bonds (i.e., H-C-C-H), leading us to identify the triplet as the peaks for the terminal carbon’s hydrogens. Looking next at the two central hydrogens, they have four NMR active nuclei within three bonds, but there is no quintet in the spectrum as might be expected. This can be explained by the fact that the single hydrogen bonded to the oxygen is shielded from spin-spin coupling, so it must be the singlet and the two central hydrogens form the quartet. We have now interpreted the NMR spectrum of ethanol by identifying which nuclei correspond to each peak.
Peak Intensity
Mainly useful for proton NMR, the size of the peaks in the NMR spectra can give information concerning the number of nuclei that gave rise to that peak. This is done by measuring the peak’s area using integration. Yet even without using integration the size of different peaks can still give relative information about the number of nuclei. For example a singlet associated with three hydrogen atoms would be about 3 times larger than a singlet associated with a single hydrogen atom.
This can also be seen in the example in Figure $3$. If we integrated the area under each peak, we would find that the ratios of the areas of the quartet, singlet, and triplet are approximately 2:1:3, respectively.
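A minimal sketch of how such integrals are reduced to relative proton counts; the integral values below are hypothetical, chosen only to mimic the approximate 2:1:3 ethanol ratio quoted above.

```python
# Normalize integrated peak areas to the smallest integral to estimate
# the relative number of protons behind each peak (hypothetical values).
areas = {"quartet": 2.04, "singlet": 0.98, "triplet": 3.01}

smallest = min(areas.values())
for peak, area in areas.items():
    print(f"{peak}: ~{round(area / smallest)} H")  # quartet ~2 H, singlet ~1 H, triplet ~3 H
```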
Limitations of NMR
Despite all of its upsides, there are several limitations that can make NMR analysis difficult or impossible in certain situations. One such issue is that the desired isotope of an element that is needed for NMR analysis may have little or no natural abundance. For example the natural abundance of 13C, the active isotope for carbon NMR, is only about 1.1%, which is still sufficient for routine analysis. However, in the case of oxygen the active isotope for NMR is 17O, which is only 0.035% naturally abundant. This means that there are certain elements that are essentially impossible to measure through NMR.
Another problem is that some elements have an extremely low magnetic moment, μ. The sensitivity of NMR machines is based on the magnetic moment of the specific element, but if the magnetic moment is too low it can be very difficult to obtain an NMR spectra with enough peak intensity to properly analyze.
NMR Properties of the Elements
Isotope Natural Abundance (%) Relative NMR Frequency (MHz) Relative Receptivity as Compared to 1H
1H 99.985 100 1.00
3H - 106.7 -
3He 0.00013 76.2 5.8 x 10-7
13C 1.11 25.1 1.8 x 10-4
15N 0.37 10.1 3.9 x 10-6
19F 100 94.1 8.3 x 10-1
29Si 4.7 19.9 3.7 x 10-4
31P 100 40.5 6.6 x 10-2
57Fe 2.2 3.2 7.4 x 10-7
77Se 7.6 19.1 5.3 x 10-4
89Y 100 4.9 1.2 x 10-4
103Rh 100 3.2 3.2 x 10-5
107Ag 51.8 4.0 3.5 x 10-5
109Ag 48.2 4.7 4.9 x 10-5
111Cd 12.8 21.2 1.2 x 10-3
113Cd 12.3 22.2 1.3 x 10-3
117Sna 7.6 35.6 3.5 x 10-3
119Sn 8.6 37.3 4.5 x 10-3
125Tea 7.0 31.5 2.2 x 10-3
129Xe 26.4 27.8 5.7 x 10-3
169Tm 100 8.3 5.7 x 10-4
171Yb 14.3 17.6 7.8 x 10-4
183W 14.4 4.2 1.1 x 10-5
187Os 1.6 2.3 2.0 x 10-7
195Pt 33.8 21.4 3.4 x 10-3
199Hg 16.8 17.9 9.8 x 10-4
203Tl 29.5 57.1 5.7 x 10-2
205Tl 70.5 57.6 1.4 x 10-1
207Pb 22.6 20.9 2.0 x 10-1
Table $1$ NMR properties of selected spin 1/2 nuclei. a Other spin 1/2 also exist.
Isotope Spin Natural Abundance (%) Relative NMR Frequency (MHz) Relative Receptivity as Compared to 1H Quadrupole moment (10-28 m2)
2H 1 0.015 15.4 1.5 x 10-6 2.8 x 10-3
6Li 1 7.4 14.7 6.3 x 10-4 -8 x 10-4
7Li 3/2 92.6 38.9 2.7 x 10-1 -4 x 10-2
9Be 3/2 100 14.1 1.4 x 10-2 5 x 10-2
10B 3 19.6 10.7 3.9 x 10-3 8.5 x 10-2
11B 3/2 80.4 32.1 1.3 x 10-1 4.1 x 10-2
14Na 1 99.6 7.2 1.0 x 10-3 1 x 10-2
17O 5/2 0.037 13.6 1.1 x 10-5 -2.6 x 10-2
23Na 5/2 100 26.5 9.3 x 10-2 1 x 10-1
25Mg 5/2 10.1 6.1 2.7 x 10-4 2.2 x 10-1
27Al 5/2 100 26.1 2.1 x 10-1 1.5 x 10-1
33S 3/2 0.76 7.7 1.7 x 10-5 -5.5 x 10-2
35Cl 3/2 75.5 9.8 3.6 x 10-3 -1 x 10-1
37Cl 3/2 24.5 8.2 6.7 x 10-4 -7.9 x 10-2
39Kb 3/2 93.1 4.7 4.8 x 10-4 4.9 x 10-2
43Ca 7/2 0.15 6.7 8.7 x 10-6 2 x 10-1
45Sc 7/2 100 24.3 3 x 10-1 -2.2 x 10-1
47Ti 5/2 7.3 5.6 1.5 x 10-4 2.9 x 10-1
49Ti 7/2 5.5 5.6 2.1 x 10-4 2.4 x 10-1
51Vb 7/2 99.8 26.3 3.8 x 10-1 -5 x 10-2
53Cr 3/2 9.6 5.7 8.6 x 10-5 3 x 10-2
55Mn 5/2 100 24.7 1.8 x 10-1 4 x 10-1
59Co 7/2 100 23.6 2.8 x 10-1 3.8 x 10-1
61Ni 3/2 1.2 8.9 4.1 x 10-1 1.6 x 10-1
63Cu 3/2 69.1 26.5 6.5 x 10-2 -2.1 x 10-1
65Cu 3/2 30.9 28.4 3.6 x 10-2 -2.0 x 10-1
67Zn 5/2 4.1 6.3 1.2 x 10-4 1.6 x 10-1
69Ga 3/2 60.4 24.0 4.2 x 10-2 1.9 x 10-1
71Ga 3/2 39.6 30.6 5.7 x 10-2 1.2 x 10-1
73Ge 9/2 7.8 3.5 1.1 x 10-4 -1.8 x 10-1
75As 3/2 100 17.2 2.5 x 10-2 2.9 x 10-1
79Br 3/2 50.5 25.1 4.0 x 10-2 3.7 x 10-1
81Br 3/2 49.5 27.1 4.9 x 10-2 3.1 x 10-1
87Rbb 3/2 27.9 32.8 4.9 x 10-2 1.3 x 10-1
87Sr 9/2 7.0 4.3 1.9 x 10-4 3 x 10-1
91Zr 5/2 11.2 9.3 1.1 x 10-3 -2.1 x 10-1
93Nb 9/2 100 24.5 4.9 x 10-1 -2.2 x 10-1
95Mo 5/2 15.7 6.5 5.1 x 10-4 ±1.2 x 10-1
97Mo 5/2 9.5 6.7 3.3 x 10-4 ±1.1
99Ru 5/2 12.7 4.6 1.5 x 10-4 7.6 x 10-2
101Ru 5/2 17.1 5.2 2.8 x 10-4 4.4 x 10-1
105Pd 5/2 22.2 4.6 2.5 x 10-4 8 x 10-1
115Inb 9/2 95.7 22.0 3.4 x 10-1 8.3 x 10-1
121Sb 5/2 57.3 24.0 9.3 x 10-2 -2.8 x 10-1
123Sb 7/2 42.7 13.0 2.0 x 10-2 3.6 x 10-1
127I 5/2 100 20.1 9.5 x 10-2 -7.9 x 10-1
131Xea 3/2 21.3 8.2 5.9 x 10-4 -1.2 x 10-1
133Cs 7/2 100 13.2 4.8 x 10-2 -3 x 10-3
137Bab 3/2 11.3 11.1 7.9 x 10-4 2.8 x 10-1
139La 7/2 99.9 14.2 6.0 x 10-2 2.2 x 10-1
177Hf 7/2 18.5 4.0 2.6 x 10-4 4.5
179Hf 9/2 13.8 2.5 7.4 x 10-5 5.1
181Ta 7/2 99.99 12.0 3.7 x 10-2 3
185Re 5/2 37.1 22.7 5.1 x 10-2 2.3
187Re 5/2 62.9 22.9 8.8 x 10-2 2.2
189Osa 3/2 16.1 7.8 3.9 x 10-4 8 x 10-1
191Ir 3/2 37.3 1.7 9.8 x 10-6 1.1
193Ir 3/2 62.7 1.9 2.1 x 10-5 1.0
197Au 3/2 100 1.7 2.6 x 10-5 5.9 x 10-1
201Hg 3/2 13.2 6.6 1.9 x 10-4 4.4 x 10-1
209Bi 9/2 100 16.2 1.4 x 10-1 -3.8 x 10-1
Table $2$ NMR properties of selected quadrupolar nuclei. a A spin 1/2 isotope also exists. b Other quadrupolar nuclei exist.
NMR Spin Coupling
The Basis of Spin Coupling
Nuclear magnetic resonance (NMR) signals arise when nuclei absorb a certain radio frequency and are excited from one spin state to another. The exact frequency of electromagnetic radiation that the nucleus absorbs depends on the magnetic environment around the nucleus. This magnetic environment is controlled mostly by the applied field, but is also affected by the magnetic moments of nearby nuclei. Nuclei can be in one of many spin states Figure $5$, giving rise to several possible magnetic environments for the observed nucleus to resonate in. This causes the NMR signal for a nucleus to show up as a multiplet rather than a single peak.
When nuclei have a spin of I = 1/2 (as with protons), they can have two possible magnetic moments and thus split a single expected NMR signal into two signals. When more than one nucleus affects the magnetic environment of the nucleus being examined, complex multiplets form as each nucleus splits the signal into two additional peaks. If those nuclei are magnetically equivalent to each other, then some of the signals overlap to form peaks with different relative intensities. The multiplet pattern can be predicted by Pascal’s triangle (Figure $6$), looking at the nth row, where n = the number of nuclei equivalent to each other but not equivalent to the nucleus being examined. In this case, the number of peaks in the multiplet is equal to n + 1.
When there is more than one type of nucleus splitting an NMR signal, then the signal changes from a multiplet to a group of multiplets (Figure $7$). This is caused by the different coupling constants associated with different types of nuclei. Each nucleus splits the NMR signal by a different width, so the peaks no longer overlap to form peaks with different relative intensities.
When nuclei have I > 1/2, they have more than two possible magnetic moments and thus split NMR signals into more than two peaks. The number of peaks expected is 2I + 1, corresponding to the number of possible orientations of the magnetic moment. In reality however, some of these peaks may be obscured due to quadrupolar relaxation. As a result, most NMR focuses on I = 1/2 nuclei such as 1H, 13C, and 31P.
Multiplets are centered around the chemical shift expected for a nucleus had its signal not been split. The total area of a multiplet corresponds to the number of nuclei resonating at the given frequency.
Spin Coupling in molecules
Looking at actual molecules raises questions about which nuclei can cause splitting to occur. First of all, it is important to realize that only nuclei with I ≠ 0 will show up in an NMR spectrum. When I = 0, there is only one possible spin state and obviously the nucleus cannot flip between states. Since the NMR signal is based on the absorption of radio frequency as a nucleus transitions from one spin state to another, I = 0 nuclei do not show up on NMR. In addition, they do not cause splitting of other NMR signals because they only have one possible magnetic moment. This simplifies NMR spectra, in particular of organic and organometallic compounds, greatly, since the majority of carbon atoms are 12C, which have I = 0.
For a nucleus to cause splitting, it must be close enough to the nucleus being observed to affect its magnetic environment. The splitting technically occurs through bonds, not through space, so as a general rule, only nuclei separated by three or fewer bonds can split each other. However, even if a nucleus is close enough to another, it may not cause splitting. For splitting to occur, the nuclei must also be non-equivalent. To see how these factors affect real NMR spectra, consider the spectrum for chloroethane (Figure $8$).
Notice that in Figure $8$ there are two groups of peaks in the spectrum for chloroethane, a triplet and a quartet. These arise from the two different types of I ≠ 0 nuclei in the molecule, the protons on the methyl and methylene groups. The multiplet corresponding to the CH3 protons has a relative integration (peak area) of three (one for each proton) and is split by the two methylene protons (n = 2), which results in n + 1 peaks, i.e., 3, a triplet. The multiplet corresponding to the CH2 protons has an integration of two (one for each proton) and is split by the three methyl protons (n = 3), which results in n + 1 peaks, i.e., 4, a quartet. Each group of nuclei splits the other, so in this way, they are coupled.
Coupling Constants
The difference (in Hz) between the peaks of a multiplet is called the coupling constant. It is particular to the types of nuclei that give rise to the multiplet, and is independent of the field strength of the NMR instrument used. For this reason, the coupling constant is given in Hz, not ppm. The coupling constants for many common pairs of nuclei are known (Table $3$), and this can help when interpreting spectra.
Structural Type (shown as structural drawings in the original) Coupling Constant, J (Hz)
0.5 - 3
12 - 15
12 - 18
7 - 12
0.5 - 3
3 - 11
2 - 3
ortho = 6 - 9; meta = 1 - 3; para = 0 - 1
Table $3$ Typical coupling constants for various organic structural types.
Coupling constants are sometimes written nJ to denote the number of bonds (n) between the coupled nuclei. Alternatively, they are written as J(H-H) or JHH to indicate the coupling is between two hydrogen atoms. Thus, a coupling constant between a phosphorous atom and a hydrogen would be written as J(P-H) or JPH. Coupling constants are calculated empirically by measuring the distance between the peaks of a multiplet, and are expressed in Hz.
Coupling constants may be calculated from spectra using frequency or chemical shift data. Consider the spectrum of chloroethane shown in Figure $5$ and the frequencies of the peaks (collected on a 60 MHz spectrometer) given in Table $4$.
Peak Label $\delta$ (ppm) v (Hz)
a 3.7805 226.83
b 3.6628 219.77
c 3.5452 212.71
d 3.4275 205.65
e 1.3646 81.88
f 1.2470 74.82
g 1.1293 67.76
Table $4$ Chemical shift in ppm and Hz for all peaks in the 1H NMR spectrum of chloroethane. Peak labels are given in Figure $5$.
To determine the coupling constant for a multiplet (in this case, the quartet in Figure $3$), the difference in frequency (ν) between each peak is calculated and the average of these values provides the coupling constant in Hz. For example, using the data from Table $4$:
Frequency of peak c - frequency of peak d = 212.71 Hz - 205.65 Hz = 7.06 Hz
Frequency of peak b - frequency of peak c = 219.77 Hz – 212.71 Hz = 7.06 Hz
Frequency of peak a - frequency of peak b = 226.83 Hz – 219.77 Hz = 7.06 Hz
Average: 7.06 Hz
J(H-H) = 7.06 Hz
In this case the difference in frequency between each set of peaks is the same and therefore an average determination is not strictly necessary. In fact for 1st order spectra they should be the same. However, in some cases the peak picking programs used will result in small variations, and thus it is necessary to take the trouble to calculate a true average.
To determine the coupling constant of the same multiplet using chemical shift data (δ), calculate the difference in ppm between each peak and average the values. Then multiply the chemical shift by the spectrometer field strength (in this case 60 MHz), in order to convert the value from ppm to Hz:
Chemical shift of peak c - chemical shift of peak d = 3.5452 ppm – 3.4275 ppm = 0.1177 ppm
Chemical shift of peak b - chemical shift of peak c = 3.6628 ppm – 3.5452 ppm = 0.1176 ppm
Chemical shift of peak a - chemical shift of peak b = 3.7805 ppm – 3.6628 ppm = 0.1177 ppm
Average: 0.1176 ppm
Average difference in ppm x frequency of the NMR spectrometer = 0.1176 ppm x 60 MHz = 7.056 Hz
J(H-H) = 7.06 Hz
Calculate the coupling constant for the triplet in the spectrum for chloroethane (Figure $6$) using the data from Table $4$.
Using frequency data:
Frequency of peak f - frequency of peak g = 74.82 Hz – 67.76 Hz = 7.06 Hz
Frequency of peak e - frequency of peak f = 81.88 Hz – 74.82 Hz = 7.06 Hz
Average = 7.06 Hz
J(H-H) = 7.06 Hz
Alternatively, using chemical shift data:
Chemical shift of peak f - chemical shift of peak g = 1.2470 ppm – 1.1293 ppm = 0.1177 ppm
Chemical shift of peak e - chemical shift of peak f = 1.3646 ppm – 1.2470 ppm = 0.1176 ppm
Average = 0.11765 ppm
0.11765 ppm x 60 MHz = 7.059 Hz
J(H-H) = 7.06 Hz
Notice the coupling constant for this multiplet is the same as that in the example. This is to be expected since the two multiplets are coupled with each other.
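The arithmetic above is easily automated. The following Python sketch reproduces both calculations using the chloroethane peak data from Table $4$ and the 60 MHz operating frequency stated above.

```python
# Coupling constant from a multiplet: average spacing between adjacent peaks.
# Data are the chloroethane peaks from Table 4 (60 MHz spectrometer).

def j_from_hz(freqs_hz):
    """Average gap (Hz) between adjacent peaks of a multiplet."""
    f = sorted(freqs_hz, reverse=True)
    gaps = [a - b for a, b in zip(f, f[1:])]
    return sum(gaps) / len(gaps)

def j_from_ppm(shifts_ppm, spectrometer_mhz):
    """Average gap (ppm) between adjacent peaks, converted to Hz by the field strength."""
    s = sorted(shifts_ppm, reverse=True)
    gaps = [a - b for a, b in zip(s, s[1:])]
    return (sum(gaps) / len(gaps)) * spectrometer_mhz

quartet_hz = [226.83, 219.77, 212.71, 205.65]   # peaks a-d
triplet_ppm = [1.3646, 1.2470, 1.1293]          # peaks e-g

print(f"J(quartet, from Hz)  = {j_from_hz(quartet_hz):.2f} Hz")        # ~7.06 Hz
print(f"J(triplet, from ppm) = {j_from_ppm(triplet_ppm, 60):.2f} Hz")  # ~7.06 Hz
```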
Second-Order Coupling
When coupled nuclei have similar chemical shifts (more specifically, when Δν is similar in magnitude to J), second-order coupling or strong coupling can occur. In its most basic form, second-order coupling results in “roofing” (Figure $6$). The coupled multiplets point to or lean toward each other, and the effect becomes more noticeable as Δν decreases. The multiplets also become off-centered with second-order coupling. The midpoint between the peaks no longer corresponds exactly to the chemical shift.
In more drastic cases of strong coupling (when Δν ≈ J), multiplets can merge to create deceptively simple patterns. Or, if more than two spins are involved, entirely new peaks can appear, making it difficult to interpret the spectrum manually. Second-order coupling can often be converted into first-order coupling by using a spectrometer with a higher field strength. This works by altering the Δν (which is dependent on the field strength), while J (which is independent of the field strength) stays the same.
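One way to judge whether second-order behaviour is likely is to compare Δν with J directly. A commonly quoted rule of thumb (an approximation, not a sharp threshold) is that a spectrum is essentially first order when Δν/J is greater than roughly 10. A minimal sketch, using approximate chloroethane shifts from Table $4$ and the J value calculated above:

```python
def delta_nu_over_j(shift1_ppm, shift2_ppm, spectrometer_mhz, j_hz):
    """Ratio of chemical-shift separation (in Hz) to the coupling constant;
    small values indicate second-order (strong-coupling) behaviour."""
    delta_nu_hz = abs(shift1_ppm - shift2_ppm) * spectrometer_mhz
    return delta_nu_hz / j_hz

# Chloroethane CH2 (~3.60 ppm) vs CH3 (~1.25 ppm) multiplets, J = 7.06 Hz.
print(f"60 MHz:  {delta_nu_over_j(3.60, 1.25, 60, 7.06):.0f}")   # ~20: essentially first order
print(f"400 MHz: {delta_nu_over_j(3.60, 1.25, 400, 7.06):.0f}")  # ~133: even more so at higher field
```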
P-31 NMR Spectroscopy
Phosphorus-31 nuclear magnetic resonance (31P NMR) is conceptually the same as proton (1H) NMR. The 31P nucleus is useful in NMR spectroscopy due to its relatively high gyromagnetic ratio (17.235 MHz T-1). For comparison, the gyromagnetic ratios of 1H and 13C are 42.576 MHz T-1 and 10.705 MHz T-1, respectively. Furthermore, 31P has a 100% natural isotopic abundance. Like the 1H nucleus, the 31P nucleus has a nuclear spin of 1/2, which makes spectra relatively easy to interpret. 31P NMR is an excellent technique for studying phosphorus-containing compounds, such as organic compounds and metal coordination complexes.
Differences Between 1H and 31P NMR
There are certain significant differences between 1H and 31P NMR. While 1H NMR spectra are referenced to tetramethylsilane [Si(CH3)4], the chemical shifts in 31P NMR are typically reported relative to 85% phosphoric acid (δ = 0 ppm), which is used as an external standard due to its reactivity. However, trimethyl phosphite, P(OCH3)3, is also used since, unlike phosphoric acid, its shift (δ = 140 ppm) is not dependent on concentration or pH. As in 1H NMR, positive chemical shifts correspond to a downfield shift from the standard. However, prior to the mid-1970s, the convention was the opposite. As a result, older texts and papers report shifts using the opposite sign. Chemical shifts in 31P NMR commonly depend on the concentration of the sample, the solvent used, and the presence of other compounds. This is because the external standard does not take into account the bulk properties of the sample. As a result, reported chemical shifts for the same compound can vary by 1 ppm or more, especially for phosphate groups (P=O). 31P NMR spectra are often recorded with all proton signals decoupled, i.e., 31P-{1H}, as is done with 13C NMR. This gives rise to a single, sharp signal per unique 31P nucleus. Herein, we will consider both coupled and decoupled spectra.
Interpreting Spectra
As in 1H NMR, phosphorus signals occur at different frequencies depending on the electron environment of each phosphorus nucleus (Figure $7$). In this section we will study a few examples of phosphorus compounds with varying chemical shifts and coupling to other nuclei.
Different Phosphorus Environments and their Coupling to 1H
Consider the structure of 2,6,7-trioxa-1,4-diphosphabicyclo[2.2.2]octane [Pα(OCH2)3Pβ] shown in Figure $8$. The subscripts α and β are simply used to differentiate the two phosphorus nuclei. According to Table 1, we expect the shift of Pα to be downfield of the phosphoric acid standard, roughly around 125 ppm to 140 ppm and the shift of Pβ to be upfield of the standard, between -5 ppm and -70 ppm. In the decoupled spectrum shown in Figure $8$, we can assign the phosphorus shift at 90.0 ppm to Pα and the shift at -67.0 ppm to Pβ.
Figure $9$ shows the coupling of the phosphorus signals to the protons in the compound. We expect a stronger coupling for Pβ because there are only two bonds separating Pβ from H, whereas three bonds separate Pα from H (JPCH > JPOCH). Indeed, JPCH = 8.9 Hz and JPOCH = 2.6 Hz, corroborating our peak assignments above.
Finally, Figure $10$ shows the 1H spectrum of Pα(OCH2)3Pβ (Figure $11$), which shows a doublet of doublets for the proton signal due to coupling to the two phosphorus nuclei.
As suggested by the data in Figure $7$, we can predict and observe changes in phosphorus chemical shift by changing the coordination of P. Thus for the series of compounds with the structure shown in Figure $11$, the different chemical shifts corresponding to different phosphorus compounds are shown in Table $5$.
X Y Pα chemical shift (ppm) Pβ chemical shift (ppm)
- - 90.0 -67.0
O O -18.1 6.4
S - 51.8 -70.6
Table $5$ 31P chemical shifts for variable coordination of [XPα(OCH2)3PβY] (Figure $11$). Data from K. J. Coskran and J. G. Verkade, Inorg. Chem., 1965, 4, 1655.
Coupling to Fluorine
19F NMR is very similar to 31P NMR in that 19F has spin 1/2 and is a 100% abundant isotope. As a result, 19F NMR is a great technique for fluorine-containing compounds and allows observance of P-F coupling. The coupled 31P and 19F NMR spectra of ethoxybis(trifluoromethyl)phosphine, P(CF3)2(OCH2CH3), are shown in Figure $11$. It is worth noting the splitting due to JPCF = 86.6 Hz.
31P - 1H Coupling
Consider the structure of dimethyl phosphonate, OPH(OCH3)2, shown in Figure $12$. As the phosphorus nucleus is coupled to a hydrogen nucleus bound directly to it, that is, a coupling separated by a single bond, we expect JPH to be very high. Indeed, the separation is so large (715 Hz) that one could easily mistake the split peak for two peaks corresponding to two different phosphorus nuclei.
This strong coupling could also lead us astray when we consider the 1H NMR spectrum of dimethyl phosphonate (Figure $13$). Here we observe two very small peaks corresponding to the phosphine proton. The peaks are separated by such a large distance and are so small relative to the methoxy doublet (ratio of 1:1:12), that it would be easy to confuse them for an impurity. To assign the small doublet, we could decouple the phosphorus signal at 11 ppm, which will cause this peak to collapse into a singlet.
Obtaining 31P Spectra
Sample Preparation
Unlike 13C NMR, which requires high sample concentrations due to the low isotopic abundance of 13C, 31P sample preparation is very similar to 1H sample preparation. As in other NMR experiments, a 31P NMR sample must be free of particulate matter. A reasonable concentration is 2-10 mg of sample dissolved in 0.6-1.0 mL of solvent. If needed, the solution can be filtered through a small glass fiber plug; note that any undissolved solid will not be analyzed in the NMR experiment. Unlike 1H NMR, however, the sample does not need to be dissolved in a deuterated solvent, since common solvents do not have 31P nuclei to contribute to the spectra. This is true, of course, only if a 1H NMR spectrum is not to be obtained from this sample. Being able to use non-deuterated solvents offers many advantages to 31P NMR, such as the simplicity of assaying purity and monitoring reactions, which will be discussed later.
Instrument Operation
Instrument operation will vary according to instrumentation and software available. However, there are a few important aspects to instrument operation relevant to 31P NMR. The instrument probe, which excites nuclear spins and detects chemical shifts, must be set up appropriately for a 31P NMR experiment. For an instrument with a multinuclear probe, it is a simple matter to access the NMR software and make the switch to a 31P experiment. This will select the appropriate frequency for 31P. For an instrument which has separate probes for different nuclei, it is imperative that one be trained by an expert user in changing the probes on the spectrometer.
Before running the NMR experiment, consider whether the 31P spectrum should include coupling to protons. Note that 31P spectra are typically reported with all protons decoupled, i.e., 31P-{1H}. This is usually the default setting for a 31P NMR experiment. To change the coupling setting, follow the instructions specific to your NMR instrument software.
As mentioned previously, chemical shifts in 31P NMR are reported relative to 85% phosphoric acid. This must be an external standard due to the high reactivity of phosphoric acid. One method for standardizing an experiment uses a coaxial tube inserted into the sample NMR tube (Figure $14$). The 85% H3PO4 signal will appear as part of the sample NMR spectrum and can thus be set to 0 ppm.
Another way to reference an NMR spectrum is to use a 85% H3PO4 standard sample. These can be prepared in the laboratory or purchased commercially. To allow for long term use, these samples are typically vacuum sealed, as opposed to capped the way NMR samples typically are. The procedure for using a separate reference is as follows.
1. Insert NMR sample tube into spectrometer.
2. Tune the 31P probe and shim the magnetic field according to your individual instrument procedure.
3. Remove NMR sample tube and insert H3PO4 reference tube into spectrometer.
4. Begin NMR experiment. As scans proceed, perform a Fourier transform and set the phosphorus signal to 0 ppm. Continue to reference the spectrum until the shift stops changing.
5. Stop experiment.
6. Remove H3PO4 reference tube and insert NMR sample into spectrometer.
7. Run NMR experiment without changing the referencing of the spectrum.
31P NMR Applications
Assaying Sample Purity
31P NMR spectroscopy gives rise to single sharp peaks that facilitate differentiating phosphorus-containing species, such as starting materials from products. For this reason, 31P NMR is a quick and simple technique for assaying sample purity. Beware, however, that a “clean” 31P spectrum does not necessarily suggest a pure compound, only a mixture free of phosphorus-containing contaminants.
31P NMR can also be used to determine the optical purity of a chiral sample. Adding an enantiomerically pure chiral agent to the mixture of enantiomers forms two different diastereomers, which give rise to two unique chemical shifts in the 31P spectrum. The ratio of these peaks can then be compared to determine optical purity.
Monitoring Reactions
As suggested in the previous section, 31P NMR can be used to monitor a reaction involving phosphorus compounds. Consider the reaction between a slight excess of organic diphosphine ligand and a nickel(0) bis-cyclooctadiene, Figure $15$.
The reaction can be followed by 31P NMR by simply taking a small aliquot from the reaction mixture and adding it to an NMR tube, filtering as needed. The sample is then used to acquire a 31P NMR spectrum and the procedure can be repeated at different reaction times. The data acquired for these experiments is found in Figure $16$. The change in 31P peak intensities can be used to monitor the reaction, which begins with a single signal at -4.40 ppm, corresponding to the free diphosphine ligand. After an hour, a new signal appears at 41.05 ppm, corresponding to the diphosphine nickel complex. The downfield peak grows as the reaction proceeds relative to the upfield peak. No change is observed between four and five hours, suggesting the conclusion of the reaction.
There are a number of advantages for using 31P for reaction monitoring when available as compared to 1H NMR:
• There is no need for a deuterated solvent, which simplifies sample preparation and saves time and resources.
• The 31P spectrum is simple and can be analyzed quickly. The corresponding 1H NMR spectra for the above reaction would include a number of overlapping peaks for the two phosphorus species as well as peaks for both free and bound cyclooctadiene ligand.
• Purification of the product is also easily assayed.
31P NMR does not eliminate the need for 1H NMR characterization, as impurities lacking phosphorus will not appear in a 31P experiment. However, at the completion of the reaction, both the crude and purified products can be easily analyzed by both 1H and 31P NMR spectroscopy.
Measuring Epoxide Content of Carbon Nanomaterials
One can measure the amount of epoxide on nanomaterials such as carbon nanotubes and fullerenes by monitoring a reaction involving phosphorus compounds in a similar manner to that described above. This technique uses the catalytic reaction of methyltrioxorhenium (Figure $17$). An epoxide reacts with methyltrioxorhenium to form a five-membered ring. In the presence of triphenylphosphine (PPh3), the catalyst is regenerated, forming an alkene and triphenylphosphine oxide (OPPh3). The same reaction can be applied to carbon nanostructures and used to quantify the amount of epoxide on the nanomaterial. Figure $18$ illustrates the quantification of epoxide on a carbon nanotube.
Because the amount of PPh3 initially used in the reaction is known, the relative amounts of PPh3 and OPPh3 can be used to stoichiometrically determine the amount of epoxide on the nanotube. 31P NMR spectroscopy is used to determine the relative amounts of PPh3 and OPPh3 (Figure $19$).
The integration of the two 31P signals is used to quantify the amount of epoxide on the nanotube according to \ref{4}.
$Moles\ of\ Epoxide\ =\ \frac{area\ of\ OPPh_{3}\ peak}{area\ of\ PPh_{3}\ peak} \times \ moles\ PPh_{3} \label{4}$
Thus, from a known quantity of PPh3, one can find the amount of OPPh3 formed and relate it stoichiometrically to the amount of epoxide on the nanotube. Not only does this experiment allow for such quantification, it is also unaffected by the presence of the many different species present in the experiment. This is because the compounds of interest, PPh3 and OPPh3, are the only ones that are characterized by 31P NMR spectroscopy.
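A minimal Python sketch of \ref{4}; the peak areas and the initial amount of PPh3 below are hypothetical values chosen only to illustrate the arithmetic, not data from the cited experiment.

```python
# Equation (4): moles of epoxide from the relative 31P integrals of OPPh3 and PPh3.

def moles_epoxide(area_opph3, area_pph3, moles_pph3):
    """Moles of epoxide = (OPPh3 peak area / PPh3 peak area) x moles of PPh3, as in (4)."""
    return (area_opph3 / area_pph3) * moles_pph3

# Hypothetical integrals and loading.
print(f"{moles_epoxide(area_opph3=0.35, area_pph3=0.65, moles_pph3=1.0e-4):.2e} mol epoxide")
# ~5.4e-5 mol for these illustrative numbers
```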
Conclusion
31P NMR spectroscopy is a simple technique that can be used alongside 1H NMR to characterize phosphorus-containing compounds. When used on its own, the biggest difference from 1H NMR is that there is no need to utilize deuterated solvents. This advantage leads to many different applications of 31P NMR, such as assaying purity and monitoring reactions.
NMR Spectroscopy of Stereoisomers
Nuclear magnetic resonance (NMR) spectroscopy is a very useful tool used widely in modern organic chemistry. It exploits the differences in the magnetic properties of different nuclei in a molecule to yield information about the chemical environment of the nuclei, and subsequently the molecule, in question. NMR data are often more readily interpreted than, say, the more cryptic data obtained from ultraviolet or infrared spectra, because the chemical shifts characteristic of different chemical environments and the multiplicity of the peaks fit well with our conception of the way molecules are structured.
Using NMR spectroscopy, we can differentiate between constitutional isomers, stereoisomers, and enantiomers. The latter two of these three classifications require close examination of the differences in NMR spectra associated with changes in chemical environment due to symmetry differences; however, the differentiation of constitutional isomers can be easily obtained.
Constitutional Isomerism
Nuclei possess both charge and spin, or angular momentum, and from basic physics we know that a spinning charge generates a magnetic moment. The specific nature of this magnetic moment is the main concern of NMR spectroscopy.
For proton NMR, the local chemical environment makes different protons in a molecule resonate at different frequencies. This difference in resonance frequencies can be converted into a chemical shift (δ) for each nucleus being studied. Because each chemical environment results in a different chemical shift, one can easily assign peaks in the NMR data to specific functional groups based upon precedent. Precedents for chemical shifts can be found in any number of basic NMR texts. For example, Figure $20$ shows the spectra of ethyl formate and benzyl acetate. In the lower spectrum, benzyl acetate, notice peaks at δ = 1.3, 4.2, and 8.0 ppm characteristic of the primary, secondary, and aromatic protons, respectively, present in the molecule. In the spectrum of ethyl formate (Figure $20$ b), notice that the number of peaks is the same as that of benzyl acetate (Figure $20$ a); however, the multiplicity of the peaks and their shifts are very different.
The difference between these two spectra is due to spin-spin coupling, in this case the vicinal coupling within the ethyl group of ethyl formate. Spin-spin coupling is the result of magnetic interaction between individual protons transmitted by the bonding electrons between the protons. This spin-spin coupling results in the peak splitting we see in the NMR data. One of the benefits of NMR spectroscopy is its sensitivity to very slight changes in chemical environment.
Stereoisomerism
Diastereomers
Based on their definition, diastereomers are stereoisomers that are not mirror images of each other and are not superimposable. In general, diastereomers have differing reactivity and physical properties. One common example is the difference between threose and erythrose (Figure $21$).
As one can see from Figure $21$, these chemicals are very similar, each having the molecular formula C4H8O4. One may wonder: how are these slight differences in chemical structure represented in NMR? To answer this question, we must look at the Newman projections for a molecule of the general structure shown in Figure $22$.
One can easily notice that the two protons represented are always located in different chemical environments. This is true because the R group makes the proton resonance frequencies v1(I) ≠ v2(III), v2(I) ≠ v1(II), and v2(II) ≠ v1(III). Thus, diastereomers have different vicinal proton-proton couplings and the resulting chemical shifts can be used to identify the isomeric makeup of the sample.
Enantiomers
Enantiomers are stereoisomers that are non-superimposable mirror images of one another, typically arising from a chiral center. Unlike diastereomers, enantiomers share identical physical properties apart from their interaction with plane-polarized light, and this indistinguishability extends to their NMR spectra in an achiral environment. Thus, in order to differentiate between enantiomers, we must make use of an optically active solvent, also called a chiral derivatizing agent (CDA). The first CDA was α-methoxy-α-(trifluoromethyl)phenylacetic acid (MTPA, also known as Mosher's acid) (Figure $23$).
Now, many CDAs exist and are readily available, and CDA development remains an active area of research. In simple terms, one can think of the CDA turning an enantiomeric mixture into a mixture of diastereomeric complexes, producing doublets where each half of the doublet corresponds to one diastereomer, which we already know how to analyze. The resultant peak splitting in the NMR spectra due to the diastereomeric interaction can easily determine optical purity: one may simply integrate the peaks corresponding to the different enantiomers, thus yielding the optical purity of incompletely resolved racemates. One thing to note when performing this experiment is that the interaction between the enantiomeric compounds and the solvent, and thus the magnitude of the splitting, depends upon the asymmetry or chirality of the solvent, the intermolecular interaction between the compound and the solvent, and the temperature. Thus, it is helpful to compare the spectrum of the enantiomer-CDA mixture with that of the pure enantiomer so that changes in chemical shift can be easily noted.
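A minimal sketch of this integration step (the integrals are hypothetical): the enantiomeric excess follows directly from the areas of the two diastereomer signals.

```python
def enantiomeric_excess(integral_major, integral_minor):
    """ee (%) from the integrated areas of the two diastereomeric signals."""
    return 100.0 * (integral_major - integral_minor) / (integral_major + integral_minor)

# Hypothetical 82:18 ratio of the two signals.
print(f"ee = {enantiomeric_excess(82.0, 18.0):.1f}%")  # 64.0%
```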
Basics of Solid-State NMR
NMR stands for nuclear magnetic resonance and functions as a powerful tool for chemical characterization. Even though NMR is used mainly for liquids and solutions, technology has progressed to where NMR of solids can be obtained with ease. Aptly named solid state NMR, this expansion of usable phases has greatly increased our ability to identify chemical compounds. The difficulty with the solid state lies in the fact that solids are never uniform. In a standard NMR experiment, the line broadening interactions cannot be removed by rapid molecular motions, which results in unwieldy broad lines that provide little to no useful information. The difference is so staggering that lines broaden by hundreds to thousands of hertz, as opposed to less than 0.1 Hz in solution, when using an I = 1/2 spin nucleus.
A process known as magic angle spinning (MAS), where the sample is tilted at a specific angle, is used in order to overcome line broadening interactions and achieve usable peak resolutions. In order to understand solid state NMR, its history, operating chemical and mathematical principles, and distinctions from gas phase/solution NMR will be explained.
History
The first notable contribution to what we know today as NMR was Wolfgang Pauli’s (Figure $24$) prediction of nuclear spin in 1926. In 1932 Otto Stern (Figure $25$) used molecular beams and detected nuclear magnetic moments.
Four years later, Gorter performed the first NMR experiment with lithium fluoride (LiF) and hydrated potassium alum (K[Al(SO4)2]•12H2O) at low temperatures. Unfortunately, he was unable to characterize the molecules and the first successful NMR for a solution of water was taken in 1945 by Felix Bloch (Figure $27$). In the same year, Edward Mills Purcell (Figure $27$) managed the first successful NMR for the solid paraffin. Continuing their research, Bloch obtained the 1H NMR of ethanol and Purcell obtained that of paraffin in 1949. In the same year, the chemical significance of chemical shifts was discovered. Finally, high resolution solid state NMR was made possible in 1958 by the discovery of magic angle spinning.
How it Works: From Machine to Graph
NMR spectroscopy works by measuring the nuclear shielding, which is related to the electron density around a particular nucleus. Nuclear shielding is affected by the chemical environment, as different neighboring atoms will have different effects on nuclear shielding; electronegative atoms tend to decrease shielding and vice versa. NMR requires the nuclei analyzed to have a nonzero spin. Commonly used nuclei are 1H, 13C, and 29Si. Once inside the NMR machine, the presence of a magnetic field splits the spin states (Figure $29$).
From Figure $29$ we see that a spin state of 1/2 is split into two spin states. As the spin state value increases, so does the number of spin states. A spin of 1 will have three spin states, 3/2 will have four spin states, and so on. However, higher spin states increase the difficulty of accurately reading NMR results due to confounding peaks and decreased resolution, so spin states of 1/2 are generally preferred. The energy E, corresponding to the radiofrequency transition shown in Figure $29$, can be described by \ref{5}, where µ is the magnetic moment, a property intrinsic to each particular element. This constant can be derived from \ref{6}, where γ is the gyromagnetic ratio, another element-dependent quantity, h is Planck’s constant, and I is the spin.
$E\ =\ \mu B_{0}H_{0} \label{5}$
$\mu \ =\ \gamma h (I(I + 1))^{1/2} \label{6}$
In \ref{5}, E can be replaced by hν, leading to \ref{7}, which can be solved for the NMR resonance frequency (ν).
$h \nu \ =\ \mu B_{0}H_{0} \label{7}$
Using the frequency (ν), the expected chemical shift δ may be computed using \ref{8}.
$\delta \ =\ \frac{(\nu _{observed} - \nu _{reference})}{\nu _{spectrometer}} \label{8}$
Delta (δ) is observed in ppm and gives the distance from a set reference. Delta is directly related to the chemical environment of the particular atom. For a low field, or high delta, an atom is in an environment which induces less shielding than in a high field, or low delta.
NMR Instrument
An NMR can be divided into three main components: the workstation computer where one operates the NMR instrument, the NMR spectrometer console, and the NMR magnet. A standard sample is inserted through the bore tube and pneumatically lowered into the magnet and NMR probe (Figure $30$).
The first layer inside the NMR (Figure $31$) is the liquid nitrogen jacket. Normally, this space is filled with liquid nitrogen at 77 K. The liquid nitrogen reservoir space is mostly above the magnet so that it can act as a less expensive refrigerant to block infrared radiation from reaching the liquid helium jacket.
The layer following the liquid nitrogen jacket is a 20 K radiation shield made of aluminum wrapped with alternating layers of aluminum foil and open weave gauze. Its purpose is to block infrared radiation which the 77 K liquid nitrogen vessel was unable to eliminate, which increases the ability for liquid helium to remain in the liquid phase due to its very low boiling point. The liquid helium vessel itself, the next layer, is made of stainless steel wrapped in a single layer of aluminum foil, acting once again as an infrared radiation shield. It is about 1.6 mm thick and kept at 4.2 K.
Inside the vessel and around the magnet is the aluminum baffle, which acts as another degree of infrared radiation protection as well as a layer of protection for the superconducting magnet from liquid helium reservoir fluctuations, especially during liquid helium refills. The significance is that superconducting magnets at low fields are not fully submerged in liquid helium, but higher field superconducting magnets must maintain the superconducting solenoid fully immersed in liquid helium. The vapor above the liquid itself is actually enough to maintain superconductivity of most magnets, but if it reaches a temperature above 10 K, the magnet quenches. During a quench, the solenoid exceeds its critical temperature for superconductivity and becomes resistive, generating heat. This heat, in turn, boils off the liquid helium. Therefore, a small opening at the very base of the baffle exists as a path for the liquid helium to reach the magnet surface so that during refills the magnet is protected from accidental quenching.
Problems with Solid State NMR
The most notable difference between solid samples and solution/gas in terms of NMR spectroscopy is that molecules in solution rotate rapidly while those in a solid are fixed in a lattice. Different peak readings will be produced depending on how the molecules are oriented in the magnetic field because chemical shielding depends upon the orientation of a molecule, causing chemical shift anisotropy. Therefore, the effect of chemical shielding also depends upon the orientation of the molecule with respect to the spectrometer. These counteracting forces are balanced out in gases and solutions because of their randomized molecular movement, but become a serious issue with fixed molecules observed in solid samples. If the chemical shielding isn’t determined accurately, neither will the chemical shifts (δ).
Another issue with solid samples is dipolar interactions, which can be very large in solids, causing linewidths of tens to hundreds of kilohertz. Dipolar interactions are tensor quantities, whose values depend on the orientation and placement of a molecule with respect to its surroundings. Once again the issue goes back to the lattice structure of solids, in which the molecules are fixed in place. Even though the molecules are fixed, this does not mean that the nuclei are evenly spaced apart. Closer nuclei display greater dipolar interactions and vice versa, creating the noise seen in NMR spectra not adapted for solid samples. Dipolar interactions are averaged out in solution because of randomized movement. Spin state repulsions are averaged out by the molecular motion of solutions and gases. However, in the solid state, these interactions are not averaged and become a third source of line broadening.
Magic Angle Spinning
In order to counteract chemical shift anisotropy and dipolar interactions, magic angle spinning was developed. From \ref{9} and \ref{10}, which describe the dipolar splitting and the chemical shift anisotropy respectively, it becomes evident that both depend on the geometric factor (3cos2θ - 1).
$Dipolar\ splitting \ =\ C(\mu _{0}/8 \pi )(\gamma _{a} \gamma _{x} / r^{2}_{ax})(3 cos^{2} \theta _{iz} - 1) \label{9}$
$\sigma _{zz} \ =\ \bar{\sigma } + 1/3 \Sigma \sigma_{ii} (3 cos^{2} \theta _{iz} - 1) \label{10}$
If this factor is decreased to 0, then line broadening due to chemical shift anisotropy and dipolar interactions will disappear. Therefore, solid samples are rotated at an angle of 54.74˚, effectively allowing solid samples to behave similarly to solutions/gases in NMR spectroscopy. Standard spinning rates range from 12 kHz to an upper limit of 35 kHz, where higher spin rates are necessary to remove higher intermolecular interactions.
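The magic angle itself follows from setting the geometric factor to zero, i.e., cos²θ = 1/3. A short check in Python:

```python
import math

# The magic angle is the angle at which 3*cos^2(theta) - 1 = 0.
theta_magic = math.degrees(math.acos(math.sqrt(1.0 / 3.0)))
print(f"magic angle = {theta_magic:.2f} degrees")  # 54.74

# Verify that the geometric factor of equations (9) and (10) vanishes there.
factor = 3 * math.cos(math.radians(theta_magic)) ** 2 - 1
print(f"3cos^2(theta) - 1 = {factor:.1e}")  # ~0 (to floating-point precision)
```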
Application of Solid State NMR
Solid state NMR is a technique necessary to understand and classify compounds that would not work well in solution, such as powders and complex proteins, or to study crystals too small for a different characterization method.
Solid state NMR gives information about local environment of silicon, aluminum, phosphorus, etc. in the structures, and is therefore an important tool in determining structure of molecular sieves. The main issue frequently encountered is that crystals large enough for X-Ray crystallography cannot be grown, so NMR is used since it determines the local environments of these elements. Additionally, by using 13C and 15N, solid state NMR helps study amyloid fibrils, filamentous insoluble protein aggregates related to neurodegenerative diseases such as Alzheimer’s disease, type II diabetes, Huntington’s disease, and prion diseases.
Using 13-C NMR to Study Carbon Nanomaterials
Carbon Nanomaterial
There are several types of carbon nanomaterial. Members of this family are graphene, single-walled carbon nanotubes (SWNT), multi-walled carbon nanotubes (MWNT), and fullerenes such as C60. Nano materials have been subject to various modification and functionalizations, and it has been of interest to develop methods that could observe these changes. Herein we discuss selected applications of 13C NMR in studying graphene and SWNTs. In addition, a discussion of how 13C NMR could be used to analyze a thin film of amorphous carbon during a low-temperature annealing process will be presented.
13C NMR vs. 1H NMR
Since carbon is found in every organic molecule, an NMR technique that can analyze carbon is very helpful; unfortunately the major isotope, 12C, is not NMR active. Fortunately, 13C, with a natural abundance of 1.1%, is NMR active. This low natural abundance, along with the lower gyromagnetic ratio of 13C, decreases the sensitivity. Due to this lower sensitivity, obtaining a 13C NMR spectrum with a given signal-to-noise ratio requires averaging many more scans than would be needed to reach the same signal-to-noise ratio in a 1H NMR spectrum. Although it has a lower sensitivity, 13C NMR is still widely used as it discloses valuable information.
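As a rough, idealized illustration of this point (assuming only that signal-to-noise grows with the square root of the number of averaged scans, and ignoring relaxation times and all other practical factors), the relative receptivity of 13C listed in the table of spin-1/2 nuclei above implies:

```python
# Idealized estimate: to match the S/N of a 1H spectrum, the number of scans
# scales roughly as (1 / relative receptivity)^2. Real experiments also depend
# on relaxation times, pulse sequences, and probe design.
receptivity_13C = 1.8e-4  # relative to 1H (from the spin-1/2 table above)

scan_factor = (1.0 / receptivity_13C) ** 2
print(f"~{scan_factor:.1e} times more scans (idealized)")  # ~3e7
```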
Peaks in a 1H NMR spectrum are split into n + 1 peaks, where n is the number of hydrogen atoms on the adjacent carbon atom. The splitting pattern in 13C NMR is different. First of all, C-C splitting is not observed, because the probability of having two adjacent 13C atoms is about 0.01%. The observed splitting, which is due to the hydrogen atoms on the same carbon atom rather than on the adjacent carbon atom, is governed by the same n + 1 rule.
In 1H NMR, the integrals of the peaks are used for quantitative analysis, whereas this is problematic in 13C NMR. The relaxation of carbon nuclei takes much longer than that of hydrogen nuclei, and it also depends on the order of the carbon (i.e., 1°, 2°, etc.). This causes the peak heights to not be simply related to the quantity of the corresponding carbon atoms.
Another difference between 13C NMR and 1H NMR is the chemical shift range. The range of the chemical shifts in a typical NMR spectrum reflects the difference between the minimum and maximum amount of electron density around that specific nucleus. Since hydrogen is surrounded by fewer electrons in comparison to carbon, the maximum change in the electron density for hydrogen is less than that for carbon. Thus, the range of chemical shifts in 1H NMR is narrower than that of 13C NMR.
Solid State NMR
13C NMR spectra could also be recorded for solid samples. The peaks for solid samples are very broad because the sample, being solid, cannot have all anisotropic, or orientation-dependent, interactions canceled due to rapid random tumbling. However, it is still possible to do high resolution solid state NMR by spinning the sample at 54.74° with respect to the applied magnetic field, which is called the magic angle. In other words, the sample can be spun to artificially cancel the orientation-dependent interaction. In general, the spinning frequency has a considerable effect on the spectrum.
13C NMR of Carbon Nanotubes
Single-walled carbon nanotubes contain sp2 carbons. Derivatives of SWNTs contain sp3 carbons in addition. There are several factors that affect the 13C NMR spectrum of a SWNT sample, three of which will be reviewed in this module: 13C percentage, diameter of the nanotube, and functionalization.
13C Percentage
For sp2 carbons, there is a slight dependence of 13C NMR peaks on the percentage of 13C in the sample. Samples with a lower 13C percentage are slightly shifted downfield (higher ppm). Data are shown in Table $4$. Please note that these peaks are for the sp2 carbons.
Sample $\delta$ (ppm)
SWNTs(100%) 116±1
SWNTs(1%) 118±1
Table $4$ Effects of 13C percentage on the sp2 peak. Data from S. Hayashi, F. Hoshi, T. Ishikura, M. Yumura, and S. Ohshima, Carbon, 2003, 41, 3047.
Diameter of the Nanotubes
The peak position for SWNTs also depends on the diameter of the nanotubes. It has been reported that the chemical shift for sp2 carbons decreases as the diameter of the nanotubes increases. Figure $32$ shows this correlation. Since the peak position depends on the diameter of nanotubes, the peak broadening can be related to the diameter distribution. In other words, the narrower the peak is, the smaller the diameter distribution of SWNTs is. This correlation is shown in Figure $33$.
Functionalization
Solid state 13C NMR can also be used to analyze functionalized nanotubes. As a result of functionalizing SWNTs with groups containing a carbonyl group, a slight shift toward higher fields (lower ppm) for the sp2 carbons is observed. This shift is explained by the perturbation applied to the electronic structure of the whole nanotube as a result of the modifications on only a fraction of the nanotube. At the same time, a new peak emerges at around 172 ppm, which is assigned to the carboxyl group of the substituent. The peak intensities could also be used to quantify the level of functionalization. Figure $34$ shows these changes, in which the substituents are –(CH2)3COOH, –(CH2)2COOH, and –(CH2)2CONH(CH2)2NH2 for the spectra Figure $34$ b, Figure $34$ c, and Figure $34$ d, respectively. Note that the bond between the nanotube and the substituent is a C-C bond. Due to low sensitivity, the peak for the sp3 carbons of the nanotube, which are not present in large quantity, is not detected. There is a small peak around 35 ppm in Figure $34$, which can be assigned to the aliphatic carbons of the substituent.
For substituents containing aliphatic carbons, a new peak around 35 ppm emerges, as was shown in Figure $34$, which is due to the aliphatic carbons. When the quantity of the substituent carbons is low, this peak may not be detected. Small substituents on the sidewall of SWNTs can be chemically modified to contain more carbons, so that the signal due to those carbons can be detected. This idea, as a strategy for enhancing the signal from the substituents, can be used to analyze certain types of sidewall modifications. For example, when Gly (–NH2CH2CO2H) was added to F-SWNTs (fluorinated SWNTs) to substitute the fluorine atoms, the 13C NMR spectrum of the Gly-SWNTs showed only one peak, for the sp2 carbons. When the aliphatic substituent was changed to 6-aminohexanoic acid with five aliphatic carbons, the peak became detectable, and using 11-aminoundecanoic acid (ten aliphatic carbons) the peak intensity was on the order of the size of the peak for the sp2 carbons. In order to use 13C NMR to enhance the substituent peak (for modification quantification purposes, for example), Gly-SWNTs were treated with 1-dodecanol to convert the Gly to an amino ester. This modification resulted in enhancing the aliphatic carbon peak at around 30 ppm. Similar to the results in Figure $34$, a peak at around 170 ppm emerged, which was assigned to the carbonyl carbon. The sp3 carbon of the SWNTs, which was attached to nitrogen, produced a small peak at around 80 ppm, which is detected in a cross-polarization magic angle spinning (CP-MAS) experiment.
F-SWNTs (fluorinated SWNTs) are reported to have a peak at around 90 ppm for the sp3 carbon of the nanotube that is attached to the fluorine. The results of this part are summarized in Table $5$ (approximate values).
Group $\delta$(ppm) Intensity
sp2 carbons of SWNTs 120 Strong
–NH2(CH2)nCO2H (aliphatic carbon, n=1,5, 10) 20-40 Depends on ‘n’
–NH2(CH2)nCO2H (carboxyl carbon, n=1,5, 10) 170 Weak
sp3 carbon attached to nitrogen 80 Weak
sp3 carbon attached to fluorine 90 Weak
Table $5$ Chemical shift for different types of carbons in modified SWNTs. Note that the peak for the aliphatic carbons gets stronger if the amino acid is esterified. Data are obtained from: H. Peng, L. B. Alemany, J. L. Margrave, and V. N. Khabashesku, J. Am. Chem. Soc., 2003, 125, 15174; L. Zeng, L. Alemany, C. Edwards, and A. Barron, Nano. Res., 2008, 1, 72; L. B. Alemany, L. Zhang, L. Zeng, C. L. Edwards, and A. R. Barron, Chem. Mater., 2007, 19, 735.
The peak intensities that are weak in Figure $34$ depend on the level of functionalization and for highly functionalized SWNTs, those peaks are not weak. The peak intensity for aliphatic carbons can be enhanced as the substituents get modified by attaching to other molecules with aliphatic carbons. Thus, the peak intensities can be used to quantify the level of functionalization.
13C NMR of Functionalized Graphene
Graphene is a single layer of sp2 carbons, which exhibits a benzene-like structure. Functionalization of graphene sheets results in converting some of the sp2 carbons to sp3. The peak for the sp2 carbons of graphene appears at around 140 ppm. It has been reported that fluorinated graphene produces an sp3 peak at around 82 ppm. It has also been reported for graphite oxide (GO), which contains –OH and epoxy substituents, to have peaks at around 60 and 70 ppm for the epoxy and the –OH substituents, respectively. Similar peaks are expected for graphene oxide. Table $6$ summarizes these results.
Type of Carbon $\delta$(ppm)
sp2 140
sp3 attached to fluorine 80
sp3 attached to -OH (for GO) 70
sp3 attached to epoxide (for GO) 60
Table $6$ Chemical shifts for functionalized graphene. Data are obtained from: M. Dubois, K. Guérin, J. P. Pinheiro, Z. Fawal, F. Masin, and A. Hamwi, Carbon, 2004, 42, 1931; L. B. Casabianca, M. A. Shaibat, W. W. Cai, S. Park, R. Piner, R. S. Ruoff, and Y. Ishii, J. Am. Chem. Soc., 2010, 132, 5672.
Analyzing Annealing Process Using 13C NMR
13C NMR spectroscopy has been used to study the effects of low-temperature annealing (at 650 °C) on thin films of amorphous carbon. The thin films were synthesized from a 13C enriched carbon source (99%). There were two peaks in the 13C NMR spectrum at about 69 and 142 ppm, which were assigned to sp3 and sp2 carbons, respectively (Figure $35$). The intensity of each peak was used to find the percentage of each type of hybridization in the whole sample, and the broadening of the peaks was used to estimate the distribution of the different types of carbons in the sample. It was found that while the composition of the sample didn’t change during the annealing process (the peak intensities didn’t change, see Figure $35$ b), the full width at half maximum (FWHM) did change (Figure $35$ a). The latter suggested that the structure became more ordered, i.e., the distribution of sp2 and sp3 carbons within the sample became more homogeneous. Thus, it was concluded that the sample became more homogeneous in terms of the distribution of carbons with different hybridization, while the fraction of sp2 and sp3 carbons remained unchanged.
Aside from the reported results from the paper, it can be concluded that 13C NMR is a good technique to study annealing, and possibly other similar processes, in real time, if the kinetics of the process is slow enough. For these purposes, the peak intensity and FWHM can be used to find or estimate the fraction and distribution of each type of carbon respectively.
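A minimal sketch of the composition step described above, with hypothetical peak areas rather than the published values:

```python
# Fraction of sp2 and sp3 carbon from the integrated areas of the two 13C peaks.
def hybridization_fractions(area_sp2, area_sp3):
    total = area_sp2 + area_sp3
    return area_sp2 / total, area_sp3 / total

f_sp2, f_sp3 = hybridization_fractions(area_sp2=62.0, area_sp3=38.0)  # hypothetical areas
print(f"sp2: {f_sp2:.0%}, sp3: {f_sp3:.0%}")  # 62%, 38% for these illustrative integrals
```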
Summary
13C NMR can reveal important information about the structure of SWNTs and graphene. 13C NMR chemical shifts and FWHM can be used to estimate the nanotube diameter and the diameter distribution. Though there are some limitations, it can be used to obtain some information about the substituent type, as well as to quantify the level of functionalization. Modifications of the substituent can result in enhancing the substituent signal. Similar types of information can be obtained for graphene. 13C NMR can also be employed to track changes during annealing, and possibly during other modifications with similar time scales, provided the kinetics are slow enough. Due to the low natural abundance of 13C, it might be necessary to synthesize 13C-enriched samples in order to obtain spectra with a sufficient signal-to-noise ratio. C60 will not be discussed herein.
Lanthanide Shift Reagents
Nuclear magnetic resonance (NMR) spectroscopy is the most powerful tool for the characterization of organic and organometallic compounds; even complete structures can be determined using this technique alone. In general, NMR gives information about the number of magnetically distinct atoms of the nucleus under study, as well as information regarding the nature of the immediate environment surrounding each nucleus. Because hydrogen and carbon are the major components of organic and organometallic compounds, proton (1H) NMR and carbon-13 (13C) NMR are the most useful nuclei to observe.
Not all protons experience resonance at the same frequency in a 1H NMR spectrum, and thus it is possible to differentiate between them. This diversity is due to the different electronic environments around chemically distinct nuclei. Under an external magnetic field (B0), the electrons in the valence shell are affected; they begin to circulate, generating a magnetic field that opposes the applied magnetic field. This effect is called diamagnetic shielding or diamagnetic anisotropy (Figure $36$).
The greater the electron density around a specific nucleus, the greater the induced field that opposes the applied field, and this results in a different resonance frequency. The identification of protons sounds simple; however, proton chemical shifts have a relatively low sensitivity to changes in the chemical and stereochemical environment, and as a consequence the resonances of chemically similar protons overlap. Several methods have been used to resolve this problem, such as the use of higher-frequency spectrometers or the use of shift reagents such as aromatic solvents or lanthanide complexes. The main issue with high-frequency spectrometers is that they are very expensive, which reduces the number of institutions that can have access to them. In contrast, shift reagents work by reducing the equivalence of nuclei by altering their magnetic environment, and can be used on any NMR instrument. The simplest shift reagents are different solvents; however, some solvents can react with the compound under study, and solvents usually alter the magnetic environment of only a small part of the molecule. Consequently, although there are several methods, most of the work has been done with lanthanide complexes.
The History of Lanthanide Shift Reagents
The first significant induced chemical shift using paramagnetic ions was reported in 1969 by Conrad Hinckley (Figure $37$), who used the bispyridine adduct of tris(2,2,6,6-tetramethylhepta-3,5-dionato)europium(III) (Eu(tmhd)3), better known as Eu(dpm)3, where dpm is the abbreviation of dipivaloylmethanato; the chemical structure is shown in Figure $38$. Hinckley used Eu(tmhd)3 on the 1H NMR spectrum of cholesterol, observing induced shifts ranging from 347 to 2 Hz. The development of this new chemical method to improve the resolution of the NMR spectrum was the stepping-stone for the work of Jeremy Sanders and Dudley Williams, Figure $39$ and Figure $40$ respectively. They observed a significant increase in the magnitude of the induced shift when using just the lanthanide chelate without the pyridine ligands, suggesting that the pyridine donor ligands compete for the active sites of the lanthanide complex. The efficiency of Eu(tmhd)3 as a shift reagent was published by Sanders and Williams in 1970, where they showed a significant difference in the 1H NMR spectrum of n-pentanol using the shift reagent, see Figure $41$.
Analyzing the spectra in Figure $41$, it is easy to see that with the use of Eu(tmhd)3 there is no overlap between peaks; instead, the multiplets of each proton are perfectly clear. After these two publications, the potential of lanthanides as shift reagents for NMR studies became a popular topic. Another example is the fluorinated version of Eu(dpm)3, tris(6,6,7,7,8,8,8-heptafluoro-2,2-dimethyl-3,5-octanedionato)europium(III), better known as Eu(fod)3, which was synthesized in 1971 by Rondeau and Sievers. This LSR presents better solubility and greater Lewis acid character; the chemical structure is shown in Figure $42$.
Mechanism of Inducement of Chemical Shift
Lanthanide ions are Lewis acids, and because of that they have the ability to cause chemical shifts by interacting with the basic sites in a molecule. Lanthanides are especially effective compared with other metals because the unpaired electrons in their f shell lead to significant delocalization of unpaired electron spin density onto the substrate. The lanthanide metal in the complex interacts with the relatively basic lone pairs of electrons of aldehydes, alcohols, ketones, amines, and other functional groups within the molecule, resulting in NMR spectral simplification.
There are two possible mechanisms by which a shift can occur: contact shifts and pseudocontact shifts. The first is a result of the transfer of electron spin density via covalent bond formation from the lanthanide metal ion to the associated nuclei, while the pseudocontact shift arises from the through-space magnetic effect of the unpaired electron's magnetic moment. Lanthanide complexes give shifts primarily by the pseudocontact mechanism. Under this mechanism, there are several factors that influence the shift of a specific NMR peak. The principal factor is the distance between the metal ion and the proton; the shorter the distance, the greater the shift obtained. On the other hand, the direction of the shift depends on the lanthanide complex used. The complexes that produce a shift to lower field (downfield) are those containing erbium, europium, thulium, and ytterbium, while complexes with cerium, neodymium, holmium, praseodymium, samarium, and terbium shift resonances to higher field. Figure $43$ shows the difference between an NMR spectrum without the use of a shift reagent and the same spectrum in the presence of a europium complex (downfield shift) and a praseodymium complex (upfield shift).
Linewidth broadening is not desired because it causes loss of resolution, and lanthanide complexes unfortunately contribute strongly to this effect when used at high concentrations, because they shorten the relaxation time (T2), which in turn increases the linewidth. However, europium and praseodymium are exceptions, giving very low shift broadening, 0.003 and 0.005 Hz/Hz respectively. Europium especially is the most used lanthanide shift reagent because of its inefficient nuclear spin-lattice relaxation properties. It has low angular momentum quantum numbers and a diamagnetic 7F0 ground state. These two properties contribute to a very small separation of the highest and lowest occupied metal orbitals, leading to inefficient relaxation and very little broadening in the NMR spectra. The excited 7F1 state then contributes to the pseudocontact shift.
As mentioned above, lanthanide complexes influence relaxation times; this is because paramagnetic ions affect both chemical shifts and relaxation rates. The relaxation times are of great significance because the width of a specific resonance (peak) depends on them. Changes in relaxation time can also be related to the geometry of the complex.
Measuring the Shift
The easiest and most practical way to measure the lanthanide-induced shift (LIS, Δνi) is to add aliquots of the lanthanide shift reagent (LSR) to the sample containing the compound of interest (substrate), and to record an NMR spectrum after each addition. Because the shift of each proton changes to lower or higher field after each addition of the LSR, the LIS can be measured. With the data collected, a plot of the LIS against the ratio of LSR to substrate generates a straight line whose slope is characteristic of the compound being studied. The identification of compounds by the use of chiral lanthanide shift reagents can be so precise that it is possible to estimate the composition of enantiomers in the solution under study, see Figure $44$.
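As a sketch of this titration analysis, the following Python snippet fits a straight line to hypothetical LIS data; the ratios and shifts below are invented for illustration, and np.polyfit from NumPy performs the linear fit.

import numpy as np

# Hypothetical titration data: molar ratio of LSR to substrate vs. observed induced shift (ppm)
ratio = np.array([0.0, 0.1, 0.2, 0.3, 0.4])       # [LSR]/[substrate]
shift = np.array([0.00, 0.42, 0.85, 1.24, 1.68])  # lanthanide-induced shift of one proton

# The slope of the LIS plot characterizes the proton being followed
slope, intercept = np.polyfit(ratio, shift, 1)
print(f"LIS slope = {slope:.2f} ppm per unit LSR:substrate ratio")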
Now, what is the mechanism that actually operates between the LSR and the compound under study? The LSR is a metal complex with six coordination sites. In the presence of a substrate containing heteroatoms with Lewis base character, the LSR expands its coordination number in solution in order to accept additional ligands. An equilibrium mixture is formed between the substrate and the LSR. \ref{11} and \ref{12} show the equilibria, where L is the LSR, S is the substrate, and LS is the complex formed in solution.
$L\ +\ S \mathrel{\mathop{\rightleftarrows}^{\mathrm{K_{1}}}} \ [LS] \label{11}$
$[LS] \ +\ S \mathrel{\mathop{\rightleftarrows}^{\mathrm{K_{2}}}} [LS_{2}] \label{12}$
The abundance of these species depends on K1 and K2, which are the binding constants. A binding constant is a special case of an equilibrium constant that refers to the binding and unbinding of two species. In most cases K2 is assumed to be negligible, and therefore only the first complex, [LS], is assumed to form. The exchange between L + S and LS in solution is fast on the NMR timescale; consequently, a single averaged signal is recorded for each nucleus.
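Assuming, as above, that only the 1:1 adduct forms, the concentration of the [LS] complex can be computed from K1 and the total concentrations by solving the resulting quadratic. The Python sketch below does this for invented values of K1, L0, and S0; it illustrates the mass-balance algebra, not any particular LSR system.

import math

def complex_concentration(K1, L0, S0):
    """[LS] for L + S <=> LS when K2 is negligible; K1 in M^-1, totals L0 and S0 in M."""
    # Mass balance gives K1*x^2 - (K1*(L0 + S0) + 1)*x + K1*L0*S0 = 0, with x = [LS]
    a, b, c = K1, -(K1 * (L0 + S0) + 1.0), K1 * L0 * S0
    return (-b - math.sqrt(b**2 - 4*a*c)) / (2*a)   # the smaller root is the physical one

# Hypothetical numbers, chosen only to show the calculation
print(complex_concentration(K1=100.0, L0=0.01, S0=0.05))   # ~0.008 M of [LS]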
Determination of Enantiomeric Purity
Besides the great potential of lanthanide shift reagents to improve the resolution of NMR spectra, these complexes have also been used to identify enantiomeric mixtures in solution. To make this possible, the substrate must meet certain requirements. The first is that the organic compounds in the enantiomeric mixture must have a hard organic base as a functional group; the shift reagents are not effective with most soft bases. Though hundreds of chelates have been synthesized since Eu(dcm)3, this remains the LSR that has proved most effective for the resolution of enantiotopic resonances. In an NMR spectrum of an enantiomeric mixture, a large variety of peaks appear, and the difficult part is to identify which of those peaks correspond to which specific enantiomer. The differences in chemical shifts observed for enantiomeric mixtures in solution containing an LSR may arise from at least two sources: the equilibrium constants for the formation of the possible diastereomeric complexes between the substrate and the LSR, and the geometries of these complexes, which might be distinct. The enantiomeric shift differences are sometimes denoted ΔΔδ.
In solution, the exchange between substrate coordinated to the europium ion and free substrate is very fast. To ensure that the europium complexes bind with one or two substrate molecules, an excess of substrate is usually added.
Determination of Relaxation Parameters of Contrast Agents
Magnetic resonance imaging (MRI), also known as nuclear magnetic resonance imaging (NMRI) or magnetic resonance tomography (MRT), is a powerful noninvasive diagnostic technique in which an applied magnetic field (B0) interacts with the spin angular momentum of the nuclei in the tissue. Spin angular momentum depends on the number of protons and neutrons in the nucleus. Nuclei with even numbers of both protons and neutrons have zero spin, are insensitive to the magnetic field, and therefore cannot be viewed by MRI.
In the absence of an external magnetic field, each nucleus can be pictured as an arrow pointing in an arbitrary direction (Figure $46$), and once the magnetic field is applied the nuclei become oriented along it (Figure $47$). Energy must be supplied to orient the nuclei in a specific direction, and energy is emitted when they return to their original position. These transitions are associated with a characteristic angular velocity, defined as the Larmor frequency and given by \ref{13}, where ω is the Larmor frequency, γ is the gyromagnetic ratio, and B0 is the magnetic field. The energy involved in such a transition is not easy to detect, which is why high-resolution spectrometers are required; the most powerful MRI instruments developed to date operate at fields close to 9 Tesla, with masses approaching forty-five tons. Unfortunately, such instruments are expensive to purchase and operate, so techniques that allow more widely available MRI spectrometers to be used for imaging are needed. Fortunately, the huge number of nuclei present in an analyzed sample or body can provide some of the necessary information.
$\omega \ =\ \gamma B_{0} \label{13}$
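For a sense of scale, \ref{13} can be evaluated directly. The short Python sketch below uses the standard gyromagnetic ratio of 1H to compute the proton Larmor frequency at 1.5 T (a common clinical field) and at 9 T (the high-field instruments mentioned above).

import math

gamma_1H = 2.675e8   # gyromagnetic ratio of 1H, rad s^-1 T^-1

for B0 in (1.5, 9.0):                 # magnetic field strengths in tesla
    omega = gamma_1H * B0             # Larmor angular frequency (Eq. 13), rad/s
    nu_MHz = omega / (2 * math.pi) / 1e6
    print(f"B0 = {B0:3.1f} T  ->  nu = {nu_MHz:.1f} MHz")   # ~63.9 MHz and ~383 MHz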
Nuclear Magnetic Resonance Relaxometer
Each nucleus possesses microscopic magnetic spin components along x, y, and z. For randomly distributed nuclei, the x and y components sum to zero, but the z components do not cancel. According to Curie's law, \ref{14}, (Mz is the resulting magnetization along the z axis, C is a material-specific Curie constant, B0 is the magnetic field, and T is the absolute temperature), the z-axis magnetization is proportional to the applied magnetic field. Excitation is accomplished by passing a current through a coil, which tips the magnetization from the z axis into the x and y axes. Once the external current supply is turned off, the magnetization eventually decays back: the magnetization in the x-y plane returns to the z axis, where it equilibrates and the device can no longer detect a signal. The energy emitted by the excited spins induces a new current inside the same coil, which is recorded by the detector; hence the same coil can be used both as the detector and as the source of the magnetic field. This process is called relaxation: the return of magnetization to the z axis is called spin-lattice relaxation or T1 relaxation (the time required for the magnetization to realign along the z axis), and the eventual decay of the magnetization in the x-y plane to zero is called spin-spin relaxation or T2 relaxation (Figure $48$).
$M_{z}\ =\ CB_{0}/T \label{14}$
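The recovery and decay described above are commonly modeled as mono-exponential processes. The Python sketch below tabulates the standard textbook forms Mz(t) = M0(1 − e^(−t/T1)) and Mxy(t) = M0·e^(−t/T2) for hypothetical T1 and T2 values; these idealized expressions are a standard assumption, not taken from the text above.

import numpy as np

M0 = 1.0                      # equilibrium magnetization (arbitrary units)
T1, T2 = 1.0, 0.1             # hypothetical relaxation times, s
t = np.linspace(0.0, 3.0, 7)  # time after a 90 degree pulse, s

Mz  = M0 * (1 - np.exp(-t / T1))   # spin-lattice (T1) recovery along z
Mxy = M0 * np.exp(-t / T2)         # spin-spin (T2) decay in the x-y plane

for ti, z, xy in zip(t, Mz, Mxy):
    print(f"t = {ti:3.1f} s   Mz = {z:.3f}   Mxy = {xy:.3f}")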
Contrast Agents for MRI
In MRI, image contrast is determined by T1, T2, or the proton density, so three different types of image can be obtained. By changing the intervals between radio frequency (RF) 90° pulses and RF 180° pulses, the desired type of image can be selected. There are a few computational techniques available to improve image contrast, namely repetitive scans and various mathematical computations. Repetitive scans take a long time and therefore cannot be applied in clinical MRI, and mathematical computations on their own do not provide the desired results. For that reason, contrast agents (CAs) are an important part of medical imaging for obtaining high-resolution images.
Types of Contrast Agents
Different types of contrast agents are available on the market; they predominantly reduce either T1 or T2, and are differentiated according to their relaxivity1 (r1) and relaxivity2 (r2) values. The relaxivity (ri) can be described as the change in 1/Ti (s-1) of the water molecules per mM concentration of CA. Contrast agents are paramagnetic and interact with the dipole moments of the surrounding water molecules, causing fluctuations in those molecules; this behavior is described by Solomon-Bloembergen-Morgan (SBM) theory. The most efficient agents are derivatives of gadolinium (e.g., gadobenic acid (Figure $49$ a) and gadoxetic acid (Figure $49$ b)), iron (e.g., superparamagnetic iron oxide and ultrasmall superparamagnetic iron oxide), and manganese (e.g., manganese dipyridoxal diphosphate). Fundamentally, the role of a contrast agent can be played by any paramagnetic species.
Principles of Interaction of CAs with the Surrounding Media
There are two main modes of interaction of contrast agents with water molecules. One is direct interaction, called inner sphere relaxation; the other mechanism, which occurs in the absence of direct interaction with a water molecule, is outer sphere relaxation. Water molecules in the first coordination sphere of the metal ion are considered the inner sphere, while protons that diffuse past the complex at random give rise to outer sphere relaxation. Another contribution comes from already affected water molecules, which transfer their relaxation to protons in close proximity; this contribution is called second sphere relaxation and is usually neglected or counted as part of the outer sphere. In inner sphere proton relaxation there are two main mechanisms involved: dipole-dipole interaction between the metal and the proton, and the scalar mechanism. Dipole-dipole interactions affect the electron spin vectors, and the scalar mechanism is usually governed by water exchange. The effect of contrast agents on T1 relaxation is much larger than on T2, since T1 is much longer than T2 in tissues.
Determination of Relaxivity
Determination of relaxivity has become very easy with advancements in NMR and computer technology: one simply loads the sample and reads the values from the screen. However, it is worth considering in more detail the precautions that should be taken during sample preparation and data acquisition.
Sample Preparation
The sample to be analyzed is dissolved in water or another solvent. Generally water is used, since contrast agents for medical MRI are used in aqueous media. The amount of solution used is determined by the internal standard volume, which is used for calibration of the instrument and is usually specified by the instrument manufacturer. A suitable sample holder is an NMR tube. It is important to degas the solvent prior to the measurements by bubbling a gas through it (nitrogen or argon work well), so that no traces of oxygen remain in solution, since oxygen is paramagnetic.
Data Acquisition
Before collecting data it is better to keep the sample in the instrument compartment for a few minutes so that the temperatures of the magnet and the solution equilibrate. The relaxivity (ri) is calculated according to \ref{15}, where Ti is the relaxation time in the presence of CA, Tid is the relaxation time in the absence of CA, and [CA] is the concentration of the paramagnetic CA (mM). Having the relaxivity values allows a particular compound to be compared with other known contrast agents.
$r_{i} \ =\ (1/T_{i} \ -\ 1/T_{id})/[CA] \label{15}$
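In practice, relaxation rates are often measured at several CA concentrations and the relaxivity is taken as the slope of 1/Ti versus [CA], which is equivalent to \ref{15}. The Python sketch below performs that fit on invented T1 data; the numbers are illustrative only.

import numpy as np

# Hypothetical measurements: CA concentration (mM) and observed T1 (s)
conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])       # [CA], mM (0 mM gives T1d)
T1   = np.array([3.00, 1.50, 1.00, 0.60, 0.33])   # observed relaxation times, s

R1 = 1.0 / T1                         # relaxation rates, s^-1
r1, R1_dia = np.polyfit(conc, R1, 1)  # slope = relaxivity r1; intercept = 1/T1d

print(f"r1 = {r1:.2f} s^-1 mM^-1")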
Two-Dimensional NMR
General Principles of Two-Dimensional Nuclear Magnetic Resonance Spectroscopy
History
Jean Jeener (Figure $50$) from the Université Libre de Bruxelles first proposed 2D NMR in 1971. In 1975 Walter P. Aue, Enrico Bartholdi, and Richard R. Ernst (Figure $51$) first used Jeener’s ideas of 2D NMR to produce 2D spectra, which they published in their paper “Two-dimensional spectroscopy, application to nuclear magnetic resonance”. Since this first publication, 2D NMR has increasingly been utilized for structure determination and elucidation of natural products, protein structure, polymers, and inorganic compounds. With the improvement of computer hardware and stronger magnets, newly developed 2D NMR techniques can easily become routine procedures. In 1991 Richard R. Ernst won the Nobel Prize in Chemistry for his contributions to Fourier transform NMR. Looking back on the development of NMR techniques, it is surprising that 2D NMR took so long to be developed, considering the large number of similarities it shares with the simpler 1D experiments.
Why do We Need 2D NMR?
2D NMR was developed in order to address two major issues with 1D NMR. The first issue is the limited scope of a 1D spectrum. A 2D NMR spectrum can be used to resolve peaks in a 1D spectrum and remove any overlap present. With a 1D spectrum, this is typically performed using an NMR with higher field strength, but there is a limit to the resolution of peaks that can be obtained. This is especially important for large molecules that result in numerous peaks as well as for molecules that have similar structural motifs in the same molecule. The second major issue addressed is the need for more information. This could include structural or stereochemical information. Usually to overcome this problem, 1D NMR spectra are obtained studying specific nuclei present in the molecule (for example, this could include fluorine or phosphorus). Of course this task is limited to only nuclei that have active spin states/spin states other than zero and it requires the use of specialized NMR probes.
2D NMR can address both of these issues in several different ways. The following four techniques are just a few of the methods that can be used for this task. J-resolved spectroscopy is used to resolve highly overlapping resonances, usually seen as complex multiplet splitting patterns. Homonuclear correlation spectroscopy can identify spin-coupled pairs of nuclei that overlap in 1D spectra. Heteronuclear shift-correlation spectroscopy can identify all directly bonded carbon-proton pairs, or other combinations of nuclei pairs. Lastly, Nuclear Overhauser Effect (NOE) interactions can be used to obtain information about through-space interactions (rather than through-bond). This technique is often used to determine stereochemistry or protein/peptide interactions.
One-dimensional vs. Two-dimensional NMR
Similarities
The concept of 2D NMR can be considered as an extension of the concept of 1D NMR. As such there are many similarities between the two. Since the acquisition of a 2D spectrum is almost always preceded by the acquisition of a 1D spectrum, the standard used for reference (TMS) and the solvent used (typically CDCl3 or another deuterated solvent) are the same for both experiments. Furthermore, 2D NMR is most often used to reveal any obscurity in a 1D spectrum (whether that is peak overlap, splitting overlap, or something else), so the nuclei studied are the same. Most often these are 1H and 13C, but other nuclei could also be studied.
Differences
Since 2D NMR is a more complicated experiment than 1D NMR, there are also some differences between the two. One of the differences is in the complexity of the data obtained. A 2D spectrum often results from a change in pulse time; therefore, it is important to set up the experiment correctly in order to obtain meaningful information. Another difference arises from the fact that one spectrum is 1D while the other is 2D. As such interpreting a 2D spectrum requires a much greater understanding of the experiment parameters. For example, one 2D experiment might investigate the specific coupling of two protons or carbons, rather than focusing on the molecule as a whole (which is generally the target of a 1D experiment). The specific pulse sequence used is often very helpful in interpreting the information obtained. The software used for 1D spectra is not always compatible with 2D spectra. This is due to the fact that a 2D spectrum requires more complex processing, and the 2D spectra generated often look quite different than 1D spectra. Some software that is commonly used to interpret 2D spectra is either Sparky or Bruker’s TopSpin. Lastly the NMR instrument used to obtain a 2D spectrum typically generates a much larger magnetic field (700-1000 MHz). Due to the increased cost of buying and maintaining such an instrument, 2D NMR is usually reserved for rather complex molecules.
The Rotating Frame and Fourier Transform
One of the central ideas that is associated with 2D NMR is the rotating frame, because it helps to visualize the changes that take place in dimensions. Our ordinary “laboratory” frame consists of three axes (the Cartesian x, y, and z). This frame can be visualized if one pictures the corner of a room. The intersections of the floor and the walls are the x and the y dimensions, while the intersection of the walls is the z axis. This is usually considered the “fixed frame.” When an NMR experiment is carried out, the frame still consists of the Cartesian coordinate system, but the x and y coordinates rotate around the z axis. The speed with which the x-y coordinate system rotates is directly dependent on the frequency of the NMR instrument.
When any NMR experiment is carried out, a majority of the spin states of the nucleus of interest line up with one of these three coordinates (which we can pick to be z). Once an equilibrium of this alignment is achieved, a magnetic pulse can be exerted at a certain angle to the z axis (usually 90° or 180°) which temporarily disrupts the equilibrium alignment of the nuclei. As the pulse is removed, the nuclei are allowed to relax back to this equilibrium alignment with the magnetic field of the instrument. When this relaxation takes place, the progression of the nuclei back to the equilibrium orientation is detected by a computer as a free induction decay (FID). When a sample has different nuclei or the same nucleus in different environments, a different FID can be recorded for each individual relaxation to the equilibrium position. The FIDs of all of the individual nuclei can be recorded and superimposed. The complex FID signal obtained can be converted to the NMR spectrum by a Fourier transform (FT). The FT is a mathematical operation that can be described by \ref{16}, where ω is the angular frequency.
$z(t)\ =\ \sum^{\infty }_{k = -\infty} c_{k}e^{ik\omega t} \label{16}$
This concept of the FT is similar for both 1D and 2D NMR. In 2D NMR a FID is obtained in one dimension first, then through the application of a pulse a FID can be obtained in a second dimension. Both FIDs can be converted to a series of NMR spectra through a Fourier transform, resulting in a spectrum that can be interpreted. The coupling of the two FID's in 2D NMR usually reveals a lot more information about the specific connectivity between two atoms.
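The idea of converting a FID into a spectrum can be demonstrated numerically. The Python sketch below builds a synthetic FID from two decaying cosines at made-up frequencies and Fourier transforms it with NumPy; it is a toy illustration of the time-to-frequency conversion, not a real acquisition.

import numpy as np

# Synthetic FID: two decaying cosines at hypothetical frequencies of 100 Hz and 250 Hz
dt  = 0.001                          # dwell time, s (spectral width = 1/dt = 1000 Hz)
t   = np.arange(0, 1.0, dt)          # acquisition time axis, s
fid = (np.cos(2*np.pi*100*t) + 0.5*np.cos(2*np.pi*250*t)) * np.exp(-t / 0.2)

# The Fourier transform turns the time-domain FID into a frequency-domain spectrum
spectrum = np.abs(np.fft.rfft(fid))
freqs    = np.fft.rfftfreq(len(fid), d=dt)

print("tallest peak at", freqs[np.argmax(spectrum)], "Hz")   # -> 100.0 Hz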
Four Phases and Pulse Sequence of 2D NMR
There are four general stages or time periods that are present for any 2D NMR experiment. These are preparation, evolution, mixing, and detection. A general schematic representation is seen in Figure $53$. The preparation period defines the system at the first time phase. The evolution period allows the nuclei to precess (or move relative to the magnetic field). The mixing period introduces a change in the way the spectra is obtained. The detection period records the FID. In obtaining a spectrum, the pulse sequence is the most important factor that determines what data will be obtained. In general 2D experiments are a combination of 1D experiments collected by varying the timing and pulsing.
Preparation
This is the first step in any 2D NMR experiment. It is a way to start all experiments from the same state. This state is typically either thermal equilibrium, obeying Boltzmann statistics, or it could be a state where the spins of one nucleus are randomized in orientation and the spins of another nucleus are in thermal equilibrium. At the end of the preparation period, the magnetizations are usually placed perpendicular, or at a specific angle, to the magnetic field axis. This phase creates magnetizations in the x-y plane.
Evolution
The nuclei are then allowed to precess around the direction of the magnetic field. This concept is very similar to the precession of a top in the gravitational field of the Earth. In this phase of the experiment, the rates at which different nuclei precess, as shown in Figure $54$, reflect how the nuclei respond to their environment. The magnetizations that are created at the end of the preparation step are allowed to evolve or change for a certain amount of time (t1) in the environment defined by the magnetic and radio frequency (RF) fields. In this phase, the chemical shifts of the nuclei are measured similarly to a 1D experiment, by letting the nucleus magnetization rotate in the x-y plane. This experiment is carried out a large number of times, and then the recorded FID is used to determine the chemical shifts.
Mixing
Once the evolution period is over, the nuclear magnetization is distributed among the spins. The spins are allowed to communicate for a fixed period of time. This typically occurs using either magnetic pulses and/or variation in the time periods. The magnetic pulses typically consist of a change in the rotating frame of reference relative to the original "fixed frame" that was introduced in the preparation period, as seen in Figure $55$. Experiments that only use time periods are often tailored to look at the effect of the RF field intensity. Using either the bonds connecting the different nuclei (J-coupling) or using the small space between them (NOE interaction), the magnetization is allowed to move from one nucleus to another. Depending on the exact experiment performed, these changes in magnetizations are going to differ based on what information is desired. This is the step in the experiment that determines exactly what new information would be obtained by the experiment. Depending on which chemical interactions require suppression and which need to be intensified to reveal new information, the specific "mixing technique" can be adjusted for the experiment.
Detection
This is always the last period of the experiment, and it is the recording of the FID of the second nucleus studied. This phase records the second acquisition time (t2), resulting in a spectrum similar to the first spectrum, but typically with differences in intensity and phase. These differences can give us information about the exact chemical and magnetic environment of the nuclei that are present. The two different Fourier transforms are used to generate the 2D spectrum, which consists of two frequency dimensions. These two frequencies are independent of each other, but when plotted on a single spectrum the frequency of the signal obtained during t1 has been converted into another coherence that is affected by the frequency during t2. While the first dimension represents the chemical shifts of the nucleus in question, the second dimension reveals new information. The overall spectrum, Figure $56$, is the result of a matrix in the two frequency domains obtained during the experiment.
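The double Fourier transformation described here can be mimicked with a toy data matrix. In the Python sketch below, a single model signal evolves at one hypothetical frequency during t1 and at another during t2; transforming both time dimensions locates the corresponding peak in the 2D frequency matrix. All numbers are illustrative.

import numpy as np

n1, n2   = 128, 512                  # points in the indirect (t1) and direct (t2) dimensions
dt1, dt2 = 1/128, 1/512              # time increments chosen to give 1 Hz steps on both axes
t1 = np.arange(n1)[:, None] * dt1
t2 = np.arange(n2)[None, :] * dt2
f1, f2 = 60.0, 150.0                 # hypothetical evolution frequencies, Hz

# Complex model signal evolving at f1 during t1 and f2 during t2, with decay in both times
data = np.exp(2j*np.pi*(f1*t1 + f2*t2)) * np.exp(-t1/0.1 - t2/0.1)

spec = np.abs(np.fft.fft2(data))     # one Fourier transform per time dimension
i1, i2 = np.unravel_index(np.argmax(spec), spec.shape)
print(np.fft.fftfreq(n1, dt1)[i1], "Hz in F1;", np.fft.fftfreq(n2, dt2)[i2], "Hz in F2")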
Pulse Variation
As mentioned earlier, the pulse sequence and the mixing period are some of the most important factors that determine the type of spectrum that will be identified. Depending on whether the magnetization is transferred through a J-coupling or NOE interaction, different information and spectra can be obtained. Furthermore, depending on the experimental setup, the mixing period could transfer magnetization either through a single J-coupling or through several J-couplings for nuclei that are connected together. Similarly NOE interactions can also be controlled to specific distances. Two types of NOE interactions can be observed, positive and negative. When the rate at which fluctuation occurs in the transverse plane of a fluctuating magnetic field matches the frequency of double quantum transition, a positive NOE is observed. When the fluctuation is slower, a negative NOE is produced.
Obtaining a Spectrum
Sample Preparation
Sample preparation for 2D NMR is essentially the same as that for 1D NMR. Particular caution should be exercised to use clean and dry sample tubes and use only deuterated solvents. The amount of sample used should be anywhere between 15 and 25 mg, although with sufficient time even smaller quantities may be used. The filling height of the solvent should be about 4 cm. The solution must be clear and homogenous. Any particulate needs to be filtered off prior to obtaining the spectra.
The Actual Experiment and Important Acquisition Parameters
The acquisition of a 2D spectrum will vary from instrument to instrument, but the process is virtually identical to obtaining a 13C spectrum. It is important to obtain a 1D spectrum (especially 1H) before proceeding to obtain a 2D spectrum. The acquisition range should be adjusted based on the 1D spectrum to minimize instrument time. Depending on the specific type of 2D experiment (such as COSY or NOESY), several parameters need to be adjusted. The following six steps can be followed to obtain almost any 2D NMR spectrum.
1. Login to the computer system.
2. Change the sample.
3. Lock and shim the magnet.
4. Setup parameters and run the experiment. Use the 1D spectra already obtained to adjust experiment settings, paying special attention to important acquisition parameters.
5. Process the obtained data and print the spectrum.
6. Exit and logout.
The parameters listed in Table $7$ should be given special attention, as they can significantly affect the quality of the spectra obtained.
Parameter Description
Acquisition Time (AQ) Data points (TD) x dwell time (DW)
Dwell Time 1/spectral width (SW)
Digital Resolution 1/AQ
Number of Scans Multiples of 8/16
TD1 Number of data points in the first time domain ( ~128-512)
SW1 Spectral width in the first (indirect) dimension
TD2 Number of data points in the second time domain (~2048-4096)
SW2 Spectral width in the second (direct) dimension
Table $7$ Some of the most important parameters for obtaining a 2D spectrum and their meaning.
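The arithmetic relationships in Table $7$ can be checked quickly for any planned acquisition. The Python sketch below evaluates them for an assumed spectral width and number of points; the specific numbers are hypothetical.

# Relationships from Table 7, evaluated for a hypothetical direct-dimension acquisition
SW = 6000.0            # spectral width, Hz (assumed)
TD = 2048              # number of data points (assumed)

DW = 1.0 / SW          # dwell time = 1 / spectral width
AQ = TD * DW           # acquisition time = data points x dwell time
digital_resolution = 1.0 / AQ

print(f"DW = {DW*1e6:.0f} us, AQ = {AQ:.3f} s, digital resolution = {digital_resolution:.2f} Hz")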
After Obtaining a Spectrum and Analysis
After a 2D spectrum has successfully been obtained, depending on the type of spectrum (COSY, NOESY, INEPT), it might need to be phased. Phasing is the adjustment of the spectrum so that all of the peaks across the spectrum are in the absorptive mode (pointing either up or down). With 2D spectra, phasing is done in both frequency dimensions. This can either be done automatically by a software program (for simple 2D spectra with no clustered signals) or manually by the user (for more complex 2D spectra). Sometimes, phasing can be done with the program that is used to obtain the spectrum. Afterwards the spectrum can either be printed out or further analyzed. One example of further analysis is integrating parts of the spectrum. This can give the user meaningful information about the relative ratio of different types of nuclei (and even quantify the ratios between two diastereomeric molecules).
Conclusion
Two-dimensional NMR is increasingly becoming a routine method for analyzing complex molecules, whether they are inorganic compounds, organic natural products, proteins, or polymers. A basic understanding of 2D NMR can make it significantly easier to analyze complex molecules and provide further confirmation for results obtained by other methods. The variation in pulse sequences provides chemists the opportunity to analyze a large diversity of compounds. The increase in the magnetic strength of NMR machines has allowed 2D NMR to be more often used even for simpler molecules. Furthermore, higher dimension techniques have also been introduced, and they are slowly being integrated into the repertoire of chemists. These are essentially simple extensions of the ideas of 2D NMR.
Two-Dimensional NMR Experiments
Since the advent of NMR, synthetic chemists have had an excellent way to characterize their synthetic products. With the arrival of multidimensional NMR into the realm of analytical techniques, scientists have been able to study larger and more complicated molecules much easier than before, due to the great amount of information 2D and 3D NMR experiments can offer. With 2D NMR, overlapping multiplets and other complex splitting patterns seen in 1D NMR can be easily deciphered, since instead of one frequency domain, two frequency domains are plotted and the couplings are plotted with respect to each other, which makes it easier to determine molecular connectivity.
Spectra are obtained using a specific sequence of radiofrequency (RF) pulses that are administered to the sample, which can vary in the angle at which the pulse is given and/or the number of pulses. Figure $57$ shows a schematic diagram for a generic pulse sequence in a 2D NMR experiment. First, a pulse is administered to the sample in what is referred to as the preparation period. This period could be anything from a single pulse to a complex pattern of pulses. The preparation period is followed by a “wait” time (also known as the evolution time), t1, during which no data is observed. The evolution time also can be varied to suit the needs of the specific experiment. A second pulse is administered next during what is known as the mixing period, where the coherence at the end of t1 is converted into an observable signal, which is recorded during the observation time, t2. Figure $58$ shows a schematic diagram of how data is converted from the time domain (depicted in the free induction decay, or FID) to a frequency domain. The process of this transformation using Fourier Transform (FT) is the same as it is in 1D NMR, except here, it is done twice (or three times when conducting a 3D NMR experiment).
In 1D NMR, spectra are plotted with frequency (in ppm or Hz, although most commonly ppm) on the horizontal axis and with intensity on the vertical axis. However, in 2D NMR spectra, there are two frequency domains being plotted, each on the vertical and horizontal axes. Intensity, therefore, can be shown as a 3D plot or topographically, much like a contour map, with more contour lines representing greater intensities, as shown in Figure $59$ a. Since it is difficult to read a spectrum in a 3D plot, all spectra are plotted as contour plots. Furthermore, since resolution in a 2D NMR spectrum is not needed as much as in a 1D spectrum, data acquisition times are often short.
2D NMR is very advantageous for many different applications, though it is mainly used for determining structure and stereochemistry of large molecules such as polymers and biological macromolecules, that usually exhibit higher order splitting effects and have small, overlapping coupling constants between nuclei. Further, some 2D NMR experiments can be used to elucidate the components of a complex mixture. This module aims to describe some of the common two-dimensional NMR experiments used to determine qualitative information about molecular structure.
2D Experiments
COSY
COSY (COrrelation SpectroscopY) was one of the first and most popular 2D NMR experiments to be developed. It is a homonuclear experiment that allows one to correlate different signals in the spectrum to each other. In a COSY spectrum (see Figure $59$ b), the chemical shift values of the sample’s 1D NMR spectrum are plotted along both the vertical and horizontal axes (some 2D spectra will actually reproduce the 1D spectra along the axes, along with the frequency scale in ppm, while others may simply show the scale). This allows for a collection of peaks to appear down the diagonal of the spectrum known as diagonal peaks (shown in Figure $59$ b, highlighted by the red dotted line). These diagonal peaks are simply the peaks that appear in the normal 1D spectrum, because they show nuclei that couple to themselves. The other type of peaks appears symmetric across the diagonal and is known as cross peaks. These peaks show which groups in the molecule that have different chemical shifts are coupled to each other by producing a signal at the intersection of the two frequency values.
One can then determine the structure of a sample by examining what chemical shift values the cross peaks occur at in a spectrum. Since the cross peaks are symmetric across the diagonal peaks, one can easily identify which cross peaks are real (if a certain peak has a counterpart on the other side of the diagonal) and which are digital artifacts of the experiment. The smallest coupling that can be detected using COSY is dependent on the linewidth of the spectrum and the signal-to-noise ratio; a maximum signal-to-noise ratio and a minimum linewidth will allow for very small coupling constants to be detected.
Variations of COSY
Although COSY is very useful, it does have its disadvantages. First of all, because of the anti-phase structure of the cross peaks, which causes the spectral lines to cancel one another out, and the in-phase structure of the diagonal peaks, which causes reinforcement among the peaks, there is a significant difference in intensity between the diagonal and cross peaks. This difference in intensity makes identifying small cross peaks difficult, especially if they lie near the diagonal. Another problem is that when processing the data for a COSY spectrum, the broad lineshapes associated with the experiment can make high-resolution work difficult.
In one of the more popular COSY variations known as DQF COSY (Double-Quantum Filtered COSY), the pulse sequence is altered so that all of the signals are passed through a double-quantum coherence filter, which suppresses signals with no coupling (i.e. singlets) and allows cross peaks close to the diagonal to be clearly visible by making the spectral lines much sharper. Since most singlet peaks are due to the solvent, DQF COSY is useful to suppress those unwanted peaks.
ECOSY (Exclusive COrrelation SpectroscopY) is another derivative of COSY that was made to detect small J-couplings, predominantly among multiplets, usually when J ≤ 3 Hz. Also referred to as long-range COSY, this technique involves adding a delay of about 100-400 ms to the pulse sequence. However, there is more relaxation that is occurring during this delay, which causes a loss of magnetization, and therefore a loss of signal intensity. This experiment would be advantageous for one who would like to further investigate whether or not a certain coupling exists that did not appear in the regular COSY spectrum.
GS-COSY (Gradient Selective COSY) is a very applied offshoot of COSY since it eliminates the need for what is known as phase cycling. Phase cycling is a method in which the phase of the pulses is varied in such a way to eliminate unwanted signals in the spectrum, due to the multiple ways which magnetization can be aligned or transferred, or even due to instrument hardware. In practical terms, this means that by eliminating phase cycling, GS-COSY can produce a cleaner spectrum (less digital artifacts) in much less time than can normal COSY.
Another variation of COSY is COSY-45, which administers a pulse at 45° to the sample, unlike DQF COSY which administers a pulse perpendicular to the sample. This technique is useful because one can elucidate the sign of the coupling constant by looking at the shape of the peak and in which direction it is oriented. Knowing the sign of the coupling constant can be useful in discriminating between vicinal and geminal couplings. However, COSY-45 is less sensitive than other COSY experiments that use a 90° RF pulse.
TOCSY
TOCSY (TOtal Correlation SpectroscopY) is very similar to COSY in that it is a homonuclear correlation technique. It differs from COSY in that it not only shows nuclei that are directly coupled to each other, but also signals that are due to nuclei that are in the same spin system, as shown in Figure $60$ below. This technique is useful for interpreting large, interconnected networks of spin couplings. The pulse sequence is arranged in such a way to allow for isotropic mixing during the sequence that transfers magnetization across a network of atoms coupled to each other. An alternative technique to 2D TOCSY is selective 1D TOCSY, which can excite certain regions of the spectrum by using shaped pulses. By specifying particular chemical shift values and setting a desired excitation width, one can greatly simplify the 1D experiment. Selective 1D TOCSY is particularly useful for analyzing polysaccharides, since each sugar subunit is an isolated spin system, which can produce its own subspectrum, as long as there is at least one resolved multiplet. Furthermore, each 2D spectrum can be acquired with the same resolution as a normal 1D spectrum, which allows for an accurate measurement of multiplet splittings, especially when signals from different coupled networks overlap with one another.
Heteronuclear Experiments
HETCOR (Heteronuclear Correlation) refers to a 2D NMR experiment that correlates couplings between different nuclei (usually 1H and a heteroatom, such as 13C or 15N). Heteronuclear experiments can easily be extended into three or more dimensions, which can be thought of as experiments that correlate couplings between three or more different nuclei. Because there are two different frequency domains, there are no diagonal peaks like there are in COSY or TOCSY. Recently, inverse-detected HETCOR experiments have become extremely useful and commonplace, and it will be those experiments that will be covered here. Inverse-detection refers to detecting the nucleus with the higher gyromagnetic ratio, which offers higher sensitivity. It is ideal to determine which nucleus has the highest gyromagnetic ratio for detection and set the probe to be the most sensitive to this nucleus. In HETCOR, the nucleus that was detected first in a 1H -13C experiment was 13C, whereas now 1H is detected first in inverse-detection experiments, since protons are inherently more sensitive. Today, regular HETCOR experiments are not usually in common laboratory practice.
The HMQC (Heteronuclear Multiple-Quantum Coherence) experiment acquires a spectrum (see Figure $61$ a) by transferring the proton magnetization by way of 1JCH to a heteronucleus, for example, 13C. The 13C atom then experiences its chemical shift in the t1 time period of the pulse sequence. The magnetization then returns to the 1H for detection. HMQC detects 1JCH coupling and can also be used to differentiate between geminal and vicinal proton couplings just as in COSY-45. HMQC is very widely used and offers very good sensitivity at much shorter acquisition times than HETCOR (about 30 min as opposed to several hours with HETCOR).
However, because it shows the 1H-1H couplings in addition to 1H-13C couplings and because the cross peaks appear as multiplets, HMQC suffers when it comes to resolution in the 13C peaks. The HSQC (Heteronuclear Single-Quantum Coherence) experiment can assist, as it can suppress the 1H-1H couplings and collapse the multiplets seen in the cross peaks into singlets, which greatly enhances resolution (an example of an HSQC is shown in Figure $61$ b). Figure $61$ shows a side-by-side comparison of spectra from HMQC and HSQC experiments, in which some of the peaks in the HMQC spectrum are more resolved in the HSQC spectrum. However, HSQC administers more pulses than HMQC, which makes it more susceptible to pulse mis-settings and RF inhomogeneity, which in turn leads to loss of sensitivity. In HMBC (Heteronuclear Multiple Bond Coherence) experiments, two- and three-bond couplings can be detected. This technique is particularly useful for putting smaller proposed fragments of a molecule together to elucidate the larger overall structure. HMBC, however, cannot distinguish between 2JCH and 3JCH coupling constants. An example spectrum is shown in Figure $61$ d.
NOESY and ROESY
NOESY (Nuclear Overhauser Effect SpectroscopY) is an NMR experiment that can detect couplings between nuclei through spatial proximity (< 5 Å apart) rather than coupling through covalent bonds. The Nuclear Overhauser Effect (NOE) is the change in the intensity of the resonance of a nucleus upon irradiation of a nearby nucleus (about 2.5-3.5 Å apart). For example, when an RF pulse specifically irradiates a proton, its spin population is equalized and it can transfer its spin polarization to another proton and alter its spin population. The overall effect is dependent on a distance of r-6. NOESY uses a mixing time without pulses to accumulate NOEs and its counterpart ROESY (Rotating frame nuclear Overhauser Effect SpectroscopY) uses a series of pulses to accumulate NOEs. In NOESY, NOEs are positive when generated from small molecules, are negative when generated from large molecules (or molecules dissolved in a viscous solvent to restrict molecular tumbling), and are quite small (near zero) for medium-sized molecules. On the contrary, ROESY peaks are always positive, regardless of molecular weight. Both experiments are useful for determining the proximity of nuclei in large biomolecules, especially proteins, where two atoms may be nearby in space, but not necessarily through covalent connectivity. Isomers, such as ortho-, meta-, and para-substituted aromatic rings, as well as stereochemistry, can also be distinguished through the use of an NOE experiment. Although NOESY and ROESY can generate COSY and TOCSY artifacts, respectively, those unwanted signals can be minimized by variations in the pulse sequences. Example NOESY and ROESY spectra are shown in Figure $63$.
How to Interpret 2D NMR Spectra
Much of the interpretation one needs to do with 2D NMR begins with focusing on the cross peaks and matching them according to frequency, much like playing a game of Battleship®. The 1D spectrum usually will be plotted along the axes, so one can match which couplings in one spectrum correlate to which splitting patterns in the other spectrum using the cross peaks on the 2D spectrum (see Figure $64$).
Also, multiple 2D NMR experiments are used to elucidate the structure of a single molecule, combining different information from the various sources. For example, one can combine homonuclear and heteronuclear experiments and piece together the information from the two techniques, with a process known as Parallel Acquisition NMR Spectroscopy or PANSY. In the 1990s, co-variance processing came onto the scene, which allowed scientists to process information from two separate experiments without having to run both experiments at the same time, which made for shorter data acquisition time. Currently, software for co-variance processing is available from various NMR manufacturers. There are many possible ways to interpret 2D NMR spectra, though one common method is to label the cross peaks and make connections between the signals as they become apparent. Prof. James Nowick at UC Irvine describes his method of choice for putting the pieces together when determining the structure of a sample; the lecture in which he describes this method is available online. In this video, he provides a stepwise method for deciphering a spectrum.
Conclusion
Within NMR spectroscopy, there is a vast variety of methods for acquiring data on molecular structure. In 1D and 2D experiments, one can simply adjust the appearance of the spectrum by changing any one of the many parameters that are set when running a sample, such as the number of scans, relaxation delay times, the number of pulses at various angles, etc. Many 3D and 4D NMR experiments are actually simply multiple 2D NMR pulse sequences run in sequence, which generates more correlation between different nuclei in a spin system. With 3D NMR experiments, three nuclei, for example 1H, 13C, and 15N, can be studied together and their connectivity can be elucidated. These techniques become invaluable when working with biological molecules with complex 3D structures, such as proteins and polysaccharides, to analyze their structures in solution. These techniques, coupled with ultra-fast data acquisition, allow complex chemical reactions and/or non-covalent interactions to be monitored in real time. Through the use of these and other techniques, one can continue to build a characterization “toolbox” for solving complex chemical problems.
Chemical Exchange Saturation Transfer (CEST)
Paramagnetic chemical exchange saturation transfer (PARACEST) is a powerful analytical tool that can elucidate many physical properties of molecules and systems of interest both in vivo and in vitro through specific paramagnetic agents. Although a relatively new imaging technique, applications for PARACEST imaging are growing as new imaging agents are being developed with enhanced exchange properties. Current applications revolve around using these PARACEST agents for MRI imaging to enhance contrast. However, the fundamentals of PARACEST can be used to measure properties such as temperature, pH, and concentration of molecules and systems as we will discuss. PARACEST was developed in response to several imaging limitations presented by diamagnetic agents. PARACEST spectral data can be easily obtained using NMR Spectroscopy while imaging can be typically achieved with widely available clinical 1.5/4 T MRI scanners.
History
Chemical exchange saturation transfer (CEST) is a phenomenon that has been around since the 1960s. It was first discovered by Forsén, pictured below in Figure $65$, and Hoffman in 1963 and was termed magnetization transfer NMR. This technique was limited in its applications to studying rapid chemical exchange reactions. However in 2000, Balaban, pictured below in Figure $66$, revisited this topic and discovered the application of this phenomenon for imaging purposes. He termed the phenomenon chemical exchange saturation transfer. From this seminal finding, Balaban elucidated techniques to modulate MRI contrasts to reflect the exchange for imaging purposes.
CEST imaging focuses on N-H, O-H, or S-H exchangeable protons. Observing these exchanges in diamagnetic molecules can be very challenging. Several models have been developed to overcome the challenges associated with imaging with clinical scanners. The focus of recent research has been to develop paramagnetic chemical exchange saturation transfer (PARACEST) agents. Typical PARACEST complexes are based on lanthanide atoms. Historically, these molecules were thought to be useless for chemical exchange because of their very fast water exchange rates. However, recent work by Silvio Aime and Dean Sherry has shown that modified lanthanide complexes can have very slow exchange rates, which makes them ideal for CEST imaging. In addition to slow exchange rates, these molecules have vastly different resonance frequencies, which contributes to their enhanced contrast.
Chemical Exchange Saturation Transfer
Saturation Transfer
Chemical exchange is defined as the process of proton exchange with the surrounding bulk water. Exchange can occur with non-water exchange sites, but it has been shown that their contribution is negligible. As stated before, CEST imaging focuses on N-H, O-H, or S-H exchangeable protons. Every exchangeable proton has a very specific saturation frequency. Applying a radio-frequency pulse at the proton’s saturation frequency results in a net loss of longitudinal magnetization. Longitudinal magnetization exists by virtue of the sample being in a magnet: all protons in a solution line up with the magnetic field, either parallel or antiparallel to it, and there is a net longitudinal magnetization at equilibrium because the antiparallel state is higher in energy. A 90° RF pulse sequence causes many of the parallel protons to move to the higher-energy antiparallel state, giving zero longitudinal magnetization. This nonequilibrium state, in which equal numbers of nuclear spins are aligned against and with the magnetic field, is termed saturation. These saturated protons are exchangeable, and the surrounding bulk water participates in this exchange, called chemical exchange saturation transfer.
This exchange can be visualized through spectral data. The saturated proton exchange with the surrounding bulk water causes the spectral signal from the bulk water to decrease due to decreased net longitudinal magnetization. This decrease can then be quantified and used to measure a wide variety of properties of a molecule or a solution. In the next sub-section, we will explore the quantification in more detail to provide a stronger conceptual understanding.
Two-system Model
Derivations of the chemical exchange saturation transfer mathematical models arise fundamentally from an understanding of the Boltzmann equation, \ref{17}. The Boltzmann equation mathematically defines the distribution of spins of a molecule placed in a magnetic field. There are many complex models that are used to provide a better understanding of the phenomenon. However, we will stick with a two-system model to simplify the mathematics to focus on conceptual understanding. In this model, there are two systems: bulk water (alpha) and an agent pool (beta). When the agent pool is saturated with a radiofrequency pulse, we make two important assumptions. The first is that all the exchangeable protons are fully saturated and the second is that the saturation process does not affect the bulk water protons, which retain their characteristic longitudinal magnetization.
$\frac{N_{high\ energy}}{N_{low\ energy}}\ =\ exp( \frac{-\Delta E}{kT}) \label{17}$
To quantify the proton exchange we first define the equilibrium proton concentrations. The Boltzmann equation gives us the distribution of the spin states at equilibrium, which is proportional to the proton concentration. As such, we shall label the two systems' equilibrium states as $M_{\alpha }^{0}$ and $M_{\beta }^{0}$. Following saturation, protons of the bulk pool exchange with the saturated agent pool at a rate $k_{\alpha }$, so the loss of longitudinal (Z) magnetization is given by $k_{\alpha } M^{Z}_{\alpha }$. Another effect that needs to be considered is the inherent relaxation of the protons, which restores the Z magnetization back to its equilibrium level, $M_{\alpha }^{0}$. This can be estimated with \ref{18}, where $T_{1 \alpha }$ is the longitudinal relaxation time for bulk water. Setting the two rates equal, to represent steady state, gives the relationship \ref{19}, which can be manipulated mathematically to yield the generalized chemical exchange Equation \ref{20}, where $\tau _{\alpha } \ = k_{\alpha }^{-1}$ is defined as the lifetime of a proton in the system and c is the concentration of protons in the respective system. [n] represents the number of exchangeable protons per CEST molecule. In terms of CEST calculations, the lower the value of Z, the more prominent the CEST effect. A plot of this equation over a range of pulse frequencies results in what is called a Z-spectrum, also known as a CEST spectrum, shown in Figure $67$. This spectrum is then used to create CEST images.
$\frac{M^{0}_{\alpha } - M^{Z}_{\alpha }}{T_{1\alpha }} \label{18}$
$k_{\alpha }M^{Z}_{\alpha } \ =\ \frac{M^{0}_{\alpha } - M^{Z}_{\alpha }}{T_{1\alpha }} \label{19}$
$Z\ = \frac{M^{Z}_{\alpha }}{M^{0}_{\alpha }} = \frac{1}{1\ +\ \frac{C_{\beta }[n]}{C_{\alpha }} \frac{T_{1\alpha }}{\tau _{\alpha }}} \label{20}$
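As a quick numerical illustration of Equation \ref{20}, the short Python sketch below evaluates Z for an assumed set of parameters; the agent concentration, number of exchangeable protons, bulk water proton concentration, T1α, and τα are illustrative values rather than data for a particular agent.

```python
# Minimal sketch of the two-pool CEST model, Equation (20).
# All numerical values below are illustrative assumptions.

def z_value(c_beta, n, c_alpha, T1_alpha, tau_alpha):
    """Z = M_z/M_0 of the bulk-water pool for a fully saturated agent pool."""
    return 1.0 / (1.0 + (c_beta * n / c_alpha) * (T1_alpha / tau_alpha))

# Assumed example: 10 mM agent with 2 exchangeable protons, bulk water
# protons ~111 M (2 x 55.5 M), T1 = 3 s, bulk-proton lifetime 10 ms.
Z = z_value(c_beta=0.010, n=2, c_alpha=111.0, T1_alpha=3.0, tau_alpha=0.010)
print(f"Z = {Z:.3f}")   # ~0.95, i.e. roughly a 5% drop in the bulk-water signal
```

A smaller Z (a larger drop in the bulk-water signal) corresponds to a more prominent CEST effect.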
Limitations of Diamagnetic CEST Imaging and Two-system Model
A CEST agent must have several properties to maximize the CEST effect. Maximum CEST effect is observed when the residence lifetime of bulk water ( $\tau _{\alpha }$ ) is as short as possible. This indirectly means that an effective CEST agent has a high exchange rate, $k_{\alpha }$. Furthermore, maximum effect is noted when the CEST agent concentration is high.
In addition to these two properties, we need to consider the fact that the two-system model's assumptions are almost never true. The system is often less than fully saturated, resulting in a decrease in the observed CEST effect. As a result, we need to consider the power of the saturation pulse, B1. The relationship between $\tau _{\alpha }$ and B1 is shown in \ref{21}. As such, an increase in saturation pulse power results in an increased CEST effect. However, we cannot apply too much B1 due to in vivo limitations. Furthermore, the ideal $\tau _{\alpha }$ can be calculated using this relationship.
$\tau \ =\ \frac{1}{2\pi B_{1}} \label{21}$
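As a rough illustration of \ref{21}, treating B1 as the saturation-pulse amplitude expressed as a frequency, the ideal bulk-water lifetime can be estimated as below; the B1 value used is an assumed example, not one taken from the text.

```python
import math

# Minimal sketch of Equation (21); the B1 value is an assumed example.
B1_Hz = 250.0                          # saturation pulse amplitude, expressed in Hz
tau_ideal = 1.0 / (2 * math.pi * B1_Hz)
print(f"Ideal lifetime ~ {tau_ideal * 1e6:.0f} microseconds")   # about 0.6 ms
```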
Finally, another limitation that needs to be considered is one inherent only to diamagnetic CEST; it provides an important distinction between CEST and PARACEST, as we will soon discuss. We assumed with the two-system model that saturation with a radiofrequency pulse does not affect the surrounding bulk water Z-magnetization. However, this is a large generalization that can only be made for PARACEST agents, as we shall soon see. Diamagnetic species, whether endogenous or exogenous, have a chemical shift difference (Δω) between the exchangeable –NH or –OH groups and the bulk water of less than 5 ppm. This small shift difference is a major limitation. Selective saturation often leads to partial saturation of the bulk water protons. This is an even more important consideration in vivo, where the water peak is very broad. As such, we need to maximize the shift difference between bulk water and the contrast agent.
Paramagnetic Chemical Exchange Saturation Transfer
Strengths of PARACEST
PARACEST addresses the two complications that arise with CEST. Application of a radiofrequency pulse close to the bulk water signal will result in some off-resonance saturation of the water. This effectively limits the saturation power that can be applied, even though higher power would enhance the CEST effect. Furthermore, the slow-exchange condition requires the exchange rate to be smaller than the saturation frequency difference (Δω), which means that a very slow exchange rate is required for diamagnetic CEST agents of this sort. Both problems can be alleviated by using an agent that has a larger chemical shift separation, such as a paramagnetic species. Figure $68$ shows the broad Δω of a Eu3+ complex.
Figure $68$ Eu3+ complex broadens the chemical shift leading to a larger saturation frequency difference that can easily be detected. Red spectral line represents EuDOTA-(glycine ethyl ester)4. Blue spectral line represents barbituric acid. Adapted from A. D. Sherry and M. Woods, Annu. Rev. Biomed. Eng., 2008, 10, 391.
Selection of Lanthanide Species
Based on the criteria established in \ref{22}, we see that only Eu3+, Tb3+, Dy3+, and Ho3+ are effective lanthanide CEST agents at the most common MRI field strength (1.5 T). However, Table $8$ suggests greater CEST efficiency at stronger field strengths. With the exception of Sm3+, all other lanthanide complexes have shifts far from the water peak, providing the large Δω that is desired of CEST agents. This table should be consulted before designing a PARACEST experiment. Furthermore, this table alludes to the relationship between the power of the saturation pulse and the observed CEST effect. Referring to \ref{23}, we see that an increased saturation pulse gives an increased CEST effect. In fact, varying B1 levels changes the saturation offset: the higher the B1 power, the higher the signal intensity at the saturation offset. As such, it is important to select a proper saturation pulse before experimentation.
Complex τM at 298 K ($\mu$s) δ 1H (ppm) Δω·τα at 1.5 T Δω·τα at 4.7 T Δω·τα at 11.75 T
Pr3+ 20 -60 0.5 1.5 3.8
Nd3+ 80 -32 1.0 3.2 8.0
Sm3+ 320 -4 0.5 1.6 4.0
Eu3+ 382 50 7.7 24.0 60.0
Tb3+ 31 -600 7.5 23.4 58.5
Dy3+ 17 -720 4.9 15.4 38.5
Ho3+ 19 -360 2.8 8.6 21.5
Er3+ 9 200 0.7 2.3 5.7
Tm3+ 3 500 0.6 1.9 4.7
Yb3+ 3 200 0.2 0.5 1.9
Table $8$ The chemical shifts and proton lifetime values for various lanthanide metals in a lanthanide DOTA-4AmCE complex (Figure $68$).
$\Delta \omega \cdot \tau _{\alpha } \ \geq \ 1 \label{22}$
$\tau _{\alpha } \ =\ \frac{1}{2\pi B_{1}} \label{23}$
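The tabulated Δω·τα values can be reproduced from the chemical shifts and bound-water lifetimes in Table $8$. The sketch below does this for Eu3+ and Tb3+, assuming the proton gyromagnetic ratio (42.577 MHz/T) to convert field strength to Larmor frequency and taking Δω = 2π·δ·ν0; it is a check of the table rather than part of the original work.

```python
import math

# Minimal sketch: recomputing the delta_omega * tau_alpha values of Table 8
# from the shift (ppm) and bound-water lifetime (us). 42.577 MHz/T is the
# proton gyromagnetic ratio used to convert field strength to Larmor frequency.

def shift_lifetime_product(delta_ppm, tau_us, field_T):
    nu0 = 42.577e6 * field_T                                   # proton Larmor frequency, Hz
    delta_omega = 2 * math.pi * abs(delta_ppm) * 1e-6 * nu0    # rad/s
    return delta_omega * tau_us * 1e-6

for name, tau_us, delta_ppm in [("Eu3+", 382, 50), ("Tb3+", 31, -600)]:
    values = [round(shift_lifetime_product(delta_ppm, tau_us, B), 1)
              for B in (1.5, 4.7, 11.75)]
    print(name, values)
# Eu3+ gives [7.7, 24.0, 60.0] and Tb3+ gives [7.5, 23.4, 58.5], matching Table 8.
```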
Running a PARACEST Experiment
Two types of experiments can be run to quantify PARACEST. The first produces quantifiable Z-spectral data and is typically run on a 400 MHz spectrometer with a B1 power between 200-1000 kHz and an irradiation time between 2 and 6 seconds, depending on the lanthanide complex. Imaging experiments are typically performed on either clinical scanners or small-bore MRI scanners at room temperature using a custom surface coil. Imaging experiments usually require the following sequence of steps:
1. Bulk water spectra are collected from the PARACEST agent using a 2 second presaturation pulse at a power level chosen based on the lanthanide complex.
2. Following the base scan, the saturation frequency is stepped between ±100 ppm (relative to the bulk water frequency at 0 ppm) in 1 ppm increments. The scanning range can be widened if the lanthanide complex has a larger chemical shift difference.
3. Following collection of the data, the bulk water signal is integrated using a Matlab program. The difference between the integrated signals measured at equivalent positive and negative saturation frequencies is calculated using \ref{24}, plotted, and mapped to produce gradient images.
4. To create a CEST image, the data set is first filtered to improve the signal-to-noise ratio, normalized with phantom data by subtraction, and color-coded.
5. For software tools to perform CEST imaging analysis, refer to the following links for free access to open-source packages: https://github.com/cest-sources/CEST_EVAL/ or http://www.med.upenn.edu/cmroi/software-overview.html.
$\frac{S_{sat(-\Delta \omega)} \ -\ S_{sat(\Delta \omega)}}{S_{0}} \label{24}$
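The asymmetry analysis of \ref{24} is simple to apply; the sketch below evaluates it for a single voxel using made-up signal values (the numbers are assumptions for illustration only).

```python
# Minimal sketch of the asymmetry calculation in Equation (24).
# S0 is the unsaturated reference signal; all values below are assumed.
S0 = 1.00        # reference (no saturation)
S_minus = 0.97   # saturation applied at -delta_omega (control side)
S_plus = 0.88    # saturation applied at +delta_omega (on the agent resonance)

cest_effect = (S_minus - S_plus) / S0
print(f"CEST effect = {cest_effect:.1%}")   # 9.0% for these assumed values
```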
Applications of PARACEST
Temperature Mapping
PARACEST imaging has been shown to be a promising area of research in developing a noninvasive technique for temperature mapping. Sherry et al. show a temperature dependence of the lanthanide-bound water resonance frequency. They establish a linear correspondence over the range of 20-50 °C. Furthermore, they show a feasible analysis technique to locate the chemical shift (δ) of the lanthanide in images with high spatial resolution. By developing a plot of pixel intensity versus frequency offset, they can identify the temperature at each pixel and hence create a temperature map, as shown in Figure $70$.
Zinc Ion Detection
Divalent zinc is an integral transition metal that is prominent in many aqueous solutions and plays an important role in physiological systems. The ability to detect changes in the concentration of zinc ions in a sample provides valuable information about the system. Developing specific ligands that coordinate with specific ions to enhance water-exchange characteristics can amplify the CEST profile. In this work, the authors developed a Eu(dotampy) sensor, shown in Figure $71$, for Zn ions. The authors theorize that the sensor coordinates zinc through its four pyridine donors in a square antiprismatic manner, as determined by NMR spectroscopy (by observing water exchange rates) and by base catalysis (by observing CEST sensitivity). The authors were unable to confirm the coordination by X-ray crystallography. Following determination of successful CEST profiles, the authors mapped in vitro samples of varying Zn concentration and were able to correlate image voxel intensity with Zn concentration, as shown in Figure $72$. Furthermore, they successfully demonstrated the specificity of the sensor for Zn over magnesium and calcium. This application is promising as a potential detection method for Zn ions in solutions with concentrations in the range of 5 nM to 0.12 μM.
Basic Principles for EPR Spectroscopy
Electron paramagnetic resonance spectroscopy (EPR) is a powerful tool for investigating paramagnetic species, including organic radicals, inorganic radicals, and triplet states. The basic principles behind EPR are very similar to the more ubiquitous nuclear magnetic resonance spectroscopy (NMR), except that EPR focuses on the interaction of an external magnetic field with the unpaired electron(s) in a molecule, rather than the nuclei of individual atoms. EPR has been used to investigate kinetics, mechanisms, and structures of paramagnetic species and along with general chemistry and physics, has applications in biochemistry, polymer science, and geosciences.
The degeneracy of the electron spin states is lifted when an unpaired electron is placed in a magnetic field, creating two spin states, ms = ± ½, where ms = - ½, the lower energy state, is aligned with the magnetic field. The spin state on the electron can flip when electromagnetic radiation is applied. In the case of electron spin transitions, this corresponds to radiation in the microwave range.
The energy difference between the two spin states is given by the equation
$\Delta E \ =\ E_{+} - E_{-} = h \nu = g \beta B \label{1}$
where h is Planck’s constant (6.626 x 10-34 J s), ν is the frequency of radiation, ß is the Bohr magneton (9.274 x 10-24 J T-1), B is the strength of the magnetic field in Tesla, and g is known as the g-factor. The g-factor is a unitless measurement of the intrinsic magnetic moment of the electron, and its value for a free electron is 2.0023. The value of g can vary, however, and can be calculated by rearrangement of the above equation, i.e.,
$g = \dfrac{h \nu}{\beta B }\label{2}$
using the magnetic field and the frequency of the spectrometer. Since h, ν, and ß should not change during an experiment, g values decrease as B increases. The concept of g can be roughly equated to that of chemical shift in NMR.
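For example, the g value for a resonance observed at an assumed X-band frequency of 9.50 GHz and an assumed resonant field of 3390 G (0.3390 T) can be worked out directly from \ref{2}, as in the sketch below.

```python
# Minimal sketch: evaluating g = h*nu / (beta*B) for an assumed X-band resonance.
h = 6.626e-34      # Planck constant, J s
beta = 9.274e-24   # Bohr magneton, J/T
nu = 9.50e9        # microwave frequency, Hz (assumed)
B = 0.3390         # resonant field, T (assumed; 3390 G)

g = h * nu / (beta * B)
print(f"g = {g:.4f}")   # ~2.002, close to the free-electron value of 2.0023
```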
Instrumentation
EPR spectroscopy can be carried out by either 1) varying the magnetic field and holding the frequency constant or 2) varying the frequency and holding the magnetic field constant (as is the case for NMR spectroscopy). Commercial EPR spectrometers typically vary the magnetic field and hold the frequency constant, opposite of NMR spectrometers. The majority of EPR spectrometers operate in the range of 8-10 GHz (X-band), though there are spectrometers which work at lower and higher frequencies: 1-2 GHz (L-band) and 2-4 GHz (S-band), 35 GHz (Q-band) and 95 GHz (W-band).
EPR spectrometers work by generating microwaves from a source (typically a klystron), sending them through an attenuator, and passing them on to the sample, which is located in a microwave cavity (Figure $1$).
Microwaves reflected back from the cavity are routed to the detector diode, and the signal comes out as a decrease in current at the detector analogous to absorption of microwaves by the sample.
Samples for EPR can be gases, single crystals, solutions, powders, and frozen solutions. For solutions, solvents with high dielectric constants are not advisable, as they will absorb microwaves. For frozen solutions, solvents that will form a glass when frozen are preferable. Good glasses are formed from solvents with low symmetry and solvents that do not hydrogen bond. Drago provides an extensive list of solvents that form good glasses.
EPR spectra are generally presented as the first derivative of the absorption spectra for ease of interpretation. An example is given in Figure $2$.
Magnetic field strength is generally reported in units of gauss or millitesla. Often EPR spectra are very complicated, and analysis of spectra through the use of computer programs is common. There are computer programs that will predict the EPR spectra of compounds with the input of a few parameters.
Factors that Affect EPR Spectra
Hyperfine Coupling
Hyperfine coupling in EPR is analogous to spin-spin coupling in NMR. There are two kinds of hyperfine coupling: 1) coupling of the electron magnetic moment to the magnetic moment of its own nucleus; and 2) coupling of the electron to a nucleus of a different atom, called super hyperfine splitting. Both types of hyperfine coupling cause a splitting of the spectral lines with intensities following Pascal’s triangle for I = 1/2 nuclei, similar to J-coupling in NMR. A simulated spectrum of the methyl radical is shown in Figure $3$. The line is split equally by the three hydrogens giving rise to four lines of intensity 1:3:3:1 with hyperfine coupling constant a.
The hyperfine splitting constant, known as a, can be determined by measuring the distance between each of the hyperfine lines. This value can be converted into Hz (A) using the g value in the equation:
$hA\ =\ g \beta a \label{3}$
In the specific case of Cu(II), the nuclear spin of Cu is I = 3/2, so the hyperfine splitting would result in four lines of intensity 1:1:1:1. Similarly, super hyperfine splitting of Cu(II) ligated to four symmetric I = 1 nuclei, such as 14N, would yield nine lines with intensities of 1:4:10:16:19:16:10:4:1.
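The multiplet patterns described above follow directly from convolving the splitting contributed by each equivalent nucleus. The sketch below generates the intensities for any number of equivalent nuclei of a given spin; the two printed cases reproduce the methyl-radical pattern and the nine-line pattern for four equivalent 14N nuclei.

```python
# Minimal sketch: hyperfine line intensities for n equivalent nuclei of spin I.
# Each nucleus contributes a flat (2I+1)-line pattern; the overall multiplet
# is obtained by repeated convolution of these patterns.

def multiplet(n_nuclei, lines_per_nucleus):
    pattern = [1]
    for _ in range(n_nuclei):
        new = [0] * (len(pattern) + lines_per_nucleus - 1)
        for i, weight in enumerate(pattern):
            for j in range(lines_per_nucleus):
                new[i + j] += weight
        pattern = new
    return pattern

print(multiplet(3, 2))  # three equivalent 1H (I = 1/2): [1, 3, 3, 1]
print(multiplet(4, 3))  # four equivalent 14N (I = 1): [1, 4, 10, 16, 19, 16, 10, 4, 1]
```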
Anisotropy
The g factor of many paramagnetic species, including Cu(II), is anisotropic, meaning that it depends on its orientation in the magnetic field. The g factor for anisotropic species breaks down generally into three values of g following a Cartesian coordinate system which is symmetric along the diagonal: gx, gy, and gz. There are four limits to this system:
1. When gx = gy = gz the spectrum is considered to be isotropic, and is not dependent on orientation in the magnetic field.
2. When gx = gy > gz the spectrum is said to be axial, and is elongated along the z-axis. The two equivalent g values are known as g⊥ while the singular value is known as g∥. It exhibits a small peak at low field and a large peak at high field.
3. When gx = gy < gz the spectrum is also said to be axial, but is shortened in the xy plane. It exhibits a large peak at low field and a small peak at high field.
4. When gx ≠ gy ≠ gz the spectrum is said to be rhombic, and shows three large peaks corresponding to the different components of g.
Condition ii corresponds to Cu(II) in a square planar geometry with the unpaired electron in the dx2-y2 orbital. Where hyperfine splitting is also involved, g is taken as the weighted average of the lines.
Electron Paramagnetic Resonance Spectroscopy of Copper(II) Compounds
Copper(II) Compounds
Copper compounds play a valuable role in both synthetic and biological chemistry. Copper catalyzes a vast array of reactions, primarily oxidation-reduction reactions which make use of the Cu(I)/Cu(II) redox cycle. Copper is found in the active site of many enzymes and proteins, including the oxygen carrying proteins called hemocyanins.
Common oxidation states of copper include the less stable copper(I) state, Cu+, and the more stable copper(II) state, Cu2+. Copper(I) has a d10 electronic configuration with no unpaired electrons, making it undetectable by EPR. The d9 configuration of Cu2+ means that its compounds are paramagnetic, making EPR of Cu(II)-containing species a useful tool for both structural and mechanistic studies. Two literature examples of how EPR can provide insight into the mechanisms of reactivity of Cu(II) are discussed herein.
Copper (II) centers typically have tetrahedral, or axially elongated octahedral geometry. Their spectra are anisotropic and generally give signals of the axial or orthorhombic type. From EPR spectra of copper (II) compounds, the coordination geometry can be determined. An example of a typical powder Cu(II) spectrum is shown in Figure $4$.
The spectrum above shows four absorption-like peaks corresponding to g∥, indicating coordination to four identical atoms, most likely nitrogen. There is also an asymmetric derivative peak corresponding to g⊥ at higher field, indicating elongation along the z axis.
Determination of an Intermediate
The reactivity and mechanism of Cu(II)-peroxy systems was investigated by studying the decomposition of the Cu(II) complex 1 with EPR as well as UV-Vis and Raman spectroscopy. The structure (Figure $5$) and EPR spectrum (Figure $6$) of 1 are given. It was postulated that decomposition of 1 may go through the intermediates LCu(II)OOH, LCu(II)OO•, or LCu(II)O•, where L = ligand.
To determine the intermediate, a common radical trap, 5,5-dimethyl-1-pyrroline-N-oxide (DMPO), was added. A 1:1 complex of the intermediate and DMPO was isolated and assigned the possible structure 2 (Figure $7$), which is shown along with its EPR spectrum (Figure $8$).
The EPR data show similar, though not identical, spectra for Cu(II) in each compound, indicating a similar coordination environment (elongated axial) and most likely a LCu(II)O• intermediate.
Determination of a Catalytic Cycle
The mechanism of oxidizing alcohols to aldehydes using a Cu(II) catalyst, TEMPO, and O2 was investigated using EPR. A proposed mechanism is given in Figure $9$.
EPR studies were conducted during the reaction by taking aliquots at various time points and immediately freezing the samples for EPR analysis. The resulting spectra are shown in Figure $10$.
The EPR spectrum (a) in Figure $10$, after 1.2 hours, shows a signal for TEMPO at g = 2.006 as well as a signal for Cu(II) with g∥ = 2.26, g⊥ = 2.06, A∥ = 520 MHz, and A⊥ < 50 MHz. After 4 hours, the signal for Cu(II) is no longer present in the reaction mixture, and the TEMPO signal has decreased significantly, suggesting that all of the Cu(II) has been reduced to Cu(I) and the majority of the TEMPO has been oxidized. After 8 hours, the signals for both Cu(II) and TEMPO have returned, indicating regeneration of both species. In this way, the EPR evidence supports the proposed mechanism.
Electron-Nuclear Double Resonance Spectroscopy
Electron nuclear double resonance (ENDOR) uses magnetic resonance to simplify the electron paramagnetic resonance (EPR) spectra of paramagnetic species (those which contain unpaired electrons). It is a very powerful and advanced technique that works by probing the environment of these species. ENDOR was invented in 1956 by George Feher (Figure $11$).
ENDOR: NMR Spectroscopy on an EPR Spectrometer
A transition metal's electron spin can interact with the nuclear spins of its ligands through dipolar contact interactions. This causes shifts in the nuclear magnetic resonance (NMR) spectral lines of the ligand nuclei. The NMR technique uses these dipolar interactions, as they correspond to the nuclear spin's position relative to the metal atom, to give information about the nuclear coordinates. However, a paramagnetic species (one that contains unpaired electrons) complicates the NMR spectrum by broadening the lines considerably.
EPR is a technique used to study paramagnetic compounds. However, EPR has its limitations, as it offers low resolution, with line broadening and unresolved line splittings. This is partly due to the electron spins coupling to surrounding nuclear spins. However, these couplings are important for understanding a paramagnetic compound and determining the coordinates of its ligands. While neither NMR nor EPR alone can be used to study these coupling interactions, the two techniques can be applied simultaneously, which is the concept behind ENDOR. An ENDOR experiment is a double resonance experiment in which NMR resonances are detected using intensity changes of an EPR line that is irradiated simultaneously. An important difference from conventional NMR is that the ENDOR signal is detected via the EPR (microwave) transition rather than at radiofrequencies, which results in an enhancement of the sensitivity by several orders of magnitude.
Theory
The ENDOR technique involves monitoring the effect of a simultaneously driven NMR transition on an EPR transition, which allows for the detection of the NMR absorption with much greater sensitivity than in a conventional NMR experiment. In order to illustrate the ENDOR system, a two-spin system is used. This involves a magnetic field (B0) interacting with one electron (S = 1/2) and one proton (I = 1/2).
Hamiltonian Equation
The Hamiltonian equation for a two-spin system is described by \ref{4}. The equation contains three terms: the electron Zeeman interaction (EZ), the nuclear Zeeman interaction (NZ), and the hyperfine interaction (HFS). The EZ term relates to the interaction between the spin of the electron and the applied magnetic field. The NZ term describes the interaction of the proton's magnetic moment and the magnetic field. The HFS term describes the coupling between the spin of the electron and the nuclear spin of the proton. ENDOR spectra contain information on all three terms of the Hamiltonian.
$H_{0} \ =\ H_{EZ}\ +\ H_{NZ}\ +\ H_{HFS} \label{4}$
Selection Rules
\ref{4} can be further expanded to \ref{5}. gn is the nuclear g-factor, which characterizes the magnetic moment of the nucleus. S and I are the vector operators for the spins of the electron and nucleus, respectively. μB is the Bohr magneton (9.274 x 10-24 J T-1). μn is the nuclear magneton (5.05 x 10-27 J T-1). h is the Planck constant (6.626 x 10-34 J s). g and A are the g and hyperfine tensors. \ref{5} becomes \ref{6} by assuming only isotropic interactions and the magnetic field aligned along the z-axis. In \ref{6}, g is the isotropic g-factor and a is the isotropic hyperfine constant.
$H\ =\ \mu_{B}B_{0}gS\ -\ g_{n}\mu _{n}B_{0}I \ +\ hSAI \label{5}$
$H\ =\ g\mu_{B}B_{0}S_{Z} - g_{n}\mu _{n} B_{0} I_{Z} \ +\ haSI \label{6}$
The energy levels for the two-spin system can be calculated by ignoring second-order terms in the high-field approximation, using \ref{7}. This equation can be used to express the four possible energy levels of the two-spin system (S = 1/2, I = 1/2) in \ref{8} - \ref{11}.
$E(M_{S},M_{I}) = g \mu _{B} B_{0} M_{S} - g_{n} \mu _{n} B_{0} M_{I} \ +\ haM_{S}M_{I} \label{7}$
$E_{a}\ =\ -1/2g\mu _{B} B_{0} - 1/2g_{n} \mu _{n} B_{0} - 1/4ha \label{8}$
$E_{b}\ =\ +1/2g\mu _{B} B_{0} - 1/2g_{n} \mu _{n} B_{0} + 1/4ha \label{9}$
$E_{c}\ =\ +1/2g\mu _{B} B_{0} + 1/2g_{n} \mu _{n} B_{0} - 1/4ha \label{10}$
$E_{d}\ =\ -1/2g\mu _{B} B_{0} + 1/2g_{n} \mu _{n} B_{0} + 1/4ha \label{11}$
We can apply the EPR selection rules (ΔMI = 0 and ΔMS = ±1) to these energy levels to find the two possible resonance transitions that can occur, shown in \ref{12} and \ref{13}. These equations can be further simplified by expressing them in frequency units, where νe = gμBB0/h, to derive \ref{14}, which defines the EPR transitions (Figure $12$). In the spectrum this would give two absorption peaks that are separated by the isotropic hyperfine splitting, a (Figure $12$).
$\Delta E_{cd}\ =\ E_{c}\ -\ E_{d} \ =\ g \mu_{B} B - 1/2ha \label{12}$
$\Delta E_{ab}\ =\ E_{b}\ -\ E_{a} \ =\ g \mu_{B} B + 1/2ha \label{13}$
$\nu _{EPR} \ =\ \nu _{e} \pm a/2 \label{14}$
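A quick numerical check of \ref{14} is shown below; the field and hyperfine coupling are assumed values chosen only to illustrate the two-line pattern.

```python
# Minimal sketch: EPR transition frequencies of the S = 1/2, I = 1/2 system
# from Equation (14), nu_EPR = nu_e +/- a/2. B0 and a are assumed values.
h = 6.626e-34        # Planck constant, J s
mu_B = 9.274e-24     # Bohr magneton, J/T
g = 2.0023
B0 = 0.35            # magnetic field, T (assumed)
a = 50e6             # isotropic hyperfine coupling, Hz (assumed)

nu_e = g * mu_B * B0 / h
print(f"nu_e = {nu_e / 1e9:.3f} GHz")
print(f"EPR lines at {(nu_e - a / 2) / 1e9:.3f} and {(nu_e + a / 2) / 1e9:.3f} GHz")
# Two lines separated by a = 50 MHz, centered on the electron Zeeman frequency.
```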
Applications
ENDOR has advantages for both organic and inorganic paramagnetic species, as it is helpful in characterizing their structure in both solution and the solid state. First, it enhances the resolution gained for organic radicals in solution. In ENDOR, each group of equivalent nuclei contributes only 2 lines to the spectrum, and nonequivalent nuclei cause only an additive increase as opposed to a multiplicative increase as in EPR. For example, the radical cation of 9,10-dimethylanthracene (Figure $14$) would produce 175 lines in an EPR spectrum because the spectrum would include 3 sets of inequivalent protons. However, ENDOR produces only three pairs of lines (1 for each set of equivalent nuclei), which can be used to find the hyperfine couplings. This is also shown in Figure $14$.
ENDOR can also be used to obtain structural information from the powder EPR spectra of metal complexes. ENDOR spectroscopy can be used to obtain the electron-nuclear hyperfine interaction tensor, which is the most sensitive probe for structure determination. A magnetic field that assumes all possible orientations with respect to the molecular frame is applied to the randomly oriented molecules. The resonances from these orientations are superimposed on each other and make up the powder EPR spectrum. ENDOR measurements are made at a selected field position in the EPR spectrum, which contains only that subset of molecules whose orientations contribute to the EPR intensity at the chosen value of the observing field. By selecting EPR turning points at magnetic field values that correspond to defined molecular orientations, "single-crystal-like" ENDOR spectra are obtained. This is also called an "orientation-selective" ENDOR experiment, in which simulation of the data can be used to obtain the principal components of the magnetic tensors for each interacting nucleus. This information can then be used to provide structural information about the distance and spatial orientation of the remote nucleus. This is especially valuable since it provides three-dimensional structural information for a paramagnetic system from which a single crystal cannot be prepared.
XPS of Carbon Nanomaterials
X-ray photoelectron spectroscopy (XPS), also called electron spectroscopy for chemical analysis (ESCA), is a method used to determine the elemental composition of a material’s surface. It can be further applied to determine the chemical or electronic state of these elements.
The photoelectric effect is the ejection of electrons from the surface of a material upon exposure to electromagnetic radiation of sufficient energy. Electrons emitted have characteristic kinetic energies determined by the energy of the radiation, according to \ref{1}, where KE is the kinetic energy of the electron, h is Planck’s constant, ν is the frequency of the incident radiation, Eb is the ionization, or binding, energy, and φ is the work function. The work function is a constant which is dependent upon the spectrometer.
$KE\ =\ h \nu \ -\ E_{b}\ -\ \varphi \label{1}$
In photoelectron spectroscopy, high energy radiation is used to expel core electrons from a sample. The kinetic energies of the resulting core electrons are measured. Using the equation with the measured kinetic energy and the known frequency of the radiation, the binding energy of the ejected electron may be determined. By Koopmans' theorem, which states that the ionization energy is equivalent to the negative of the orbital energy, the energy of the orbital from which the electron originated is determined. These orbital energies are characteristic of the element and its state.
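As a worked example of \ref{1}, the binding energy corresponding to an assumed measured kinetic energy is computed below; the kinetic energy and work function are illustrative values, not data from a particular instrument.

```python
# Minimal sketch of Equation (1) rearranged to E_b = h*nu - KE - phi (all in eV).
hv = 1486.6    # Al K-alpha photon energy, eV
KE = 1197.4    # measured photoelectron kinetic energy, eV (assumed)
phi = 4.6      # spectrometer work function, eV (assumed)

E_b = hv - KE - phi
print(f"Binding energy = {E_b:.1f} eV")   # 284.6 eV, in the sp2 C1s region
```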
Basics of XPS
Sample Preparation
As a surface technique, samples are particularly susceptible to contamination. Furthermore, XPS samples must be prepared carefully, as any loose or volatile material could contaminate the instrument because of the ultra-high vacuum conditions. A common method of XPS sample preparation is embedding the solid sample into a graphite tape. Samples are usually placed on 1 x 1 cm or 3 x 3 cm sheets.
Experimental Set-up
Monochromatic aluminum (hν = 1486.6 eV) or magnesium (hν = 1253.6 eV) Kα X-rays are used to eject core electrons from the sample. The photoelectrons ejected from the material are detected and their energies measured. Ultra-high vacuum conditions are used in order to minimize gas collisions interfering with the electrons before they reach the detector.
Measurement Specifications
XPS analyzes material between depths of 1 and 10 nm, which is equivalent to several atomic layers, and across a width of about 10 µm. Since XPS is a surface technique, the orientation of the material affects the spectrum collected.
Data Collection
X-ray photoelectron (XP) spectra provide the relative frequencies of binding energies of electrons detected, measured in electron-volts (eV). Detectors have accuracies on the order of ±0.1 eV. The binding energies are used to identify the elements to which the peaks correspond. XPS data is given in a plot of intensity versus binding energy. Intensity may be measured in counts per unit time (such as counts per second, denoted c/s). Often, intensity is reported as arbitrary units (arb. units), since only relative intensities provide relevant information. Comparing the areas under the peaks gives relative percentages of the elements detected in the sample. Initially, a survey XP spectrum is obtained, which shows all of the detectable elements present in the sample. Elements with low detection or with abundances near the detection limit of the spectrometer may be missed with the survey scan. Figure $1$ shows a sample survey XP scan of fluorinated double-walled carbon nanotubes (DWNTs).
Subsequently, high resolution scans of the peaks can be obtained to give more information. Elements of the same kind in different states and environments have slightly different characteristic binding energies. Computer software is used to fit peaks within the elemental peak which represent different states of the same element, commonly called deconvolution of the elemental peak. Figure $2$ and Figure $3$ show high resolutions scans of C1s and F1s peaks, respectively, from Figure $1$, along with the peak designations.
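Whether the areas come from the survey scan or from deconvoluted high-resolution peaks, converting them into relative atomic percentages in practice also involves dividing each area by an element-specific relative sensitivity factor (RSF) before normalizing. The sketch below illustrates the arithmetic; both the peak areas and the RSF values are assumptions for illustration, since real RSFs depend on the instrument and should be taken from its calibration.

```python
# Minimal sketch: relative atomic percentages from survey-scan peak areas.
# Peak areas and RSF values below are assumed, illustrative numbers only.
peaks = {              # element: (integrated peak area, assumed RSF)
    "C1s": (120000, 1.00),
    "O1s": (35000, 2.93),
    "F1s": (60000, 4.43),
}

corrected = {element: area / rsf for element, (area, rsf) in peaks.items()}
total = sum(corrected.values())
for element, value in corrected.items():
    print(f"{element}: {100 * value / total:.1f} at.%")
# With these assumed inputs: C ~82.5%, O ~8.2%, F ~9.3%.
```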
Limitations
Neither hydrogen nor helium can be detected using XPS. For this reason, XPS can provide only relative, rather than absolute, ratios of elements in a sample. Also, elements present at atomic percentages close to the detection limit, or with low sensitivity in XPS, may not be seen in the spectrum. Furthermore, each peak represents a distribution of observed binding energies of ejected electrons based on the depth of the atom from which they originate, as well as the state of the atom. Electrons from atoms deeper in the sample must travel through the layers above before being liberated and detected, which reduces their kinetic energies and thus increases their apparent binding energies. The width of the peaks in the spectrum consequently depends on the thickness of the sample and the depth to which the XPS can detect; therefore, the values obtained vary slightly depending on the depth of the atom. Additionally, the depth to which XPS can analyze depends on the element being detected.
High resolution scans of a peak can be used to distinguish among species of the same element. However, the identification of different species is discretionary. Computer programs are used to deconvolute the elemental peak. The peaks may then be assigned to particular species, but the peaks may not correspond with species in the sample. As such, the data obtained must be used cautiously, and care should be taken to avoid over-analyzing data.
XPS for Carbon Nanomaterials
Despite the aforementioned limitations, XPS is a powerful surface technique that can be used to accurately detect the presence and relative quantities of elements in a sample. Further analysis can provide information about the state and environment of atoms in the sample, which can be used to infer information about the surface structure of the material. This is particularly useful for carbon nanomaterials, in which surface structure and composition greatly influence the properties of the material. There is much research interest in modifying carbon nanomaterials to modulate their properties for use in many different applications.
Sample Preparation
Carbon nanomaterials present certain issues in regard to sample preparation. The use of graphite tape is a poor option for carbon nanomaterials because the spectra will show peaks from the graphite tape, adding to the carbon peak and potentially skewing or overwhelming the data. Instead, a thin indium foil (between 0.1 and 0.5 mm thick) is used as the sample substrate. The sample is simply pressed onto a piece of the foil.
Analysis and Applications for Carbon Nanomaterials
Chemical Speciation
The XP survey scan is an effective way to determine the identity of elements present on the surface of a material, as well as the approximate relative ratios of the elements detected. This has important implications for carbon nanomaterials, in which surface composition is of greatest importance in their uses. XPS may be used to determine the purity of a material. For example, nanodiamond powder is created by detonation, which can leave nitrogenous groups and various oxygen-containing groups attached to the surface. Figure $4$ shows a survey scan of a nanodiamond thin film with the relative atomic percentages of carbon, oxygen, and nitrogen being 91.25%, 6.25%, and 1.7%, respectively. Based on the XPS data, the nanodiamond material is approximately 91.25% pure.
XPS is a useful method to verify the efficacy of a purification process. For example, high-pressure CO conversion single-walled nanotubes (HiPco SWNTs) are made using iron as a catalyst, Figure $5$ shows the Fe2p XP spectra for pristine and purified HiPco SWNTs.
For this application, XPS is often done in conjunction with thermogravimetric analysis (TGA), which measures the weight lost from a sample at increasing temperatures. TGA data serve to corroborate the changes observed with the XPS data by comparing the percentage of weight loss around the region of the impurity suspected based on the XP spectra. The TGA data support the reduction in iron content with purification suggested by the XP spectra above, for the weight loss at temperatures consistent with iron loss decreases from 27% in pristine SWNTs to 18% in purified SWNTs. Additionally, XPS can provide information about the nature of the impurity. In Figure $5$, the Fe2p spectrum for pristine HiPco SWNTs shows two peaks characteristic of metallic iron at 707 and 720 eV. In contrast, the Fe2p spectrum for purified HiPco SWNTs shows two peaks at 711 and 724 eV, which are characteristic of either Fe2O3 or Fe3O4. In general, the atomic percentage of carbon obtained from the XPS spectrum is a measure of the purity of the carbon nanomaterials.
Bonding and Functional Groups
XP spectra give evidence of functionalization and can provide insight into the identity of the functional groups. Carbon nanomaterials provide a versatile surface which can be functionalized to modulate their properties. For example, the sodium salt of phenyl sulfonated SWNTs is water soluble. In the XP survey scan of the phenyl sulfonated SWNTs, there is evidence of functionalization owing to the appearance of the S2p peak. Figure $6$ shows the survey XP spectrum of phenyl sulfonated SWNTs.
The survey XP spectrum of the sodium salt shows a Na1s peak (Figure $7$), and the high resolution scans of Na1s and S2p show that the relative atomic percentages of Na1s and S2p are nearly equal (Figure $8$), which supports the formation of the sodium salt.
Further Characterization
High resolution scans of each of the element peaks of interest can be obtained to give more information about the material. This is a way to determine with high accuracy the presence of elements as well as relative ratios of elements present in the sample. This can be used to distinguish species of the same element in different chemical states and environments, such as through bonding and hybridization, present in the material. The distinct peaks may have binding energies that differ slightly from that of the convoluted elemental peak. Assignment of peaks can be done using XPS databases, such as that produced by NIST. The ratios of the intensities of these peaks can be used to determine the percentage of atoms in a particular state. Discrimination between and identity of elements in different states and environments is a strength of XPS that is of particular interest for carbon nanomaterials.
Hybridization
The hybridization of carbons influences the properties of a carbon nanomaterial and has implications in its structure. XPS can be used to determine the hybridization of carbons on the surface of a material, such as graphite and nanodiamond. Graphite is a carbon material consisting of sp2 carbons. Thus, theoretically the XPS of pure graphite would show a single C1s peak, with a binding energy characteristic of sp2 carbon (around 284.2 eV). On the other hand, nanodiamond consists of sp3 bonded carbons. The XPS of nanodiamond should show a single C1s peak, with a binding energy characteristic of sp3 carbon (around 286 eV). The ratio of the sp2 and sp3 peaks in the C1s spectrum gives the ratio of sp2 and sp3 carbons in the nanomaterial. Changes in this ratio can be followed and compared by collecting C1s spectra. For example, laser treatment of graphite creates diamond-like material, with more sp3 character when a higher laser power is used. This can be observed in Figure $9$, in which the C1s peak is broadened and shifted to higher binding energies as increased laser power is applied.
Alternatively, annealing nanodiamond thin films at very high temperatures creates graphitic layers on the nanodiamond surface, increasing sp2 content. The extent of graphitization increases with the temperature at which the sample is annealed, as shown in Figure $10$.
Reaction Completion
Comparing the relative intensities of various C1s peaks can be powerful in verifying that a reaction has occurred. Fluorinated carbon materials are often used as precursors to a broad range of variously functionalized materials. Reaction of fluorinated SWNTs (F-SWNTs) with polyethyleneimine (PEI) leads to decreases in the covalent carbon-fluorine C1s peak, as well as the evolution of the amine C1s peak. These changes are observed in the C1s spectra of the two samples (Figure $11$).
Nature and Extent of Functionalization
XPS can also be applied to determine the nature and extent of functionalization. In general, binding energy increases with decreasing electron density about the atom. Species with more positive oxidation states have higher binding energies, while more reduced species experience a greater degree of shielding, thus increasing the ease of electron removal.
The method of fluorination of carbon materials and such factors as temperature and length of fluorination affect the extent of fluoride addition as well as the types of carbon-fluorine bonds present. A survey scan can be used to determine the amount of fluorine compared to carbon. High resolution scans of the C1s and F1s peaks can also give information about the proportion and types of bonds. A shift in the peaks, as well as changes in peak width and intensity, can be observed in spectra as an indication of fluorination of graphite. Figure $12$ shows the Cls and F1s spectra of samples containing varying ratios of carbon to fluorine.
Furthermore, different carbon-fluorine bonds show characteristic peaks in high resolution C1s and F1s spectra. The carbon-fluorine interactions in a material can range from ionic to covalent. Covalent carbon-fluorine bonds show higher core electron binding energies than bonds more ionic in character. The method of fluorination affects the nature of the fluorine bonds. Graphite intercalation compounds are characterized by ionic carbon-fluorine bonding. Figure $13$ shows the F1s spectra for two fluorinated exfoliated graphite samples prepared with different methods.
Also, carbons attached to a single fluorine atom, carbons attached to two fluorine atoms, and carbons adjacent to fluorinated carbons each give peaks with characteristic binding energies. These peaks are seen in the C1s spectra of F- and PEI-SWNTs shown in Figure $14$.
Table $1$ lists various bonds and functionalities and the corresponding C1s binding energies, which may be useful in assigning peaks in a C1s spectrum, and consequently in characterizing the surface of a material.
Bond/Group Binding Energy (eV)
C-C 284.0 - 286.0
C-C (sp2) 284.3 - 284.6
C-C (sp3) 285.0 - 286.0
C-N 285.2 - 288.4
C-NR2 (amine) 285.5 - 286.4
O=C-NH (amide) 287.9 - 288.6
-C=N (nitrile) 286.3 - 286.8
C-O 286.1-290.0
O=C-OH (carboxyl) 288.0 - 290.0
-C-O (epoxy) 286.1 - 287.1
-C-OH (hydroxyl) 286.4 - 286.7
-C-O-C- (ether) 286.1 - 288.0
-C=O (aldehyde/ketone) 287.1 - 288.1
C-F 287.0-293.4
-C-F (covalent) 287.7 - 290.2
-C-F (ionic) 287.0 - 287.4
C-C-F 286.0 - 287.7
C-F2 291.6 - 292.4
C-F3 292.4 - 293.4
C-S 285.2 - 287.5
C-Cl 287.0 - 287.2
Table $1$ Summary of selected C1s binding energies
Conclusion
X-ray photoelectron spectroscopy is a facile and effective method for determining the elemental composition of a material’s surface. As a quantitative method, it gives the relative ratios of detectable elements on the surface of the material. Additional analysis can be done to further elucidate the surface structure. Hybridization, bonding, functionalities, and reaction progress are among the characteristics that can be inferred using XPS. The application of XPS to carbon nanomaterials provides much information about the material, particularly the first few atomic layers, which are most important for the properties and uses of carbon nanomaterials.
ESI-QTOF-MS Coupled to HPLC and its Application for Food Safety
High-performance liquid chromatography (HPLC) is a very powerful separation method widely used in environmental science, pharmaceutical industry, biological and chemical research and other fields. Generally, it can be used to purify, identify and/or quantify one or several components in a mixture simultaneously.
Mass spectrometry (MS) is a detection technique that measures the mass-to-charge ratio of ionic species. The procedure consists of several steps. First, a sample is injected into the instrument and then vaporized. Second, species in the sample are charged by an ionization method, such as electron ionization (EI), electrospray ionization (ESI), chemical ionization (CI), or matrix-assisted laser desorption/ionization (MALDI). Finally, the ionic species are analyzed according to their mass-to-charge ratio (m/z) in the analyzer, such as a quadrupole, time-of-flight (TOF), ion trap, or Fourier transform ion cyclotron resonance analyzer.
Mass spectrometric identification is widely used together with chromatographic separation. The most common combinations are gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS). Because of the high sensitivity, selectivity, and relatively low price of GC-MS, it has very wide applications in drug detection, environmental analysis, and so forth. For organic chemistry research groups, it is also a convenient, routinely used instrument. However, GC-MS is ineffective if the molecules have high boiling points and/or decompose at high temperature.
In this module, we will mainly discuss liquid chromatography coupled to electrospray ionization quadrupole time-of-flight mass spectrometry (LC/ESI-QTOF-MS). As mentioned above, LC has an efficient capacity for separation, and MS has high sensitivity and a strong ability for structural characterization. Furthermore, TOF-MS has several distinctive properties on top of regular MS, including fast acquisition rates, high accuracy in mass measurements, and a large mass range. The combination of LC and ESI-TOF-MS provides a powerful tool for the quantitative and qualitative analysis of molecules in complex matrices by reducing the matrix interferences. It may play an important role in the area of food safety.
How it Works
Generally, LC-MS has four components: an autosampler, the HPLC, the ionization source, and the mass spectrometer, as shown in Figure \(1\). Particular attention must be paid to the interface between the HPLC and the MS so that the two are compatible and can be connected. Dedicated separation columns are used for HPLC-MS, with an inner diameter (I.D.) of usually 2.0 mm, and the flow rate, 0.05 - 0.2 mL/min, is slower than in typical HPLC. For the mobile phase, a combination of water with methanol and/or acetonitrile is used. Because nonvolatile ionic additives suppress the signals in MS, any mobile-phase modifier should be volatile, such as HCO2H, CH3CO2H, [NH4][HCO2] and [NH4][CH3CO2].
As the interface between HPLC and MS, the ionization source is also important. There are many types, and ESI and atmospheric pressure chemical ionization (APCI) are the most common ones. Both of them work at atmospheric pressure, high voltage, and high temperature. In ESI, the column eluent is nebulized in a high-voltage field (3 - 5 kV). Very small charged droplets form, and finally individual ions are produced in this process and enter the mass spectrometer.
Comparison of ESI-QTOF-MS and Other Mass Spectrometer Methods
There are many types of mass spectrometers which can be connected to the HPLC. One of the most widely used MS systems is the single quadrupole mass spectrometer, which is not very expensive, shown in Figure \(2\). This system has two modes. One mode is total ion monitoring (TIM) mode, which provides the total ion chromatogram. The other is selected ion monitoring (SIM) mode, in which the user can choose to monitor some specific ions; the latter's sensitivity is much higher than the former's. Further, the mass resolution of the single quadrupole mass spectrometer is 1 Da and its largest detection mass range is 30 - 3000 Da.
The second MS system is the triple quadrupole MS-MS system, shown in Figure \(3\). Using this system, one can select certain ions, called parent ions, and fragment them by collision in a collision cell to produce fragment ions, called daughter ions. In other words, there are two stages of selection of the target molecules, which greatly reduces the matrix effect. This system is very useful in the analysis of biological samples because biological samples always have very complex matrices; however, the mass resolution is still 1 Da.
The third system is time-of-flight (TOF) MS, shown in Figure \(4\), which provides higher mass resolution, to 3 or 4 decimal places of a dalton. Furthermore, it can detect a very large range of masses at very high speed; the largest detection mass range is 20 - 10000 Da. However, the price of this kind of MS is very high. The last technique is a hybrid mass spectrometer, Q-TOF MS, which combines a single quadrupole MS and a TOF MS. Using this instrument, we can obtain high-resolution spectra and we can also use the MS-MS capability to identify the target molecules.
Application of LC/ESI-QTOF-MS in the Detection of Quinolones in Edible Animal Food
Quinolones are a family of common antibacterial veterinary medicines which inhibit DNA-gyrase in bacterial cells. However, residues of quinolones in edible animal products may be directly toxic or cause resistant pathogens in humans. Therefore, sensitive methods are required to monitor such residues possibly present in different animal-derived foods, such as eggs, chicken, milk and fish. The molecular structures of eight quinolones, ciprofloxacin (CIP), danofloxacin methanesulphonate (DAN), enrofloxacin (ENR), difloxacin (DIF), sarafloxacin (SARA), oxolinic acid (OXO), flumequine (FLU), and ofloxacin (OFL), are shown in Figure \(5\).
LC-MS is a common detection approach in the field of food safety. However, because of the complex matrices of the samples, it is often difficult to detect target molecules at low concentration using single quadrupole MS. The following gives an example of the application of LC/ESI-QTOF-MS.
The method used a quaternary pump system, a Q-TOF-MS system, and a C18 column (250 mm × 2.0 mm I.D., 5 µm) with a flow rate of 0.2 mL/min, with a mobile phase comprising 0.3% formic acid solution and acetonitrile. The gradient profile for the mobile phase is shown in Table \(1\). Since the quinolones carry a positive charge under acidic conditions, all mass spectra were acquired in positive-ion mode, summing 30,000 single spectra over the mass range of 100-500 Da.
Time (min) Volume % of Formic Acid Solution Volume % of Acetonitrile
0 80 20
12 65 35
15 20 80
20 15 85
30 15 85
30.01 80 20
Table \(1\) The gradient profile for the mobile phase
The optimal ionization source working parameters were as follows: capillary voltage 4.5 kV; ion energy of quadrupole 5 eV/z; dry temperature 200 °C; nebulizer 1.2 bar; dry gas 6.0 L/min. During the experiments, HCO2Na (62 Da) was used to externally calibrate the instrument. Because of the high mass accuracy of the TOF mass spectrometer, it can greatly reduce the matrix effects. Three different chromatograms are shown in Figure \(6\). The top one is the total ion chromatogram with a mass window of 400 Da; it is impossible to distinguish the target molecules in this chromatogram. The middle one is at 1 Da resolution, which is the resolution of a single quadrupole mass spectrometer. In this chromatogram, some of the molecules can be identified, but the noise intensity is still very high and there are several peaks of impurities with similar mass-to-charge ratios. The bottom one is at 0.01 Da resolution; it clearly shows the peaks of the eight quinolones with a very high signal-to-noise ratio. In other words, due to the fast acquisition rates and high mass accuracy, LC/TOF-MS can significantly reduce the matrix effects.
The quadrupole MS can be used to further confirm the target molecules. Figure \(7\) shows the chromatograms obtained in the confirmation of CIP (17.1 ng/g) in a positive milk sample and ENR (7.5 ng/g) in a positive fish sample. The chromatograms of the parent ions are shown on the left side; on the right are the characteristic daughter ion mass spectra of CIP and ENR.
Drawbacks of LC/Q-TOF-MS
Some of the drawbacks of LC/Q-TOF-MS are its high costs of purchase and maintenance, which make it hard to apply this method to routine detection in the areas of environmental protection and food safety.
In order to reduce the matrix effect and improve the detection sensitivity, sample preparation methods such as liquid-liquid extraction (LLE), solid-phase extraction (SPE), and distillation may be used. However, these methods consume large amounts of sample, organic solvent, time, and effort. Some newer sample preparation methods are now available, for example online microdialysis, supercritical fluid extraction (SFE), and pressurized liquid extraction. The method described in the application above uses online in-tube solid-phase microextraction (SPME), which is an excellent sample preparation technique with the features of small sample volume, simple solventless extraction, and easy automation.
Principles of Mass Spectrometry and Modern Applications
Mass spectrometry (MS) is a powerful characterization technique used for the identification of a wide variety of chemical compounds. At its simplest, MS is merely a tool for determining the molecular weight of the chemical species in a sample. However, with the high resolution obtainable from modern machines, it is possible to distinguish isomers, isotopes, and even compounds with nominally identical molecular weights. Libraries of mass spectra have been compiled which allow rapid identification of most known compounds, including proteins as large as 100 kDa (100,000 amu).
Mass spectrometers separate compounds based on a property known as the mass-to-charge ratio. The sample to be identified is first ionized, and then passed through some form of magnetic field. Based on parameters such as how long it takes the molecule to travel a certain distance or the amount of deflection caused by the field, a mass can be calculated for the ion. As will be discussed later, there are a wide variety of techniques for ionizing and detecting compounds.
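For instance, in a linear time-of-flight analyzer the flight time follows t = L·sqrt(m/(2zeV)), so heavier ions arrive later. The sketch below evaluates this relation for a few masses; the flight-tube length and acceleration voltage are assumed values used only for illustration.

```python
import math

# Minimal sketch: flight times in a linear time-of-flight analyzer,
# t = L * sqrt(m / (2*z*e*V)). L and V are assumed instrument parameters.
e = 1.602e-19      # elementary charge, C
amu = 1.661e-27    # atomic mass unit, kg
L = 1.5            # flight-tube length, m (assumed)
V = 20000.0        # acceleration voltage, V (assumed)

def flight_time_us(mass_amu, z=1):
    return L * math.sqrt(mass_amu * amu / (2 * z * e * V)) * 1e6

for m in (100, 1000, 10000):
    print(f"m/z {m}: {flight_time_us(m):.1f} microseconds")
# ~7.6, 24.2 and 76.4 us; a tenfold increase in mass lengthens t by sqrt(10).
```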
Limitations of MS generally stem from compounds that are not easily ionizable, or which decompose upon ionization. Geometric isomers can generally be distinguished easily, but differences in chirality are not easily resolved. Complications can also arise from samples which are not easily dissolved in common solvents.
Ionization Techniques
Electron Impact (EI)
In electron impact ionization, a vaporized sample is passed through a beam of electrons. The high energy (typically 70 eV) beam strips electrons from the sample molecules, leaving a positively charged radical species. The molecular ion is typically unstable and undergoes decomposition or rearrangement to produce fragment ions. Because of this, electron impact is classified as a “hard” ionization technique. With regards to metal-containing compounds, fragments in EI will almost always contain the metal atom (i.e., [MLn]+• fragments to [MLn-1]+ + L, not MLn-1 + L+). One of the main limitations of EI is that the sample must be volatile and thermally stable.
Chemical Ionization (CI)
In chemical ionization, the sample is introduced to a chamber filled with excess reagent gas (such as methane). The reagent gas is ionized by electrons, forming a plasma with species such as CH5+, which react with the sample to form the pseudomolecular ion [M+H]+. Because CI does not involve radical reactions, fragmentation of the sample is generally much lower than that of EI. CI can also be operated in negative mode (to generate anions) by using different reagent gases. For example, a mixture of CH4 and N2O will generate hydroxide ions, which can abstract protons to yield the [M-H]- species. A related technique, atmospheric pressure chemical ionization (APCI), delivers the sample as a neutral spray, which is then ionized by corona discharge, producing ions in a similar manner as described above. APCI is particularly suited for low molecular weight, nonpolar species that cannot be easily analyzed by other common techniques such as ESI.
Field Ionization/Desorption
Field ionization and desorption are two closely related techniques which use quantum tunneling of electrons to generate ions. Typically, a highly positive potential is applied to an electrode with a sharp point, resulting in a high potential gradient at the tip (Figure \(1\)). As the sample reaches this field, electron tunneling occurs to generate the cation, which is repelled into the mass analyzer. Field ionization utilizes gaseous samples whereas in field desorption the sample is adsorbed directly onto the electrode. Both of these techniques are soft, resulting in low energy ions which do not easily fragment.
Electrospray Ionization (ESI)
In ESI, a highly charged aerosol is generated from a sample in solution. As the droplets shrink due to evaporation, the charge density increases until a coulombic explosion occurs, producing daughter droplets that repeat the process until individualized sample ions are generated (Figure \(2\)). One of the limitations of ESI is the requirement that the sample be soluble. ESI is best applied to charged, polar, or basic compounds.
Matrix Assisted Laser Desorption Ionization (MALDI)
Laser desorption ionization generates ions by ablation from a surface using a pulsed laser. This technique is greatly improved by the addition of a matrix co-crystallized with the sample. As the sample is irradiated, a plume of desorbed molecules is generated. It is believed that ionization occurs in this plume due to a variety of chemical and physical interactions between the sample and the matrix (Figure \(3\)). One of the major advantages of MALDI is that it produces singly charged ions almost exclusively and can be used to volatilize extremely high molecular weight species such as polymers and proteins. A related technique, desorption ionization on silicon (DIOS) also uses laser desorption, but the sample is immobilized on a porous silicon surface with no matrix. This allows the study of low molecular weight compounds which may be obscured by matrix peaks in conventional MALDI.
Inductively Coupled Plasma Mass Spectrometry (ICP-MS)
A plasma torch generated by electromagnetic induction is used to ionize samples. Because the effective temperature of the plasma is about 10,000 °C, samples are broken down to ions of their constituent elements. Thus, all chemical information is lost, and the technique is best suited for elemental analysis. ICP-MS is typically used for analysis of trace elements.
Fast Atom Bombardment (FAB) and Secondary Ion Mass Spectrometry (SIMS)
Both of these techniques involve sputtering a sample to generate individualized ions; FAB utilizes a stream of inert gas atoms (argon or xenon) whereas SIMS uses ions such as Cs+. Ionization occurs by charge transfer between the ions and the sample or by protonation from the matrix material (Figure \(4\)). Both solid and liquid samples may be analyzed. A unique aspect of these techniques for analysis of solids is the ability to do depth profiling because of the destructive nature of the ionization technique.
Choosing an Ionization Technique
Depending on the information desired from mass spectrometry analysis, different ionization techniques may be desired. For example, a hard ionization method such as electron impact may be used for a complex molecule in order to determine the component parts by fragmentation. On the other hand, a high molecular weight sample of polymer or protein may require an ionization method such as MALDI in order to be volatilized. Often, samples may be easily analyzed using multiple ionization methods, and the choice is simplified to choosing the most convenient method. For example, electrospray ionization may be easily coupled to liquid chromatography systems, as no additional sample preparation is required. Table \(1\) provides a quick guide to ionization techniques typically applied to various types of samples.
Information Desired Ionization Technique
Elemental analysis Inductively coupled plasma
Depth profiling Fast atom bombardment/secondary ion mass spectrometry
Chemical speciation/component analysis (fragmentation desired) Electron impact
Molecular species identification of compounds soluble in common solvents Electrospray ionization
Molecular species identification of hydrocarbon compounds Field ionization
Molecular species identification of high molecular weight compounds Matrix assisted laser desorption ionization
Molecular species identification of halogen containing compounds Chemical ionization (negative mode)
Table \(1\) Strengths of various ionization techniques
Mass Analyzers
Sectors
A magnetic or electric field is used to deflect ions into curved trajectories depending on the m/z ratio, with heavier ions experiencing less deflection (Figure \(5\)). Ions are brought into focus at the detector slit by varying the field strength; a mass spectrum is generated by scanning field strengths linearly or exponentially. Sector mass analyzers have high resolution and sensitivity, and can detect high mass ranges, but are expensive, require large amounts of space, and are incompatible with the most popular ionization techniques MALDI and ESI.
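The deflection described above can be made quantitative. For an ion of mass m and charge q accelerated through a potential V and bent onto a circular path of radius r by a magnetic field B, balancing the magnetic and centripetal forces gives the standard sector relation (a textbook result added here for clarity; it is not derived in the original text):

$qvB\ =\ \frac{mv^{2}}{r}\ \ \ \text{and}\ \ \ qV\ =\ \frac{1}{2}mv^{2}\ \ \ \Rightarrow\ \ \ \frac{m}{q}\ =\ \frac{B^{2}r^{2}}{2V}$

Since r is fixed by the detector slit, scanning B (or V) brings ions of successively different m/q into focus, which is exactly the scanning behavior described above.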
Time of Flight (TOF)
The amount of time required for an ion to travel a known distance is measured (Figure \(6\)). A pulse of ions is accelerated through an electric field such that all ions acquire identical kinetic energies. As a result, their velocities depend only on their mass-to-charge ratios, with lighter ions traveling faster. Extremely high vacuum conditions are required to extend the mean free path of the ions and avoid collisions. TOF mass analyzers are the fastest, have unlimited mass ranges, and allow simultaneous detection of all species, but they are best coupled with pulsed ionization sources such as MALDI.
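The time-of-flight relation follows directly from the statement that all ions receive the same kinetic energy, zeV = ½mv², so the flight time over a drift length L scales with the square root of m/z. The short sketch below illustrates this in Python; the instrument parameters (20 kV acceleration, 1 m drift tube) are purely illustrative assumptions, not values from the text.

```python
import math

E = 1.602176634e-19    # elementary charge (C)
AMU = 1.66053907e-27   # atomic mass unit (kg)

def flight_time(m_amu, z, V, L):
    """Ideal TOF: z*e*V = 1/2*m*v^2, hence t = L * sqrt(m / (2*z*e*V))."""
    return L * math.sqrt(m_amu * AMU / (2 * z * E * V))

def mz_from_time(t, V, L):
    """Invert the relation to recover m/z (in amu per elementary charge)."""
    return 2 * E * V * (t / L) ** 2 / AMU

# Illustrative numbers: a singly charged 1000 amu ion, 20 kV acceleration, 1 m drift tube
t = flight_time(1000, 1, 20e3, 1.0)
print(f"flight time ~ {t * 1e6:.1f} microseconds, recovered m/z ~ {mz_from_time(t, 20e3, 1.0):.0f}")
```

For these assumed values the flight time comes out near 16 microseconds, a typical order of magnitude for TOF instruments.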
Quadrupole
Ions are passed through four parallel rods which apply a varying voltage and radiofrequency potential (Figure \(7\)). As the field changes, ions respond by undergoing complex trajectories. Depending on the applied voltage and RF frequencies, only ions of a certain m/z ratio will have stable trajectories and pass through the analyzer. All other ions will be lost by collision with the rods. Quadrupole analyzers are relatively inexpensive, but have limited resolution and low mass range.
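The stability condition mentioned above is usually expressed through the Mathieu parameters. For a rod set of field radius r0 driven by a DC voltage U and an RF amplitude V at angular frequency ω, the commonly quoted dimensionless parameters are given below; this is a standard result included only to make the m/z dependence explicit, not material from the original text.

$a\ =\ \frac{8zeU}{m r_{0}^{2} \omega ^{2}}\ \ \ \ \ \ q\ =\ \frac{4zeV}{m r_{0}^{2} \omega ^{2}}$

Only ions whose (a, q) values fall within the first stability region pass through the rods; ramping U and V together at a fixed ratio therefore scans the transmitted m/z, consistent with the description above.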
Ion Trap
Ion traps operate under the same principle as quadrupole, but contain the ions in space. Electrodes can be manipulated to selectively eject ions of desired m/z ratios, allowing for mass analysis. Ion traps are uniquely suited for repeated cycles of mass spectrometry because of their ability to retain ions of desired m/z ratios. Selected fragments can be further fragmented by collision induced dissociation with helium gas. Ion traps are compact, relatively inexpensive, and can be adapted to many hybrid instruments.
Coupling Mass Spectrometry to Other Instruments
Mass spectrometry is a powerful tool for identification of compounds, and is frequently combined with separation techniques such as liquid or gas chromatography for rapid identification of the compounds within a mixture. Typically, liquid chromatography systems are paired with ESI-quadrupole mass spectrometers to take advantage of the solvated sample. GC-MS systems usually employ electron impact ionization and quadrupole or ion trap mass analyzers to take advantage of the gas-phase molecules and fragmentation libraries associated with EI for rapid identification.
Mass spectrometers are also often coupled in tandem to form MS-MS systems. Typically the first spectrometer utilizes a hard ionization technique to fragment the sample. The fragments are passed on to a second mass analyzer where they may be further fragmented and analyzed. This technique is particularly important for studying large, complex molecules such as proteins.
Fast Atom Bombardment
Fast atom bombardment (FAB) is an ionization technique for mass spectrometry based on the principles of secondary ion mass spectrometry (SIMS). Before the appearance of this technique, there were only limited ways to obtain the mass spectrum of an intact oligopeptide, which is not easy to vaporize. Prior to 1970, electron ionization (EI) or chemical ionization (CI) were widely used, but those methods require the destructive vaporization of the sample. Field desorption, and ionization using the nuclear fission of 252Cf, overcame this problem, though the need for specialized techniques and a 252Cf fission source limited the generality of these approaches. FAB became prevalent because it solved these underlying problems by bombarding a sample dispersed in a matrix with fast atoms or ions of high kinetic energy.
Principle
FAB utilizes bombardment by beams of accelerated atoms or ions; the ionized sample is emitted upon collision of the beam with the sample dispersed in a matrix. In this section, each step is discussed in detail.
Atom Beam
Although ions can be accelerated by an electric field relatively easily, that is not the case for neutral atoms. Therefore, in FAB the conversion of neutral atoms into ions is an essential step in generating the accelerated species. A fast atom such as xenon, used for the bombardment, is produced through three steps (Figure \(8\)):
1. Ionization of the atom by collision with an electron.
2. Acceleration of the generated ion through a high electric potential.
3. Electron transfer from the accelerated ion to another slow atom, affording the desired accelerated atom.
Ion Beam
In the same way as an atom beam, a fast ion beam can also be used. Cesium ions (Cs+), which are cheaper and heavier than xenon, are often employed, but they have the drawback that the mass spectrometer can become contaminated by the ions.
Bombardment
The fast atoms or ions are then bombarded onto the sample dispersed in a matrix, a solvent with a high boiling point, resulting in momentum transfer and vaporization of the sample (Figure \(9\)). The fast atoms or ions used for the bombardment are called the primary beam, while the secondary beam corresponds to the sputtered ions and neutrals. The ionized sample is directed by ion optics into the mass analyzer, where the ions are detected.
Matrices
One of the crucial characteristics of FAB is the use of a liquid matrix; for example, the long-lived signal in FAB is a direct result of the matrix. Because of the high vacuum conditions, the usual laboratory solvents such as water and other common organic solvents cannot be used in FAB, and therefore a solvent with a high boiling point, called the matrix, must be employed. Table \(1\) shows examples of matrices.
Matrix Observed Ions (m/z)
Glycerol 93
Thioglycerol 109
3-Nitrobenzyl alcohol (3-NOBA) 154
n-Octyl-3-nitrophenylether (NOP) 252
Triethanolamine 150
Diethanolamine 106
Polyethylene glycol (mixtures) Dependent on the glycol used
Table \(1\) Typical examples of matrices. Data from C. G. Herbert and R. A. W. Johnstone, Mass Spectrometry Basics, CRC Press, New York (2002)
Instrument
An image of a typical instrument for fast atom bombardment mass spectrometry is shown in Figure \(10\).
Spectra
The spectrum obtained by FAB contains information about the structure or bond nature of the compound in addition to its mass. Here, three spectra are shown as examples.
Glycerol
Typical FAB mass spectrum of glycerol alone is shown in Figure \(11\).
Glycerol shows a signal at m/z 93, corresponding to protonated glycerol, with a small satellite peak derived from the carbon isotope (13C). At the same time, signals for clusters of protonated glycerol are often observed at m/z 185, 277, and 369. As seen in this example, signals from aggregation of the sample can also be detected, and these provide additional information about the sample.
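The cluster series quoted above is simply the protonated n-mer [nM + H]+ of glycerol (M ≈ 92.09). A minimal check in Python reproduces the peak positions; the masses used are ordinary average atomic masses, not values read from the figure.

```python
M_GLYCEROL = 92.09   # average molecular mass of glycerol, C3H8O3 (amu)
PROTON = 1.01        # mass added on protonation (amu)

# [nM + H]+ cluster ions for n = 1..4 reproduce the peaks at m/z 93, 185, 277 and 369
for n in range(1, 5):
    print(f"[{n}M + H]+  ->  m/z ~ {n * M_GLYCEROL + PROTON:.0f}")
```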
Sulfonated Azo Compound
Figure \(12\) shows the positive FAB spectrum of the sulfonated azo compound X and the structures of the plausible fragments in the spectrum. The signals for the target compound X (Mw = 409) were observed at m/z 432 and 410, corresponding to the sodium and proton adducts, respectively. Because of the presence of several relatively weak bonds, a number of fragmentations were observed. For example, the signals at m/z 352 and 330 resulted from cleavage of the aryl-sulfonate bond. Cleavage of the nitrogen-nitrogen bond in the azo moiety also occurred, producing fragment signals at m/z 267 and 268. Furthermore, taking into account the favorable formation of a nitrogen-nitrogen triple bond from the azo moiety, the aryl-nitrogen bond can also be cleaved, and indeed those fragments were detected at m/z 253 and 252. As shown by these examples, fragmentation can be used to obtain information about the structure and bond nature of the compound of interest.
Bradykinin Potentiator C
The mass spectrum of the protonated molecule (MH+ = m/z 1052) of bradykinin potentiator C is shown in Figure \(13\). In this case, fragmentation occurs between certain amino acids, providing information about the peptide sequence. For example, the signal at m/z 884 corresponds to the fragment resulting from scission of the Gly-Leu bond. It should be noted that the fragmentation pattern is not limited to one type of bond cleavage. Fragmentation at the Gly-Pro bond is a good example: two fragments (m/z 533 and 520) are observed. Thus, the pattern of fragmentation can reveal the sequence of the peptide.
Secondary Ion Mass Spectrometry (SIMS)
Secondary ion mass spectrometry (SIMS) is an analytical method which has very low detection limits, is capable of analyzing over a broad dynamic range, has high sensitivity, and has high mass resolution. In this technique, primary ions are used to sputter a solid (and sometimes a liquid) surface of any composition. This causes the emission of electrons, ions, and neutral species, so called secondary particles, from the solid surface. The secondary ions are then analyzed by a mass spectrometer. Depending on the operating mode selected, SIMS can be used for surface composition and chemical structure analysis, depth profiling, and imaging.
Theory
Of all the secondary particles that are sputtered from the sample surface, only about 1 in every 1,000 is emitted as an ion. Because only the ions may be detected by mass spectrometry, an understanding of how these secondary ions form is important.
Sputtering Models
Sputtering can be defined as the emission of atoms, molecules, or ions from a target surface as a result of particle bombardment of the surface. This phenomenon has been described by two different sets of models.
The first approach to describe sputtering, called linear collision cascade theory, compares the atoms to billiard balls and assumes that atomic collisions are completely elastic. Although there are a few different types of sputtering defined by this model, the type which is most important to SIMS is slow collisional sputtering. In this type of sputtering, the primary ion collides with the surface of the target and causes a cascade of random collisions between the atoms in the target. Eventually, these random collisions result in the emission of an atom from the target surface, as can be seen in Figure \(14\). This model does not take into account the location of atoms- it only requires that the energy of the incoming ion be higher than the energy required to sublimate atoms from the target surface.
Despite the fact that this method makes oversimplifications regarding atomic interactions and structure, its predicted sputter yield data is actually fairly close to the experimental data for elements such as Cu, Zn, Ag, and Au, which have high sputter yields. However, for low sputter yield elements, the model predicts three times more sputtered ions than what is actually observed.
The second method to describe sputtering uses computer-generated three-dimensional models of the atoms and molecules in the sample to predict the effect of particle bombardment. All models under this category describe the target solid in terms of its constituent atoms and molecules and their interactions with one another. However, these models only take into account atomic forces (not electronic forces) and describe atomic behavior using classical mechanics (not quantum mechanics). Two specific examples of this type of model are:
1. The molecular dynamics model
2. The binary collision approximation.
Ionization Models
The ionization models of sputtering can be divided into two categories: theories that predict that ions are generated outside the target, and theories that predict that they are generated inside the target. In the theories describing ionization outside the target, the primary particle strikes the target and causes the emission of an excited atom or molecule, which relaxes by emitting an Auger electron, thus becoming an ion. Because no simple mathematical equation has been derived for this theory, it is of little practical use. For this reason, the ionization-inside-the-target models are used more often; it has also been shown that ionization occurs more often inside the target. Although there are many models that describe ionization within the target, two representative models of this type are the bond-breaking model and the local thermal equilibrium theory.
In the bond breaking model, the primary particle strikes the target and causes the heterolytic cleavage of a bond in the target. So, either an anion or a cation is emitted directly from the target surface. This is an important model to mention because it has useful implications. Stated simply, the yield of positive ions can be increased by the presence of electronegative atoms in the target, in the primary ion beam, or in the sample chamber in general. The reverse is also true- the negative ion yield may be increased by the presence of electropositive atoms.
The local thermal equilibrium theory can be described as an expansion of the bond breaking model. Here, the increase in yield of positive ions when the target is in the presence of electronegative atoms is said to be the result of the high potential barrier of the metal oxide which is formed. This results in a low probability of the secondary ion being neutralized by an electron, thus giving a high positive ion yield.
Instrumentation
Primary Ion Sources
The primary ions in a SIMS instrument (labeled “Primary ion source” in Figure \(15\)) are generated by one of three types of ion guns. The first type, called an electron bombardment plasma source, uses accelerating electrons (produced from a heated filament) to bombard an anode. If the energy of these electrons is two to three times higher than the ionization energy of the atom, ionization occurs. Once a certain number of ions and electrons are obtained, a plasma forms. Then, an extractor is used to make a focused ion beam from the plasma.
In the second type of source, called the liquid metal source, a liquid metal film flows over a blunt needle. When this film is subjected to a strong electric field, electrons are ejected from the atoms in the liquid metal, leaving them ionized. An extractor then directs the ions out of the ion gun.
The last source is called a surface ionization source. Here, atoms of low ionization energy are adsorbed onto a high work function metal, a combination that allows for the transfer of electrons from the surface atoms to the metal. When the temperature is increased, more atoms (or ions) leave the surface than adsorb onto it, and the proportion of ions leaving the surface increases relative to neutral atoms. Eventually, nearly all of the atoms that leave the surface are ionized and can be used as an ion beam.
The type of source used depends on the type of SIMS experiment which is going to be run as well as the composition of the sample to be analyzed. A comparison of the three different sources is given in Table \(2\).
Source Spot Size (µm) Brightness (A/m^2·sr) Energy Spread (eV) Ion Type
Electron Bombardment Plasma 1 10^4-10^7 <10 Ar+, Xe+, O2+
Liquid Metal 0.05 10^10 >10 Ga+, In+, Cs+
Surface Ionization 0.1 10^7 <1 Cs+
Table \(2\) A comparison of primary ion sources. Data from J.C. Vickerman, A. Brown, N.M. Reed, Secondary ion mass spectrometry: Principles and applications, Clarendon Press, Oxford, 1989.
Of the three sources, electron bombardment plasma has the largest spot size. Thus, this source has a high-diameter beam and does not have the best spatial resolution. For this reason, this source is commonly used for bulk analysis such as depth profiling. The liquid metal source is advantageous for imaging SIMS because it has a high spatial resolution (or low spot size). Lastly, the surface ionization source works well for dynamic SIMS (see above) because its very small energy spread allows for a uniform etch rate.
In addition to the ion gun type, the identity of the primary ion is also important. O2+ and Cs+ are commonly used because they enhance the positive or negative secondary ion yield, respectively. However, use of the inert gas plasma source is advantageous because it allows for surface studies without reacting with the surface itself. Using the O2+ plasma source allows for an increased output of positively charged secondary ions, but it will alter the surface that is being studied. Also, a heavy primary ion allows for better depth resolution because it does not penetrate as far into the sample as a light ion.
Sputtering
The sputter rate, or the number of secondary ions that are removed from the sample surface by bombardment by one primary ion, depends both on the properties of the target and on the parameters of the primary beam.
There are many target factors that affect the sputter rate. A few examples are the crystal structure and the topography of the target. Specifically, hexagonal close-packed crystals and rough surfaces give the highest sputter yields. There are many other properties of the target which affect sputtering, but they will not be discussed here.
As was discussed earlier, different primary ion sources are used for different SIMS applications. In addition to the source used, the manner in which the source is used is also important. First, the sputter rate can be increased by increasing the energy of the beam. For example, using a beam of energy greater than 10 keV gives a maximum of 10 sputtered particles per primary ion impact. Second, increasing the primary ion mass will also increase the secondary ion yield. Lastly, the angle of incidence is also important. It has been found that a maximum sputter rate can be achieved if the angle of impact is 70° relative to the surface normal.
Mass Spectrometers
The detector which measures the amount and type of secondary ions sputtered from the sample surface is a mass spectrometer. See Figure \(15\) for a diagram that shows where the mass spectrometer is relative to the other instrument components. The type of analysis one wishes to do determines which type of spectrometer is used. Both dynamic and static SIMS usually use a magnetic sector mass analyzer because it has a high mass resolution. Static SIMS (as well as imaging SIMS) may also use a time-of-flight system, which allows for high transmission. A description of how each of these mass spectrometers works and how the ions are detected can be found elsewhere (see https://cnx.org/contents/kl4gTdhf@1/Principles-of-Mass-Spectrometry-and-Modern-Applications).
Samples
SIMS can be used to analyze the surface and about 30 µm below the surface of almost any solid sample and some liquid samples. Depending on the type of SIMS analysis chosen, it is possible to obtain both qualitative and quantitative data about the sample.
Technique Selection
There are three main types of SIMS experiments: Dynamic SIMS, static SIMS, and imaging SIMS.
In dynamic SIMS analysis, the target is sputtered at a high rate. This allows for bulk analysis when the mass spectrometer is scanned over all mass ranges to get a mass spectrum and multiple measurements in different areas of the sample are taken. If the mass spectrometer is set to rapidly analyze individual masses sequentially as the target is eroded rapidly, it is possible to see the depth at which specific atoms are located up to 30 µm below the sample surface. This type of analysis is called a depth profile. Depth profiling is very useful because it is a quantitative method- it allows for the calculation of concentration as a function of depth so long as ion-implanted standards are used and the crater depth is measured. See the previous section for more information on ion-implants.
SIMS may also be used to obtain an image in a way similar to SEM while giving better sensitivity than SEM. Here, a finely focused ion beam (rather than an electron beam, as in SEM) is raster-scanned over the target surface and the resulting secondary ions are analyzed at each point. Using the identity of the ions at each analyzed spot, an image may be assembled based on the distributions of these ions.
In static SIMS, the surface of the sample is eroded very slowly so that the ions which are emitted are from areas which have not already been altered by the primary ion. By doing this, it is possible to identify the atoms and some of the molecules just on the surface of the sample.
An example that shows the usefulness of SIMS is the analysis of fingerprints with this instrument. Many other forms of analysis, such as GC-MS, have been employed to characterize the chemical composition of fingerprints. This is important in forensics to determine fingerprint degradation, to detect explosives or narcotics, and to help determine the age of the person who left the print by analyzing differences in sebaceous secretions. Compared to GC-MS, SIMS is a better choice of analysis because it is less destructive: in order to do GC-MS, the fingerprint must be dissolved, whereas SIMS is a solid state method. Also, because SIMS only erodes through a few monolayers, the fingerprint can be kept for future analysis and for record-keeping. Additionally, SIMS depth profiling allows the researcher to determine the order in which substances were touched. Lastly, an image of the fingerprint can be obtained using the imaging SIMS analysis.
Sample Preparation
As with any other instrumental analysis, SIMS does require some sample preparation. First, rough samples may require polishing because the uneven texture will be maintained as the surface is sputtered. Because surface atoms are the analyte in imaging and static SIMS, polishing is obviously not required. However, it is required for depth profiling. Without polishing, layers beneath the surface of the sample will appear to be mixed with the upper layer in the spectrum, as can be seen in Figure \(16\).
But, polishing before analysis does not necessarily guarantee even sputtering. This is because different crystal orientations sputter at different rates. So, if the sample is polycrystalline or has grain boundaries (this is often a problem with metal samples), the sample may develop small cones where the sputtering is occurring, leading to an inaccurate depth profile, as is seen in Figure \(17\).
Analyzing insulators using SIMS also requires special sample preparation as a result of electrical charge buildup on the surface (since the insulator has no conductive path to diffuse the charge through). This is a problem because it distorts the observed spectra. To prevent surface charging, it is common practice to coat the sample with a conductive layer such as gold.
Once the sample has been prepared for analysis, it must be mounted to the sample holder. There are a few methods of doing this. One way is to place the sample on a spring-loaded sample holder which pushes the sample against a mask. This method is advantageous because the researcher does not have to worry about adjusting the sample height for different samples (see below to find out why sample height is important). However, because the mask is on top of the sample, it is possible to accidentally sputter the mask. Another method used to mount samples is to simply glue them to a backing plate using silver epoxy. This method requires drying under a heat lamp to ensure that all volatiles are evaporated off the glue before analysis. Alternatively, the sample can be pressed into a soft metal like indium. The last two methods are especially useful for mounting insulating samples, since they provide a conductive path to help prevent charge buildup.
When loading the mounted sample into the instrument, it is important that the sample height relative to the instrument lens is correct. If the sample is either too close or too far away, the secondary ions will either not be detected or they will be detected at the edge of the crater being produced by the primary ions (see Figure \(18\)). Ideally, the secondary ions that are analyzed should be those resulting from the center of the primary beam where the energy and intensity are most uniform.
Standards
In order to do quantitative analysis using SIMS, it is necessary to use calibration standards since the ionization rate depends on both the atom (or molecule) and the matrix. These standards are usually in the form of ion implants which can be deposited in the sample using an implanter or using the primary ion beam of the SIMS (if the primary ion source is mass filtered). By comparing the known concentration of implanted ions to the number of sputtered implant ions, it is possible to calculate the relative sensitivity factor (RSF) value for the implant ion in the particular sample. By comparing this RSF value to the value in a standard RSF table and adjusting all the table RSF values by the difference between them, it is possible to calculate the concentrations of other atoms in the sample. For more information on RSF values, see above.
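The bookkeeping behind an implant standard can be summarized as follows: the local concentration is C = RSF × (I_implant / I_matrix), and integrating C over the measured crater depth must return the known implanted dose. A minimal Python sketch of that calculation is given below; the function and variable names are hypothetical, and a real analysis must measure the crater depth and proceed cycle-by-cycle exactly as described above.

```python
def rsf_from_implant(dose_cm2, implant_counts, matrix_counts, crater_depth_cm):
    """Estimate an RSF (atoms/cm^3) from an ion-implanted standard.

    dose_cm2        : known implanted fluence (atoms/cm^2)
    implant_counts  : secondary-ion counts of the implant ion, one value per sputter cycle
    matrix_counts   : matrix-ion counts for the same cycles
    crater_depth_cm : measured crater depth after profiling (cm)
    """
    dz = crater_depth_cm / len(implant_counts)              # depth sputtered per cycle
    integral = sum(i / m for i, m in zip(implant_counts, matrix_counts)) * dz
    return dose_cm2 / integral                              # dose = RSF * integral of (I/I_matrix) dz

def concentration_profile(rsf, implant_counts, matrix_counts):
    """Convert a raw intensity-ratio profile into concentrations (atoms/cm^3)."""
    return [rsf * i / m for i, m in zip(implant_counts, matrix_counts)]
```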
When choosing an isotope to use for ion implantation, it is important to take into consideration possible mass interferences. For example, the molecular ion 11B16O has the same nominal mass (27) as 27Al, so the two species will interfere with each other's ion intensity in the spectra. Therefore, one must choose an ion implant that does not have the same mass as any other species of interest in the sample.
Also, the depth at which the implant is deposited is important. The implanted ions must lie below the equilibration depth; above this depth, chaotic sputtering occurs until a sputter equilibrium is reached. However, care should be taken to ensure that the implanted ions do not pass beyond the layer of interest in the sample: if the matrix changes, the implanted ions will no longer sputter at the same rate, causing the calculated concentrations to be inaccurate.
Matrix Effects
In SIMS, matrix effects are common and originate from changes in the ionization efficiency (the number of ionized species compared to the total number of sputtered species) and the sputtering yield. One of the main causes of matrix effects is the primary beam. As was discussed earlier, electronegative primary ions increase the number of positively charged secondary ions, while electropositive primary ions increase the number of negatively charged secondary ions. Matrix effects can also be caused by species present in the sample. The consequences of these matrix effects depend on the identity of the affecting species and the composition of the sample. To correct for matrix effects, it is necessary to use standards and compare the results with RSFs (see above).
Detection Limits
For most atoms, SIMS can accurately detect down to a concentration of 1 ppm. For some atoms, a concentration of 10 ppb may be achieved. The detection limit in this instrument is set by the count rate (how many ions may be counted per second) rather than by a limitation due to the mass of the ion. So, to decrease the detection limit, the sample can be sputtered at a higher rate.
Sensitivity
The sensitivity of SIMS analysis depends on the element of interest, the matrix the element is in, and the primary ion used. The sensitivity of SIMS towards a particular ion may easily be determined by looking at an RSF table. For example, an RSF table for an oxygen primary ion and positive secondary ions shows that the alkali metals have the highest sensitivity (they have low RSF values). This makes sense, since these atoms have the lowest ionization energies and are the easiest to ionize to cations. Similarly, the RSF table for a cesium primary ion beam and negative secondary ions shows that the halogens have the highest sensitivity. Again, this makes sense since the halogens have the highest electron affinities and accept electrons easily.
Data Interpretation
Three types of spectra can be obtained from a SIMS analysis. From static SIMS, a mass spectrum is produced. From dynamic SIMS, a depth profile or mass spectrum is produced. And, not surprisingly, an image is produced from imaging SIMS.
Mass Spectra
As with a typical mass spectrum, the mass to charge ratio (m/z) is compared to the ion intensity. However, because SIMS is capable of a dynamic range of 9 orders of magnitude, the intensity of the SIMS mass spectra is displayed on a logarithmic scale. From this data, it is possible to observe isotopic data as well as molecular ion data and their relative abundances on the sample surface.
Depth Profile
A depth profile displays the intensity of one or more ions with respect to the depth (or, equivalently, time). Caution should be taken when interpreting this data- if ions are collected off the wall of the crater rather than from the bottom, it will appear that the layer in question runs deeper in the sample than it actually does.
Matrix Assisted Laser Desorption Ionization (MALDI)
Development of MALDI
As alluded to in previous sections, laser desorption (LD) was originally developed to produce ions in the gas phase. This is accomplished by pulsing a laser on the sample surface to ablate material, causing ionization and vaporization of sample particles. However, the probability of attaining a useful mass spectrum is highly dependent on the properties of the analyte, and the masses observed in the spectrum were often products of molecular fragmentation when the molecular weight exceeded about 500 Da. Clearly, this was not optimal instrumentation for analyzing large biomolecules and bioinorganic compounds, which do not ionize well and were degraded during the process. Matrix-assisted laser desorption ionization (MALDI) was developed and alleviated many issues associated with LD techniques. The MALDI technique allows proteins with masses up to 300,000 Da to be detected. This is important to bioinorganic chemistry when visualizing products resulting from catalytic reactions, metalloenzyme modifications, and other applications.
MALDI as a process decreases the amount of damage to the sample by protecting the individual analytes within a matrix (more information on matrices below). The matrix itself absorbs much of the energy introduced by the laser during the pulsing action. The energy absorbed by the matrix is subsequently transferred to the analyte (Figure \(19\)). Once energized, the analyte is ionized and released into a plume of ions containing common cations (Na+, K+, etc.), matrix ions, and analyte ions. These ions then enter the flight tube where they are sent to the detector. Different instrumental modes adjust for differences in ion flight time (Figure \(19\)). The MALDI technique is also more sensitive and universal, since readjustment to match the absorption frequency of each analyte is not necessary due to the matrix absorption. Many of the commonly used matrices have similar wavelength absorptions (Table \(3\)).
Matrix Wavelength Application Structure
Cyano-4-hydroxycinnamic acid UV: 337nm, 353 nm Peptides
6-Aza-2-thiothymine UV: 337 nm, 353 nm Proteins, peptides, non-covalent complexes
k,m,n-Di(tri)hydroxy-acetophenone UV: 337 nm, 353 nm Proteins, peptides, non-covalent complexes
2,5-Dihydroxybenzoic acid (requires 10% 2-hydroxy-5-methoxybenzoic acid) UV: 337 nm, 353 nm Proteins, peptides, carbohydrates, synthetic polymers
Sinapinic acid UV: 337 nm, 353 nm Proteins, peptides
Nicotinic acid UV: 266 nm Proteins, peptides, adduct formation
Succinic acid IR: 2.94 µm, 2.79 µm Proteins, peptides
Glycerol IR: 2.94 µm, 2.79 µm Proteins, peptides
Table \(3\) Table of different small molecules used as MALDI matrices.
Collection of MALDI Spectra
The process of MALDI takes place in 2 steps:
1. Sample preparation.
2. Sample ablation
Sample Preparation
The sample for analysis is combined with a matrix (a solvent containing small organic molecules that have a strong absorbance at the laser wavelength) and added to the MALDI plate (Figure \(18\)). The sample is then dried to the surface of the plate before it is analyzed, resulting in the matrix doped with the analyte of interest as a "solid solution". Figure \(20\) shows the loading of a peptide in water in cyano-4-hydroxycinnamic acid matrix.
Prior to insertion of the plate into the MALDI instrument, the samples must be fully dried. The MALDI plate with the dry samples is placed on a carrier and is inserted into the vacuum chamber (Figure \(21\)a-b). After the chamber is evacuated, it is ready to start the step of sample ablation.
After the sample is loaded into the instrument, the instrument camera will activate to show a live feed from inside of the chamber. The live feed allows the controller to view the location where the spectrum is being acquired. This becomes especially important when the operator manually fires the laser pulses.
Collection of a Spectrum
When the sample is loaded into the vacuum chamber of the instrument, there are several options for taking a mass spectrum. First, there are several modes for the instrument, two of which are described here: axial and reflectron modes.
Axial Mode
In the axial (or linear) mode, only a very short ion pulse is required before the ions go down the flight tube and hit the detector. This mode is often used when exact accuracy is not required since the mass accuracy has an error of about +/- 2-5%. Sources of these errors are found in the arrival time of different ions through the flight tube to the detector. Errors in the arrival time are caused by the difference in initial velocity with which the ions travel based on their size. The larger ions have a lower initial velocity, thus they reach the detector after a longer period of time. This decreases the mass detection resolution.
Reflectron Mode
In the reflectron (“ion mirror”) mode, ions are refocused before they hit the detector. The reflectron itself is actually a set of ring electrodes that create an electric field that is constant near the end of the flight tube. This causes the ions to slow and reverse direction towards a separate detector. Smaller ions are then brought closer to large ions before the group of ions hit the detector. This assists with improving detection resolution and decreases accuracy error to +/- 0.5%.
Example of MALDI Application
While MALDI is used extensively in analyzing proteins and peptides, it is also used to analyze nanomaterials. The following example describes the analysis of fullerene analogues synthesized for a high performance conversion system for solar power. The fullerene C60 is a spherical carbon molecule consisting of 60 sp2 carbon atoms, the properties of which may be altered through functionalization. A series of tert-butyl-4-C61-benzoate (t-BCB) functionalized fullerenes were synthesized and isolated. MALDI was not used extensively as a method for observing activity, but instead was used as a confirmatory technique to determine the presence of the desired product. Three fullerene derivatives were synthesized (Figure \(24\)). The identity and number of functional groups were determined using MALDI (Figure \(25\)).
Surface-Assisted Laser Desorption/Ionization Mass Spectrometry (SALDI-MS)
Surface-assisted laser desorption/ionization mass spectrometry, known as SALDI-MS, is a soft mass spectrometry technique capable of analyzing all kinds of small organic molecules, polymers, and large biomolecules. The essential principle of this method is similar to that of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) (see http://cnx.org/contents/925e204d-d85...3e4d60057b37@1), but the organic matrix commonly used in MALDI is replaced by the surface of a substrate, usually an inorganic compound. This makes SALDI a matrix-free ionization technique that avoids the interference of matrix molecules.
SALDI is considered to be a three-step process shown in Figure \(26\).
• Samples are mixed with the substrate, which provides a large surface area to support the sample molecules.
• The samples are irradiated with IR or UV laser pulses; the energy of the laser pulses is absorbed by the substrate and transferred to the sample molecules.
• Desorption and ionization processes are initiated, producing ions that are accelerated into the analyzer.
Since the bulk of energy input goes to substrates instead of the sample molecules, it is thought to be a soft ionization technique useful in chemistry and chemical biology fields.
The most important characteristic of the substrate in SALDI is a large surface area. In the past 30 years, efforts have been made to explore novel substrate materials that increase the sensitivity and selectivity in SALDI-MS. Depending on the substrate compound being used, the interaction between the substrate material and sample molecules can be covalent, non-covalent (such as the hydrophobic effect), bio-specific (such as the recognition between biotin and avidin, or between antigens and antibodies), or electrostatic. With the unique characteristics stated above, SALDI is able to combine the advantages of both hard and soft ionization techniques. On one hand, low molecular weight (LMW) molecules can be analyzed and identified by SALDI-MS, which resembles the function of most hard ionization techniques. On the other hand, molecular or quasi-molecular ions dominate the spectra, as is commonly seen in spectra obtained by soft ionization techniques.
History
The SALDI technique actually emerged from its well-known rival technique, MALDI. The development of soft ionization techniques, mainly MALDI and ESI, enabled chemists and chemical biologists to analyze large polymers and biomolecules using mass spectrometry. This is because the soft ionization process suppresses the extensive fragmentation that complicates spectra, so the resultant ions are dominantly molecular or quasi-molecular ions. In other words, the tolerance of impurities is increased since the spectra become highly simplified. While this was effective in determining the molecular weight of analytes, matrix peaks also appear in the low mass range, which seriously interferes with the analysis of LMW analytes. As a result, the SALDI method emerged to resolve this problem by replacing the matrix with a surface that is essentially stationary.
The original idea of SALDI was raised by Tanaka (Figure \(27\)) in 1988. Ultra-fine cobalt powders with an average diameter of about 300 Å, mixed into the sample, were responsible for "rapid heating" due to their high photo-absorption and low heat capacity. With a large surface area, the cobalt powders were able to conduct heat to large numbers of surrounding glycerol and analyte molecules, which resulted in a thermal desorption/ionization mechanism. The upper mass limit was increased up to 100 kDa, as shown in Figure \(28\) for the analysis of lysozyme from chicken egg white.
The low mass range did not attract much attention at the beginning, and the concept of "surface-assisted" ionization was not proposed until Sunner (Figure \(29\)) and co-workers reported a study on graphite SALDI in 1995, which was the first time the term "SALDI" was used. They obtained mass spectra of both proteins and LMW analytes by irradiating mixtures of 2-150 μm graphite particles and solutions of analytes in glycerol. Although fragmentation of the LMW glycerol molecules was relatively complicated (Figure \(30\)), this was still considered a significant improvement in ionizing small molecules by soft ionization methods.
Despite the breakthrough mentioned above, SALDI did not initially attract wide interest among chemists. Apart from its drawbacks in the upper mass limit for the analysis of large molecules, the sensitivity was far from satisfactory compared to hard ionization techniques for testing LMW molecules. This situation changed once nanomaterials were introduced as substrates, especially with the successful development of desorption/ionization on porous silicon (DIOS), shown in Figure \(31\). In fact, the majority of research on SALDI-MS has focused on exploiting novel nanomaterial substrates, aiming at further broadening the mass range, improving the reproducibility, enhancing the sensitivity, and extending the categories of compounds that can be analyzed. So far, a variety of nanomaterials have been utilized in SALDI-MS, including carbon-based, metal-based, and semiconductor-based nanomaterials.
Mechanism of Desorption and Ionization
As a soft ionization technique, SALDI is expected to produce molecular or quasi-molecular ions in the final mass spectra. This requires the ionization process to be both effective and controllable: sufficient sample molecules must be ionized while further fragmentation is mostly avoided.
While the original goal mentioned above has been successfully accomplished for years, the detailed study of the desorption and ionization mechanism is still one of the most popular and controversial research areas of SALDI at present. It is generally agreed that the substrate material plays a significant role in both activating and protecting the analyte molecules. A schematic picture describing the entire process is shown in Figure \(33\). Energy input from the pulsed laser is largely absorbed by the substrate material, which is followed by complicated energy transfer from the substrate material to the adsorbed analyte molecules. As a result, both thermal and non-thermal desorption can be triggered, and the specific desorption and ionization process differs greatly between different modes of SALDI experiments.
The mechanism for porous silicon surface as a SALDI substrate has been widely studied by researchers. In general, the process can be subdivided into the following steps:
1. Adsorption of neutral analyte molecules takes place through the formation of hydrogen bonds with surface silanol groups;
2. Electronic excitation of the substrate under the influence of the laser pulse generates a free electron/“hole” pair. This separation causes enrichment of positive charges near the surface layer; as a result, the acidity of the silanol groups increases and proton transfer to analytes becomes easier;
3. Analyte ions are thermally activated and thus dissociated from the surface.
When no associated proton donor is present in the vicinity of analyte molecules, desorption might occur without ionization. Subsequently, the desorbed analyte molecule is ionized in the gas phase by collision with incoming ions.
Signal Enhancement Factors on SALDI Substrates
Since it is the active surface responsible for adsorption, desorption, and ionization of analyte molecules that defines the technique, the surface chemistry of the substrate material is undoubtedly crucial for SALDI performance. It is, however, difficult to draw general conclusions, because the affinity between different classes of substrates and analytes is considerably varied. Basically, the interaction between those two components affects the trapping and releasing of the analyte molecules, as well as the electronic surface state of the substrate and the energy transfer efficiency.
Another important aspect is the physical properties of the substrate, which can alter the desorption and ionization process directly, especially for the thermally activated pathway. This is closely related to the rapid temperature increase at the substrate surface. Those properties include the optical absorption coefficient, heat capacity, and heat conductivity (or heat diffusion rate). First, a higher optical absorption coefficient enables the substrate to absorb more of the laser energy and generate more heat. Moreover, a lower heat capacity usually leads to a larger temperature increase for the same amount of heat. In addition, a lower heat conductivity helps the substrate maintain a high local temperature, which further results in a higher temperature peak. Therefore, thermal desorption and ionization can occur more rapidly and effectively.
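To zeroth order, the argument above reduces to the calorimetric estimate ΔT ≈ Q/(m·cp): for a given absorbed pulse energy Q, a smaller heat capacity (or a smaller effectively heated mass, which is what a low heat conductivity produces) gives a larger temperature jump. The sketch below only illustrates this scaling; the numbers are entirely hypothetical and not taken from the text.

```python
def temperature_rise(absorbed_energy_J, heated_mass_kg, specific_heat_J_per_kgK):
    """Zeroth-order estimate of the peak temperature rise of the heated substrate volume."""
    return absorbed_energy_J / (heated_mass_kg * specific_heat_J_per_kgK)

# Hypothetical example: 1 microjoule absorbed in ~1 ng of substrate with cp = 700 J/(kg K)
print(temperature_rise(1e-6, 1e-12, 700.0))   # ~1400 K rise; halving cp doubles the rise
```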
Instrumentation
The instrumentation used in SALDI, shown in Figure \(34\), is to a large extent similar to that used in MALDI. It contains a laser source which generates the pulsed laser that excites the sample mixture, and a sample stage that holds the mixture of substrate material and analytes. The mass analyzer and ion detector are usually on the other side, allowing the ions to pass through and be separated and detected based on their different m/z values. Recent progress has incorporated a direct analysis in real time (DART) ion source into the SALDI-MS system, which makes it possible to perform the analysis under ambient conditions. Figure \(35\) shows this ambient SALDI-MS method.
Examples of Nanomaterials Used For Analysis of LMW Analytes in SALDI-MS
Porous Silicon as a Substrate Material
Porous silicon with a large surface area can be used to trap analyte molecules for a matrix-free desorption and ionization process. More interestingly, a large ultraviolet absorption coefficient was found for this porous material, which also improves the ionization performance. It has been reported that using porous silicon as the substrate in SALDI-MS allows working at femtomole and attomole levels of analytes, including peptides, caffeine, an antiviral drug molecule (WIN), reserpine, and N-octyl-β-D-glucopyranoside. Compared to conventional MALDI-MS, DIOS-MS (the specific type of SALDI used in this research) successfully eliminated the matrix interference and displayed a much stronger quasi-molecular peak (MH+), as can be observed in Figure \(36\). What is more, chemical modification of the porous silicon was able to further optimize the ionization characteristics.
Graphene as a Surface Material
Graphene is a popular carbon nanomaterial discovered in 2004. It has a large surface area that can effectively trap analyte molecules. In addition, the efficiency of desorption/ionization for analytes on a layer of graphene is enhanced by its simple monolayer structure and unique electronic properties. Polar compounds, including amino acids, polyamines, anticancer drugs, and nucleosides, can be successfully analyzed. Nonpolar molecules can also be analyzed with high resolution and sensitivity due to the hydrophobic nature of graphene itself. Compared with a conventional matrix, graphene exhibits a high desorption/ionization efficiency for nonpolar compounds. The graphene functions as a substrate to trap analytes, and it transfers energy to the analytes upon laser irradiation, which allows the analytes to be readily desorbed/ionized while the interference of a matrix is eliminated. It has been demonstrated that the use of graphene as a substrate material avoids the fragmentation of analytes and provides good reproducibility and a high salt tolerance, underscoring the potential application of graphene as a matrix for MALDI-MS analysis of practical samples in complex sample matrixes. It has also been shown that the use of graphene as an adsorbent for the solid-phase extraction of squalene can greatly improve the detection limit.
Combination with GC
Gas-phase SALDI-MS analysis has a relatively high ionization efficiency, which leads to high sensitivity. In 2009, gas chromatography (GC) was first used with SALDI-MS, where the SALDI substrate was amorphous silicon and the analytes were N-alkylated phenylethylamines. Detection limits were in the range of attomoles, and further improvements are expected. The combination with GC is expected to expand the use of SALDI-MS even further, allowing SALDI to be applied to the separation and identification of more complex samples. The instrumental setup is shown in Figure \(37\).
Differential Electrochemical Mass Spectrometry
In the study of electrochemistry, it had always been a challenge to obtain immediate and continuous detection of electrochemical products, because only limited amounts form at the surface of the electrode, until the development of differential electrochemical mass spectrometry. Scientists initially tested the idea by combining a porous membrane and mass spectrometry for product analysis in the study of oxygen generation from HClO4 using a porous electrode in 1971. In 1984, a similar experiment was performed using a porous Teflon membrane covered with about 100 μm of lacquer at the interface between the electrolyte and the vacuum system. Compared to the earlier experiment, this setup demonstrated a vacuum system with a much faster time response, giving nearly immediate detection of volatile electrochemical reaction products, with a sensitivity high enough to detect as little as "one monolayer" at the electrode. In summary, the experiment demonstrated in 1984 not only showed continuous sample detection by mass spectrometry but also gave the rates of formation, which distinguished it from the technique reported in 1971. Hence, this method was called differential electrochemical mass spectrometry (DEMS). During the past couple of decades, the technique has evolved from using a classic electrode to a rotating disc electrode (RDE), which provides a more homogeneous and faster transport of reacting species to the surface of the electrode.
Described in basic terms, differential electrochemical mass spectrometry is a characterization technique that combines an electrochemical half-cell experiment with mass spectrometry. It uses a non-wetting membrane to separate the aqueous electrolyte from the gaseous or volatile species, which permeate through the membrane and are ionized and detected in the mass spectrometer using a continuous, two-stage vacuum system. This analytical method can detect gaseous or volatile electrochemical reactants, reaction products, and even reaction intermediates. The instrument consists of three major components: the electrochemical half-cell, a PTFE (polytetrafluoroethylene) membrane interface, and a quadrupole mass spectrometer (QMS), which is part of the vacuum system.
DEMS Operations
The entire assembly of the instrument is shown in Figure \(38\); it consists of three major components: an electrochemical half-cell, a PTFE membrane interface, and the quadrupole mass spectrometer. In this section, each component will be explained and its functionality explored, with additional information provided at the end of the section. The PTFE membrane is a micro-porous membrane that separates the aqueous electrolyte from the volatile species that are drawn into the high vacuum portion. Under the high vacuum suction, the gaseous or volatile species permeate through the membrane driven by the differential pressure, while the aqueous material remains on the surface due to the hydrophobic nature of the membrane. The selection of the membrane material is very important to maintain both the hydrophobicity and proper diffusion of volatile species. The species that permeate to the QMS are monitored and measured, and the kinetics of their formation can be determined. Depending on the operating conditions, different vacuum pumps might be required.
Electrochemical Cells
First major component of the DEMS instrument is the design of electrochemical cells. There are many different designs that have been developed for the past several decades, depending on the types of electrochemical reactions, the types and sizes of electrodes. However, only the classic cell will be discussed in this chapter.
The DEMS method was first demonstrated using the classical cell. A conventional setup of the electrochemical cell is shown in Figure \(39\). The powdered electrode material is deposited on the porous membrane to form the working electrode, shown as "Working Electrode Material" in Figure \(39\). In the demonstration by Wolter and Heitbaum, the electrode was prepared by depositing small Pt particles onto the membrane by painting a lacquer; later experiments evolved to sputtering an electro-catalyst layer to give a more homogeneous surface. The aqueous cell electrolyte is contained by an upside-down glass body with a vertical tunnel opening onto the PTFE membrane. The working electrode material lies above the PTFE membrane, where it is supported mechanically by a stainless steel frit inside a vacuum flange. Both the working electrode material and the PTFE membrane are compressed between the vacuum casting and a PTFE spacer, a ring that prevents the electrolyte from leaking. The counter electrode (CE) and reference electrode (RE), made from platinum wire, are placed on top of the working electrode material to create the electrical contact. One of the main advantages of the classical design is a fast response time, with a high collection efficiency of "0.5 for lacquer and 0.9 with the sputter electrode". However, this method poses certain difficulties. First, the electrolyte species must be adsorbed on the working electrode before they permeate through the membrane; because of the limited adsorption rate, the concentration at the surface of the electrode will be lower than in the bulk. Second, the volatile species must be adsorbed onto the working electrode and then evaporate through the membrane, so any difference in the rates of adsorption and evaporation will create a shift in equilibrium. Third, this method is limited in the types of material that can be deposited on the surface, excluding, for example, single crystals or even some polycrystalline electrode surfaces. Lastly, the way the RE is positioned could potentially introduce impurities into the system, which would interfere with the experiment.
Membrane Interface
The PTFE membrane is placed between the aqueous electrolyte cell and the high vacuum system on the other end. It acts as a barrier that prevents the aqueous electrolyte from passing through, while its selectivity allows the vaporized electrochemical species to be transported to the high vacuum side, a process similar to the vacuum membrane distillation shown in Figure \(41\). In order to prevent the aqueous solution from penetrating through the membrane, the surface of the membrane must be hydrophobic, i.e., it must repel water or aqueous fluid. At each pore there is therefore a vapor-liquid interface, where the liquid remains on the surface while the vapor penetrates into the membrane. Transport of the material in the vapor phase is then driven by the pressure difference created by the vacuum on the other end of the membrane. The pore size is therefore crucial in controlling both the hydrophobic behavior and the transfer rate through the membrane. When the pore size is less than 0.8 μm the membrane remains hydrophobic; this value is determined from the surface tension of the liquid, the contact angle, and the applied pressure. Therefore, a membrane with relatively small pore sizes and a large pore distribution is desired. In general, the membrane pores used are “typically 0.02 μm in size with thickness between 50 and 110 μm”. Other materials such as polypropylene and polyvinylidene fluoride (PVDF) (Figure \(41\)) have been tested; however, PTFE (Figure \(42\)) has demonstrated better durability and chemical resistance in the electrochemical environment. PTFE is therefore the better candidate for this application, and it is usually laminated onto polypropylene for enhanced mechanical properties. Despite the hydrophobic nature of PTFE, a significant amount of aqueous material penetrates through the membrane because of the large pressure drop. Therefore, correct sizing of the vacuum pumps is crucial to maintain the flux of gas transported to the mass spectrometer at the desired pressure. More information regarding the vacuum system is given below. In addition, a capillary inlet has been used in place of the membrane; however, this method will not be discussed here.
Vacuum and QMS
A correctly sized vacuum system ensures that the maximum amount of vapor is transported across the membrane. When the pressure drop is not adequate, part of the vapor may remain on the aqueous side, as shown in Figure \(43\). However, when the pressure drop is too large, too much aqueous electrolyte will be pulled from the liquid-vapor interface, increasing the load on the vacuum pumps. Improperly sized pumps can suffer reduced efficiency and shorter lifetimes if such problems are not corrected immediately. In addition, in order for the mass spectrometer to operate properly, the gas flux needs to be maintained at a certain level. The vacuum pumps should therefore provide a steady gas flux of around 0.09 mbar·L/s per cm2 of membrane, consisting mostly of the gaseous or volatile species that will be sent to the mass spectrometer for analysis. In addition, due to the limited pumping speed of a single vacuum pump, a vacuum system with two or more pumps may be needed. For example, if a flux of 0.09 mbar·L/s per cm2 is required and a pump with a pumping speed of 300 L/s operates at 10-5 mbar, the acceptable membrane geometrical area is 0.033 cm2. To increase the membrane area, additional pumps are required in order to achieve the same gas flux.
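As a rough numerical illustration of this sizing argument, the short Python sketch below re-uses the example numbers quoted above, assuming the target flux is expressed per unit membrane area (mbar·L/s per cm2). It is only an illustration of the arithmetic, not part of any DEMS control software.

```python
# Sketch of the pump-sizing estimate discussed above.
# Example values taken from the text; flux assumed to be per unit membrane area.

target_flux = 0.09         # mbar.L/s per cm2, required by the mass spectrometer
pump_speed = 300.0         # L/s, volumetric pumping speed of the vacuum pump
operating_pressure = 1e-5  # mbar, pressure at which the pump operates

# Throughput the pump can remove at this pressure (mbar.L/s)
throughput = pump_speed * operating_pressure

# Largest membrane geometrical area that still sustains the target flux
max_area = throughput / target_flux   # cm2

print(f"Pump throughput: {throughput:.2e} mbar.L/s")
print(f"Maximum membrane area: {max_area:.3f} cm2")   # ~0.033 cm2, as quoted above
```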
Additional Information
There are several other analytical techniques, such as cyclic voltammetry, potential step, and galvanic step, that can be combined with a DEMS experiment. Cyclic voltammetry can provide both quantitative and qualitative results using the potential dependence. As a result, both the ion current of the species of interest and the faradaic electrode current (the current generated by the reduction or oxidation of some chemical substance at an electrode) are recorded when combining cyclic voltammetry and DEMS.
Applications
The lack of commercialization of this technique has limited it to academic research. The largest field of application of DEMS is electro-catalytic reactions. It is also used in fuel cell research, detoxification reactions, electrochemical gas sensors, and more fundamental research such as the decomposition of ionic liquids.
Fuel Cell Differential Electrochemical Mass Spectrometry: Ethanol Electro-oxidation
The ethanol oxidation reaction was studied using alkaline membrane electrode assemblies (MEAs), constructed from a nanoparticle Pt catalyst and an alkaline polymeric membrane. DEMS was used to study the mechanism of the ethanol oxidation reaction on the Pt-based catalysts. The relevant products of the oxidation reaction are carbon dioxide, acetaldehyde, and acetic acid. However, carbon dioxide and acetaldehyde have the same molecular weight, 44 g/mol. One approach is to monitor characteristic fragments: ionized CO22+ at m/z = 22 and COH+ at m/z = 29 were used. Differential electrochemical mass spectrometry can detect volatile products of the electrochemical reaction; however, detection can vary with solubility or boiling point. CO2 is very volatile, but it is also soluble in water; if KOH is present, DEMS will not detect any CO2 traces. Therefore, all extra alkaline impurities should be removed before measurements are taken. The electrochemical characteristics can also be measured under various conditions, with examples shown in Figure \(43\). In addition, the CCE (CO2 current efficiency) was measured at different potentials. Using the CCE, the study concluded that ethanol undergoes more complete oxidation with the alkaline MEA than with an acidic MEA.
Studies on the Decomposition of Ionic Liquids
Ionic liquids (IL) have several properties, such as high ionic conductivity, low vapor pressure, and high thermal and electrochemical stability, which make them great candidates for battery electrolytes. Therefore, it is important to have a better understanding of their stability and of the products formed during decomposition. DEMS is a powerful method that can provide online detection of the volatile products; however, it runs into problems with the high viscosity of ILs and low permeability due to the size of the molecules. Therefore, researchers modified the traditional DEMS setup; the modified method makes use of the low vapor pressure of ILs by placing the electrochemical cell directly into the vacuum system. This experiment shows that the technique can be designed for very specific applications and can be modified easily.
Conclusion
The DEMS technique can provide on-line detection of products of electrochemical reactions both analytically and kinetically. In addition, the results are delivered with high sensitivity, and both products and by-products can be detected as long as they are volatile. It can be easily assembled in the laboratory environment. For the past several decades, this technique has demonstrated continued development and has delivered good results for many applications such as fuel cells and gas sensors. However, this technique has its limitations. Many factors need to be considered when designing the system, such as the half-cell electrochemical reaction and the adsorption rate. Due to these constraints, the type of membrane should be selected and the pumps sized accordingly. Therefore, this characterization method is not one size fits all and will need to be modified based on the experimental parameters. The next step in the development of DEMS is therefore not only to improve its capabilities, but also to extend its use beyond the academic laboratory.
• 5.1: Dynamic Headspace Gas Chromatography Analysis
Gas chromatography (GC) is a very commonly used chromatographic technique in analytical chemistry for separating and analyzing compounds that are gaseous or can be vaporized without decomposition. Because of its simplicity, sensitivity, and effectiveness in separating components of mixtures, gas chromatography is an important tool in chemistry. It is widely used for quantitative and qualitative analysis of mixtures, and for the purification of compounds.
• 5.2: Gas Chromatography Analysis of the Hydrodechlorination Reaction of Trichloroethene
Trichloroethene (TCE) is a widely spread environmental contaminant and a member of the class of compounds known as dense non-aqueous phase liquids (DNAPLs). Pd/Al2O3 catalyst has shown activity for the hydrodechlorination (HDC) of chlorinated compounds.
• 5.3: Temperature-Programmed Desorption Mass Spectroscopy Applied in Surface Chemistry
The temperature-programmed desorption (TPD) technique is often used to monitor surface interactions between adsorbed molecules and a substrate surface. Utilizing the dependence on temperature, it is possible to discriminate between processes with different activation parameters, such as activation energy, rate constant, reaction order, and Arrhenius pre-exponential factor. In order to provide an example of the set-up and results from a TPD experiment, we use an ultra-high vacuum (UHV) chamber equipped with a quadrupole mass spectrometer.
05: Reactions Kinetics and Pathways
Gas chromatography (GC) is a very commonly used chromatographic technique in analytical chemistry for separating and analyzing compounds that are gaseous or can be vaporized without decomposition. Because of its simplicity, sensitivity, and effectiveness in separating components of mixtures, gas chromatography is an important tool in chemistry. It is widely used for quantitative and qualitative analysis of mixtures, for the purification of compounds, and for the determination of such thermochemical constants as heats of solution and vaporization, vapor pressure, and activity coefficients. Compounds are separated due to differences in their partitioning coefficient between the stationary phase and the mobile gas phase in the column.
Physical Components of a GC System
A gas chromatograph (Figure $1$) consists of a carrier gas system, a sampling system, a separation system, a detection system, and a data recording system.
The carrier gas system consists of carrier gas sources, purification, and gas flow control. The carrier gas must be chemically inert. Commonly used gases include nitrogen, helium, argon, and carbon dioxide. The choice of carrier gas often depends upon the type of detector used. A molecular sieve is often contained in the carrier gas system to remove water and other impurities.
Auto Sampling System
An auto sampling system consists of an auto sampler and a vaporization chamber. The sample to be analyzed is loaded at the injection port via a hypodermic syringe and is volatilized as the injection port is heated. Typically, samples of one microliter or less are injected onto the column. These volumes can be further reduced by using what is called a split injection system, in which a controlled fraction of the injected sample is carried away by a gas stream before entering the column.
Separation System
The separation system consists of the columns and a temperature-controlled oven. The column is where the components of the sample are separated, and it is the crucial part of a GC system. The column is essentially a tube that contains the stationary phase; different stationary phases have different partition coefficients with analytes and determine the quality of separation. There are two general types of column: packed (Figure $2$) and capillary, also known as open tubular (Figure $3$).
• Packed columns contain a finely divided, inert, solid support material coated with liquid stationary phase. Most packed columns are 1.5 – 10 m in length and have an internal diameter of 2 – 4 mm.
• Capillary columns have an internal diameter of a few tenths of a millimeter. They can be one of two types; wall-coated open tubular (WCOT) or support-coated open tubular (SCOT). Wall-coated columns consist of a capillary tube whose walls are coated with liquid stationary phase. In support-coated columns, the inner wall of the capillary is lined with a thin layer of support material such as diatomaceous earth, onto which the stationary phase has been adsorbed. SCOT columns are generally less efficient than WCOT columns. Both types of capillary column are more efficient than packed columns.
Detectors
The purpose of a detector is to monitor the carrier gas as it emerges from the column and to generate a signal in response to variation in its composition due to eluted components. Because it converts a physical signal into a recordable electrical signal, it is another crucial part of the GC. The requirements of a detector for GC are listed below.
Detectors for GC must respond rapidly to minute concentration of solutes as they exit the column, i.e., they are required to have a fast response and a high sensitivity. Other desirable properties of a detector are: linear response, good stability, ease of operation, and uniform response to a wide variety of chemical species or, alternatively predictable and selective response to one or more classes of solutes.
Recording Devices
GC systems originally used paper chart recorders, but a modern system typically uses an online computer, which can track and record the electrical signals of the separated peaks. The data can later be analyzed by software to provide information about the gas mixture.
How Does GC Work?
Separation Terminology
An ideal separation is judged by resolution, efficiency, and symmetry of the desired peaks, as illustrated by Figure $4$.
Resolution (R)
Resolution can be simply expressed as the distance on the output trace between two peaks. The highest possible resolution is the goal when developing a separation method. Resolution is defined by the R value, \ref{1}, which can be expressed mathematically, \ref{2}, where k is capacity, α is selectivity, and N is the number of theoretical plates. An R value of 1.5 is defined as being the minimum required for baseline separation, i.e., the two adjacent peaks are separated by the baseline. Separation for different R values is illustrated in Figure $5$.
$R \ =\ capacity \times selectivity \times efficiency \label{1}$
$R\ =\ [k/(1+k)][(\alpha -\ 1)/\alpha](N^{0.5}/4) \label{2}$
Capacity (k')
Capacity (k´) is known as the retention factor. It is a measure of retention by the stationary phase. It is calculated from \ref{3}, where tr = retention time of analyte (substance to be analyzed), and tm = retention time of an unretained compound.
$k'\ =\ (t_{r}-t_{m})/t_{m} \label{3}$
Selectivity
Selectivity is related to α, the separation factor (Figure $6$). The value of α should be large enough to give baseline resolution, but minimized to prevent wasted run time.
Efficiency
Narrow peaks have high efficiency (Figure $7$), and are desired. Units of efficiency are "theoretical plates" (N) and are often used to describe column performance. "Plates" is the current common term for N, which is defined as a function of the retention time (tr) and the full peak width at half maximum (Wb1/2), \ref{4}.
$N \ =\ 5.545(t_{r}/W_{b1/2})^{2} \label{4}$
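To make these definitions concrete, the short Python sketch below evaluates the capacity factor, efficiency, and resolution for a pair of hypothetical peaks using \ref{3}, \ref{4}, and \ref{2}. All retention times and widths are invented for illustration only.

```python
import math

# Hypothetical chromatogram values (minutes); for illustration only.
t_m  = 1.0        # retention time of an unretained compound
t_r1 = 4.0        # retention time of peak 1
t_r2 = 4.4        # retention time of peak 2
w_half2 = 0.05    # full width at half maximum of peak 2

# Capacity (retention) factors, Eq. (3)
k1 = (t_r1 - t_m) / t_m
k2 = (t_r2 - t_m) / t_m

# Selectivity (separation factor), taken here as the standard ratio of capacity factors
alpha = k2 / k1

# Efficiency in theoretical plates, Eq. (4)
N = 5.545 * (t_r2 / w_half2) ** 2

# Resolution from Eq. (2)
R = (k2 / (1 + k2)) * ((alpha - 1) / alpha) * (math.sqrt(N) / 4)

print(f"k'1 = {k1:.2f}, k'2 = {k2:.2f}, alpha = {alpha:.3f}")
print(f"N = {N:.0f} plates, R = {R:.2f}")
```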
Peak Symmetry
The symmetry of a peak is judged by the values of two half peak widths, a and b (Figure $8$). When a = b, a peak is called symmetric, which is desired. Unsymmetrical peaks are often described as "tailing" or "fronting".
An Ideal Separation
The attributions of an ideal separation are as follows:
• Should meet baseline resolution of the compounds of interest.
• Each desired peak is narrow and symmetrical.
• Has no wasted dead time between peaks.
• Takes a minimal amount of time to run.
• The result is reproducible.
In its simplest form gas chromatography is a process whereby a sample is vaporized and injected onto the chromatographic column, where it is separated into its many components. The elution is brought about by the flow of carrier gas (Figure $9$).
The carrier gas serves as the mobile phase that elutes the components of a mixture from a column containing an immobilized stationary phase. In contrast to most other types of chromatography, the mobile phase does not interact with molecules of the analytes. Carrier gases, the mobile phase of GC, include helium, hydrogen and nitrogen which are chemically inert. The stationary phase in gas-solid chromatography is a solid that has a large surface area at which adsorption of the analyte species (solutes) take place. In gas-liquid chromatography, a stationary phase is liquid that is immobilized on the surface of a solid support by adsorption or by chemical bonding.
Gas chromatographic separation occurs because of differences in the positions of adsorption equilibrium between the gaseous components of the sample and the stationary phases (Figure $9$). In GC the distribution ratio (ratio of the concentration of analytes in stationary and mobile phase) is dependent on the component vapor pressure, the thermodynamic properties of the bulk component band and affinity for the stationary phase. The equilibrium is temperature dependent. Hence the importance of the selection the stationary phase of the column and column temperature programming in optimizing a separation.
Choice of Method
Carrier Gas and Flow Rate
Helium, nitrogen, argon, hydrogen and air are typically used carrier gases. Which one is used is usually determined by the detector being used, for example, a discharge ionization detection (DID) requires helium as the carrier gas. When analyzing gas samples, however, the carrier is sometimes selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred, because the argon in the sample does not show up on the chromatogram. Safety and availability are other factors, for example, hydrogen is flammable, and high-purity helium can be difficult to obtain in some areas of the world.
The carrier gas flow rate affects the analysis in the same way that temperature does. The higher the flow rate, the faster the analysis, but the lower the separation between analytes. Furthermore, the peak shape is also affected by the flow rate: the slower the rate, the greater the axial and radial diffusion, and the broader and more asymmetric the peak. Selecting the flow rate is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature.
Column Selection
Table $1$ shows commonly used stationary phase in various applications.
| Stationary Phase | Common Trade Name | Temperature (Celsius) | Common Applications |
|---|---|---|---|
| Polydimethyl siloxane | OV-1, SE-30 | 350 | General-purpose nonpolar phase, hydrocarbons, polynuclear aromatics, drugs, steroids, PCBs |
| Poly(phenylmethyl-dimethyl) siloxane (10% phenyl) | OV-3, SE-52 | 350 | Fatty acid methyl esters, alkaloids, drugs, halogenated compounds |
| Poly(phenylmethyl) siloxane (50% phenyl) | OV-17 | 250 | Drugs, steroids, pesticides, glycols |
| Poly(trifluoropropyl-dimethyl) siloxane | OV-210 | 200 | Chlorinated aromatics, nitroaromatics, alkyl-substituted benzenes |
| Polyethylene glycol | Carbowax 20M | 250 | Free acids, alcohols, ethers, essential oils, glycols |
| Poly(dicyanoallyldimethyl) siloxane | OV-275 | 240 | Polyunsaturated fatty acids, rosin acids, free acids, alcohols |
Table $1$ Some common stationary phases for gas-liquid chromatography. Adapted from www.cem.msu.edu/~cem333/Week15.pdf
Column Temperature and Temperature Program
For precise work, the column temperature must be controlled to within tenths of a degree. The optimum column temperature depends upon the boiling point of the sample. As a rule of thumb, a temperature slightly above the average boiling point of the sample results in an elution time of 2 - 30 minutes. Minimal temperatures give good resolution, but increase elution times. If a sample has a wide boiling range, then temperature programming can be useful: the column temperature is increased (either continuously or in steps) as the separation proceeds. Temperature also affects peak shape, much as flow rate does: the higher the temperature, the more intense the diffusion, and the worse the peak shape. Thus, a compromise has to be made between quality of separation, retention time, and peak shape.
Detector Selection
A number of detectors are used in gas chromatography. The most common are the flame ionization detector (FID) and the thermal conductivity detector (TCD). Both are sensitive to a wide range of components, and both work over a wide range of concentrations. While TCDs are essentially universal and can be used to detect any component other than the carrier gas (as long as their thermal conductivities are different from that of the carrier gas at detector temperature), FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCDs. However, an FID cannot detect water. Both detectors are also quite robust. Since the TCD is non-destructive, it can be operated in series before an FID (destructive), thus providing complementary detection of the same analytes. For halides, nitrates, nitriles, peroxides, anhydrides, and organometallics, the electron capture detector (ECD) is a very sensitive option, able to detect as little as 50 fg of those analytes. Different types of detectors are listed below in Table $2$, along with their properties.
| Detector | Type | Support Gases | Selectivity | Detectability | Dynamic Range |
|---|---|---|---|---|---|
| Flame Ionization (FID) | Mass flow | Hydrogen and air | Most organic compounds | 100 pg | 10^7 |
| Thermal Conductivity (TCD) | Concentration | Reference | Universal | 1 ng | 10^7 |
| Electron Capture (ECD) | Concentration | Make-up | Halides, nitrates, nitriles, peroxides, anhydrides, organometallics | 50 fg | 10^5 |
| Nitrogen-Phosphorus | Mass flow | Hydrogen and air | Nitrogen, phosphorus | 10 pg | 10^6 |
| Flame Photometric (FPD) | Mass flow | Hydrogen and air, possibly oxygen | Sulphur, phosphorus, tin, boron, arsenic, germanium, selenium, chromium | 100 pg | 10^3 |
| Photo-ionization (PID) | Concentration | Make-up | Aliphatics, aromatics, ketones, esters, aldehydes, amines, heterocyclics, organosulphurs, some organometallics | 2 pg | 10^7 |
| Hall electrolytic Conductivity | Mass flow | Hydrogen, oxygen | Halide, nitrogen, nitrosamine, sulphur | - | - |
Table $2$ Different types of detectors and their properties. Adapted from teaching.shu.ac.uk/hwb/chemis...m/gaschrm.html
Headspace Analysis Using GC
Most consumer products and biological samples are composed of a wide variety of compounds that differ in molecular weight, polarity, and volatility. For complex samples like these, headspace sampling is the fastest and cleanest method for analyzing volatile organic compounds. A headspace sample is normally prepared in a vial containing the sample, the dilution solvent, a matrix modifier, and the headspace (Figure $10$). Volatile components from complex sample mixtures can be extracted from non-volatile sample components and isolated in the headspace or vapor portion of a sample vial. An aliquot of the vapor in the headspace is delivered to a GC system for separation of all of the volatile components.
The gas phase (G in Figure $10$) is commonly referred to as the headspace and lies above the condensed sample phase. The sample phase (S in Figure $10$) contains the compound(s) of interest and is usually in the form of a liquid or solid in combination with a dilution solvent or a matrix modifier. Once the sample phase is introduced into the vial and the vial is sealed, volatile components diffuse into the gas phase until the headspace has reached a state of equilibrium as depicted by the arrows. The sample is then taken from the headspace.
Basic Principles of Headspace Analysis
Partition Coefficient
Samples must be prepared to maximize the concentration of the volatile components in the headspace, and minimize unwanted contamination from other compounds in the sample matrix. To help determine the concentration of an analyte in the headspace, you will need to calculate the partition coefficient (K), which is defined by \ref{5}, where Cs is the concentration of analyte in the sample phase and Cg is the concentration of analyte in the gas phase. Compounds that have low K values will tend to partition more readily into the gas phase, and have relatively high responses and low limits of detection. K can be lowered by changing the temperature at which the vial is equilibrated or by changing the composition of the sample matrix.
$K\ =\ C_{s}/C_{g} \label{5}$
Phase Ratio
The phase ratio (β) is defined as the relative volume of the headspace compared to volume of the sample in the sample vial, \ref{6}, where Vs=volume of sample phase and Vg=volume of gas phase. Lower values for β (i.e., larger sample size) will yield higher responses for volatile compounds. However, decreasing the β value will not always yield the increase in response needed to improve sensitivity. When β is decreased by increasing the sample size, compounds with high K values partition less into the headspace compared to compounds with low K values, and yield correspondingly smaller changes in Cg. Samples that contain compounds with high K values need to be optimized to provide the lowest K value before changes are made in the phase ratio.
$\beta \ =\ V_{g}/V_{s} \label{6}$ | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/05%3A_Reactions_Kinetics_and_Pathways/5.01%3A_Dynamic_Headspace_Gas_Chromatography_Analysis.txt |
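As a minimal numerical illustration of \ref{5} and \ref{6}, the sketch below computes K and β from assumed vial volumes and equilibrium concentrations. All numbers are invented, and a real analysis would take Cs and Cg from calibration measurements.

```python
# Illustrative headspace calculation based on Eq. (5) and Eq. (6).
# All numbers are assumed example values, not experimental data.

vial_volume   = 20.0    # mL, total volume of the headspace vial
sample_volume = 5.0     # mL, volume of the liquid sample phase
c_sample = 8.0e-3       # mg/mL, analyte concentration in the sample phase at equilibrium
c_gas    = 2.0e-3       # mg/mL, analyte concentration in the headspace at equilibrium

# Partition coefficient, Eq. (5): K = Cs / Cg
K = c_sample / c_gas

# Phase ratio, Eq. (6): beta = Vg / Vs
V_gas = vial_volume - sample_volume
beta = V_gas / sample_volume

print(f"K = {K:.1f}  (low K favors partitioning into the headspace)")
print(f"beta = {beta:.1f}  (smaller beta means a larger relative sample volume)")
```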
Trichloroethene (TCE) is a widely spread environmental contaminant and a member of the class of compounds known as dense non-aqueous phase liquids (DNAPLs). Pd/Al2O3 catalyst has shown activity for the hydrodechlorination (HDC) of chlorinated compounds.
To quantify the reaction rate, a 250 mL screw-cap bottle with 77 mL of headspace gas was used as the batch reactor for the studies. TCE (3 μL) is added to 173 mL DI water purged with hydrogen gas for 15 min, together with 0.2 μL pentane as an internal standard. Dynamic headspace analysis using GC was applied. The experimental conditions are summarized in the table below (Table $1$).
TCE 3 μL
H2 1.5 ppm
Pentane 0.2 μL
DI water 173 mL
1 wt% Pd/Al2O3 50 mg
Temperature 25 °C
Pressure 1 atm
Reaction time 1 h
Table $1$ The experimental condition in HDC of TCE.
Reaction Kinetics
A first-order reaction is assumed in the HDC of TCE, \ref{1}, where kmeas is defined by \ref{2}, Ccat is equal to the concentration of Pd metal within the reactor, and kcat is the reaction rate constant with units of L/gPd/min.
$-dC_{TCE}/dt\ =\ k_{meas} \times C_{TCE} \label{1}$
$k_{meas} \ =\ k_{cat} \times C_{cat} \label{2}$
The GC Method
The GC method used is listed in Table $2$.
GC type Agilent 6890N GC
Column Supelco 1-2382 40/60 Carboxen-1000 packed column
Detector FID
Oven temperature 210 °C
Flow rate 35 mL/min
Injection amount 200 μL
Carrier gas Helium
Detect 5 min
Table $2$ GC method for detection of TCE and other related chlorinated compounds.
Quantitative Method
Since pentane is introduced as the inert internal standard, the relative concentration of TCE in the system can be expressed as the ratio of area of TCE to pentane in the GC plot, \ref{3}.
$C_{TCE}\ =\ (peak\ area\ of\ TCE)/(peak\ area\ of\ pentane) \label{3}$
Results and Analysis
The major analytes (referenced as TCE, pentane, and ethane) are very well separated from each other, allowing for quantitative analysis. The peak areas of the peaks associated with these compounds are integrated automatically by the computer, and are listed in Table $3$ as a function of time.
Time/min Peak area of pentane Peak area of TCE
0 5992.93 13464
5.92 6118.5 11591
11.25 5941.2 8891
16.92 5873.5 7055.6
24.13 5808.6 5247.4
32.65 5805.3 3726.3
43.65 5949.8 2432.8
53.53 5567.5 1492.3
64.72 5725.6 990.2
77.38 5624.3 550
94.13 5432.5 225.7
105 5274.4 176.8
Table $3$ Peak areas of pentane and TCE as a function of reaction time.
Normalizing the TCE concentration with respect to the peak area of pentane and then to the initial TCE concentration, and then calculating the natural logarithm of this normalized concentration, gives the values shown in Table $4$.
Time (min) TCE/pentane TCE/pentane/TCEinitial In(TCE/Pentane/TCEinitial)
0 2.2466 1.0000 0.0000
5.92 1.8944 0.8432 -0.1705
11.25 1.4965 0.6661 -0.4063
16.92 1.2013 0.5347 -0.6261
24.13 0.9034 0.4021 -0.9110
32.65 0.6419 0.2857 -1.2528
43.65 0.4089 0.1820 -1.7038
53.53 0.2680 0.1193 -2.1261
64.72 0.1729 0.0770 -2.5642
77.38 0.0978 0.0435 -3.1344
94.13 0.0415 0.0185 -3.9904
105 0.0335 0.0149 -4.2050
Table $4$ Normalized TCE concentration as a function of reaction time
A plot of normalized TCE concentration against time shows the concentration profile of TCE during the reaction (Figure $1$), while the slope of the logarithmic plot provides the reaction rate constant (\ref{1}).
From Figure $1$, we can see that the linearity, i.e., the validity of the first-order assumption, holds very well throughout the reaction. Thus, the reaction kinetic model is validated. Furthermore, the reaction rate constant can be calculated from the slope of the fitted line, i.e., kmeas = 0.0414 min-1. From this the kcat can be obtained, \ref{4}.
$k_{cat}\ =\ k_{meas}/C_{Pd}\ =\ \frac{0.0414\ min^{-1}}{(5 \times 10^{-4}\ g/0.173\ L)}\ =\ 14.32\ L/g_{Pd}\ min \label{4}$
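The numbers quoted above can be reproduced directly from the data in Table $4$ with a simple least-squares fit of ln(C/C0) against time, followed by \ref{2}. The sketch below assumes NumPy is available and simply re-uses the tabulated values and the catalyst loading from Table $1$.

```python
import numpy as np

# Time (min) and ln(normalized TCE concentration) from Table 4.
t = np.array([0, 5.92, 11.25, 16.92, 24.13, 32.65, 43.65,
              53.53, 64.72, 77.38, 94.13, 105])
ln_c = np.array([0.0000, -0.1705, -0.4063, -0.6261, -0.9110, -1.2528,
                 -1.7038, -2.1261, -2.5642, -3.1344, -3.9904, -4.2050])

# First-order kinetics: ln(C/C0) = -k_meas * t, so the slope of a linear fit gives -k_meas.
slope, intercept = np.polyfit(t, ln_c, 1)
k_meas = -slope                      # min^-1

# k_cat = k_meas / C_Pd, Eq. (2); 50 mg of 1 wt% Pd/Al2O3 in 0.173 L of water.
C_Pd = (0.050 * 0.01) / 0.173        # g_Pd per L
k_cat = k_meas / C_Pd                # L / (g_Pd . min)

print(f"k_meas = {k_meas:.4f} min^-1")        # ~0.041 min^-1
print(f"k_cat  = {k_cat:.1f} L/(g_Pd.min)")   # ~14 L/(g_Pd.min)
```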
The temperature-programmed desorption (TPD) technique is often used to monitor surface interactions between adsorbed molecules and a substrate surface. Utilizing the dependence on temperature, it is possible to discriminate between processes with different activation parameters, such as activation energy, rate constant, reaction order, and Arrhenius pre-exponential factor. In order to provide an example of the set-up and results from a TPD experiment, we are going to use an ultra-high vacuum (UHV) chamber equipped with a quadrupole mass spectrometer to exemplify a typical surface gas-solid interaction and estimate several important kinetic parameters.
Experimental System
Ultra-high Vacuum (UHV) Chamber
When we start to set up an apparatus for a typical surface TPD experiment, we should first think about how we can generate an extremely clean environment for the solid substrate and gas adsorbents. Ultra-high vacuum (UHV) is the most basic requirement for surface chemistry experiments. UHV is defined as a vacuum regime lower than 10-9 Torr. At such a low pressure the mean free path of a gas molecule is approximately 40 km, which means gas molecules will collide and react with the sample substrate in the UHV chamber many times before colliding with each other, ensuring all interactions take place on the substrate surface.
Most of the time, UHV chambers require the use of unusual materials in construction and baking of the entire system to ~180 °C for several hours to remove moisture and other trace adsorbed gases from the chamber walls in order to reach the ultra-high vacuum environment. Also, outgassing from the substrate surface and other bulk materials should be minimized by careful selection of materials with low vapor pressures, such as stainless steel, for everything inside the UHV chamber. Thus bulk metal crystals are chosen as substrates to study interactions between gas adsorbates and the crystal surface itself. Figure $1$ shows a schematic of a TPD system, while Figure $2$ shows a typical TPD instrument equipped with a quadrupole MS spectrometer and a reflection absorption infrared spectrometer (RAIRS).
Pumping System
There is no single pump that can operate all the way from atmospheric pressure to UHV. Instead, a series of different pumps are used, according to the appropriate pressure range for each pump. Pumps are commonly used to achieve UHV include:
• Turbomolecular pumps (turbo pumps).
• Ionic pumps.
• Titanium sublimation pumps.
• Non-evaporate mechanical pumps.
UHV pressures are measured with an ion-gauge, either a hot filament or an inverted magnetron type. Finally, special seals and gaskets must be used between components in a UHV system to prevent even trace leakage. Nearly all such seals are all metal, with knife edges on both sides cutting into a soft (e.g., copper) gasket. This all-metal seal can maintain system pressures down to ~10-12 Torr.
Manipulator and Bulk Metal Crystal
A UHV manipulator (or sample holder, see Figure $2$) allows an object that is inside a vacuum chamber and under vacuum to be mechanically positioned. It may provide rotary motion, linear motion, or a combination of both. The manipulator may include features allowing additional control and testing of a sample, such as the ability to apply heat, cooling, voltage, or a magnetic field. Sample heating can be accomplished by thermal radiation. A filament is mounted close to the sample and resistively heated to high temperature. In order to simplify complexity from the interaction between substrate and adsorbates, surface chemistry labs often carry out TPD experiments by choosing a substrate with single crystal surface instead of polycrystalline or amorphous substrates (see Figure $1$).
Pretreatment
Before the selected gas molecules are dosed into the chamber for adsorption, the substrates (metal crystals) need to be cleaned by argon plasma sputtering, followed by annealing at high temperature for surface reconstruction. After these pretreatments, the system is again cooled down to very low temperature (liquid N2 temperature), which facilitates the adsorption of gas molecules on the substrate surface. Adsorption is a process in which a molecule becomes adsorbed onto a surface of another phase. It is distinguished from absorption, which is used when describing uptake into the bulk of a solid or liquid phase.
Temperature-programmed Desorption Processes
After adsorption of the gas molecules, we release these adsorbates back into the gas phase by programmed heating of the sample holder. A mass spectrometer is set up to collect the desorbed gas molecules, and the correlation between desorption temperature and the fragmentation of the desorbed gas molecules then provides important information. Figure $3$ shows a typical TPD experiment carried out by adsorbing CO onto a Pd(111) surface, followed by programmed heating to desorb the CO adsorbates.
Theory of TPD Experiment
Langmuir Isotherm
The Langmuir isotherm describes the dependence of the surface coverage of an adsorbed gas on the pressure of the gas above the surface at a fixed temperature. Langmuir isotherm is the simplest assumption, but it provides a useful insight into the pressure dependence of the extent of surface adsorption. It was Irving Langmuir who first studied the adsorption process quantitatively. In his proposed model, he supposed that molecules can adsorb only at specific sites on the surface, and that once a site is occupied by one molecule, it cannot adsorb a second molecule. The adsorption process can be represented as \ref{1}, where A is the adsorbing molecule, S is the surface site, and A─S stands for an A molecule bound to the surface site.
$A\ +\ S \rightarrow A\ -\ S \label{1}$
In a similar way, it reverse desorption process can be represented as \ref{2}.
$A\ -\ S \rightarrow A\ +\ S \label{2}$
According to the Langmuir model, we know that the adsorption rate should be proportional to ka[A](1-θ), where θ is the fraction of the surface sites covered by adsorbate A. The desorption rate is then proportional to kdθ. ka and kd are the rate constants for adsorption and desorption. At equilibrium, the rates of these two processes are equal, \ref{3} and \ref{4}. Defining the equilibrium constant K as \ref{5} gives \ref{6}. We can replace [A] by P, where P is the gas partial pressure, to give \ref{7}.
$k_{a} [A](1-\theta )\ =\ k_{d} \theta \label{3}$
$\frac{\theta }{1\ -\ \theta }\ =\ \frac{k_{a}}{k_{d}}[A] \label{4}$
$K\ =\ \frac{k_{a}}{k_{d}} \label{5}$
$\theta \ =\ \frac{K[A]}{1+K[A]} \label{6}$
$\theta \ =\ \frac{KP}{1+KP} \label{7}$
From the equation above, if [A] or P is low enough so that K[A] or KP << 1, then θ ~ K[A] or KP, which means that the surface coverage increases linearly with [A] or P. On the contrary, if [A] or P is large enough so that K[A] or KP >> 1, then θ ~ 1. This behavior is shown in the plot of θ versus [A] or P in Figure $4$.
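The limiting behavior described above is easy to verify numerically; the sketch below evaluates \ref{7} over a wide pressure range for an assumed value of K (both K and the pressure grid are arbitrary illustrative choices).

```python
import numpy as np

K = 2.0                      # assumed equilibrium constant (in inverse pressure units)
P = np.logspace(-3, 3, 7)    # pressures spanning the low- and high-pressure limits

theta = K * P / (1 + K * P)  # Langmuir isotherm, Eq. (7)

for p, th in zip(P, theta):
    print(f"P = {p:8.3f}   theta = {th:.4f}")

# At K*P << 1 the coverage grows linearly with P (theta ~ K*P);
# at K*P >> 1 the coverage saturates at theta ~ 1.
```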
Derivation of Kinetic Parameters Based on TPD Results
Here we are going to show how to use the TPD technique to estimate desorption energy, reaction energy, as well as Arrhenius pre-exponential factor. Let us assume that molecules are irreversibly adsorbed on the surface at some low temperature T0. The leak valve is closed, the valve to the pump is opened, and the “density” of product molecules is monitored with a mass spectrometer as the crystal is heated under programmed temperature \ref{8}, where β is the heating rate (~10 °C/s). We know the desorption rate depends strongly on temperature, so when the temperature of the crystal reaches a high enough value so that the desorption rate is appreciable, the mass spectrometer will begin to record a rise in density. At higher temperatures, the surface will finally become depleted of desorbing molecules; therefore, the mass spectrometer signal will decrease. According to the shape and position of the peak in the mass signal, we can learn about the activation energy for desorption and the Arrhenius pre-exponential factor.
$T\ =\ T_{0}\ +\ \beta t \label{8}$
First-Order Process
Consider a first-order desorption process \ref{9}, with a rate constant kd, \ref{10}, where A is Arrhenius pre-exponential factor. If θ is assumed to be the number of surface adsorbates per unit area, the desorption rate will be given by \ref{11}.
$A\ -\ S \rightarrow \ A\ +\ S \label{9}$
$k_{d}\ =\ Ae^{(\frac{-\Delta E_{a}}{RT})} \label{10}$
$\frac{-d\theta}{dt}\ =\ k_{d} \theta \ =\ \theta Ae^{(\frac{-\Delta E_{a}}{RT})} \label{11}$
Since we know the relationship between heat rate β and temperature on the crystal surface T, \ref{12} and \ref{13}.
$T\ =\ T_{0} \ +\ \beta t \label{12}$
$\frac{1}{dt} \ = \ \frac{\beta }{dT} \label{13}$
Combining \ref{11} with \ref{13} gives \ref{14} and \ref{15}, and hence the desorption rate as a function of temperature, \ref{16}. A plot of the form of –dθ/dT versus T is shown in Figure $5$.
$\frac{-d\theta}{dt}\ =\ -\beta \frac{d\theta }{dT} \label{14}$
$\frac{-d\theta}{dt}\ =\ k_{d}\theta \ =\ \theta A e^{(\frac{-\Delta E_{a}}{RT})} \label{15}$
$\frac{-d\theta}{dT}\ =\ \frac{\theta A}{\beta }e^{(\frac{- \Delta E_{a}}{RT})} \label{16}$
We notice that Tm (the peak maximum) in Figure $5$ remains constant with increasing θ, which means the value of Tm does not depend on the initial coverage θ for first-order desorption. If different desorption activation energies Ea are used, the corresponding Tm values are seen to increase with increasing Ea.
At the peak of the mass signal, the increase in the desorption rate is matched by the decrease in surface concentration per unit area so that the change in dθ/dT with T is zero: \ref{17} - \ref{18}. Since \ref{19}, then \ref{20} and \ref{21}.
$\frac{-d\theta }{dT}\ =\ \frac{\theta A}{\beta }e^{(\frac{- \Delta E_{a}}{RT})} \label{17}$
$\frac{d}{dT} [\frac{\theta A}{\beta }e^{(\frac{- \Delta E_{a}}{RT})}]\ =\ 0 \label{18}$
$\frac{\Delta E_{a}}{RT^{2}_{M}} = -\frac{1}{\theta}(\frac{d\theta }{dT}) \label{19}$
$-\frac{d\theta }{dT}\ =\ \frac{\theta A}{\beta} e^{(\frac{- \Delta E_{a}}{RT})} \label{20}$
$\frac{\Delta E_{a}}{RT^{2}_{M}}\ =\ \frac{A}{\beta }e^{(\frac{- \Delta E_{a}}{RT_{M}})} \label{21}$
$2lnT_{M}\ -\ ln \beta \ = \frac{\Delta E_{a}}{RT_{M}} + ln{\frac{\Delta E_{a}}{RA}} \label{22}$
This tells us that if different heating rates β are used and the left-hand side of the above equation is plotted as a function of 1/TM, a straight line should be obtained whose slope is ΔEa/R and whose intercept is ln(ΔEa/RA). We are therefore able to obtain the activation energy of desorption ΔEa and the Arrhenius pre-exponential factor A.
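In practice this heating-rate variation analysis amounts to a linear fit. The sketch below generates synthetic (β, TM) pairs from an assumed ΔEa and A by solving \ref{21} numerically, then fits \ref{22} to recover them; with real data the measured peak temperatures would simply replace the synthetic ones.

```python
import numpy as np

R = 8.314          # J/(mol.K)
Ea_true = 120e3    # J/mol, assumed "true" desorption energy used to make synthetic data
A_true  = 1e13     # s^-1, assumed pre-exponential factor

def peak_temperature(beta, Ea, A, T_guess=400.0):
    """Solve Ea/(R*Tm^2) = (A/beta)*exp(-Ea/(R*Tm)) (Eq. 21) for Tm by fixed-point iteration."""
    Tm = T_guess
    for _ in range(200):
        Tm = Ea / (R * np.log(A * R * Tm**2 / (beta * Ea)))
    return Tm

beta = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # heating rates, K/s
Tm = np.array([peak_temperature(b, Ea_true, A_true) for b in beta])

# Heating-rate variation analysis, Eq. (22): 2 ln(Tm) - ln(beta) plotted against 1/Tm
lhs = 2 * np.log(Tm) - np.log(beta)
slope, intercept = np.polyfit(1 / Tm, lhs, 1)

Ea_fit = slope * R                         # slope = Ea/R
A_fit  = Ea_fit / (R * np.exp(intercept))  # intercept = ln(Ea/(R*A))

print(f"Recovered Ea = {Ea_fit/1e3:.1f} kJ/mol (input {Ea_true/1e3:.0f})")
print(f"Recovered A  = {A_fit:.2e} s^-1 (input {A_true:.0e})")
```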
Second-Order Process
Now let us consider a second-order desorption process, \ref{23}, with a rate constant kd. The desorption kinetics can be deduced as \ref{24}. The result differs from the first-order reaction, whose Tm value does not depend upon the initial coverage: here the peak temperature Tm will decrease with increasing initial surface coverage.
$2A\ -S \rightarrow A_{2}\ +\ 2S \label{23}$
$-\frac{d\theta }{dT} =\ A \theta^{2} e^{-\frac{\Delta E_{a}}{RT}} \label{24}$
Zero-Order Process
The zero-order desorption kinetics relationship is given by \ref{25}. Looking at the desorption rate for the zero-order reaction (Figure $7$), we can observe that the desorption rate does not depend on coverage and increases exponentially with T. According to the plot of desorption rate versus T, the desorption rate also drops rapidly when all molecules have desorbed. In addition, the temperature of the peak, Tm, moves to higher T with increasing coverage θ.
$- \frac{d\theta }{dT} \ =\ Ae^{(- \frac{\Delta E_{a}}{RT})} \label{25}$
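The qualitative trends in Tm described for the three reaction orders can be checked with a small numerical integration of the corresponding rate laws. In the sketch below, the activation energy, pre-exponential factor, heating rate, and coverages are all assumed values chosen only to illustrate the trends.

```python
import numpy as np

R, Ea, A, beta = 8.314, 120e3, 1e13, 2.0   # assumed: J/(mol.K), J/mol, s^-1, K/s

def tpd_peak(theta0, order):
    """Integrate -d(theta)/dT = (A/beta) * theta**order * exp(-Ea/(R*T)) and return Tm."""
    T = np.arange(300.0, 600.0, 0.01)
    dT = T[1] - T[0]
    theta = theta0
    best_rate, Tm = 0.0, T[0]
    for Ti in T:
        if theta <= 0:          # stop once the layer is depleted (important for zero order)
            break
        rate = (A / beta) * theta**order * np.exp(-Ea / (R * Ti))
        if rate > best_rate:
            best_rate, Tm = rate, Ti
        theta = max(theta - rate * dT, 0.0)
    return Tm

for order in (0, 1, 2):
    peaks = [tpd_peak(c, order) for c in (0.2, 0.5, 1.0)]
    print(f"order {order}: Tm at coverages 0.2/0.5/1.0 -> "
          + ", ".join(f"{t:.1f} K" for t in peaks))
# Expected trends: zero order -> Tm increases with coverage;
# first order -> Tm independent of coverage; second order -> Tm decreases with coverage.
```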
A Typical Example
A typical set of TPD spectra of D2 from Rh(100) for different exposures in Langmuirs (L = 10-6 Torr-sec) is shown in Figure $8$. First, the desorption peaks from g to n show two different desorption regions. The higher-temperature one can undoubtedly be ascribed to chemisorbed D2 on the Rh(100) surface, meaning chemisorbed molecules need more energy to overcome their activation barrier for desorption. The lower desorption region is then due to physisorbed D2, with much lower desorption activation energy than chemisorbed D2. According to the TPD theory we have learnt, we notice that the peak maximum shifts to lower temperature with increasing initial coverage, which means it should belong to a second-order reaction. If we also know the heating rate β and each Tm at the corresponding initial surface coverage θ, then we are able to calculate the desorption activation energy Ea and the Arrhenius pre-exponential factor A.
Conclusion
Temperature-programmed desorption is an easy and straightforward technique that is especially useful for investigating gas-solid interactions. By changing one of the parameters, such as coverage or heating rate, and running a series of typical TPD experiments, it is possible to obtain several important kinetic parameters (activation energy of desorption, reaction order, pre-exponential factor, etc.). Based on this information, further mechanisms of the gas-solid interaction can be deduced.
• 6.1: NMR of Dynamic Systems- An Overview
The study of conformational and chemical equilibrium is an important part of understanding chemical species in solution. NMR is one of the most useful and easiest to use tools for such kinds of work. In an equilibrium system it is the changes in the structure/conformation of the compound that result in the variation of the peaks in the NMR spectrum.
• 6.2: Determination of Energetics of Fluxional Molecules by NMR
It does not take an extensive knowledge of chemistry to understand that as-drawn chemical structures do not give an entirely correct picture of molecules. Unlike drawings, molecules are not stationary objects in solution, the gas phase, or even in the solid state. Bonds can rotate, bend, and stretch, and the molecule can even undergo conformational changes. Rotation, bending, and stretching do not typically interfere with characterization techniques, but conformational changes occasionally complicate analyses, especially nuclear magnetic resonance (NMR).
• 6.3: Rolling Molecules on Surfaces Under STM Imaging
As single molecule imaging methods such as scanning tunneling microscope (STM), atomic force microscope (AFM), and transmission electron microscope (TEM) developed in the past decades, scientists have gained powerful tools to explore molecular structures and behaviors in previously unknown areas. Among these imaging methods, STM is probably the most suitable one to observe detail at molecular level.
06: Dynamic Processes
The study of conformational and chemical equilibrium is an important part of understanding chemical species in solution. NMR is one of the most useful and easiest to use tools for such kinds of work.
Chemical equilibrium is defined as the state in which both reactants and products (of a chemical reaction) are present at concentrations which have no further tendency to change with time. Such a state results when the forward reaction proceeds at the same rate (i.e., Ka in Figure \(1\) b) as the reverse reaction (i.e., Kd in Figure \(1\) b). The reaction rates of the forward and reverse reactions are generally not zero but, being equal, there are no net changes in the concentrations of the reactant and product. This process is called dynamic equilibrium.
Conformational isomerism is a form of stereoisomerism in which the isomers can be interconverted exclusively by rotations about formally single bonds. Conformational isomers are distinct from the other classes of stereoisomers for which interconversion necessarily involves breaking and reforming of chemical bonds. The rotational barrier, or barrier to rotation, is the activation energy required to interconvert rotamers. The equilibrium population of different conformers follows a Boltzmann distribution.
Consider the simple system in Figure \(2\) as an example of how to study conformational equilibrium. In this system, the two methyl groups (one in red, the other blue) exchange with each other through rotation of the C-N bond. When the rotation is fast (faster than the NMR timescale of about 10-5 s), NMR can no longer recognize the difference between the two methyl groups, which results in an averaged peak in the NMR spectrum (as shown in the red spectrum in Figure \(3\)). Conversely, when the rotation is slowed by cooling (to -50 °C), the two conformations have lifetimes long enough that they are observable in the NMR spectrum (as shown by the dark blue spectrum in Figure \(3\)). The changes that occur in this spectrum with varying temperature are shown in Figure \(3\), where the change in the NMR spectrum with decreasing temperature is clearly seen.
Based upon the above, it should be clear that the presence of an averaged peak or of separate peaks can be used as an indicator of the speed of the rotation. As such, this technique is useful in probing systems such as molecular motors. One of the most fundamental problems is to confirm that the motor is really rotating, while another is to determine the rotation speed of the motor. In this area, dynamic NMR measurement is an ideal technique. For example, consider the molecular motor shown in Figure \(4\). This molecular motor is composed of two rigid conjugated parts, which are not in the same plane. Rotation of the C-N bond changes the conformation of the molecule, which is reflected in the variation of the peaks of the two methyl groups in the NMR spectrum. To control the rotation speed of this particular molecular motor, the researchers added additional functionality. When the nitrogen in the aromatic ring is not protonated, the repulsion between the nitrogen and the oxygen atoms is larger, which prohibits the rotation of the five-membered ring and separates the peaks of the two methyl groups from each other. However, when the nitrogen is protonated, the rotation barrier greatly decreases because a more stable coplanar transition state forms during the rotation process. Therefore, the rotation speed of the rotor dramatically increases, making the two methyl groups unrecognizable by NMR spectroscopy, so that an averaged peak is observed. The evolution of the NMR spectrum with the addition of acid is shown in Figure \(5\), which visually shows that the rotation speed is changing.
Introduction to Fluxionality
It does not take an extensive knowledge of chemistry to understand that as-drawn chemical structures do not give an entirely correct picture of molecules. Unlike drawings, molecules are not stationary objects in solution, the gas phase, or even in the solid state. Bonds can rotate, bend, and stretch, and the molecule can even undergo conformational changes. Rotation, bending, and stretching do not typically interfere with characterization techniques, but conformational changes occasionally complicate analyses, especially nuclear magnetic resonance (NMR).
For the present discussion, a fluxional molecule can be defined as one that undergoes an intramolecular reversible interchange between two or more conformations. Fluxionality is specified as intramolecular to differentiate from ligand exchange and complexation mechanisms, intermolecular processes. An irreversible interchange is more of a chemical reaction than a form of fluxionality. Most of the following examples alternate between two conformations, but more complex fluxionality is possible. Additionally, this module will focus on inorganic compounds. In this module, examples of fluxional molecules, NMR procedures, calculations of energetics of fluxional molecules, and the limitations of the approach will be covered.
Examples of Fluxionality
Bailar Twist
Octahedral trischelate complexes are susceptible to Bailar twists, in which the complex distorts into a trigonal prismatic intermediate before reverting to its original octahedral geometry. If the chelates are not symmetric, a Δ enantiomer will be inverted to a Λ enantiomer. For example, note how in Figure $1$, with the GaL3 complex of 2,3-dihydroxy-N,N‘-diisopropylterephthalamide (Figure $2$), the end product has the chelate ligands spiraling in the opposite direction around the metal center.
Berry Pseudorotation
D3h compounds can also experience fluxionality in the form of a Berry pseudorotation (depicted in Figure $3$), in which the complex distorts into a C4v intermediate and returns to trigonal bipyramidal geometry, exchanging two equatorial and axial groups. Phosphorus pentafluoride is one of the simplest examples of this effect. In its 19F NMR, only one peak representing five fluorines is present at 266 ppm, even at low temperatures. This is due to interconversion faster than the NMR timescale.
Sandwich and Half-sandwich Complexes
Perhaps one of the best examples of fluxional metal complexes is (η5-C5H5)Fe(CO)2(η1-C5H5) (Figure $4$). Not only does it have a rotating η5 cyclopentadienyl ring, it also has an alternating η1 cyclopentadienyl ring (Cp). This can be seen in its NMR spectra in Figure $5$. The signal for five protons corresponds to the metallocene Cp ring (5.6 ppm). Notice how the peak remains a sharp singlet despite the large temperature sampling range of the spectra. Another noteworthy aspect is how the multiplets corresponding to the other Cp ring broaden and eventually condense into one sharp singlet.
An Example Procedure
Sample preparation is essentially the same as for routine NMR. The compound of interest will need to be dissolved in an NMR-compatible solvent (CDCl3 is a common example) and transferred into an NMR tube. Approximately 600 μL of solution is needed, with only micrograms of compound. Compounds should be at least 99% pure in order to ease peak assignments and analysis. Because each spectrometer has its own protocol for shimming and optimization, having the supervision of a trained specialist is strongly advised. Additionally, using an NMR spectrometer with temperature control is essential. The basic goal of this experiment is to find three temperature regimes: slow interchange, fast interchange, and coalescence. Thus, many spectra will need to be obtained at different temperatures in order to determine the energetics of the fluctuation.
The process will be much swifter if the lower temperature range (in which the fluctuation is much slower than the spectrometer timescale) is known. A spectrum should be taken in this range. Spectra at higher temperatures should then be taken, preferably in regular increments (for instance, 10 K), until the peaks of interest condense into one sharp singlet at high temperature. A spectrum at the coalescence temperature should also be taken in case a manuscript is to be published. This procedure should then be repeated in reverse; that is, spectra should be taken from high temperature to low temperature. This ensures that no thermal reaction has taken place and that no hysteresis is observed. With the data (spectra) in hand, the energetics can now be determined.
Calculation of Energetics
For intramolecular processes that exchange two chemically equivalent nuclei, the NMR spectrum is a function of the difference in their resonance frequencies (Δv) and the rate of exchange (k). Slow interchange occurs when Δv >> k, and two separate peaks are observed. When Δv << k, fast interchange is said to occur, and one sharp peak is observed. At intermediate temperatures, the peaks are broadened and overlap one another. When they completely merge into one peak, the coalescence temperature, Tc, is said to be reached. In the case of coalescence of an equal doublet (for instance, one proton exchanging with one proton), coalescence occurs when Δv0t = 1.4142/(2π), where Δv0 is the difference in chemical shift at slow interchange and where t is defined by \ref{1}, where ta and tb are the respective lifetimes of species a and b. This condition only occurs when ta = tb, and as a result, k = 1/(2t).
$\frac{1}{t} \ =\ \frac{1}{t_{a}}\ +\ \frac{1}{t_{b}} \label{1}$
For reference, the exact lineshape function (assuming two equivalent groups being exchanged) is given by the Bloch Equation, \ref{2}, where g is the intensity at frequency v,and where K is a normalization constant.
$g(v)= \frac{Kt(v_{a} - v_{b})^{2}}{[0.5(v_{a} + v_{b})- v ]^{2}+4\pi^{2}t^{2}(v_{a}-v)^{2}(v_{b} - v)^{2}} \label{2}$
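A direct way to see the slow-, intermediate-, and fast-exchange regimes is simply to evaluate \ref{2} for different lifetimes t. The sketch below does this for an assumed pair of site frequencies and counts the number of maxima in the calculated lineshape; all values are illustrative.

```python
import numpy as np

def lineshape(nu, nu_a, nu_b, tau, K=1.0):
    """Two-site equal-population exchange lineshape, Eq. (2)."""
    num = K * tau * (nu_a - nu_b) ** 2
    den = ((0.5 * (nu_a + nu_b) - nu) ** 2
           + 4 * np.pi**2 * tau**2 * (nu_a - nu) ** 2 * (nu_b - nu) ** 2)
    return num / den

nu = np.linspace(-100, 100, 2001)   # Hz, frequency axis
nu_a, nu_b = -40.0, 40.0            # Hz, assumed site frequencies (shift difference of 80 Hz)

for tau in (0.05, 0.006, 0.0005):   # s, lifetimes: slow, near coalescence, fast exchange
    g = lineshape(nu, nu_a, nu_b, tau)
    n_max = np.sum((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))   # local maxima on the grid
    print(f"tau = {tau:7.4f} s -> {n_max} peak(s)")
# Slow exchange gives two separate peaks; fast exchange collapses them into one.
```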
Low Temperatures to Coalescence Temperature
At low temperature (slow exchange), the spectrum has two peaks and Δv >> 1/t. As a result, \ref{2} reduces to \ref{3}, where T2a is the spin-spin relaxation time. The linewidth of the peak for species a is defined by \ref{4}.
$g(v)_{a}=g(v)_{b}=\frac{KT_{2a}}{1+T^{2}_{2a}(v_{a}-v)^{2}} \label{3}$
$(\Delta v_{a})_{1/2} = \frac{1}{\pi}(\frac{1}{T_{2a}}+\frac{1}{t_{a}} ) \label{4}$
Because the spin-spin relaxation time is difficult to determine, especially in inhomogeneous environments, rate constants at higher temperatures but before coalescence are preferable and more reliable.
The rate constant k can then be determined by comparing the linewidth of a peak with no exchange (low temp) with the linewidth of the peak with little exchange using, \ref{5}, where subscript e refers to the peak in the slightly higher temperature spectrum and subscript 0 refers to the peak in the no exchange spectrum.
$k = \frac{\pi }{ \sqrt{2}}[(\Delta v_{e})_{1/2}- (\Delta v_{0})_{1/2}] \label{5}$
Additionally, k can be determined from the difference in frequency (chemical shift) using \ref{6}, where Δv0is the chemical shift difference in Hz at the no exchange temperature and Δve is the chemical shift difference at the exchange temperature.
$k= \frac{\pi}{\sqrt{2} }(\Delta v^{2}_{0} - \Delta v^{2}_{e})^{1/2} \label{6}$
The intensity ratio method, \ref{7}, can be used to determine the rate constant for spectra whose peaks have begun to merge, where r is the ratio between the maximum intensity and the minimum intensity, of the merging peaks, Imax/Imin.
$k = \frac{\pi }{\sqrt{2}} (r+(r^{2} - r)^{1/2})^{-1/2} \label{7}$
As mentioned earlier, the coalescence temperature, Tc is the temperature at which the two peaks corresponding to the interchanging groups merge into one broad peak and \ref{10} may be used to calculate the rate at coalescence.
$k\ = \frac{\pi \Delta v_{0} }{\sqrt{2}} \label{10}$
Higher Temperatures
Beyond the coalescence temperature, interchange is so rapid (k >> Δv) that the spectrometer registers the two groups as equivalent and records a single peak. At temperatures greater than that of coalescence, the lineshape equation reduces to \ref{11}.
$g(v)\ = \frac{KT_{2}}{[1 \ +\ \pi T_{2}(v_{a} \ +\ v_{b} \ -\ 2v)^{2}]} \label{11}$
As mentioned earlier, determination of T2 is very time consuming and often unreliable due to inhomogeneity of the sample and of the magnetic field. The following approximation (\ref{12}) applies to spectra in which the merged peak is still broadened, i.e., just above coalescence.
$k\ = \frac{ 0.5\pi \Delta v^{2} }{(\Delta v_{e})_{1/2} - (\Delta v_{0} )_{1/2} } \label{12}$
Now that the rate constants have been extracted from the spectra, the energetic parameters may be calculated. For a rough measure of the activation parameters, only the spectra at no exchange and at coalescence are needed. The coalescence temperature is determined from the NMR experiment, and the rate of exchange at coalescence is given by \ref{10}. The activation parameters can then be determined from the Eyring equation (\ref{13}), where kB is the Boltzmann constant, h is the Planck constant, and ΔG‡ = ΔH‡ − TΔS‡.
$ln(\frac{k}{T}) = \frac{-\Delta H^{\ddagger } }{RT} + \frac{\Delta S^{\ddagger }}{R} + ln(\frac{k_{B}}{h}) \label{13}$
For more accurate calculations of the energetics, the rates at a series of temperatures need to be obtained. A plot of ln(k/T) versus 1/T (where T is the temperature at which the spectrum was taken) will yield ΔH‡ from the slope and ΔS‡ from the intercept, from which ΔG‡ can be calculated. For a pictorial representation of these concepts, see Figure $6$.
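As an illustration of this procedure, the Eyring fit can be carried out with a short script. This is a minimal sketch only: the rate data are hypothetical and simply stand in for values extracted from variable-temperature spectra; R, kB, and h are standard constants.

```python
import numpy as np

# Hypothetical rate constants k (s^-1) extracted from variable-temperature
# spectra (e.g., by lineshape analysis) at temperatures T (K).
T = np.array([240.0, 250.0, 260.0, 270.0, 280.0])   # K
k = np.array([8.0, 25.0, 70.0, 180.0, 430.0])       # s^-1

R   = 8.314      # J mol^-1 K^-1
k_B = 1.381e-23  # J K^-1
h   = 6.626e-34  # J s

# Eyring: ln(k/T) = -(dH/R)(1/T) + dS/R + ln(k_B/h)
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

dH = -slope * R                          # J mol^-1, from the slope
dS = (intercept - np.log(k_B / h)) * R   # J mol^-1 K^-1, from the intercept
dG = dH - 298.15 * dS                    # J mol^-1, evaluated at 298 K

print(f"dH = {dH/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), dG(298 K) = {dG/1e3:.1f} kJ/mol")
```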
Diverse Populations
For unequal doublets (for instance, two protons exchanging with one proton), a different treatment is needed. The difference in population can be defined through \ref{14}, where Pi is the concentration (integration) of species i and X = 2πΔvt. Values for Δvt are given in Figure $7$.
$\Delta P = P_{a} - P_{b} = [ \frac{X^{2} - 2}{3} ]^{3/2} (\frac{1}{X}) \label{14}$
The rates of conversion for the two species, ka and kb, follow kaPa = kbPb (equilibrium), and because ka = 1/ta and kb = 1/tb, the rate constant follows \ref{15}.
$k_{i} = \frac{1}{2t}(1- \Delta P) \label{15}$
From Eyring's expressions, the Gibbs free energy of activation for each species can be obtained through \ref{16} and \ref{17}.
$\Delta G^{\ddagger }_{a} = \ RT_{c}\ ln(\frac{k_{B}T_{c}}{h\pi \Delta v_{0}} \times \frac{X}{1-\Delta P} ) \label{16}$
$\Delta G^{\ddagger }_{b} = \ RT_{c}\ ln(\frac{k_{B}T_{c}}{h\pi \Delta v_{0}} \times \frac{X}{1+\Delta P} ) \label{17}$
Taking the difference of \ref{16} and \ref{17} gives the difference in activation energy between species a and b (\ref{18}).
$\Delta \Delta G^{\ddagger } = RT_{c}\ ln(\frac{P_{a}}{P_{b}})=RT_{c}\ ln(\frac{1+\Delta P}{1-\Delta P}) \label{18}$
Converting constants will yield the following activation energies in calories per mole (\ref{19} and \ref{20}).
$\Delta G^{\ddagger }_{a} = 4.57T_{c}[10.62\ +\ log(\frac{X}{2\pi (1-\Delta P)}) +\ log(T_{c}/\Delta v_{0})] \label{19}$
$\Delta G^{\ddagger }_{b} = 4.57T_{c}[10.62\ +\ log(\frac{X}{2\pi (1+\Delta P)}) +\ log(T_{c}/\Delta v_{0})] \label{20}$
To obtain the free energies of activation, values of log(X/(2π(1 ∓ ΔP))) need to be plotted against ΔP (the values of Tc and Δv0 being predetermined).
This unequal doublet energetics approximation only gives ΔG at one temperature, and a more rigorous theoretical treatment is needed to give information about ΔS and ΔH.
Example of Determination of Energetic Parameters
Normally ligands such as dipyrido(2,3-a;3′,2′-j)phenazine (dpop’) are tridentate when complexed to transition metal centers. However, dpop’ binds to rhenium in a bidentate manner, with the outer nitrogens alternating between being coordinated and uncoordinated. See Figure $8$ for the structure of Re(CO)3(dpop')Cl. This fluxionality results in the exchange of the aromatic protons on the dpop’ ligand, which can be observed via 1H NMR. Because of the complex nature of the coalescence of doublets, the rate constants at different temperatures were determined via computer simulation (DNMR3, a plugin of Topspin). These spectra are shown in Figure $8$.
The activation parameters can then be obtained by plotting ln(k/T) versus 1/T (see Figure $9$ for the Eyring plot). ΔS‡ can be extracted from the y-intercept, and ΔH‡ can be obtained from the slope of the plot. For this example, ΔH‡, ΔS‡, and ΔG‡ were determined to be 64.9 kJ/mol, 7.88 J/(mol·K), and 62.4 kJ/mol, respectively.
Limitations to the Approach
Though NMR is a powerful technique for determining the energetics of fluxional molecules, it does have one major limitation. If the fluxional process is too fast for the NMR timescale (< 1 ms), or if the conformational change is so slow that the coalescence temperature is not reached, the energetics cannot be calculated. In other words, spectra at coalescence and at no exchange both need to be observable. One is also limited by the capabilities of the available spectrometer. The energetics of very fast fluxionality (metallocenes, PF5, etc.) and very slow fluxionality may not be determinable. Also note that this method does not prove any fluxionality or any mechanism thereof; it only gives a value for the activation energy of the process. As a side note, sometimes the coalescence of NMR peaks is not due to fluxionality, but rather to temperature-dependent chemical shifts.
Introduction to Surface Motions at the Molecular Level
As single molecule imaging methods such as scanning tunneling microscopy (STM), atomic force microscopy (AFM), and transmission electron microscopy (TEM) have developed over the past decades, scientists have gained powerful tools to explore molecular structures and behaviors in previously unknown areas. Among these imaging methods, STM is probably the most suitable for observing detail at the molecular level. STM can operate in a wide range of conditions, provides very high resolution, and is able to manipulate molecular motions with the tip. An interesting early example came from IBM in 1990, in which the STM was used to position individual atoms for the first time, spelling out "I-B-M" in xenon atoms. This work revealed that observation and control of single atoms and molecular motions on surfaces were possible.
The IBM work, and subsequent experiments, relied on the fact that the STM tip always exerts a finite force on an adsorbate atom, containing both van der Waals and electrostatic contributions, which can be utilized for manipulation purposes. By adjusting the position and the voltage of the tip, the interactions between the tip and the target molecule were changed. Therefore, it was possible to apply or release force on a single atom and make it move (Figure \(1\)).
The actual positioning experiment was carried out as follows. The nickel metal substrate was prepared by cycles of argon-ion sputtering, followed by annealing in a partial pressure of oxygen to remove surface carbon and other impurities. After the cleaning process, the sample was cooled to 4 K and imaged with the STM to ensure the quality of the surface. The nickel sample was then doped with xenon. An image of the doped sample was taken under constant-current scanning conditions. Each xenon atom appears as a randomly located, 1.6 Å high bump on the surface (Figure \(2\) a). Under the imaging conditions (tip bias = 0.010 V with tunneling current 10-9 A) the interaction of the xenon with the tip is too weak to cause the position of the xenon atom to be perturbed. To move an atom, the STM tip was placed on top of the atom and the procedure depicted in Figure \(1\) was performed to move it to its target. Repeating this process again and again allowed the researchers to build the structure they desired (Figure \(2\) b and c).
All motions on surfaces at the single molecule level can be described by one of the following modes (or a combination of them):
Sliding
Hopping
Rolling
Pivoting
Although the power of STM imaging has been demonstrated, imaging of the molecules themselves is still often a difficult task. The successful imaging of the IBM work was attributed to the selection of a heavy atom. Other synthetic organic molecules without heavy atoms are much more difficult to image under STM. Determination of the mechanism of molecular motion is another challenge. Besides the imaging methods themselves, other auxiliary methods such as DFT calculations and imaging of properly designed molecules are required to determine the mechanism by which a particular molecule moves across a surface.
Herein, we are particularly interested in surface-rolling molecules, i.e., those that are designed to roll on a surface. It is straightforward to imagine that if we want to construct (and image) surface-rolling molecules, we must think of making highly symmetrical structures. In addition, the magnitudes of the interactions between the molecules and the surfaces have to be adequate; otherwise the molecules will be more susceptible to sliding/hopping on, or sticking to, the surfaces instead of rolling. As a result, only very few molecules are known that can roll and be detected on surfaces.
Surface Rolling of Molecules under the Manipulation of STM Tips
As described above, rolling motions are most likely to be observed for molecules having a high degree of symmetry and suitable interactions between themselves and the surface. C60 is not only a highly symmetrical molecule but is also readily imageable under STM due to its size. These properties together make C60 and its derivatives highly suitable for the study of surface-rolling motion.
The STM imaging of C60 was first carried out at King's College, London. Similar to the atom positioning experiment by IBM, STM tip manipulation was also utilized to achieve C60 displacement. The tip trajectory suggested that a rolling motion accounted for the displacement of C60 on the surface. In order to confirm the hypothesis, the researchers also employed ab initio density functional theory (DFT) calculations with a rolling-model boundary condition (Figure \(3\)). The calculated results supported the experimental observations.
The results provided insights into the dynamical response of covalently bound molecules to manipulation. The sequential breaking and reforming of highly directional covalent bonds resulted in a dynamical molecular response in which bond breaking, rotation, and translation are intimately coupled in a rolling motion, rather than in a sliding or hopping motion.
A triptycene-wheeled dimeric molecule (Figure \(4\)) was also synthesized for studying rolling motion under STM. This "tripod-like" triptycene wheel, unlike the ball-like C60 molecule, also demonstrated a rolling motion on the surface. The two triptycene units were connected via a dialkynyl axle, both to give the desired molecular orientation on the surface and to impart a directional preference to the rolling motion. STM manipulation and imaging were demonstrated, including the rolling mechanism (Figure \(4\)).
Single Molecule Nanocar Under STM Imaging
Another example of single molecule imaging using STM is the single molecule nanocar made by the Tour group at Rice University. The concept of a nanocar initially employed the free rotation of a C-C single bond between a spherical C60 molecule and an alkyne, Figure \(5\). Based on this concept, an “axle” onto which C60 “wheels” are mounted can be connected to a “chassis” to construct the “nanocar”. Nanocars with this design are expected to have a directional movement perpendicular to the axle. Unfortunately, the first generation nanocar (named the “nanotruck”, Figure \(6\)) encountered some difficulties in STM imaging due to its chemical instability and insolubility. Therefore, a new design of nanocar based on an oligo(phenylene ethynylene) (OPE) framework was synthesized (Figure \(7\)).
The newly designed nanocar was studied with STM. When the nanocar was heated to ~200 °C, noticeable displacements of the nanocar were observed in selected images from a 10 min STM experiment (Figure \(8\)). The fact that the nanocar moved only at high temperature was attributed to a relatively strong adhesion force between the fullerene wheels and the underlying gold. The series of images showed both pivotal and translational motions on the surface.
Although literature studies suggested that the C60 molecule rolls on the surface, in the nanocar movement studies it was still not possible to conclude definitively that the nanocar moves on the surface exclusively via a rolling mechanism. Hopping, sliding, and other modes of motion could also be responsible for the movement of the nanocar, since the experiment was carried out at high temperature, making the C60 molecules energetic enough to overcome their interactions with the surface.
To tackle the question of the mode of translation, a trimeric “nano-tricycle” was synthesized. If the movement of the fullerene-wheeled nanocar were based on a hopping or sliding mechanism, the trimer should give observable translational motions like the four-wheeled nanocar; however, if rolling is the operative motion then the nano-tricycle should rotate on an axis, but not translate across the surface. The imaging experiment on the trimer at ~200 °C (Figure \(9\)) yielded very small, insignificant translational displacements in comparison to the four-wheeled nanocar (Figure \(9\)). The trimeric three-wheeled nanocar did show some pivoting motions in the images. This type of motion can be attributed to the directional preferences of the wheels mounted on the trimer causing the molecule to rotate. All the experimental results suggested that a C60-based nanocar moves via a rolling motion rather than by hopping and sliding. In addition, the fact that the thermally driven nanocar only moves at high temperature also suggests that the four C60 wheels have very strong interactions with the surface.
• 7.1: Crystal Structure
In any sort of discussion of crystalline materials, it is useful to begin with a discussion of crystallography: the study of the formation, structure, and properties of crystals. A crystal structure is defined as the particular repeating arrangement of atoms (molecules or ions) throughout a crystal. Structure refers to the internal arrangement of particles and not the external appearance of the crystal. However, these are not entirely independent.
• 7.2: Structures of Element and Compound Semiconductors
A single crystal of either an elemental (e.g., silicon) or compound (e.g., gallium arsenide) semiconductor forms the basis of almost all semiconductor devices. The ability to control the electronic and opto-electronic properties of these materials is based on an understanding of their structure. In addition, the metals and many of the insulators employed within a microelectronic device are also crystalline.
• 7.3: X-ray Crystallography
The significance of this for chemistry is that crystalline solids are easily identifiable once a database has been established. Much like solving a puzzle, crystal structures of heterogeneous compounds can be solved very methodically by comparison of chemical composition and their interactions.
• 7.4: Low Energy Electron Diffraction
Low energy electron diffraction (LEED) is a very powerful technique that allows for the characterization of the surface of materials. Its high surface sensitivity is due to the use of electrons with energies between 20-200 eV, which have wavelengths equal to 2.7 – 0.87 Å (comparable to the atomic spacing). Therefore, the electrons can be elastically scattered easily by the atoms in the first few layers of the sample.
• 7.5: Neutron Diffraction
The first neutron diffraction experiment was carried out in 1945 by Ernest O. Wollan using the Graphite Reactor at Oak Ridge. Along with Clifford Shull, they outlined the principles of the technique. However, the concept that neutrons would diffract like X-rays was first proposed by Dana Mitchell and Philip Powers. They proposed that neutrons have a wave-like nature, which is described by the de Broglie equation.
• 7.6: XAFS
X-ray absorption fine structure (XAFS) spectroscopy includes both X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectroscopies. The difference between both techniques is the area to analyze and the information each technique provides.
• 7.7: Circular Dichroism Spectroscopy and its Application for Determination of Secondary Structure of Optically Active Species
Circular dichroism (CD) spectroscopy is one of the few structure assessment methods that can be utilized as an alternative to, and amplification of, many conventional analysis techniques, with advantages such as rapid data collection and ease of use. Most of the effort and time spent in the advancement of the chemical sciences is devoted to the elucidation and analysis of the structure and composition of synthesized molecules or isolated natural products, rather than their preparation.
• 7.8: Protein Analysis using Electrospray Ionization Mass Spectroscopy
Electrospray ionization-mass spectrometry (ESI-MS) is an analytical method that focuses on macromolecular structural determination. The unique component of ESI-MS is the electrospray ionization. The development of electrospraying, the process of charging a liquid into a fine aerosol, was pioneered in the 1960s by Malcolm Dole.
• 7.9: The Analysis of Liquid Crystal Phases using Polarized Optical Microscopy
Liquid crystals are a state of matter with properties between those of a solid crystal and a common liquid.
07: Molecular and Solid State Structure
In any sort of discussion of crystalline materials, it is useful to begin with a discussion of crystallography: the study of the formation, structure, and properties of crystals. A crystal structure is defined as the particular repeating arrangement of atoms (molecules or ions) throughout a crystal. Structure refers to the internal arrangement of particles and not the external appearance of the crystal. However, these are not entirely independent since the external appearance of a crystal is often related to the internal arrangement. For example, crystals of cubic rock salt (NaCl) are physically cubic in appearance. Only a few of the possible crystal structures are of concern with respect to simple inorganic salts and these will be discussed in detail, however, it is important to understand the nomenclature of crystallography.
Crystallography
Bravais Lattice
The Bravais lattice is the basic building block from which all crystals can be constructed. The concept originated as a topological problem of finding the number of different ways to arrange points in space where each point would have an identical “atmosphere”. That is, each point would be surrounded by an identical set of points as any other point, so that all points would be indistinguishable from each other. Mathematician Auguste Bravais discovered that there were 14 different collections of the groups of points, which are known as Bravais lattices. These lattices fall into seven different "crystal systems”, as differentiated by the relationship between the angles between sides of the “unit cell” and the distance between points in the unit cell. The unit cell is the smallest group of atoms, ions or molecules that, when repeated at regular intervals in three dimensions, will produce the lattice of a crystal system. The “lattice parameter” is the length between two points on the corners of a unit cell. Each of the various lattice parameters are designated by the letters a, b, and c. If two sides are equal, such as in a tetragonal lattice, then the lengths of the two lattice parameters are designated a and c, with b omitted. The angles are designated by the Greek letters α, β, and γ, such that an angle with a specific Greek letter is not subtended by the axis with its Roman equivalent. For example, α is the included angle between the b and c axes.
Table $1$ shows the various crystal systems, while Figure $1$ shows the 14 Bravais lattices. It is important to distinguish the characteristics of each of the individual systems. An example of a material that takes on each of the Bravais lattices is shown in Table $2$.
System Axial Lengths and Angles Unit Cell Geometry
cubic a=b=c, α = β = γ = 90°
tetragonal a = b ≠ c, α = β = γ= 90°
orthorhombic a ≠ b ≠ c, α = β = γ= 90°
rhombohedral a = b = c, α = β = γ ≠ 90°
hexagonal a = b ≠ c, α = β = 90°, γ = 120°
monoclinic a ≠ b ≠ c, α = γ = 90°, β ≠ 90°
triclinic a ≠ b ≠ c, α ≠ β ≠ γ
Table $1$ Geometrical characteristics of the seven crystal systems
Crystal System Example
triclinic K2S2O8
monoclinic As4S4, KNO2
rhombohedral Hg, Sb
hexagonal Zn, Co, NiAs
orthorhombic Ga, Fe3C
tetragonal In, TiO2
cubic Au, Si, NaCl
Table $2$ Examples of elements and compounds that adopt each of the crystal systems.
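The axial relationships in Table $1$ can be encoded directly. The following sketch is purely illustrative (the helper function is not part of any crystallographic library); it assigns a unit cell to one of the seven crystal systems from its lattice parameters:

```python
def crystal_system(a, b, c, alpha, beta, gamma, tol=1e-3):
    """Classify a unit cell (lengths in Å, angles in degrees) into one of the
    seven crystal systems using the relationships listed in Table 1."""
    eq = lambda x, y: abs(x - y) < tol
    ang90 = all(eq(x, 90.0) for x in (alpha, beta, gamma))

    if eq(a, b) and eq(b, c):
        if ang90:
            return "cubic"
        if eq(alpha, beta) and eq(beta, gamma):
            return "rhombohedral"
    if eq(a, b) and not eq(a, c):
        if ang90:
            return "tetragonal"
        if eq(alpha, 90) and eq(beta, 90) and eq(gamma, 120):
            return "hexagonal"
    if ang90:
        return "orthorhombic"
    if eq(alpha, 90) and eq(gamma, 90):
        return "monoclinic"
    return "triclinic"

print(crystal_system(5.431, 5.431, 5.431, 90, 90, 90))   # cubic (e.g., Si)
print(crystal_system(3.190, 3.190, 5.187, 90, 90, 120))  # hexagonal (e.g., GaN)
```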
The cubic lattice is the most symmetrical of the systems. All the angles are equal to 90°, and all the sides are of the same length (a = b = c). Only the length of one of the sides (a) is required to describe this system completely. In addition to simple cubic, the cubic lattice also includes body-centered cubic and face-centered cubic (Figure $1$). Body-centered cubic results from the presence of an atom (or ion) in the center of a cube, in addition to the atoms (ions) positioned at the vertices of the cube. In a similar manner, a face-centered cubic requires, in addition to the atoms (ions) positioned at the vertices of the cube, the presence of atoms (ions) in the center of each of the cube's faces.
The tetragonal lattice has all of its angles equal to 90°, and has two out of the three sides of equal length (a = b). The system also includes body-centered tetragonal (Figure $1$).
In an orthorhombic lattice all of the angles are equal to 90°, while all of its sides are of unequal length. The system needs only to be described by three lattice parameters. This system also includes body-centered orthorhombic, base-centered orthorhombic, and face-centered orthorhombic (Figure $1$).
A base-centered lattice has, in addition to the atoms (ions) positioned at the vertices of the orthorhombic lattice, atoms (ions) positioned on just two opposing faces.
The rhombohedral lattice is also known as trigonal, and has no angles equal to 90°, but all sides are of equal length (a = b = c), thus requiring only one lattice parameter, and all three angles are equal (α = β = γ).
A hexagonal crystal structure has two angles equal to 90°, with the other angle (γ) equal to 120°. For this to happen, the two sides surrounding the 120° angle must be equal (a = b), while the third side (c) is at 90° to the other sides and can be of any length.
The monoclinic lattice has no sides of equal length, but two of the angles are equal to 90°, with the other angle (usually defined as β) being something other than 90°. It is a tilted parallelogram prism with rectangular bases. This system also includes base-centered monoclinic (Figure $2$).
In the triclinic lattice none of the sides of the unit cell are equal, and none of the angles within the unit cell are equal to 90°. The triclinic lattice is chosen such that all the internal angles are either acute or obtuse. This crystal system has the lowest symmetry and must be described by 3 lattice parameters (a, b, and c) and the 3 angles (α, β, and γ).
Atom Positions, Crystal Directions and Miller Indices
Atom Positions and Crystal Axes
The structure of a crystal is defined with respect to a unit cell. As the entire crystal consists of repeating unit cells, this definition is sufficient to represent the entire crystal. Within the unit cell, the atomic arrangement is expressed using coordinates. There are two systems of coordinates commonly in use, which can cause some confusion. Both use a corner of the unit cell as their origin. The first, less-commonly seen system is that of Cartesian or orthogonal coordinates (X, Y, Z). These usually have the units of Angstroms and relate to the distance in each direction between the origin of the cell and the atom. These coordinates may be manipulated in the same fashion as those used with two- or three-dimensional graphs. It is very simple, therefore, to calculate inter-atomic distances and angles given the Cartesian coordinates of the atoms. Unfortunately, the repeating nature of a crystal cannot be expressed easily using such coordinates. For example, consider a cubic cell of dimension 3.52 Å. Pretend that this cell contains an atom that has the coordinates (1.5, 2.1, 2.4). That is, the atom is 1.5 Å away from the origin in the x direction (which coincides with the a cell axis), 2.1 Å in the y (which coincides with the b cell axis) and 2.4 Å in the z (which coincides with the c cell axis). There will be an equivalent atom in the next unit cell along the x-direction, which will have the coordinates (1.5 + 3.52, 2.1, 2.4) or (5.02, 2.1, 2.4). This was a rather simple calculation, as the cell has very high symmetry and so the cell axes, a, b and c, coincide with the Cartesian axes, X, Y and Z. However, consider lower symmetry cells such as triclinic or monoclinic in which the cell axes are not mutually orthogonal. In such cases, expressing the repeating nature of the crystal is much more difficult to accomplish.
Accordingly, atomic coordinates are usually expressed in terms of fractional coordinates, (x, y, z). This coordinate system is coincident with the cell axes (a, b, c) and relates to the position of the atom in terms of the fraction along each axis. Consider the atom in the cubic cell discussion above. The atom was 1.5 Å in the a direction away from the origin. As the a axis is 3.52 Å long, the atom is (1.5/3.52) or 0.43 of the axis away from the origin. Similarly, it is (2.1/3.52) or 0.60 of the b axis and (2.4/3.52) or 0.68 of the c axis. The fractional coordinates of this atom are, therefore, (0.43, 0.60, 0.68). The coordinates of the equivalent atom in the next cell over in the a direction, however, are easily calculated as this atom is simply 1 unit cell away in a. Thus, all one has to do is add 1 to the x coordinate: (1.43, 0.60, 0.68). Such transformations can be performed regardless of the shape of the unit cell. Fractional coordinates, therefore, are used to retain and manipulate crystal information.
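For an orthogonal cell, the conversion between Cartesian and fractional coordinates is a simple division by the cell lengths, as the sketch below (reproducing the 3.52 Å cubic example above) illustrates. For triclinic or monoclinic cells a full transformation matrix would be needed instead; this minimal version assumes orthogonal axes.

```python
# Orthogonal (cubic, tetragonal, or orthorhombic) cell lengths in Å.
a, b, c = 3.52, 3.52, 3.52

def to_fractional(x, y, z):
    """Cartesian (Å) -> fractional coordinates."""
    return (x / a, y / b, z / c)

def to_cartesian(u, v, w):
    """Fractional -> Cartesian (Å) coordinates."""
    return (u * a, v * b, w * c)

frac = to_fractional(1.5, 2.1, 2.4)
print([round(f, 2) for f in frac])                    # [0.43, 0.6, 0.68]

# The equivalent atom one cell over in a is found by adding 1 to x:
print(to_cartesian(frac[0] + 1, frac[1], frac[2]))    # (5.02, 2.1, 2.4)
```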
Crystal Directions
The designation of the individual vectors within any given crystal lattice is accomplished by the use of whole number multipliers of the lattice parameter of the point at which the vector exits the unit cell. The vector is indicated by the notation [hkl], where h, k, and l are reciprocals of the point at which the vector exits the unit cell. The origin of all vectors is defined as [000]. For example, the direction along the a-axis according to this scheme would be [100] because this has a component only in the a-direction and no component along either the b or c axial direction. A vector diagonally along the face defined by the a and b axes would be [110], while going from one corner of the unit cell to the opposite corner would be in the [111] direction. Figure $2$ shows some examples of the various directions in the unit cell. The crystal direction notation is made up of the lowest combination of integers and represents unit distances rather than actual distances. A [222] direction is identical to a [111], so [111] is used. Fractions are not used. For example, a vector that intercepts the center of the top face of the unit cell has the coordinates x = 1/2, y = 1/2, z = 1. All have to be inverted to convert to the lowest combination of integers (whole numbers); i.e., [221] in Figure $2$. Finally, all parallel vectors have the same crystal direction, e.g., the four vertical edges of the cell shown in Figure $2$ all have the crystal direction [hkl] = [001].
Crystal directions may be grouped in families. To avoid confusion there exists a convention in the choice of brackets surrounding the three numbers to differentiate a crystal direction from a family of directions. For a direction, square brackets [hkl] are used to indicate an individual direction. Angle brackets <hkl> indicate a family of directions. A family of directions includes any directions that are equivalent in length and types of atoms encountered. For example, in a cubic lattice, the [100], [010], and [001] directions all belong to the <100> family of planes because they are equivalent. If the cubic lattice were rotated 90°, the a, b, and c directions would remain indistinguishable, and there would be no way of telling on which crystallographic positions the atoms are situated, so the family of directions is the same. In a hexagonal crystal, however, this is not the case, so the [100] and [010] would both be <100> directions, but the [001] direction would be distinct. Finally, negative directions are identified with a bar over the negative number instead of a minus sign.
Crystal Planes
Planes in a crystal can be specified using a notation called Miller indices. The Miller index is indicated by the notation (hkl) where h, k, and l are reciprocals of the intercepts of the plane with the x, y, and z axes. To obtain the Miller indices of a given plane requires the following steps:
1. The plane in question is placed on a unit cell.
2. Its intercepts with each of the crystal axes are then found.
3. The reciprocals of the intercepts are taken.
4. These are multiplied by a scalar to ensure that the result is the simplest ratio of whole numbers.
For example, the face of a lattice that does not intersect the y or z axis would be (100), while a plane along the body diagonal would be the (111) plane. An illustration of this along with the (111) and (110) planes is given in Figure $3$.
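The four steps above translate directly into a short routine. The helper below is illustrative only (not part of any crystallographic package); intercepts are given in units of the cell axes, with float('inf') marking a plane parallel to an axis.

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def miller_indices(intercepts):
    """Return (h, k, l) from axial intercepts given in units of a, b, c."""
    # Step 3: take the reciprocals of the intercepts (0 for a parallel plane).
    recip = [Fraction(0) if x == float('inf')
             else 1 / Fraction(x).limit_denominator()
             for x in intercepts]
    # Step 4: scale to the smallest set of whole numbers.
    denoms = [r.denominator for r in recip]
    lcm = reduce(lambda p, q: p * q // gcd(p, q), denoms)
    ints = [int(r * lcm) for r in recip]
    common = reduce(gcd, [abs(i) for i in ints if i != 0])
    return tuple(i // common for i in ints)

print(miller_indices([1, float('inf'), float('inf')]))  # (1, 0, 0)
print(miller_indices([1, 1, 1]))                        # (1, 1, 1)
print(miller_indices([1, 1, float('inf')]))             # (1, 1, 0)
print(miller_indices([0.5, 0.5, 1]))                    # (2, 2, 1)
```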
As with crystal directions, Miller indices directions may be grouped in families. Individual Miller indices are given in parentheses (hkl), while braces {hkl} are placed around the indices of a family of planes. For example, (001), (100), and (010) are all in the {100} family of planes, for a cubic lattice.
Description of Crystal Structures
Crystal structures may be described in a number of ways. The most common manner is to refer to the size and shape of the unit cell and the positions of the atoms (or ions) within the cell. However, this information is sometimes insufficient to allow for an understanding of the true structure in three dimensions. Consideration of several unit cells, the arrangement of the atoms with respect to each other, the number of other atoms they are in contact with, and the distances to neighboring atoms, often will provide a better understanding. A number of methods are available to describe extended solid-state structures. The most applicable with regard to elemental and compound semiconductors, metals and the majority of insulators is the close packing approach.
Close Packed Structures: Hexagonal Close Packing and Cubic Close Packing
Many crystal structures can be described using the concept of close packing. This concept requires that the atoms (ions) are arranged so as to have the maximum density. In order to understand close packing in three dimensions, the most efficient way for equal sized spheres to be packed in two dimensions must be considered.
The most efficient way for equal sized spheres to be packed in two dimensions is shown in Figure $4$, in which it can be seen that each sphere (the dark gray shaded sphere) is surrounded by, and is in contact with, six other spheres (the light gray spheres in Figure $4$). It should be noted that contact with six other spheres is the maximum possible if the spheres are the same size, although lower density packing is possible. Close packed layers are formed by repetition of this arrangement to give an infinite sheet. Within these close packed layers, three close packed rows are present, shown by the dashed lines in Figure $4$.
The most efficient way for equal sized spheres to be packed in three dimensions is to stack close packed layers on top of each other to give a close packed structure. There are two simple ways in which this can be done, resulting in either a hexagonal or cubic close packed structures.
Hexagonal Close Packed
If two close packed layers A and B are placed in contact with each other so as to maximize the density, then the spheres of layer B will rest in the hollow (vacancy) between three of the spheres in layer A. This is demonstrated in Figure $5$. Atoms in the second layer, B (shaded light gray), may occupy one of two possible positions (Figure $5$ a or b) but not both together or a mixture of each. If a third layer is placed on top of layer B such that it exactly covers layer A, subsequent placement of layers will result in the following sequence ...ABABAB.... This is known as hexagonal close packing or hcp.
The hexagonal close packed cell is a derivative of the hexagonal Bravais lattice system (Figure $6$) with the addition of an atom inside the unit cell at the coordinates (1/3,2/3,1/2). The basal plane of the unit cell coincides with the close packed layers (Figure $6$). In other words the close packed layer makes up the {001} family of crystal planes.
The “packing fraction” in a hexagonal close packed cell is 74.05%; that is, 74.05% of the total volume is occupied. The packing fraction or density is derived by assuming that each atom is a hard sphere in contact with its nearest neighbors. Determination of the packing fraction is accomplished by calculating the number of whole spheres per unit cell (2 in hcp), the volume occupied by these spheres, and a comparison with the total volume of a unit cell. The number gives an idea of how “open” or filled a structure is. By comparison, the packing fraction for body-centered cubic (Figure $5$) is 68% and for diamond cubic (an important semiconductor structure to be described later) it is 34%.
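These hard-sphere packing fractions can be verified with a few lines of arithmetic, taking the lattice parameter as unity and setting the sphere radius from the nearest-neighbor contact in each structure (a minimal sketch):

```python
import math

def packing_fraction(atoms_per_cell, radius, cell_volume=1.0):
    """Fraction of the cell volume occupied by hard spheres (a = 1)."""
    return atoms_per_cell * (4.0 / 3.0) * math.pi * radius**3 / cell_volume

fcc     = packing_fraction(4, math.sqrt(2) / 4)   # contact along the face diagonal
bcc     = packing_fraction(2, math.sqrt(3) / 4)   # contact along the body diagonal
diamond = packing_fraction(8, math.sqrt(3) / 8)   # contact along 1/4 of the body diagonal
# hcp has the same packing fraction as fcc (both are close packed): 74.05%

print(f"fcc/hcp: {fcc:.2%}, bcc: {bcc:.2%}, diamond cubic: {diamond:.2%}")
```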
Cubic Close Packed: Face-centered Cubic
In a similar manner to the generation of the hexagonal close packed structure, two close packed layers are stacked (Figure $7$); however, the third layer (C) is placed such that it does not exactly cover layer A, while sitting in a set of troughs in layer B (Figure $7$). Upon repetition, the packing sequence will be ...ABCABCABC.... This is known as cubic close packing or ccp.
The unit cell of the cubic close packed structure is actually that of a face-centered cubic (fcc) Bravais lattice. In the fcc lattice the close packed layers constitute the {111} planes. As with the hcp lattice, the packing fraction in a cubic close packed (fcc) cell is 74.05%. Since face centered cubic or fcc is more commonly used in preference to cubic close packed (ccp) in describing the structures, the former will be used throughout this text.
Coordination Number
The coordination number of an atom or ion within an extended structure is defined as the number of nearest neighbor atoms (ions of opposite charge) that are in contact with it. A slightly different definition is often used for atoms within individual molecules: the number of donor atoms associated with the central atom or ion. However, this distinction is rather artificial, and both can be employed.
The coordination numbers for metal atoms in a molecule or complex are commonly 4, 5, and 6, but all values from 2 to 9 are known and a few examples of higher coordination numbers have been reported. In contrast, common coordination numbers in the solid state are 3, 4, 6, 8, and 12. For example, the atom in the center of body-centered cubic lattice has a coordination number of 8, because it touches the eight atoms at the corners of the unit cell, while an atom in a simple cubic structure would have a coordination number of 6. In both fcc and hcp lattices each of the atoms have a coordination number of 12.
Octahedral and Tetrahedral Vacancies
As was mentioned above, the packing fraction in both fcc and hcp cells is 74.05%, leaving 25.95% of the volume unfilled. The unfilled lattice sites (interstices) between the atoms in a cell are called interstitial sites or vacancies. The shape and relative size of these sites is important in controlling the position of additional atoms. In both fcc and hcp cells most of the free space between the atoms lies within two different types of sites known as octahedral sites and tetrahedral sites. The difference between the two lies in their “coordination number”, or the number of atoms surrounding each site. Tetrahedral sites (vacancies) are surrounded by four atoms arranged at the corners of a tetrahedron. Similarly, octahedral sites are surrounded by six atoms which make up the apices of an octahedron. For a given close packed lattice an octahedral vacancy will be larger than a tetrahedral vacancy.
Within a face centered cubic lattice, the eight tetrahedral sites are positioned within the cell, at the general fractional coordinate of (n/4,n/4,n/4) where n = 1 or 3, e.g., (1/4,1/4,1/4), (1/4,1/4,3/4), etc. The octahedral sites are located at the center of the unit cell (1/2,1/2,1/2), as well as at each of the edges of the cell, e.g., (1/2,0,0). In the hexagonal close packed system, the tetrahedral sites are at (0,0,3/8) and (1/3,2/3,7/8), and the octahedral sites are at (1/3,1/3,1/4) and all symmetry equivalent positions.
Important Structure Types
The majority of crystalline materials do not have a structure that fits into the one atom per site simple Bravais lattice. A number of other important crystal structures are found; however, only a few of these crystal structures occur for the elemental and compound semiconductors, and the majority of these are derived from fcc or hcp lattices. Each structural type is generally defined by an archetype, a material (often a naturally occurring mineral) which has the structure in question and to which all the similar materials are related. With regard to commonly used elemental and compound semiconductors the important structures are diamond, zinc blende, wurtzite, and to a lesser extent chalcopyrite. However, rock salt, β-tin, cinnabar and cesium chloride are observed as high pressure or high temperature phases and are therefore also discussed. The following provides a summary of these structures. Details of the full range of solid-state structures are given elsewhere.
Diamond Cubic
The diamond cubic structure consists of two interpenetrating face-centered cubic lattices, with one offset 1/4 of a cube along the cube diagonal. It may also be described as face centered cubic lattice in which half of the tetrahedral sites are filled while all the octahedral sites remain vacant. The diamond cubic unit cell is shown in Figure $8$. Each of the atoms (e.g., C) is four coordinate, and the shortest interatomic distance (C-C) may be determined from the unit cell parameter (a).
$C-C\ =\ a \frac{\sqrt{3} }{4} \approx \ 0.433 a \label{1}$
Zinc Blende
This is a binary phase (ME) and is named after its archetype, a common mineral form of zinc sulfide (ZnS). As with the diamond lattice, zinc blende consists of the two interpenetrating fcc lattices. However, in zinc blende one lattice consists of one of the types of atoms (Zn in ZnS), and the other lattice is of the second type of atom (S in ZnS). It may also be described as face centered cubic lattice of S atoms in which half of the tetrahedral sites are filled with Zn atoms. All the atoms in a zinc blende structure are 4-coordinate. The zinc blende unit cell is shown in Figure $9$. A number of inter-atomic distances may be calculated for any material with a zinc blende unit cell using the lattice parameter (a).
$Zn-S\ =\ a \frac{\sqrt{3} }{4} \approx \ 0.433 a \label{2}$
$Zn-Zn \ =\ S-S\ = \frac{a}{\sqrt{2}} \approx 0.707\ a \label{3}$
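As an example, applying the relationships above to GaAs (zinc blende, a = 5.653 Å, see the following section) gives the nearest-neighbor distances directly; the short calculation below is just an evaluation of those formulae:

```python
import math

a = 5.653                                  # GaAs lattice parameter in Å
d_cation_anion = a * math.sqrt(3) / 4      # Ga-As distance
d_like_atoms   = a / math.sqrt(2)          # Ga-Ga or As-As distance

print(f"Ga-As = {d_cation_anion:.3f} Å, Ga-Ga = As-As = {d_like_atoms:.3f} Å")
# Ga-As ≈ 2.448 Å, Ga-Ga ≈ 3.997 Å
```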
Chalcopyrite
The mineral chalcopyrite CuFeS2 is the archetype of this structure. The structure is tetragonal (a = b ≠ c, α = β = γ = 90°), and is essentially a superlattice of the zinc blende structure. Thus, it is easiest to imagine that the chalcopyrite lattice is made up of a lattice of sulfur atoms in which the tetrahedral sites are filled in layers, ...FeCuCuFe..., etc. (Figure $10$). In such an idealized structure c = 2a; however, this is not true of all materials with chalcopyrite structures.
Rock Salt
As its name implies the archetypal rock salt structure is NaCl (table salt). In common with the zinc blende structure, rock salt consists of two interpenetrating face-centered cubic lattices. However, the second lattice is offset 1/2a along the unit cell axis. It may also be described as face centered cubic lattice in which all of the octahedral sites are filled, while all the tetrahedral sites remain vacant, and thus each of the atoms in the rock salt structure are 6-coordinate. The rock salt unit cell is shown in Figure $11$. A number of inter-atomic distances may be calculated for any material with a rock salt structure using the lattice parameter (a).
$Na-Cl\ =\ \frac{a}{2} \approx 0.5 a \label{4}$
$Na-Na \ =\ Cl-Cl \ =\ \frac{a}{\sqrt{2}} \approx 0.707\ a \label{5}$
Cinnabar
Cinnabar, named after the archetype mercury sulfide, HgS, is a distorted rock salt structure in which the resulting cell is rhombohedral (trigonal) with each atom having a coordination number of six.
Wurtzite
This is a hexagonal form of the zinc sulfide. It is identical in the number of and types of atoms, but it is built from two interpenetrating hcp lattices as opposed to the fcc lattices in zinc blende. As with zinc blende all the atoms in a wurtzite structure are 4-coordinate. The wurtzite unit cell is shown in Figure $12$. A number of inter atomic distances may be calculated for any material with a wurtzite cell using the lattice parameter (a).
$Zn-S\ =\ a \sqrt{3/8 } \ =\ 0.612\ a\ = \frac{3 c}{8} \ =\ 0.375\ c \label{6}$
$Zn- Zn \ =\ S-S\ =\ a\ \approx \ 0.612\ c \label{7}$
However, it should be noted that these formulae do not necessarily apply when the ratio c/a is different from the ideal value of 1.633.
Cesium Chloride
The cesium chloride structure is found in materials with large cations and relatively small anions. It has a simple (primitive) cubic cell (Figure $13$) with a chloride ion at the corners of the cube and the cesium ion at the body center. The coordination numbers of both Cs+ and Cl- are 8, with the inter-atomic distances determined from the cell lattice constant (a).
$Cs-Cl\ =\ \frac{a \sqrt{3} }{2} \approx 0.866a \label{8}$
$Cs-Cs \ =\ Cl-Cl\ = a \label{9}$
β-Tin
The room temperature allotrope of tin is β-tin or white tin. It has a tetragonal structure, in which each tin atom has four nearest neighbors (Sn-Sn = 3.016 Å) arranged in a very flattened tetrahedron, and two next nearest neighbors (Sn-Sn = 3.175 Å). The overall structure of β-tin consists of fused hexagons, each being linked to its neighbor via a four-membered Sn4 ring.
Defects in Crystalline Solids
Up to this point we have only been concerned with ideal structures for crystalline solids in which each atom occupies a designated point in the crystal lattice. Unfortunately, defects ordinarily exist in equilibrium between the crystal lattice and its environment. These defects are of two general types: point defects and extended defects. As their names imply, point defects are associated with a single crystal lattice site, while extended defects occur over a greater range.
Point Defects: "Too Many or Too Few" or "Just Plain Wrong"
Point defects have a significant effect on the properties of a semiconductor, so it is important to understand the classes of point defects and the characteristics of each type. Figure $13$ summarizes various classes of native point defects, however, they may be divided into two general classes; defects with the wrong number of atoms (deficiency or surplus) and defects where the identity of the atoms is incorrect.
Interstitial Impurity
An interstitial impurity occurs when an extra atom is positioned in a lattice site that should be vacant in an ideal structure (Figure $13$ b). Since all the adjacent lattice sites are filled the additional atom will have to squeeze itself into the interstitial site, resulting in distortion of the lattice and alteration in the local electronic behavior of the structure. Small atoms, such as carbon, will prefer to occupy these interstitial sites. Interstitial impurities readily diffuse through the lattice via interstitial diffusion, which can result in a change of the properties of a material as a function of time. Oxygen impurities in silicon generally are located as interstitials.
Vacancies
The converse of an interstitial impurity is when there are not enough atoms in a particular area of the lattice. These are called vacancies. Vacancies exist in any material above absolute zero and increase in concentration with temperature. In the case of compound semiconductors, vacancies can be either cation vacancies (Figure $13$ c) or anion vacancies (Figure $13$ d), depending on what type of atom are “missing”.
Substitution
Substitution of various atoms into the normal lattice structure is common, and used to change the electronic properties of both compound and elemental semiconductors. Any impurity element that is incorporated during crystal growth can occupy a lattice site. Depending on the impurity, substitution defects can greatly distort the lattice and/or alter the electronic structure. In general, cations will try to occupy cation lattice sites (Figure $13$ e), and anions will occupy the anion sites (Figure $13$ f). For example, a zinc impurity in GaAs will occupy a gallium site, if possible, while sulfur, selenium, and tellurium atoms would all try to substitute for an arsenic. Some impurities will occupy either site indiscriminately, e.g., Si and Sn occupy both Ga and As sites in GaAs.
Antisite Defects
Antisite defects are a particular form of substitution defect, and are unique to compound semiconductors. An antisite defect occurs when a cation is misplaced on an anion lattice site or vice versa (Figure $13$ g and h). Depending on the arrangement these are designated as either AB antisite defects or BA antisite defects. For example, if an arsenic atom is on a gallium lattice site the defect would be an AsGa defect. Antisite defects involve fitting atoms of a different size than the rest of the lattice into a lattice site, and therefore this often results in a localized distortion of the lattice. In addition, cations and anions will have a different number of electrons in their valence shells, so this substitution will alter the local electron concentration and the electronic properties of this area of the semiconductor.
Extended Defects: Dislocations in a Crystal Lattice
Extended defects may be created either during crystal growth or as a consequence of stress in the crystal lattice. The plastic deformation of crystalline solids does not occur such that all bonds along a plane are broken and reformed simultaneously. Instead, the deformation occurs through a dislocation in the crystal lattice. Figure shows a schematic representation of a dislocation in a crystal lattice. Two features of this type of dislocation are the presence of an extra crystal plane, and a large void at the dislocation core. Impurities tend to segregate to the dislocation core in order to relieve strain from their presence.
Epitaxy
Epitaxy is a transliteration of two Greek words: epi, meaning "upon", and taxis, meaning "ordered". With respect to crystal growth it applies to the process of growing thin crystalline layers on a crystal substrate. In epitaxial growth, there is a precise crystal orientation of the film in relation to the substrate. The growth of epitaxial films can be done by a number of methods including molecular beam epitaxy, atomic layer epitaxy, and chemical vapor deposition, all of which will be described later.
Epitaxy of the same material, such as a gallium arsenide film on a gallium arsenide substrate, is called homoepitaxy, while epitaxy where the film and substrate material are different is called heteroepitaxy. Clearly, in homoepitaxy, the substrate and film will have the identical structure; however, in heteroepitaxy, it is important to employ where possible a substrate with the same structure and similar lattice parameters. For example, zinc selenide (zinc blende, a = 5.668 Å) is readily grown on gallium arsenide (zinc blende, a = 5.653 Å). Alternatively, epitaxial crystal growth can occur where there exists a simple relationship between the structures of the substrate and crystal layer, such as is observed between Al2O3 (100) on Si (100). Whichever route is chosen, a close match in the lattice parameters is required; otherwise, the strains induced by the lattice mismatch result in distortion of the film and formation of dislocations. If the mismatch is significant epitaxial growth is not energetically favorable, causing a textured film or polycrystalline untextured film to be grown. As a general rule of thumb, epitaxy can be achieved if the lattice parameters of the two materials are within about 5% of each other. For good quality epitaxy, this should be less than 1%. The larger the mismatch, the larger the strain in the film. As the film gets thicker and thicker, it will try to relieve the strain in the film, which could include the loss of epitaxy or the growth of dislocations. It is important to note that the <100> directions of a film must be parallel to the <100> direction of the substrate. In some cases, such as Fe on MgO, the [111] direction is parallel to the substrate [100]. The epitaxial relationship is specified by giving first the plane in the film that is parallel to the substrate [100].
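The lattice mismatch used in these rules of thumb is simply the fractional difference between the film and substrate lattice parameters. A minimal sketch, using the ZnSe/GaAs pair quoted above and Si/GaAs purely as an illustrative larger mismatch:

```python
def mismatch(a_film, a_substrate):
    """Lattice mismatch f = (a_film - a_substrate) / a_substrate, in percent."""
    return 100.0 * (a_film - a_substrate) / a_substrate

# ZnSe on GaAs (both zinc blende): well under the ~1% target for good epitaxy.
print(f"ZnSe/GaAs: {mismatch(5.668, 5.653):+.2f}%")

# Si on GaAs: roughly -4%, close to the ~5% rule-of-thumb limit for any epitaxy.
print(f"Si/GaAs:   {mismatch(5.431, 5.653):+.2f}%")
```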
A single crystal of either an elemental (e.g., silicon) or compound (e.g., gallium arsenide) semiconductor forms the basis of almost all semiconductor devices. The ability to control the electronic and opto-electronic properties of these materials is based on an understanding of their structure. In addition, the metals and many of the insulators employed within a microelectronic device are also crystalline.
Group IV (14) Elements
Each of the semiconducting phases of the group IV (14) elements, C (diamond), Si, Ge, and α-Sn, adopts the diamond cubic structure (Figure $1$). Their lattice constants (a, Å) and densities (ρ, g/cm3) are given in Table $1$.
Table $1$: Lattice parameters and densities (measured at 298 K) for the diamond cubic forms of the group IV (14) elements.
Element Lattice Parameter, a (Å) Density (g/cm3)
carbon (diamond) 3.56683(1) 3.51525
silicon 5.4310201(3) 2.319002
germanium 5.657906(1) 5.3234
tin (α-Sn) 6.4892(1)
As would be expected, the lattice parameters increase in the order C < Si < Ge < α-Sn. Silicon and germanium form a continuous series of solid solutions with gradually varying parameters. It is worth noting the high degree of accuracy with which the lattice parameters are known for high purity crystals of these elements. In addition, it is important to note the temperature at which structural measurements are made, since the lattice parameters are temperature dependent (Figure $1$). The lattice constant (a), in Å, for high purity silicon may be calculated for any temperature (T) over the temperature range 293 - 1073 K by the formula shown below.
$a_{T}\ =\ 5.4304\ +\ 1.8138 \times 10^{-5}\ (T- 298.15\ K)\ +\ 1.542 \times 10^{-9}\ (T-298.15\ K)^{2} \label{1}$
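Evaluating this polynomial is straightforward; the short function below is simply a sketch of the formula above and returns the silicon lattice parameter at a given temperature within the stated range:

```python
def si_lattice_parameter(T):
    """Lattice parameter of high-purity Si (Å) for 293 K < T < 1073 K,
    using the polynomial given above."""
    dT = T - 298.15
    return 5.4304 + 1.8138e-5 * dT + 1.542e-9 * dT**2

print(f"a(298.15 K) = {si_lattice_parameter(298.15):.4f} Å")   # 5.4304 Å
print(f"a(1000 K)   = {si_lattice_parameter(1000.0):.4f} Å")   # ~5.4439 Å
```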
Even though the diamond cubic forms of Si and Ge are the only forms of direct interest to semiconductor devices, each exists in numerous crystalline high pressure and meta-stable forms. These are described along with their interconversions, in Table $2$.
Table $2$: High pressure and metastable phases of silicon and germanium.
Phase Structure Remarks
Si I diamond cubic stable at normal pressure
Si II grey tin structure formed from Si I or Si V above 14 GPa
Si III cubic metastable, formed from Si II above 10 GPa
Si IV hexagonal
Si V unidentified stable above 34 GPa, formed from Si II above 16 GPa
Si VI hexagonal close packed stable above 45 GPa
Ge I diamond cubic low-pressure phase
Ge II β-tin structure formed from Ge I above 10 GPa
Ge III tetragonal formed by quenching Ge II at low pressure
Ge IV body centered formed by quenching Ge II to 1 atm at 200 K
Group III-V (13-15) Compounds
The stable phases for the arsenides, phosphides and antimonides of aluminum, gallium and indium all exhibit zinc blende structures (Figure $3$). In contrast, the nitrides are found as wurtzite structures (e.g., Figure $4$). The structure, lattice parameters, and densities of the III-V compounds are given in Table $3$. It is worth noting that contrary to expectation the lattice parameter of the gallium compounds is smaller than their aluminum homolog; for GaAs a = 5.653 Å; AlAs a = 5.660 Å. As with the group IV elements the lattice parameters are highly temperature dependent; however, additional variation arises from any deviation from absolute stoichiometry. These effects are shown in Figure $4$.
Table $3$ Lattice parameters and densities (measured at 298 K) for the III-V (13-15) compound semiconductors. Estimated standard deviations given in parentheses.
Compound Structure Lattice Parameter (Å) Density (g/cm3)
AlN wurtzite a = 3.11(1), c = 4.98(1) 3.255
AlP zinc blende a = 5.4635(4) 2.40(1)
AlAs zinc blende a= 5.660 3.760
AlSb zinc blende a = 6.1355(1) 4.26
GaN wurtzite a = 3.190, c=5.187
GaP zinc blende a= 5.4505(2) 4.138
GaAs zinc blende a = 5.65325(2) 5.3176(3)
InN wurtzite a= 3.5446, c= 5.7034 6.81
InP zinc blende a= 5.868(1) 4.81
InAs zinc blende a= 6.0583 5.667
InSb zinc blende a= 6.47937 5.7747(4)
The similarity of these structures allows for a wide range of solid solutions to be formed between III-V compounds in almost any combination. Two classes of ternary alloys are formed: IIIx-III1-x-V (e.g., Alx-Ga1-x-As) and III-V1-x-Vx (e.g., Ga-As1-x-Px), while quaternary alloys of the type IIIx-III1-x-Vy-V1-y allow for the growth of materials with similar lattice parameters but a broad range of band gaps. A very important ternary alloy, especially in optoelectronic applications, is Alx-Ga1-x-As, and its lattice parameter (a) is directly related to the composition (x).
$a\ =\ 5.6533\ +\ 0.0078\ x \nonumber$
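This linear (Vegard-type) relationship is easily evaluated for any composition; the snippet below is just an evaluation of the expression above:

```python
def algaas_lattice_parameter(x):
    """Lattice parameter (Å) of Al(x)Ga(1-x)As from the linear relationship above."""
    return 5.6533 + 0.0078 * x

for x in (0.0, 0.3, 1.0):
    print(f"x = {x:.1f}: a = {algaas_lattice_parameter(x):.4f} Å")
# x = 0 corresponds to GaAs (5.6533 Å); x = 1 corresponds to AlAs (5.6611 Å)
```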
Not all of the III-V compounds have well characterized high-pressure phases; however, in each case where a high-pressure phase is observed the coordination number of both the group III and group V element increases from four to six. Thus, AlP undergoes a zinc blende to rock salt transformation at high pressure above 170 kbar, while AlSb and GaAs form orthorhombic distorted rock salt structures above 77 and 172 kbar, respectively. An orthorhombic structure is proposed for the high-pressure form of InP (>133 kbar). Indium arsenide (InAs) undergoes two-phase transformations. The zinc blende structure is converted to a rock salt structure above 77 kbar, which in turn forms a β-tin structure above 170 kbar.
Group II-VI (12-16) Compounds
The structures of the II-VI compound semiconductors are less predictable than those of the III-V compounds (above), and while zinc blende structure exists for almost all of the compounds there is a stronger tendency towards the hexagonal wurtzite form. In several cases the zinc blende structure is observed under ambient conditions, but may be converted to the wurtzite form upon heating. In general the wurtzite form predominates with the smaller anions (e.g., oxides), while the zinc blende becomes the more stable phase for the larger anions (e.g., tellurides). One exception is mercury sulfide (HgS) that is the archetype for the trigonal cinnabar phase. Table $5$ lists the stable phase of the chalcogenides of zinc, cadmium and mercury, along with their high temperature phases where applicable. Solid solutions of the II-VI compounds are not as easily formed as for the III-V compounds; however, two important examples are ZnSxSe1-x and CdxHg1-xTe.
Compound Structure Lattice Parameter (Å) Density (g/cm3)
ZnS zinc blende a= 5.410 4.075
wurtzite a = 3.822, c= 6.260 4.087
ZnSe zinc blende a = 5.668 5.27
ZnTe zinc blende a = 6.10 5.636
CdS wurtzite a = 4.136, c = 6.714 4.82
CdSe wurtzite a = 4.300, c = 7.011 5.81
CdTe zinc blende a = 6.482 5.87
HgS cinnabar a = 4.149, c = 9.495
zinc blende a = 5.851 7.73
HgSe zinc blende a = 6.085 8.25
HgTe zinc blende a = 6.46 8.07
Table $5$ Lattice parameters and densities (measured at 298 K) for the II-VI (12-16) compound semiconductors.
The zinc chalcogenides all transform to a cesium chloride structure under high pressures, while the cadmium compounds all form rock salt high-pressure phases (Figure $6$). Mercury selenide (HgSe) and mercury telluride (HgTe) convert to the mercury sulfide archetype structure, cinnabar, at high pressure.
I-III-VI2 (11-13-16) Compounds
Nearly all I-III-VI2 compounds at room temperature adopt the chalcopyrite structure (Figure $7$). The cell constants and densities are given in Table $6$. Although there are few reports of high temperature or high-pressure phases, AgInS2 has been shown to exist as a high temperature orthorhombic polymorph (a = 6.954, b = 8.264, and c = 6.683 Å), and AgInTe2 forms a cubic phase at high pressures.
Compound Lattice Parameter a (Å) Lattice Parameter c (Å) Density (g/cm3)
CuAlS2 5.32 10.430 3.45
CuAlSe2 5.61 10.92 4.69
CuAlTe2 5.96 11.77 5.47
CuGaS2 5.35 10.46 4.38
CuGaSe2 5.61 11.00 5.57
CuGaTe2 6.00 11.93 5.95
CuInS2 5.52 11.08 4.74
CuInSe2 5.78 11.55 5.77
CuInTe2 6.17 12.34 6.10
AgAlS2 6.30 11.84 6.15
AgGaS2 5.75 10.29 4.70
AgGaSe2 5.98 10.88 5.70
AgGaTe2 6.29 11.95 6.08
AgInS2 5.82 11.17 4.97
AgInSe2 6.095 11.69 5.82
AgInTe2 6.43 12.59 6.96
Table $6$ Chalcopyrite lattice parameters and densities (measured at 298 K) for the I-III-VI compound semiconductors. Lattice parameters for tetragonal cell.
Of the I-III-VI2 compounds, the copper indium chalcogenides (CuInE2) are certainly the most studied for their application in solar cells. One of the advantages of the copper indium chalcogenide compounds is the formation of solid solutions (alloys) of the formula CuInE2-xE'x, where the composition variable (x) varies from 0 to 2. The CuInS2-xSex and CuInSe2-xTex systems have also been examined, as has the CuGayIn1-yS2-xSex quaternary system. As would be expected from a consideration of the relative ionic radii of the chalcogenides, the lattice parameters of the CuInS2-xSex alloy should increase with increased selenium content. Vegard's law requires the lattice constant of a solid solution of two semiconductors to vary linearly with composition (e.g., as is observed for AlxGa1-xAs); however, the variation of the tetragonal lattice constants (a and c) with composition for CuInS2-xSex is best described by the parabolic relationships.
$a\ =\ 5.532\ +\ 0.0801x\ +\ 0.026 x^{2} \nonumber$
$c\ =\ 11.156\ +\ 0.1204x\ +\ 0.0611 x^{2} \nonumber$
A similar relationship is observed for the CuInSe2-xTex alloys.
$a\ =\ 5.783\ +\ 0.1560 x\ +\ 0.0212x^{2} \nonumber$
$c\ =\ 11.628\ +\ 0.3340x\ +\ 0.0277x^{2} \nonumber$
The large difference in ionic radii between S and Te (0.37 Å) prevents formation of solid solutions in the CuInS2-xTex system, however, the single alloy CuInS1.5Te0.5 has been reported.
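The parabolic (bowing) relations above are simple enough to evaluate directly; the following minimal Python sketch does so, with illustrative function names, and its end-member values can be compared against the corresponding entries in Table $6$.

```python
# Sketch: tetragonal lattice constants (Å) of CuIn chalcogenide alloys as a
# function of the composition variable x, using the parabolic fits quoted above.

def cuins_se_lattice(x):
    """(a, c) for CuInS2-xSex, with 0 <= x <= 2."""
    a = 5.532 + 0.0801 * x + 0.0260 * x**2
    c = 11.156 + 0.1204 * x + 0.0611 * x**2
    return a, c

def cuinse_te_lattice(x):
    """(a, c) for CuInSe2-xTex, with 0 <= x <= 2."""
    a = 5.783 + 0.1560 * x + 0.0212 * x**2
    c = 11.628 + 0.3340 * x + 0.0277 * x**2
    return a, c

# End members: x = 0 recovers the pure ternary compounds.
print(cuins_se_lattice(0))   # (5.532, 11.156) -- compare the CuInS2 entry in Table 6
print(cuins_se_lattice(2))   # (~5.80, ~11.64) -- compare the CuInSe2 entry in Table 6
```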
Orientation Effects
Once single crystals of high purity silicon or gallium arsenide are produced they are cut into wafers such that the exposed face of these wafers is either the crystallographic {100} or {111} plane. The relative structure of these surfaces is important with respect to oxidation, etching and thin film growth. These processes are orientation-sensitive; that is, they depend on the direction in which the crystal slice is cut.
Atom Density and Dangling Bonds
The principal planes in a crystal may be differentiated in a number of ways; however, the atom and/or bond density are useful in predicting much of the chemistry of semiconductor surfaces. Since both silicon and gallium arsenide are fcc structures and the {100} and {111} are the only technologically relevant surfaces, discussions will be limited to fcc {100} and {111}.
The atom density of a surface may be defined as the number of atoms per unit area. Figure $8$ shows a schematic view of the {111} and {100} planes in a fcc lattice. The {111} plane consists of a hexagonal close packed array in which the crystal directions within the plane are oriented at 60° to each other. The hexagonal packing and the orientation of the crystal directions are indicated in Figure $8$ b as an overlaid hexagon. Given that the intra-planar inter-atomic distance may be defined as a function of the lattice parameter, the area of this hexagon may be readily calculated. For example, in the case of silicon, the hexagon has an area of 38.30 Å2. The number of atoms within the hexagon is three: the atom in the center plus 1/3 of each of the six atoms at the vertices of the hexagon (each of the atoms at the hexagon's vertices is shared by three other adjacent hexagons). Thus, the atom density of the {111} plane is calculated to be 0.0783 Å-2. Similarly, the atom density of the {100} plane may be calculated. The {100} plane consists of a square array in which the crystal directions within the plane are oriented at 90° to each other. Since the square is coincident with one of the faces of the unit cell the area of the square may be readily calculated. For example, in the case of silicon, the square has an area of 29.49 Å2. The number of atoms within the square is 2: the atom in the center plus 1/4 of each of the four atoms at the vertices of the square (each of the atoms at the corners of the square is shared by four other adjacent squares). Thus, the atom density of the {100} plane is calculated to be 0.0678 Å-2. While these values for the atom density are specific for silicon, their ratio is constant for all diamond cubic and zinc blende structures: {100}:{111} = 1:1.155. In general, the fewer dangling bonds the more stable a surface structure.
An atom inside a crystal of any material will have a coordination number (n) determined by the structure of the material. For example, all atoms within the bulk of a silicon crystal will be in a tetrahedral four-coordinate environment (n = 4). However, at the surface of a crystal the atoms will not make their full complement of bonds. Each atom will therefore have fewer nearest neighbors than an atom within the bulk of the material. The missing bonds are commonly called dangling bonds. While this description is not particularly accurate it is, however, widely employed and as such will be used herein. The number of dangling bonds may be defined as the difference between the ideal coordination number (determined by the bulk crystal structure) and the actual coordination number as observed at the surface.
Figure $9$ shows a section of the {111} surface of a diamond cubic lattice viewed perpendicular to the {111} plane. The atoms within the bulk have a coordination number of four. In contrast, the atoms at the surface (e.g., the atom shown in blue in Figure $9$) are each bonded to just three other atoms (the atoms shown in red in Figure $9$); thus each surface atom has one dangling bond. As can be seen from Figure $10$, which shows the atoms at the {100} surface viewed perpendicular to the {100} plane, each atom at the surface (e.g., the atom shown in blue in Figure $10$) is only coordinated to two other atoms (the atoms shown in red in Figure $10$), leaving two dangling bonds per atom. It should be noted that the same number of dangling bonds is found for the {111} and {100} planes of a zinc blende lattice. The ratio of dangling bonds for the {100} and {111} planes of all diamond cubic and zinc blende structures is {100}:{111} = 2:1. Furthermore, since the atom densities of each plane are known, the ratio of the dangling bond densities is determined to be {100}:{111} = 1:0.577.
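The atom-density and dangling-bond-density figures quoted above can be reproduced with a few lines of Python. The sketch below is a minimal illustration: it assumes the conventional silicon lattice parameter a ≈ 5.431 Å (a standard literature value, not quoted in the text) and uses the plane areas and atom counts derived in the preceding paragraphs.

```python
import math

a = 5.431  # Si lattice parameter in angstroms (standard literature value, assumed here)

# Atom densities (atoms per Å^2) of the low-index planes of a diamond cubic /
# zinc blende (fcc-based) lattice, following the geometry described in the text:
# {100}: 2 atoms per face of area a^2; {111}: 4 atoms per area sqrt(3)*a^2.
density_100 = 2.0 / a**2
density_111 = 4.0 / (math.sqrt(3) * a**2)

# Dangling bonds per surface atom: 2 for {100}, 1 for {111}.
db_density_100 = 2 * density_100
db_density_111 = 1 * density_111

print(round(density_100, 4), round(density_111, 4))   # ~0.0678 and ~0.0783 Å^-2
print(round(density_111 / density_100, 3))            # ~1.155, i.e. {100}:{111} = 1:1.155
print(round(db_density_111 / db_density_100, 3))      # ~0.577, i.e. {100}:{111} = 1:0.577
```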
Silicon
For silicon, the {111} planes are closer packed than the {100} planes. As a result, growth of a silicon crystal is slowest in the <111> direction, since it requires laying down a close packed atomic layer upon another layer in its closest packed form. As a consequence, <111> Si is the easiest to grow, and therefore the least expensive.
The dissolution or etching of a crystal is related to the number of broken bonds already present at the surface: the fewer bonds to be broken in order to remove an individual atom from a crystal, the easier it will be to dissolve the crystal. As a consequence of having only one dangling bond (requiring three bonds to be broken) etching silicon is slowest in the <111> direction. The electronic properties of a silicon wafer are also related to the number of dangling bonds.
Silicon microcircuits are generally formed on a single crystal wafer that is diced after fabrication by either sawing part way through the wafer thickness or scoring (scribing) the surface, and then physically breaking. The physical breakage of the wafer occurs along the natural cleavage planes, which in the case of silicon are the {111} planes.
Gallium Arsenide
The zinc blende lattice observed for gallium arsenide results in additional considerations over that of silicon. Although the {100} plane of GaAs is structurally similar to that of silicon, two possibilities exist: a face consisting of either all gallium atoms or all arsenic atoms. In either case the surface atoms have two dangling bonds, and the properties of the face are independent of whether the face is gallium or arsenic.
The {111} plane also has the possibility of consisting of all gallium or all arsenic. However, unlike the {100} planes there is a significant difference between the two possibilities. Figure $11$ shows the gallium arsenide structure represented by two interpenetrating fcc lattices. The [111] axis is vertical within the plane of the page. Although the structure consists of alternate layers of gallium and arsenic stacked along the [111] axis, the distance between the successive layers alternates between large and small. Assigning arsenic as the parent lattice, the order of the layers in the [111] direction is As-Ga-As-Ga-As-Ga, while in the opposite, $[\bar{1}\bar{1}\bar{1}]$, direction the layers are ordered Ga-As-Ga-As-Ga-As (Figure $11$). In silicon these two directions are of course identical. The surface of a crystal would be either arsenic, with three dangling bonds, or gallium, with one dangling bond. Clearly, the latter is energetically more favorable. Thus, the (111) plane shown in Figure $11$ is called the (111) Ga face. Conversely, the $(\bar{1}\bar{1}\bar{1})$ plane would be either gallium, with three dangling bonds, or arsenic, with one dangling bond. Again, the latter is energetically more favorable and the $(\bar{1}\bar{1}\bar{1})$ plane is therefore called the (111) As face.
The (111) As is distinct from that of (111) Ga due to the difference in the number of electrons at the surface. As a consequence, the (111) As face etches more rapidly than the (111) Ga face. In addition, surface evaporation below 770 °C occurs more rapidly at the (111) As face.
An Introduction to X-ray Diffraction
History of X-ray Crystallography
The birth of X-ray crystallography is considered by many to be marked by the formulation of the law of constant angles by Nicolaus Steno in 1669 (Figure $1$).
Although Steno is well known for his numerous principles regarding all areas of life, this particular law dealing with geometric shapes and crystal lattices is familiar ground to all chemists. It simply states that the angles between corresponding faces on crystals are the same for all specimens of the same mineral. The significance of this for chemistry is that given this fact, crystalline solids will be easily identifiable once a database has been established. Much like solving a puzzle, crystal structures of heterogeneous compounds could be solved very methodically by comparison of chemical composition and their interactions.
Although Steno was given credit for the notion of crystallography, the man who provided the tools necessary to bring crystallography into the scientific arena was Wilhelm Roentgen (Figure $2$), who in 1895 successfully pioneered a new form of photography, one that could allegedly penetrate through paper, wood, and human flesh; due to a lack of knowledge of the specific workings of this new discovery, the scientific community conveniently labeled the new radiation X-rays. This event set off a chain reaction of experiments and studies, not all performed by physicists. Within one single month, medical doctors were using X-rays to pinpoint foreign objects in the human body, such as bullets and kidney stones (Figure $3$).
The credit for the actual discovery of X-ray diffraction goes to Max von Laue (Figure $4$), to whom the Nobel Prize in physics in 1914 was awarded for the discovery of the diffraction of X-rays. Legend has it that the notion that eventually led to a Nobel prize was born in a garden in Munich, while von Laue was pondering the problem of passing waves of electromagnetic radiation through a specific crystalline arrangement of atoms. Because of the relatively large wavelength of visible light, von Laue was forced to turn his attention to another part of the electromagnetic spectrum, to where shorter wavelengths resided. Only a few decades earlier, Roentgen had publicly announced the discovery of X-rays, which supposedly had a wavelength shorter than that of visible light. Having this information, von Laue entrusted the task of performing the experimental work to two technicians, Walter Friedrich and Paul Knipping. The setup consisted of an X-ray source, which beamed radiation directly into a copper sulfate crystal housed in a lead box. Film was lined against the sides and back of the box, so as to capture the X-ray beam and its diffraction pattern. Development of the film showed a dark circle in the center of the film, surrounded by several extremely well defined circles, which had formed as a result of the diffraction of the X-ray beam by the ordered geometric arrangement of copper sulfate. Max von Laue then proceeded to work out the mathematical formulas involved in the observed diffraction pattern, for which he was awarded the Nobel Prize in physics in 1914.
Principles of X-Ray Diffraction (XRD)
At its simplest, diffraction is the bending and spreading of waves that occurs when they encounter an object or aperture. Diffraction is a phenomenon that exists commonly in everyday activities, but is often disregarded and taken for granted. For example, when looking at the information side of a compact disc, a rainbow pattern will often appear when it catches light at a certain angle. This is caused by visible light striking the grooves of the disc, thus producing a rainbow effect (Figure $5$), as interpreted by the observers' eyes. Another example is the formation of seemingly concentric rings around an astronomical object of significant luminosity when observed through clouds. The particles that make up the clouds diffract light from the astronomical object around its edges, causing the illusion of rings of light around the source. It is easy to forget that diffraction is a phenomenon that applies to all forms of waves, not just electromagnetic radiation. Due to the large variety of possible types of diffraction, many terms have been coined to differentiate between specific types. The type of diffraction most relevant to X-ray crystallography is known as Bragg diffraction, which is defined as the scattering of waves from a crystalline structure.
Formulated by William Lawrence Bragg (Figure $6$), the equation of Bragg's law relates wavelength to angle of incidence and lattice spacing, \ref{1}, where n is a numeric constant known as the order of the diffracted beam, λ is the wavelength of the beam, d denotes the distance between lattice planes, and θ represents the angle of the diffracted wave. The conditions given by this equation must be fulfilled if diffraction is to occur.
$n\lambda \ =\ 2d\ sin(\theta ) \label{1}$
Because of the nature of diffraction, waves will experience either constructive (Figure $7$) or destructive (Figure $8$) interference with other waves. In the same way, when an X-ray beam is diffracted off a crystal, some parts of the diffracted beam will appear more intense, while other parts will appear weaker. This depends mostly on the wavelength of the incident beam and the spacing between the crystal planes of the sample. Information about the lattice structure is obtained by varying beam wavelengths, incident angles, and crystal orientation. Much like solving a puzzle, a three dimensional structure of the crystalline solid can be constructed by observing changes in data with variation of the aforementioned variables.
The X-ray Diffractometer
At the heart of any XRD machine is the X-ray source. Modern day machines generally rely on copper metal as the element of choice for producing X-rays, although there are variations among different manufacturers. Because diffraction patterns are recorded over an extended period of time during sample analysis, it is very important that beam intensity remain constant throughout the entire analysis, or else faulty data will be procured. In light of this, even before an X-ray beam is generated, current must pass through a voltage regulator, which will guarantee a steady supply of voltage to the X-ray source.
Another crucial component to the analysis of crystalline materials via X-rays is the detector. When XRD was first developed, film was the most commonly used method for recognizing diffraction patterns. The most obvious disadvantage to using film is the fact that it has to be replaced every time a new specimen is introduced, making data collection a time consuming process. Furthermore, film can only be used once, leading to an increase in the cost of operating diffraction analysis.
Since the origins of XRD, detection methods have progressed to the point where modern XRD machines are equipped with semiconductor detectors, which produce pulses proportional to the energy absorbed. With these modern detectors, there are two general ways in which a diffraction pattern may be obtained. The first is called continuous scan, and it is exactly what the name implies. The detector is set in a circular motion around the sample, while a beam of X-rays is constantly shot into the sample. Pulses of energy are plotted with respect to diffraction angle, which ensures all diffracted X-rays are recorded. The second and more widely used method is known as step scan. Step scanning bears similarity to continuous scan, except it is highly computerized and much more efficient. Instead of moving the detector in a circle around the entire sample, step scanning involves collecting data at one fixed angle at a time, thus the name. Within these detection parameters, the types of detectors can themselves be varied. A more common type of detector, known as the charge-coupled device (CCD) detector (Figure $9$), can be found in many XRD machines, due to its fast data collection capability. A CCD detector is composed of numerous radiation sensitive grids, each linked to sensors that measure changes in electromagnetic radiation. Another commonly seen type of detector is a simple scintillation counter (Figure $10$), which counts the intensity of X-rays that it encounters as it moves along a rotation axis. A comparable analogy to the differences between the two detectors mentioned would be that the CCD detector is able to see in two dimensions, while scintillation counters are only able to see in one dimension.
Aside from the above two components, there are many other variables involved in sample analysis by an XRD machine. As mentioned earlier, a steady incident beam is extremely important for good data collection. To further ensure this, there will often be what is known as a Söller slit or collimator found in many XRD machines. A Söller slit collimates the direction of the X-ray beam. In the collimated X-ray beam the rays are parallel, and therefore will spread minimally as they propagate (Figure $11$). Without a collimator X-rays from all directions will be recorded; for example, a ray that has passed through the top of the specimen (see the red arrow in Figure $11$a) but happens to be traveling in a downwards direction may be recorded at the bottom of the plate. The resultant image will be so blurred and indistinct as to be useless. Some machines have a Söller slit between the sample and the detector, which drastically reduces the amount of background noise, especially when analyzing iron samples with a copper X-ray source.
This single crystal XRD machine (Figure $12$) features a cooling gas line, which allows the user to bring down the temperature of a sample considerably below room temperature. Doing so allows for the opportunities for studies performed where the sample is kept in a state of extremely low energy, negating a lot of vibrational motion that might interfere with consistent data collection of diffraction patterns. Furthermore, information can be collected on the effects of temperature on a crystal structure. Also seen in Figure $13$ is the hook-shaped object located between the beam emitter and detector. It serves the purpose of blocking X-rays that were not diffracted from being seen by the detector, drastically reducing the amount of unnecessary noise that would otherwise obscure data analysis.
Evolution of Powder XRD
Over time, XRD analysis has evolved from a very narrow and specific field to something that encompasses a much wider branch of the scientific arena. In its early stages, XRD was (with the exception of the simplest structures) confined to single crystal analysis, as detection methods had not advanced to a point where more complicated procedures could be performed. After many years of discovery and refining, however, technology has progressed to where crystalline properties (structure) of solids can be gleaned directly from a powder sample, thus offering information for samples that cannot be obtained as a single crystal. One area in which this is particularly useful is pharmaceuticals, since many of the compounds studied are not available in single crystal form, only in a powder.
Even though single crystal diffraction and powder diffraction essentially generate the same data, due to the powdered nature of the latter sample, diffraction lines will often overlap and interfere with data collection. This is especially apparent when the diffraction angle 2θ is high; the patterns that emerge can be almost unidentifiable because of the overlap of individual diffraction patterns. For this particular reason, a new approach to interpreting powder diffraction data has been created.
There are two main methods for interpreting diffraction data:
• The first is known as the traditional method, which is very straightforward, and bears resemblance to single crystal data analysis. This method involves a two step process: 1) the intensities and diffraction patterns from the sample are collected, and 2) the data are analyzed to produce a crystalline structure. As mentioned before, however, data from a powdered sample are often obscured by multiple diffraction patterns, which decreases the chance that the generated structure is correct.
• The second method is called the direct-space approach. This method takes advantage of the fact that with current technology, diffraction data can be calculated for any molecule, whether or not it is the molecule in question. Even before the actual diffraction data are collected, a large number of theoretical patterns of suspect molecules are generated by computer and compared to experimental data. Based on how well each theoretical pattern fits the experimental data, a guess is formulated as to which compound is in question. This method has been taken a step further to mimic social interactions in a community. For example, first generation theoretical trial molecules, after comparison with the experimental data, are allowed to evolve within parameters set by researchers. Furthermore, if appropriate, molecules produce offspring with other molecules, giving rise to a second generation of molecules, which fit the experimental data even better. Just like a natural environment, genetic mutations and natural selection are all introduced into the picture, ultimately giving rise to a molecular structure that represents the data collected from XRD analysis.
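Whatever the search strategy, the direct-space approach ultimately rests on a numerical figure of merit for how well a simulated pattern matches the measured one. The sketch below shows one commonly used agreement measure, a weighted profile R-factor; the function name, and using this particular metric as the "fitness" in a genetic-algorithm-style search, are illustrative assumptions rather than a description of any specific program.

```python
import numpy as np

def weighted_profile_r(y_obs, y_calc, weights=None):
    """Weighted profile R-factor (Rwp) between an observed and a calculated powder
    pattern sampled at the same 2-theta points; lower values indicate a better fit."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_calc = np.asarray(y_calc, dtype=float)
    if weights is None:
        # A common choice: counting-statistics weights w = 1 / y_obs
        weights = 1.0 / np.clip(y_obs, 1e-9, None)
    numerator = np.sum(weights * (y_obs - y_calc) ** 2)
    denominator = np.sum(weights * y_obs ** 2)
    return np.sqrt(numerator / denominator)

# In a direct-space search, each trial structure is scored this way; the best-scoring
# candidates are kept and, in genetic-algorithm variants, mutated and recombined to
# produce the next generation of trial structures.
```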
Another important aspect of being able to study compounds in powder form for the pharmaceutical researcher is the ability to identify structures in their natural state. A vast majority of drugs in this day and age are delivered through powdered form, either in the form of a pill or a capsule. Crystallization processes may often alter the chemical composition of the molecule (e.g., by the inclusion of solvent molecules), and thus marring the data if confined to single crystal analysis. Furthermore, when the sample is in powdered form, there are other variables that can be adjusted to see real-time effects on the molecule. Temperature, pressure, and humidity are all factors that can be changed in-situ to glean data on how a drug might respond to changes in those particular variables.
Powder X-Ray Diffraction
Introduction
Powder X-Ray diffraction (XRD) was developed in 1916 by Debye (Figure $12$) and Scherrer (Figure $13$) as a technique that could be applied where traditional single-crystal diffraction cannot be performed. This includes cases where the sample cannot be prepared as a single crystal of sufficient size and quality. Powder samples are easier to prepare, which is especially useful for pharmaceutical research.
Diffraction occurs when a wave meets a set of regularly spaced scattering objects, and its wavelength and the distance between the scattering objects are of the same order of magnitude. This makes X-rays suitable for crystallography, as their wavelength and crystal lattice parameters are both on the scale of angstroms (Å). Crystal diffraction can be described by Bragg diffraction, \ref{2}, where λ is the wavelength of the incident monochromatic X-ray, d is the distance between parallel crystal planes, and θ the angle between the beam and the plane.
$\lambda \ =\ 2d\ sin \theta \label{2}$
For constructive interference to occur between two waves, the path length difference between the waves must be an integral multiple of their wavelength. This path length difference is represented by 2d sinθ (Figure $14$). Because sinθ cannot be greater than 1, the wavelength of the X-ray limits the number of diffraction peaks that can appear.
Production and Detection of X-rays
Most diffractometers use Cu or Mo as an X-ray source, and specifically the Kα radiation of wavelengths of 1.54059 Å and 0.70932 Å, respectively. A stream of electrons is accelerated towards the metal target anode from a tungsten cathode, with a potential difference of about 30-50 kV. As this generates a lot of heat, the target anode must be cooled to prevent melting.
Detection of the diffracted beam can be done in many ways, and one common system is the gas proportional counter (GPC). The detector is filled with an inert gas such as argon, and electron-ion pairs are created when X-rays pass through it. An applied potential difference separates the pairs and generates secondary ionizations through an avalanche effect. The amplification of the signal is necessary as the intensity of the diffracted beam is very low compared to the incident beam. The current detected is then proportional to the intensity of the diffracted beam. A GPC has a very low noise background, which makes it widely used in labs.
Performing X-ray Diffraction
Exposure to X-rays may have health consequences; follow safety procedures when using the diffractometer.
The particle size distribution should be even to ensure that the diffraction pattern is not dominated by a few large particles near the surface. This can be done by grinding the sample to reduce the average particle size to <10µm. However, if particle sizes are too small, this can lead to broadening of peaks. This is due to both lattice damage and the reduction of the number of planes that cause destructive interference.
The diffraction pattern is actually made up of angles that did not suffer from destructive interference due to their special relationship described by Bragg's Law (Figure $15$). If destructive interference is reduced close to these special angles, the peak is broadened and becomes less distinct. Some crystals such as calcite (CaCO3, Figure $15$) have preferred orientations and will change their orientation when pressure is applied. This leads to differences in the diffraction pattern of ‘loose’ and pressed samples. Thus, it is important to avoid even touching ‘loose’ powders to prevent errors when collecting data.
The sample powder is loaded onto a sample dish for mounting in the diffractometer (Figure $16$), where rotating arms containing the X-ray source and detector scan the sample at different incident angles. The sample dish is rotated horizontally during scanning to ensure that the powder is exposed evenly to the X-rays.
A sample X-ray diffraction spectrum of germanium is shown in Figure $17$, with peaks identified by the planes that caused that diffraction. Germanium has a diamond cubic crystal lattice (Figure $18$), named after the crystal structure of its prototypical example, diamond. The crystal structure determines what crystal planes cause diffraction and the angles at which they occur. The angles are shown in 2θ as that is the angle measured between the two arms of the diffractometer, i.e., the angle between the incident and the diffracted beam (Figure $14$).
Determining Crystal Structure for Cubic Lattices
There are three basic cubic crystal lattices: the simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC) lattices (Figure $19$). These structures are simple enough to have their diffraction spectra analyzed without the aid of software.
Each of these structures has specific rules on which of their planes can produce diffraction, based on their Miller indices (hkl).
• SC lattices show diffraction for all values of (hkl), e.g., (100), (110), (111), etc.
• BCC lattices show diffraction when the sum of h+k+l is even, e.g., (110), (200), (211), etc.
• FCC lattices show diffraction when the values of (hkl) are either all even or all odd, e.g., (111), (200), (220), etc.
• Diamond cubic lattices like that of germanium are FCC structures with four additional atoms in the opposite corners of the tetrahedral interstices. They show diffraction when the values of (hkl) are either all odd, or all even with the sum h+k+l a multiple of 4, e.g., (111), (220), (311), etc.
The order in which these peaks appear depends on the sum of h2+k2+l2. These are shown in Table $1$.
(hkl) h2+k2+l2 BCC FCC
100 1
110 2 Y
111 3 Y
200 4 Y Y
210 5
211 6 Y
220 8 Y Y
300, 221 9
310 10 Y
311 11 Y
222 12 Y Y
320 13
321 14 Y
400 16 Y Y
410, 322 17
411, 330 18 Y
331 19 Y
420 20 Y Y
421 21
Table $1$ Diffraction planes and their corresponding h2+k2+l2 values. The planes which result in diffraction for BCC and FCC structures are marked with a “Y”.
The value of d for each of these planes can be calculated using \ref{3}, where a is the lattice parameter of the crystal.
The lattice constant, or lattice parameter, refers to the constant distance between unit cells in a crystal lattice.
$\frac{1}{d^{2}} \ =\ \frac{h^{2}+k^{2}+l^{2}}{a^{2}} \label{3}$
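Putting the selection rules and \ref{3} together with Bragg's law, a short script can predict the allowed reflections and their expected 2θ positions for a cubic crystal. The sketch below is illustrative: the function names are arbitrary, and the example lattice parameter (5.64 Å, roughly that of NaCl) is chosen only so the output can be compared with the worked example that follows; the Cu-Kα wavelength is the value quoted earlier.

```python
import itertools, math

def allowed(hkl, lattice):
    """Selection rules for cubic lattices, as listed above."""
    h, k, l = hkl
    if lattice == "SC":
        return True
    if lattice == "BCC":
        return (h + k + l) % 2 == 0
    if lattice == "FCC":
        return len({h % 2, k % 2, l % 2}) == 1        # all even or all odd
    if lattice == "diamond":
        if len({h % 2, k % 2, l % 2}) != 1:
            return False
        return h % 2 == 1 or (h + k + l) % 4 == 0     # all odd, or all even with h+k+l = 4n
    raise ValueError(lattice)

def peak_positions(a, wavelength, lattice, hkl_max=4):
    """Sorted (2-theta in degrees, representative hkl) pairs for a cubic lattice of
    parameter a (Å), using 1/d^2 = (h^2+k^2+l^2)/a^2 and Bragg's law."""
    peaks = {}
    for hkl in itertools.product(range(hkl_max + 1), repeat=3):
        if hkl == (0, 0, 0) or not allowed(hkl, lattice):
            continue
        d = a / math.sqrt(sum(i * i for i in hkl))
        s = wavelength / (2 * d)
        if s <= 1.0:                                   # otherwise Bragg's law has no solution
            peaks.setdefault(round(2 * math.degrees(math.asin(s)), 2), hkl)
    return sorted(peaks.items())

# Example (illustrative values): an FCC lattice with a = 5.64 Å and Cu-Ka radiation.
for two_theta, hkl in peak_positions(5.64, 1.54059, "FCC")[:5]:
    print(two_theta, hkl)   # first peak ~27.4 degrees, from the (111) planes
```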
As the diamond cubic structure of Ge can be complicated, a simpler worked example for sample diffraction of NaCl with Cu-Kα radiation is shown below. Given the values of 2θ that result in diffraction, Table $2$ can be constructed.
2θ θ Sinθ Sin2θ
27.36 13.68 0.24 0.0559
31.69 15.85 0.27 0.0746
45.43 22.72 0.39 0.1491
53.85 26.92 0.45 0.2050
56.45 28.23 0.47 0.2237
66.20 33.10 0.55 0.2982
73.04 36.52 0.60 0.3541
75.26 37.63 0.61 0.3728
Table $2$ Diffraction angles and sin2θ values for NaCl.
The ratios of these sin2θ values to that of the first peak can then be inspected to see if they correspond to an expected series of h2+k2+l2 values. In this case, multiplying each ratio by 3 gives the series 3, 4, 8, 11, 12, 16, 19 and 20, which corresponds to the h2+k2+l2 values of FCC lattice diffraction. Hence, NaCl has an FCC structure (Figure $20$).
The lattice parameter of NaCl can now be calculated from this data. The first peak occurs at θ = 13.68°. Given that the wavelength of the Cu-Kα radiation is 1.54059 Å, Bragg's Equation \ref{4} can be applied as follows:
$1.54059 \ =\ 2d\ sin 13.68 \label{4}$
$d\ =\ 3.2571\ Å \label{5}$
Since the first peak corresponds to the (111) plane, the distance between two parallel (111) planes is 3.2571 Å. The lattice parameter can now be worked out using \ref{6}.
$1/3.2571^{2}\ =\ (1^{2}+1^{2}+1^{2})/a^{2} \label{6}$
$a\ =\ 5.6414\ Å \label{7}$
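The arithmetic of this worked example can be scripted directly. The sketch below assumes the user supplies the observed 2θ list (here, the NaCl values from Table $2$) and the wavelength quoted earlier; the function names are illustrative. The same two helpers can be applied unchanged to the Ag nanoparticle data in the exercise that follows.

```python
import math

def index_cubic_pattern(two_thetas, wavelength):
    """Given observed 2-theta values (degrees) and the wavelength (Å), print the
    sin^2(theta) ratios used to recognize the cubic lattice type."""
    sin2 = [math.sin(math.radians(t / 2)) ** 2 for t in two_thetas]
    for t, s in zip(two_thetas, sin2):
        ratio = s / sin2[0]
        print(f"2theta = {t:6.2f}  sin2 = {s:.4f}  ratio = {ratio:.2f}  3x ratio = {3 * ratio:.2f}")

def cubic_lattice_parameter(two_theta, wavelength, hkl):
    """Lattice parameter (Å) from one indexed reflection, via Bragg's law and eq. (3)."""
    d = wavelength / (2 * math.sin(math.radians(two_theta / 2)))
    return d * math.sqrt(sum(i * i for i in hkl))

nacl_peaks = [27.36, 31.69, 45.43, 53.85, 56.45, 66.20, 73.04, 75.26]
index_cubic_pattern(nacl_peaks, 1.54059)
print(cubic_lattice_parameter(27.36, 1.54059, (1, 1, 1)))   # ~5.64 Å, as derived above
```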
The powder XRD spectrum of Ag nanoparticles is given in Figure $21$ as collected using Cu-Kα radiation of 1.54059 Å. Determine its crystal structure and lattice parameter using the labeled peaks.
2θ θ Sinθ Sin2θ Sin2θ/Sin2θ1 2 x Sin2θ/Sin2θ1 3 x Sin2θ/Sin2θ1
38.06 19.03 0.33 0.1063 1.00 2.00 3.00
44.24 22.12 0.38 0.1418 1.33 2.67 4.00
64.35 32.17 0.53 0.2835 2.67 5.33 8
77.28 38.64 0.62 0.3899 3.67 7.34 11
81.41 40.71 0.65 0.4253 4 8 12
97.71 48.86 0.75 0.5671 5.33 10.67 16
110.29 55.15 0.82 0.6734 6.34 12.67 19.01
114.69 57.35 0.84 0.7089 6.67 13.34 20.01
Table $3$ Ratio of diffraction angles for Ag.
Applying the Bragg Equation \ref{8},
$1.54059\ =\ 2d\ sin\ 19.03 \label{8}$
$d\ =\ 2.3624\ Å \label{9}$
Calculate the lattice parameter using \ref{10},
$1/2.3624^{2}\ =\ (1^{2}+1^{2}+1^{2})/a^{2} \label{10}$
$a\ =\ 4.0918\ Å \label{11}$
The last column gives a list of integers, which corresponds to the h2+k2+l2 values of FCC lattice diffraction. Hence, the Ag nanoparticles have an FCC structure.
Determining Composition
As seen above, each crystal will give a pattern of diffraction peaks based on its lattice type and parameter. These fingerprint patterns are compiled into databases such as the one by the Joint Committee on Powder Diffraction Standard (JCPDS). Thus, the XRD spectra of samples can be matched with those stored in the database to determine its composition easily and rapidly.
Solid State Reaction Monitoring
Powder XRD is also able to perform analysis on solid state reactions such as the titanium dioxide (TiO2) anatase to rutile transition. A diffractometer equipped with a sample chamber that can be heated can take diffractograms at different temperatures to see how the reaction progresses. Spectra of the change in diffraction peaks during this transition are shown in Figure $22$, Figure $23$, and Figure $24$.
Summary
XRD allows for quick composition determination of unknown samples and gives information on crystal structure. Powder XRD is a useful application of X-ray diffraction, due to the ease of sample preparation compared to single-crystal diffraction. Its application to solid state reaction monitoring can also provide information on phase stability and transformation.
An Introduction to Single-Crystal X-Ray Crystallography
Described simply, single-crystal X-ray diffraction (XRD) is a technique in which a crystal of a sample under study is bombarded with an X-ray beam from many different angles, and the resulting diffraction patterns are measured and recorded. By aggregating the diffraction patterns and converting them via Fourier transform to an electron density map, a unit cell can be constructed which indicates the average atomic positions, bond lengths, and relative orientations of the molecules within the crystal.
Fundamental Principles
As an analogy to describe the underlying principles of diffraction, imagine shining a laser onto a wall through a fine sieve. Instead of observing a single dot of light on the wall, a diffraction pattern will be observed, consisting of regularly arranged spots of light, each with a definite position and intensity. The spacing of these spots is inversely related to the grating in the sieve— the finer the sieve, the farther apart the spots are, and the coarser the sieve, the closer together the spots are. Individual objects can also diffract radiation if it is of the appropriate wavelength, but a diffraction pattern is usually not seen because its intensity is too weak. The difference with a sieve is that it consists of a grid made of regularly spaced, repeating wires. This periodicity greatly magnifies the diffraction effect because of constructive interference. As the light rays combine amplitudes, the resulting intensity of light seen on the wall is much greater because intensity is proportional to the square of the light’s amplitude.
To apply this analogy to single-crystal XRD, we must simply scale it down. Now the sieve is replaced by a crystal and the laser (visible light) is replaced by an X-ray beam. Although the crystal appears solid and not grid-like, the molecules or atoms contained within the crystal are arranged periodically, thus producing the same intensity-magnifying effect as with the sieve. Because X-rays have wavelengths that are on the same scale as the distance between atoms, they can be diffracted by their interactions with the crystal lattice.
These interactions are dictated by Bragg's law, which says that constructive interference occurs only when \ref{12} is satisfied; where n is an integer, λ is the wavelength of light, d is the distance between parallel planes in the crystal lattice, and θ is the angle of incidence between the X-ray beam and the diffracting planes (see Figure $25$). A complication arises, however, because crystals are periodic in all three dimensions, while the sieve repeats in only two dimensions. As a result, crystals have many different diffraction planes extending in certain orientations based on the crystal’s symmetry group. For this reason, it is necessary to observe diffraction patterns from many different angles and orientations of the crystal to obtain a complete picture of the reciprocal lattice.
The reciprocal lattice of a lattice (Bravais lattice) is the lattice in which the Fourier transform of the spatial wavefunction of the original lattice (or direct lattice) is represented. The reciprocal lattice of a reciprocal lattice is the original lattice.
$n \lambda \ =\ 2d\ sin \theta \label{12}$
The reciprocal lattice is related to the crystal lattice just as the sieve is related to the diffraction pattern: they are inverses of each other. Each point in real space has a corresponding point in reciprocal space and they are related by 1/d; that is, any vector in real space multiplied by its corresponding vector in reciprocal space gives a product of unity. The angles between corresponding pairs of vectors remains unchanged.
Real space is the domain of the physical crystal, i.e. it includes the crystal lattice formed by the physical atoms within the crystal. Reciprocal space is, simply put, the Fourier transform of real space; practically, we see that diffraction patterns resulting from different orientations of the sample crystal in the X-ray beam are actually two-dimensional projections of the reciprocal lattice. Thus by collecting diffraction patterns from all orientations of the crystal, it is possible to construct a three-dimensional version of the reciprocal lattice and then perform a Fourier transform to model the real crystal lattice.
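The direct-to-reciprocal relationship described here can be written down explicitly. The following minimal numpy sketch uses the crystallographic convention (no 2π factor), so that each direct vector dotted with its corresponding reciprocal vector gives unity, as stated above; the function name is illustrative.

```python
import numpy as np

def reciprocal_lattice(a1, a2, a3):
    """Reciprocal basis vectors (crystallographic convention) for the direct basis
    a1, a2, a3 given in Å; returned in Å^-1."""
    a1, a2, a3 = (np.asarray(v, dtype=float) for v in (a1, a2, a3))
    volume = np.dot(a1, np.cross(a2, a3))      # unit cell volume
    b1 = np.cross(a2, a3) / volume
    b2 = np.cross(a3, a1) / volume
    b3 = np.cross(a1, a2) / volume
    return b1, b2, b3

# Example: a simple cubic cell of edge 4 Å; each reciprocal vector has length 0.25 Å^-1.
b1, b2, b3 = reciprocal_lattice([4, 0, 0], [0, 4, 0], [0, 0, 4])
print(np.dot([4, 0, 0], b1))   # 1.0 -- corresponding direct and reciprocal vectors multiply to unity
```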
Technique
Single-crystal Versus Powder Diffraction
Two common types of X-ray diffraction are powder XRD and single-crystal XRD, both of which have particular benefits and limitations. While powder XRD has a much simpler sample preparation, it can be difficult to obtain structural data from a powder because the sample molecules are randomly oriented in space; without the periodicity of a crystal lattice, the signal-to-noise ratio is greatly decreased and it becomes difficult to separate reflections coming from the different orientations of the molecule. The advantage of powder XRD is that it can be used to quickly and accurately identify a known substance, or to verify that two unknown samples are the same material.
Single-crystal XRD is much more time and data intensive, but in many fields it is essential for structural determination of small molecules and macromolecules in the solid state. Because of the periodicity inherent in crystals, small signals from individual reflections are magnified via constructive interference. This can be used to determine exact spatial positions of atoms in molecules and can yield bond distances and conformational information. The difficulty of single-crystal XRD is that single crystals may be hard to obtain, and the instrument itself may be cost-prohibitive.
An example of typical diffraction patterns for single-crystal and powder XRD follows (Figure $27$ and Figure $28$, respectively). The dots in the first image correspond to Bragg reflections and together form a single view of the molecule’s reciprocal space. In powder XRD, random orientation of the crystals means reflections from all of them are seen at once, producing the observed diffraction rings that correspond to particular vectors in the material’s reciprocal lattice.
Technique
In a single-crystal X-ray diffraction experiment, the reciprocal space of a crystal is constructed by measuring the angles and intensities of reflections in observed diffraction patterns. These data are then used to create an electron density map of the molecule which can be refined to determine the average bond lengths and positions of atoms in the crystal.
Instrumentation
The basic setup for single-crystal XRD consists of an X-ray source, a collimator to focus the beam, a goniometer to hold and rotate the crystal, and a detector to measure and record the reflections. Instruments typically contain a beamstop to halt the primary X-ray beam from hitting the detector, and a camera to help with positioning the crystal. Many also contain an outlet connected to a cold gas supply (such as liquid nitrogen) in order to cool the sample crystal and reduce its vibrational motion as data is being collected. A typical instrument is shown in Figure $28$ and Figure $31$.
Obtaining Single Crystals
Despite advances in instrumentation and computer programs that make data collection and solving crystal structures significantly faster and easier, it can still be a challenge to obtain crystals suitable for analysis. Ideal crystals are single, not twinned, clear, and of sufficient size to be mounted within the X-ray beam (usually 0.1-0.3 mm in each direction). They also have clean faces and smooth edges. Following are images of some ideal crystals (Figure $30$ and Figure $31$), as well as an example of twinned crystals (Figure $32$).
Crystal twinning occurs when two or more crystals share lattice points in a symmetrical manner. This usually results in complex diffraction patterns which are difficult to analyze and construct a reciprocal lattice.
Crystal formation can be affected by temperature, pressure, solvent choice, saturation, nucleation, and substrate. Slow crystal growth tends to be best, as rapid growth creates more imperfections in the crystal lattice and may even lead to a precipitate or gel. Similarly, too many nucleation sites (points at which crystal growth begins) can lead to many small crystals instead of a few, well-defined ones.
There are a number of basic methods for growing crystals suitable for single-crystal XRD:
• The most basic method is to slowly evaporate a saturated solution until it becomes supersaturated and then forms crystals. This often works well for growing small-molecule crystals; macroscopic molecules (such as proteins) tend to be more difficult.
• A solution of the compound to be crystallized is dissolved in one solvent, then a ‘non-solvent’ which is miscible with the first but in which the compound itself is insoluble, is carefully layered on top of the solution. As the non-solvent mixes with the solvent by diffusion, the solute molecules are forced out of solution and may form crystals.
• A crystal solution is placed in a small open container which is then set in a larger closed container holding a volatile non-solvent. As the volatile non-solvent mixes slowly with the solution by vapor diffusion, the solute is again forced to come out of solution, often leading to crystal growth.
• All three of the previous techniques can be combined with seeding, where a crystal of the desired type to be grown is placed in the saturated solution and acts as a nucleation site and starting place for the crystal growth to begin. In some cases, this can even cause crystals to grow in a form that they would not normally assume, as the seed can act as a template that might not otherwise be followed.
• The hanging drop technique is typically used for growing protein crystals. In this technique, a drop of concentrated protein solution is suspended (usually by dotting it on a silicon-coated microscope slide) over a larger volume of the solution. The whole system is then sealed and slow evaporation of the suspended drop causes it to become supersaturated and form crystals. (A variation of this is to have the drop of protein solution resting on a platform inside the closed system instead of being suspended from the top of the container.)
These are only the most common ways that crystals are grown. Particularly for macromolecules, it may be necessary to test hundreds of crystallization conditions before a suitable crystal is obtained. There now exist automated techniques utilizing robots to grow crystals, both for obtaining large numbers of single crystals and for performing specialized techniques (such as drawing a crystal out of solution) that would otherwise be too time-consuming to be of practical use.
Wide Angle X-ray Diffraction Studies of Liquid Crystals
Some organic molecules display a series of intermediate transition states between solid and isotropic liquid states (Figure $33$) as their temperature is raised. These intermediate phases have properties in between the crystalline solid and the corresponding isotropic liquid state, and hence they are called liquid crystalline phases. Another name is mesomorphic phases, where mesomorphic means ‘of intermediate form’. According to the physicist de Gennes (Figure $34$), a liquid crystal is ‘an intermediate phase, which has liquid like order in at least one direction and possesses a degree of anisotropy’. It should be noted that all liquid crystalline phases are formed by anisotropic molecules (either elongated or disk-like) but not all anisotropic molecules form liquid crystalline phases.
Anisotropic objects can possess different types of ordering giving rise to different types of liquid crystalline phases (Figure $35$).
Nematic Phases
The word nematic comes from the Greek for thread, and refers to the thread-like defects commonly observed in the polarizing optical microscopy of these molecules. They have no positional order, only orientational order, i.e., the molecules all point in the same direction. The direction of the molecules is denoted by the symbol n, commonly referred to as the ‘director’ (Figure $36$). The director n is bidirectional, which means the states n and -n are indistinguishable.
Smectic Phases
All the smectic phases are layered structures that usually occur at slightly lower temperatures than nematic phases. There are many variations of smectic phases, and some of the distinct ones are as follows:
• Each layer in smectic A is like a two dimensional liquid, and the long axis of the molecules is typically orthogonal to the layers (Figure $35$).
• Just like nematics, the states n and -n are equivalent. They are made up of achiral and non-polar molecules.
• As with smectic A, the smectic C phase is layered, but the long axis of the molecules is not along the layer normal. Instead it makes an angle (θ, Figure $35$). The tilt angle is an order parameter of this phase and can vary from 0° to 45-50°.
• Smectic C* phases are smectic phases formed by chiral molecules. This added constraint of chirality causes a slight distortion of the Smectic C structure. Now the tilt direction precesses around the layer normal and forms a helical configuration.
Cholesteric Phases
Sometimes cholesteric phases (Figure $35$) are also referred to as chiral nematic phases because they are similar to nematic phases in many regards. Many derivatives of cholesterol exhibit this type of phase. They are generally formed by chiral molecules or by doping the nematic host matrix with chiral molecules. Adding chirality causes helical distortion in the system, which makes the director, n, rotate continuously in space in the shape of a helix with specific pitch. The magnitude of pitch in a cholesteric phase is a strong function of temperature.
Columnar Phases
In columnar phases the liquid crystal molecules are disk-shaped, as opposed to the rod-like molecules of the nematic and smectic liquid crystal phases. These disk shaped molecules stack themselves in columns and form two-dimensional crystalline arrays (Figure $35$). This type of two dimensional ordering leads to new mesophases.
Introduction to 2D X-ray Diffraction
X-ray diffraction (XRD) is one of the fundamental experimental techniques used to analyze the atomic arrangement of materials. The basic principle behind X-ray diffraction is Bragg’s Law (Figure $36$). According to this law, X-rays that are reflected from the adjacent crystal planes will undergo constructive interference only when the path difference between them is an integer multiple of the X-ray's wavelength, \ref{13}, where n is an integer, d is the spacing between the adjacent crystal planes, θ is the angle between incident X-ray beam and scattering plane, and λ is the wavelength of incident X-ray.
$2d\ sin \theta \ =\ n \lambda \label{13}$
Now the atomic arrangement of molecules can go from being extremely ordered (single crystals) to random (liquids). Correspondingly, the scattered X-rays form specific diffraction patterns particular to that sample. Figure $37$ shows the difference between X-rays scattered from a single crystal and a polycrystalline (powder) sample. In case of a single crystal the diffracted rays point to discrete directions (Figure $37a$), while for polycrystalline sample diffracted rays form a series of diffraction cones (Figure $37b$).
A two dimensional (2D) XRD system is a diffraction system with the capability of simultaneously collecting and analyzing the X-ray diffraction pattern in two dimensions. A typical 2D XRD setup consists of five major components (Figure $38$):
• X-ray source.
• X-ray optics.
• Goniometer.
• Sample alignment and monitoring device.
• 2D area detector.
For laboratory scale X-ray generators, X-rays are emitted by bombarding metal targets with high velocity electrons accelerated by a strong electric field in the range 20-60 kV. Different metal targets that can be used are chromium (Cr), cobalt (Co), copper (Cu), molybdenum (Mo) and iron (Fe). The most commonly used ones are Cu and Mo. Synchrotrons are even higher energy radiation sources. They can be tuned to generate a specific wavelength and they have much brighter luminosity for better resolution. Available synchrotron facilities in the US are:
• Stanford Synchrotron Radiation Lightsource (SSRL), Stanford, CA.
• Synchrotron Radiation Center (SRC), University of Wisconsin-Madison, Madison, WI.
• Advanced Light Source (ALS), Lawrence Berkeley National, Berkeley, CA.
• National Synchrotron Light Source (NSLS), Brookhaven National Laboratory, Upton, NY.
• Advanced Photon Source (APS), Argonne National Laboratory, Argonne, IL.
• Center for Advanced Microstructures & Devices, Louisiana State University, Baton Rouge, LA.
• Cornell High Energy Synchrotron Source (CHESS), Cornell, Ithaca, NY.
The X-ray optics comprise the X-ray tube, monochromator, pinhole collimator and beam stop. A monochromator is used to get rid of unwanted X-ray radiation from the X-ray tube. Diffraction from a single crystal can be used to select a specific wavelength of radiation. Typical materials used are pyrolytic graphite and silicon. Monochromatic X-ray beams have three components: parallel, convergent and divergent X-rays. The function of a pinhole collimator is to filter the incident X-ray beam and allow passage of parallel X-rays. A 2D X-ray detector can either be a film or a digital detector, and its function is to measure the intensity of X-rays diffracted from a sample as a function of position, time, and energy.
Advantages of 2D XRD as Compared to 1D XRD
2D diffraction data contain much more information than a diffraction pattern acquired using a 1D detector. Figure $39$ shows the diffraction pattern from a polycrystalline sample. For illustration purposes only, two diffraction cones are shown in the schematic. In the case of 1D X-ray diffraction, the measurement area is confined within a plane labeled the diffractometer plane. The 1D detector is mounted along the detection circle, and variations of the diffraction pattern in the z direction are not considered. The diffraction pattern collected is an average over a range defined by the beam size in the z direction. The diffraction pattern measured is a plot of X-ray intensity at different 2θ angles. For 2D X-ray diffraction, the measurement area is not limited to the diffractometer plane. Instead, a large portion of the diffraction rings is measured simultaneously, depending on the detector size and position from the sample.
One such advantage is the measurement of the percent crystallinity of a material. Determination of material crystallinity is required both for research and quality control. Scattering from amorphous materials produces a diffuse intensity ring, while polycrystalline samples produce sharp and well-defined rings or spots. The ability to distinguish between amorphous and crystalline scattering is the key to determining the percent crystallinity accurately. Since most crystalline samples have preferred orientation, depending on how the sample is oriented it is possible to measure different peaks, or no peak at all, using a conventional diffraction system. On the other hand, sample orientation has no effect on the full-circle integrated diffraction measurement made using a 2D detector. A 2D XRD can therefore measure percent crystallinity more accurately.
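One simple way the percent crystallinity can be estimated from an azimuthally integrated pattern is to compare the intensity lying above the diffuse amorphous contribution to the total scattered intensity. The sketch below is a minimal illustration of that arithmetic only; it assumes the amorphous background has already been estimated separately (real analysis software uses more sophisticated amorphous-halo fitting), and the function name is illustrative.

```python
import numpy as np

def percent_crystallinity(intensity, amorphous_background):
    """Estimate percent crystallinity from an azimuthally integrated 1D pattern.
    'amorphous_background' is the diffuse (amorphous + instrumental) contribution,
    estimated separately; the crystalline signal is what lies above it."""
    intensity = np.asarray(intensity, dtype=float)
    amorphous_background = np.asarray(amorphous_background, dtype=float)
    crystalline = np.clip(intensity - amorphous_background, 0.0, None)
    return 100.0 * crystalline.sum() / intensity.sum()
```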
2D Wide Angle X-ray Diffraction Patterns of LCs
As mentioned in the introduction section, a liquid crystal is an intermediate state between the solid and liquid phases. At temperatures above the liquid crystal phase transition temperature (Figure $40$), they become an isotropic liquid, i.e., there is an absence of long-range positional or orientational order within the molecules. Since an isotropic state cannot be aligned, its diffraction pattern consists of weak, diffuse rings (Figure $40$ a). The reason we see any diffraction pattern in the isotropic state is because in classical liquids there exists short range positional order. The ring corresponds to a distance of 4.5 Å and mostly appears at about 20.5°; it represents the distance between the molecules along their widths.
Nematic liquid crystalline phases have long range orientational order but no positional order. An unaligned sample of nematic liquid crystal has a diffraction pattern similar to that of the isotropic state, but instead of a diffuse ring it has a sharper intensity distribution. For an aligned sample of nematic liquid crystal, X-ray diffraction patterns exhibit two sets of diffuse arcs (Figure $40$ b). The diffuse arc at the larger radius (P1, 4.5 Å) represents the distance between molecules along their widths. In the presence of an external magnetic field, samples with positive diamagnetic anisotropy align parallel to the field and P1 is oriented perpendicularly to the field, while samples with negative diamagnetic anisotropy align perpendicularly to the field with P1 being parallel to the field. The intensity distribution within these arcs represents the extent of alignment within the sample, generally denoted by S.
The diamagnetic anisotropy of all liquid crystals with an aromatic ring is positive, and on the order of 10-7. The value decreases with the substitution of each aromatic ring by a cyclohexane or other aliphatic group. A negative diamagnetic anisotropy is observed for purely cycloaliphatic LCs.
When a smectic phase is cooled down slowly in the presence of the external field, two sets of diffuse peaks are seen in the diffraction pattern (Figure $40$ c). The diffuse peaks at small angles condense into sharp quasi-Bragg peaks. The peak intensity distribution at large angles is not very sharp because molecules within the smectic planes are randomly arranged. In the case of smectic C phases, the smectic layer normal and the director are no longer collinear but make an angle θ (Figure $40$ d). This tilt can easily be seen in the diffraction pattern, as the diffuse peaks at smaller and larger angles are no longer orthogonal to each other.
Sample Preparation
In general, X-ray scattering measurements of liquid crystal samples are considered more difficult to perform than those of crystalline samples. The following steps should be performed for diffraction measurement of liquid crystal samples:
1. The sample should be free of any solvents and absorbed oxygen, because their presence affects the liquid crystalline character of the sample and its thermal response. This can be achieved by performing multiple melting and freezing cycles in a vacuum to get rid of unwanted solvents and gases.
2. For performing low resolution measurements, a liquid crystal sample can be placed inside a thin-walled glass capillary. The ends of the capillary can be sealed with epoxy in the case of volatile samples. The filling process tends to align the liquid crystal molecules along the flow direction.
3. For high resolution measurements, the sample is generally confined between two rubbed, polymer-coated glass coverslips separated by an o-ring as a spacer. The rubbing causes the formation of grooves in the polymer film, which tend to align the liquid crystal molecules.
4. Aligned samples are necessary for identifying the liquid crystalline phase of the sample. Liquid crystal samples can be aligned by heating above the phase transition temperature and cooling them slowly in the presence of an external electric or magnetic field. A magnetic field is effective for samples with aromatic cores as they have high diamagnetic anisotropy. A common problem in using electric field is internal heating which can interfere with the measurement.
5. Sample size should be sufficient to avoid any obstruction to the passage of the incident X-ray beam.
6. The sample thickness should be around one absorption length of the X-rays. At this thickness roughly 1/e (about 37%) of the incident beam is transmitted, which gives the optimum scattering intensity. For most hydrocarbons the absorption length is approximately 1.5 mm with a copper target (λ = 1.5418 Å). A molybdenum target can be used to obtain higher energy radiation (λ = 0.71069 Å).
Data Analysis
Identification of the phase of a liquid crystal sample is critical in predicting its physical properties. A simple 2D X-ray diffraction pattern can tell a lot in this regard (Figure $40$). It is also critical to determine the orientational order of a liquid crystal. This is important to characterize the extent of sample alignment.
For simplicity, the rest of the discussion focuses on nematic liquid crystal phases. In an unaligned sample there is no specific macroscopic order in the system; within micrometer-sized domains the molecules are all oriented in a specific direction, called the local director. Because there is no positional order in nematic liquid crystals, this local director varies in space and assumes all possible orientations. In contrast, in a perfectly aligned sample of a nematic liquid crystal, all the local directors are oriented in the same direction. The alignment of the molecules along one preferred direction makes physical properties of liquid crystals, such as refractive index, viscosity, and diamagnetic susceptibility, directionally dependent.
When a liquid crystal sample is oriented using external fields, the local directors preferentially align along a single global direction. This globally preferred direction is referred to as the director and is denoted by the unit vector n. The extent of alignment within a liquid crystal sample is typically described by the order parameter, S, defined by \ref{14}, where θ is the angle between the long axis of a molecule and the preferred direction, n, and the angle brackets denote an average over all molecules.
$S\ =\ \left\langle \frac{3\cos^{2} \theta \ -\ 1}{2} \right\rangle \label{14}$
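As a short aside (a derivation added here for clarity), the limiting values of S follow directly from this definition. For a completely random, isotropic distribution of molecular axes,
$\langle \cos^{2}\theta \rangle \ =\ \frac{\int_{0}^{\pi} \cos^{2}\theta \, \sin\theta \, d\theta}{\int_{0}^{\pi} \sin\theta \, d\theta}\ =\ \frac{1}{3}$
so the quantity in brackets averages to zero, whereas for perfect alignment every molecule has θ = 0 and cos²θ = 1.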
For isotropic samples the value of S is zero, and for perfectly aligned samples it is 1. Figure $41$ shows the structure of one of the most extensively studied nematic liquid crystal molecules, 4-cyano-4'-pentylbiphenyl, commonly known as 5CB. To prepare a polydomain sample, 5CB was drawn into a glass capillary by capillary forces (Figure $41$). Figure $42$ shows the 2D X-ray diffraction pattern of the as-prepared polydomain sample. To prepare a monodomain sample, a glass capillary filled with 5CB was heated to 40 °C (i.e., above the nematic–isotropic transition temperature of 5CB, ~35 °C) and then cooled slowly in the presence of a magnetic field (1 Tesla, Figure $43$). This gives a uniformly aligned sample with the nematic director n oriented along the magnetic field. Figure $44$ shows the 2D X-ray diffraction pattern of a monodomain 5CB liquid crystal sample collected using a Rigaku Raxis-IV++; it consists of two diffuse arcs (as mentioned before). Figure $45$ shows the intensity distribution of a diffuse arc as a function of Θ, and the calculated order parameter value, S, is -0.48.
Refinement of Crystallographic Disorder in the Tetrafluoroborate Anion
Through the course of our structural characterization of various tetrafluoroborate salts, the complex cation has nominally been the primary subject of interest; however, we observed that the tetrafluoroborate (BF4-) anions were commonly disordered (13 out of 23 structures investigated). Furthermore, a search of the Cambridge Structural Database as of 14th December 2010 yielded 8,370 structures in which the tetrafluoroborate anion is present; of these, 1044 (12.5%) were refined as having some kind of disorder associated with the BF4- anion. Several different methods have been reported for the treatment of these disorders, but the majority were refined as a non-crystallographic rotation about the axis of one of the B-F bonds.
Unfortunately, the very property that makes fluoro-anions such good candidates for non-coordinating counter-ions (i.e., weak intermolecular forces) also facilitates the presence of disorder in crystal structures. In other words, the appearance of disorder is intensified by the presence of a weakly coordinating spherical anion (e.g., BF4- or PF6-), which lacks the strong intermolecular interactions needed to keep a regular, repeating anion orientation throughout the crystal lattice. Essentially, these weakly coordinating anions are loosely defined electron-rich spheres. All considered, it seems that fluoro-anions, in general, have a propensity to exhibit apparently large atomic displacement parameters (ADPs), and thus are appropriately refined as having fractional site-occupancies.
Refining Disorder
In crystallography the observed atomic displacement parameters are an average over the millions of unit cells throughout the entire volume of the crystal, and over the thermally induced motion during the time used for data collection. A disorder of atoms/molecules in a given structure can manifest as flat or non-spherical atomic displacement parameters in the crystal structure. Such cases of disorder are usually the result of either thermally induced motion during data collection (i.e., dynamic disorder), or the static disorder of the atoms/molecules throughout the lattice. The latter is defined as the situation in which certain atoms, or groups of atoms, occupy slightly different orientations from molecule to molecule over the large volume (relatively speaking) covered by the crystal lattice. This static displacement of atoms can simulate the effect of thermal vibration on the scattering power of the "average" atom. Consequently, differentiation between thermal motion and static disorder can be ambiguous, unless data collection is performed at low temperature (which would negate much of the thermal motion observed at room temperature).
In most cases, this disorder is easily resolved as some non-crystallographic symmetry elements acting locally on the weakly coordinating anion. The atomic site occupancies can be refined using the FVAR instruction on the different parts (see PART 1 and PART 2 in Figure $47$) of the disorder, having a site occupancy factor (s.o.f.) of x and 1-x, respectively. This is accomplished by replacing 11.000 (on the F-atom lines in the “NAME.INS” file) with 21.000 or -21.000 for each of the different parts of the disorder. For instance, the "NAME.INS" file would look something like that shown in Figure $47$. Note that for more heavily disordered structures, i.e., those with more than two disordered parts, the SUMP command can be used to determine the s.o.f. of parts 2, 3, 4, etc. the combined sum of which is set at s.o.f. = 1.0. These are designated in FVAR as the second, third, and fourth terms.
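Purely as an illustration of the layout described above (the atom names, coordinates, occupancies and scattering-factor numbers below are invented for this sketch, not taken from any of the structures discussed), the relevant portion of a "NAME.INS" file might look like the following:
FVAR  0.25431  0.65000
...
B1   4  0.2501  0.1352  0.4023  11.00000  0.035
F1   3  0.1987  0.2104  0.4456  11.00000  0.048
PART 1
F2A  3  0.3120  0.0871  0.4678  21.00000  0.052
F3A  3  0.2215  0.1015  0.3312  21.00000  0.055
F4A  3  0.2689  0.2203  0.3689  21.00000  0.050
PART 2
F2B  3  0.3054  0.1987  0.4401  -21.00000  0.053
F3B  3  0.2954  0.0654  0.3844  -21.00000  0.056
F4B  3  0.1843  0.0912  0.3520  -21.00000  0.051
PART 0
Here the second term on the FVAR line is the refined s.o.f. x of PART 1; the coded occupancies 21.000 and -21.000 tie the two parts to x and 1-x, respectively, while 11.000 fixes an occupancy at unity.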
In small molecule refinement, the case will inevitably arise in which some kind of restraints or constraints must be used to achieve convergence of the data. A restraint is any additional information concerning a given structural feature, e.g., limits on the possible values of parameters, that is added to the refinement, thereby increasing the number of observations against which the model is refined. For example, aromatic systems are essentially flat, so for refinement purposes, a troublesome ring system could be restrained to lie in one plane. Restraints are not exact, i.e., they are tied to a probability distribution, whereas constraints are exact mathematical conditions. Restraints can be regarded as falling into one of several general types:
• Geometric restraints, which relate distances that should be similar.
• Rigid group restraints.
• Anti-bumping restraints.
• Linked parameter restraints.
• Similarity restraints.
• ADP restraints (Figure $48$).
• Sum and average restraints.
• Origin fixing and shift limiting restraints.
• Those imposed upon atomic displacement parameters.
Geometric Restraints
• SADI - similar distance restraints for named pairs of atoms.
• DFIX - defined distance restraint between covalently bonded atoms.
• DANG - defined non-bonding distance restraints, e.g., between F atoms belonging to the same PART of a disordered BF4-.
• FLAT - restrains group of atoms to lie in a plane.
Anisotropic Displacement Parameter Restraints
• DELU - rigid bond restraints (Figure $48$)
• SIMU - similar ADP restraints on corresponding Uij components to be approximately equal for atoms in close proximity (Figure $48$)
• ISOR - treat named anisotropic atoms to have approximately isotropic behavior (Figure $48$)
Constraints (different than "restraints")
• EADP - equivalent atomic displacement parameters.
• AFIX - fitted group; e.g., AFIX 66 would fit the next six atoms into a regular hexagon.
• HFIX - places H atoms in geometrically ideal positions, e.g., HFIX 123 would place two sets of methyl H atoms disordered over two sites, 180° from each other.
Classes of Disorder for the Tetrafluoroborate Anion
Rotating about a non-crystallographic axis along a B-F bond
The most common case of disorder is a rotation about an axis, the simplest of which involves a non-crystallographic symmetry-related rotation axis about the vector made by one of the B-F bonds; this operation leads to three of the four F-atoms having two site occupancies (Figure $49$). This disorder is also seen for tBu and CF3 groups, and due to the C3 symmetry of the C(CH3)3, CF3 and BF3 moieties it actually results in a near C2 rotation.
In a typical example, the BF4- anion present in the crystal structure of [H(Mes-dpa)]BF4 (Figure $50$) was found to have a 75:25 site occupancy disorder for three of the four fluorine atoms (Figure $51$). The disorder is a rotation about the axis of the B(1)-F(1) bond. For initial refinement cycles, similar distance restraints (SADI) were placed on all B-F and F-F distances, in addition to similar ADP restraints (SIMU) and rigid bond restraints (DELU) for all F atoms. Restraints were lifted for the final refinement cycles. A similar disorder refinement was required for [H(2-iPrPh-dpa)]BF4 (45:55), while refinement of the disorder in [Cu(2-iPrPh-dpa)(styrene)]BF4 (65:35) was performed with only SADI and DELU restraints, which were lifted in the final refinement cycles.
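Purely for illustration (hypothetical atom labels, and not the exact instructions used for the structures above), restraints of this kind appear in the instruction file as lines such as the following:
SADI B1 F1 B1 F2A B1 F3A B1 F4A B1 F2B B1 F3B B1 F4B
SADI F1 F2A F1 F3A F1 F4A F1 F2B F1 F3B F1 F4B
SIMU F1 F2A F3A F4A F2B F3B F4B
DELU F1 F2A F3A F4A F2B F3B F4B
Each SADI line restrains the listed pairs of atoms to similar distances, while SIMU and DELU apply the similar-ADP and rigid-bond restraints to the named fluorine atoms; deleting these lines in later cycles corresponds to "lifting" the restraints.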
In the complex [Ag(H-dpa)(styrene)]BF4 use of the free variable (FVAR) led to refinement of disordered fluorine atoms F(2A)-F(4A) and F(2B)-F(4B) as having a 75:25 site-occupancy disorder (Figure $52$). For initial refinement cycles, all B-F bond lengths were given similar distance restraints (SADI). Similar distance restraints (SADI) were also placed on F...F distances for each part, i.e., F(2A)...F(3A) = F(2B)...F(3B), etc. Additionally, similar ADP restraints (SIMU) and rigid bond restraints (DELU) were placed on all F atoms. All restraints, with the exception of SIMU, were lifted for final refinement cycles.
Rotation About a Non-Crystallographic Axis not Along a B-F Bond
The second type of disorder is closely related to the first, with the only difference being that the rotational axis is tilted slightly off the B-F bond vector, resulting in all four F-atoms having two site occupancies (Figure $53$). Tilt angles range from 6.5° to 42°.
The disordered BF4- anion present in the crystal structure of [Cu(Ph-dpa)(styrene)]BF4 was refined with fractional site occupancies for all four fluorine atoms, disordered by a rotation slightly tilted off the B(1)-F(2A) bond. It should be noted, however, that although the U(eq) values determined from the data collected at low temperature are roughly half those found at room temperature, as is evident from the sizes and shapes of the fluorine atoms in Figure $54$, the site occupancies refined to 50:50 in each case, and the disorder was not resolved.
An extreme example of off-axis rotation is observed where refinement required more than two site occupancies (Figure $55$), with as many as thirteen different fluorine atom locations around a single boron atom.
Constrained Rotation About a Non-Crystallographic Axis not Along a B-F Bond
Although a wide range of tilt angles is possible, in some systems the angle is constrained by the presence of hydrogen bonding. For example, the BF4- anion present in [Cu(Mes-dpa)(μ-OH)(H2O)]2[BF4]2 was found to have a 60:40 site occupancy disorder of the four fluorine atoms, and while the disorder is a C2-rotation slightly tilted off the axis of the B(1)-F(1A) bond, the angle is restricted by the presence of two B-F...O interactions for one of the isomers (Figure $56$).
An example that does adhere to global symmetry elements is seen in the BF4- anion of [Cu{2,6-iPr2C6H3N(quin)2}2]BF4.MeOH (Figure $57$), which exhibits a hydrogen-bonding interaction with a disordered methanol solvent molecule. The structure of R-N(quin)2 is shown in Figure $54$ b. By crystallographic symmetry, the carbon atom from methanol and the boron atom from the BF4- anion lie on a C2-axis. Fluorine atoms [F(1)-F(4)], the methanol oxygen atom, and the hydrogen atoms attached to methanol O(1S) and C(1S) atoms were refined as having 50:50 site occupancy disorder (Figure $57$).
Non Crystallographic Inversion Center at the Boron Atom
Multiple disorders can be observed within a single crystallographic unit cell. For example, the two BF4- anions in [Cu(Mes-dpa)(styrene)]BF4 both exhibited 50:50 site occupancy disorders: the first is a C2-rotation tilted off one of the B-F bonds, while the second is disordered about an inversion center located on the boron atom. Refinement of the latter was carried out similarly to the aforementioned cases, with the exception that fixed distance restraints for non-bonded atoms (DANG) were left in place for the disordered fluorine atoms attached to B(2) (Figure $58$).
Disorder on a Crystallographic Mirror Plane
Another instance in which the BF4- anion is disordered about a crystallographic symmetry element is that of [Cu(H-dpa)(1,5-cyclooctadiene)]BF4. In this instance fluorine atoms F(1) through F(4) are present in the asymmetric unit of the complex. Disordered atoms F(1A)-F(4A) were refined with 50% site occupancies, as B(1) lies on a mirror plane (Figure $59$). For initial refinement cycles, similar distance restraints (SADI) were placed on all B-F and F-F distances, in addition to similar ADP restraints (SIMU) and rigid bond restraints (DELU) for all F atoms. Restraints were lifted for the final refinement cycles. Because the boron atom lies on a crystallographic mirror plane, the remaining four fluorine atom positions are generated by reflection across it.
Disorder on a Non-Crystallographic Mirror Plane
It has been observed that the BF4- anion can exhibit site occupancy disorder of the boron atom and one of the fluorine atoms across a non-crystallographic symmetry (NCS) mirror plane defined by the plane of the other three fluorine atoms (Figure $60$); in such cases the entire anion, including the boron atom, is modeled as disordered.
Disorder of the Boron Atom Core
The extreme case of a disorder involves refinement of the entire anion, with all boron and all fluorine atoms occupying more than two sites (Figure $61$). In fact, some disorders of the latter types must be refined isotropically, or as a last-resort, not at all, to prevent one or more atoms from turning non-positive definite.
Low energy electron diffraction (LEED) is a very powerful technique that allows for the characterization of the surface of materials. Its high surface sensitivity is due to the use of electrons with energies between 20-200 eV, which have wavelengths of 2.7 – 0.87 Å (comparable to atomic spacings). As a result, the electrons are easily scattered elastically by the atoms in the first few layers of the sample. Features such as the small penetration depth of low-energy electrons have made LEED one of the most common techniques in surface science for the determination of the symmetry of the unit cell (qualitative analysis) and the positions of the atoms in the crystal surface (quantitative analysis).
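The quoted wavelength range follows directly from the de Broglie relation; as a worked check (non-relativistic, which is a good approximation at these energies), the wavelength in ångströms of an electron of kinetic energy E in eV is
$\lambda \ =\ \frac{h}{\sqrt{2m_{e}E}}\ \approx\ \sqrt{\frac{150.4}{E\ (\text{eV})}}\ \text{Å}$
which gives λ ≈ 2.7 Å at 20 eV and λ ≈ 0.87 Å at 200 eV.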
History: Davisson and Germer Experiment
In 1924 Louis de Broglie postulated that all forms of matter, such as electrons, have a wave-particle nature. Three years after this postulate, the American physicists Clinton J. Davisson and Lester H. Germer (Figure \(1\)) experimentally proved the wave nature of electrons at Bell Labs in New York. At that time, they were investigating the distribution-in-angle of the elastically scattered electrons (electrons that have suffered no loss of kinetic energy) from the (111) face of a polycrystalline nickel sample, a material composed of many randomly oriented crystals.
The experiment consisted of a beam of electrons from a heated tungsten filament directed against the polycrystalline nickel, and an electron detector mounted on an arc to observe the electrons at different angles. During the experiment, air entered the vacuum chamber where the nickel was held, producing an oxide layer on its surface. Davisson and Germer reduced the nickel oxide by heating the sample at high temperature. They did not realize that this thermal treatment changed the polycrystalline nickel into nearly monocrystalline nickel, a material composed of large, oriented crystal domains. When they repeated the experiment, it was a great surprise that the distribution-in-angle of the scattered electrons manifested sharp peaks at certain angles. They soon realized that these peaks were interference patterns and, in analogy to X-ray diffraction, that the arrangement of the atoms, and not the structure of the atoms, was responsible for the pattern of the scattered electrons.
The results of Davisson and Germer were soon corroborated by George Paget Thomson, J. J. Thomson's son. In 1937, both Davisson and Thomson were awarded the Nobel Prize in Physics for their experimental discovery of electron diffraction by crystals. It is noteworthy that 31 years after J. J. Thomson showed that the electron is a particle, his son showed that it is also a wave.
Although low-energy electron diffraction was discovered in 1927, it only became popular in the early 1960s, when advances in electronics and ultra-high vacuum technology made LEED instruments commercially available. At the beginning, the technique was only used for qualitative characterization of surface ordering. Years later, the development of computational technologies allowed the use of LEED for quantitative analysis of the positions of atoms within a surface. This information is hidden in the energy dependence of the diffraction spot intensities, which can be used to construct a LEED I-V curve.
Principles and Diffraction Patterns
Electrons can be considered as a stream of waves that hit a surface and are diffracted by regions of high electron density (the atoms). Electrons in the range of 20 to 200 eV can penetrate the sample only about 10 Å without losing energy. For this reason, LEED is especially sensitive to surfaces, unlike X-ray diffraction, which gives information about the bulk structure of a crystal due to its larger mean free path (around micrometers). Table \(1\) compares general aspects of both techniques.
Low Energy Electron Diffraction | X-ray Diffraction
Surface structure determination (high surface sensitivity) | Bulk structure determination
Sample must be a single crystal | Sample may be single-crystal or polycrystalline
Sample must have an oriented surface and is sensitive to impurities | Surface impurities not important
Experiment performed in ultra-high vacuum | Experiment usually performed at atmospheric pressure
Experiment done mostly at constant incidence angle and variable wavelength (electron energy) | Constant wavelength and variable incidence angle
Diffraction pattern consists of beams visible at almost all energies | Diffraction pattern consists of beams flashing out at specific wavelengths and angles
Table \(1\) Comparison between low energy electron diffraction and X-ray diffraction.
Like X-ray diffraction, electron diffraction also follows Bragg's law (see Figure \(2\)), where λ is the wavelength, a is the atomic spacing, d is the spacing of the crystal layers, θ is the angle between the incident beam and the reflected beam, and n is an integer. For constructive interference between two waves, the path length difference (2a sinθ or 2d sinθ) must be an integral multiple of the wavelength.
In LEED, the diffracted beams impact on a fluorescent screen and form a pattern of light spots (Figure \(3\) a), which is a to-scale version of the reciprocal lattice of the unit cell. The reciprocal lattice is a set of imaginary points, where the direction of a vector from one point to another is the direction of a normal to one plane of atoms in the unit cell (real space). Because the electron beam penetrates only a few 2D atomic layers (Figure \(3\) b), the reciprocal lattice seen by LEED consists of continuous rods and discrete points per atomic layer (Figure \(3\) c). In this way, LEED patterns can give information about the size and shape of the real space unit cell, but nothing about the positions of the atoms. To gain information about atomic positions, analysis of the spot intensities is required. For further information about reciprocal lattices and crystals refer to Crystal Structure and An Introduction to Single-Crystal X-Ray Crystallography.
Thanks to the hemispherical geometry of the fluorescent screen used in LEED, we can observe the reciprocal lattice without distortion. It is important to take into account that the separation of the points in the reciprocal lattice and the real interplanar distance are inversely proportional, which means that if the atoms are more widely spaced, the spots in the pattern get closer together, and vice versa. In the case of superlattices, periodic structures composed of layers of two materials, new points arise in addition to the original diffraction pattern.
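This inverse relationship can be written compactly (using the physics convention for the reciprocal lattice; the crystallographic convention omits the factor of 2π):
$|\vec{a}^{*}|\ =\ \frac{2\pi}{|\vec{a}|}$
so doubling the real-space spacing a halves the separation of the corresponding spots in the LEED pattern.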
LEED Experimental Equipment
A typical diagram of a LEED system is shown in Figure \(4\). The system directs an electron beam, produced by an electron gun located behind a transparent hemispherical fluorescent screen, onto the surface of the sample. The electron gun consists of a heated cathode and a set of focusing lenses that deliver electrons at low energies. The electrons collide with the sample and diffract in different directions depending on the surface. Once diffracted, they are directed toward the fluorescent screen. Before colliding with the screen, they must pass through four different grids (known as retarding grids), which contain a central hole through which the electron gun is inserted. The first grid is the nearest one to the sample and is connected to earth ground. A negative potential is applied to the second and third grids, which act as suppressor grids, since they repel all electrons coming from inelastic scattering events. These grids act as filters that allow only the highest-energy electrons to pass through; the electrons with lower energies are blocked in order to prevent a poorly resolved image. The fourth grid protects the phosphor screen, which is held at a positive potential, from the negative grids. The remaining electrons collide with the luminescent screen, creating a phosphor glow (left side of Figure \(4\)), where the light intensity depends on the electron intensity.
Conventional LEED systems require a method of data acquisition. In the past, the general method for analyzing the diffraction pattern was to take several dozen photographs manually. After the development of computers, the photographs were scanned and digitized for further analysis with computational software. Later, charge-coupled device (CCD) cameras were incorporated, allowing rapid acquisition, the possibility of averaging frames during acquisition in order to improve the signal, and immediate digitization and processing of the LEED pattern. In the case of the I-V curves, the intensities of the spots are extracted using special algorithms. Figure \(5\) shows a commercial LEED spectrometer with a CCD camera, which has to be housed in an ultra-high vacuum vessel.
LEED Applications
We have previously talked about the discovery of LEED and its principles, along with the experimental setup of a LEED system. It was also mentioned that LEED provides qualitative and quantitative surface analysis. In the following section, we will discuss the most common applications of LEED and the information that one can obtain with this technique.
Study of Adsorbates on the Surface and Disorder Layers
One of the principal applications of LEED, due to its high surface sensitivity, is the study of adsorbates on catalysts. As an example, Figure \(6\) a shows the surface of a pristine Cu (100) single crystal. This surface was cleaned carefully by several cycles of sputtering with argon ions, followed by annealing. The LEED pattern of Cu (100) presents four well-defined spots corresponding to its cubic unit cell.
Figure \(6\) b shows the LEED pattern after the growth of graphene on the surface of Cu (100) at 800 °C; we can observe the four spots that correspond to the surface of Cu (100) and a ring just outside these spots, which corresponds to domains of graphene with four different primary rotational alignments with respect to the Cu (100) substrate lattice (see Figure \(7\)). When the graphene growth temperature is increased to 900 °C, we observe a ring of twelve spots (as seen in Figure \(6\) c), which indicates that the graphene has a much higher degree of rotational order. Only two domains are observed, each with one of its lattice vectors aligned to one of the Cu (100) surface lattice vectors; because graphene has a hexagonal geometry, only one vector can coincide with the cubic lattice of Cu (100).
One possible explanation for the twelve spots observed at 900 °C is that, at the higher temperature, the four different domains observed at 800 °C possess enough energy to adopt the two orientations in which one of their lattice vectors aligns with a surface lattice vector of Cu (100). In addition, at 900 °C a decrease in the size and intensity of the Cu (100) spots is observed, indicating a larger coverage of the copper surface by the graphene domains.
When oxygen is chemisorbed on the surface of Cu (100), new spots corresponding to the oxygen appear, Figure $8$ a. Once graphene is allowed to grow on the oxygen-covered surface at 900 °C, the LEED pattern turns out different: the twelve spots corresponding to graphene domains are not observed, because in the presence of oxygen the graphene domains nucleate in multiple orientations, Figure $8$ b.
One way to study the disorder of the adsorbed layers is through LEED–IV curves, see Figure $9$. In this case, the intensities are plotted as a function of the angle of the electron beam. The spectrum of Cu (100), with only four sharp peaks, indicates a very well-ordered surface. For the graphene sample grown on the copper surface, twelve peaks are observed, corresponding to the twelve main spots of the LEED pattern; these peaks are sharp, indicating a high level of order. For the sample of graphene grown on copper with oxygen, the twelve peaks broaden, an effect of the increased disorder in the layers.
Structure Determination
As previously mentioned, LEED–IV curves can give exact information about the positions of the atoms in a crystal. These curves record the variation of the intensities of the diffracted electron beams (spots) with the energy of the electron beam. Structure determination by this technique works by trial and error and consists of three main parts: the measurement of the intensity spectra, calculations for various models of atomic positions, and the search for the best-fit structure, which is quantified by an R-factor.
The first step consists of obtaining the experimental LEED pattern and all the electron beam intensities for every spot of the reciprocal lattice in the pattern. Theoretical LEED–IV curves are then calculated for a large number of geometrical models and compared with the experimental curves. The agreement is quantified by means of a reliability factor, or R–factor. The closer this value is to zero, the better the agreement between the experimental and theoretical curves. In this way, the level of precision of the crystalline structure will depend on the smallest R–factor that can be achieved.
Pure metals with pure surfaces allow R–factor values of around 0.1. When moving to more complex structures, these values increase. The main reason for this gradually worse agreement between theoretical and experimental LEED-IV curves lies in the approximations in conventional LEED theory, which treats the atoms as perfect spheres with constant scattering potential in between. This description results in inaccurate scattering potential for more open surfaces and organic molecules. In consequence, a precision of 1-2 pm can be achieved for atoms in metal surfaces, whereas the positions of atoms within organic molecules are typically determined within ±10-20 pm. The values of the R-factor are usually between 0.2 and 0.5, where 0.2 represents a good agreement, 0.35 a mediocre agreement and 0.5 a poor agreement.
Figure \(10\) shows an example of a typical LEED–IV curve for Ir (100), which has a quasi-hexagonal unit cell. One can observe the parameters used to calculate the theoretical LEED–IV curve and the best-fitted curve obtained experimentally, which has an R–factor value of 0.144. The model used is also shown.
The first neutron diffraction experiment was carried out in 1945 by Ernest O. Wollan (Figure $1$) using the Graphite Reactor at Oak Ridge. Together with Clifford Shull (Figure $1$) he outlined the principles of the technique. However, the concept that neutrons would diffract like X-rays was first proposed by Dana Mitchell and Philip Powers, who suggested that neutrons have a wave-like nature described by the de Broglie equation, \ref{1}, where $λ$ is the wavelength (usually measured in Å), $h$ is Planck's constant, $v$ is the velocity of the neutron, and $m$ is the mass of the neutron.
$\lambda \ =\ h/mv \label{1}$
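As a worked example (using 2200 m/s, a commonly used reference velocity for thermal neutrons, and the neutron mass 1.675 × 10⁻²⁷ kg), substituting into \ref{1} gives
$\lambda \ =\ \frac{6.626\times 10^{-34}\ \text{J s}}{(1.675\times 10^{-27}\ \text{kg})(2200\ \text{m/s})}\ \approx\ 1.8\times 10^{-10}\ \text{m}\ =\ 1.8\ \text{Å}$
a wavelength comparable to typical interatomic distances, which is why thermal neutrons are well suited to diffraction.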
The great majority of materials studied by diffraction methods are composed of crystals. X-rays were the first type of source tested with crystals in order to determine their structural characteristics. Crystals are often described as perfect structures, although some of them show defects. Crystals are composed of atoms, ions or molecules arranged in a uniform repeating pattern. The basic concept to understand about crystals is that they can be described by an array of points, called lattice points, plus the motif, the group of atoms or molecules associated with each lattice point. Crystals are built up from a series of unit cells, where a unit cell is the repeating portion of the crystal; each unit cell shares its corners, edges and faces with neighboring unit cells. Unit cells can be categorized as primitive, which have only one lattice point; this means that the unit cell has lattice points only at its corners, and each corner point is shared among eight adjacent unit cells. In a non-primitive cell there are also points at the corners, but in addition there are lattice points on the faces or in the interior of the cell, which are similarly shared with other cells. In the cubic family, the simple cubic cell is the primitive one, whereas the face-centered cubic, base-centered cubic, and body-centered cubic cells are non-primitive.
Crystals can be categorized depending on the arrangement of lattice points, which generates different types of shapes. There are seven known crystal systems: cubic, tetragonal, orthorhombic, rhombohedral, hexagonal, monoclinic and triclinic. These differ in the relative lengths of their axes and in the angles between them. Each crystal system has one or more associated Bravais lattices.
Bragg's Law
Bragg's Law was first derived by the physicist Sir W. H. Bragg (Figure $2$) and his son W. L. Bragg (Figure $3$) in 1913.
It has been used to determine the spacing of atomic planes and the angles formed between these planes and the incident beam applied to the crystal examined. Intense scattered X-rays are produced when X-rays of a fixed wavelength are directed at a crystal. These scattered X-rays interfere constructively when the difference in travel path equals an integral number of wavelengths. Since crystals have repeating unit patterns, diffraction can be seen in terms of reflection from the planes of the crystal. The incident beam, the diffracted beam and the normal to the diffracting plane must lie in the same geometric plane. The angle between the incident beam and the crystal plane is called θ, so the angle between the incident and diffracted beams is 2θ. Figure $4$ shows a schematic representation of how the incident beam hits the plane of the crystal and is reflected at the same angle θ at which it strikes. Bragg's Law is mathematically expressed as \ref{2}:
$n\lambda = 2d \sin \theta \label{2}$
where $n$ is the integer order of reflection, $λ$= wavelength, and $d$= plane spacing.
Bragg’s Law is essential in determining the structure of an unknown crystal. Usually the wavelength is known and the angle of the incident beam can be measured. Having these two known values, the plane spacing of the layer of atoms or ions can be obtained. All reflections collected can be used to determine the structure of the unknown crystal material.
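As a simple illustration (the numbers are chosen arbitrarily): if a first-order reflection (n = 1) is observed at 2θ = 30° with neutrons of wavelength λ = 1.5 Å, the corresponding plane spacing follows from \ref{2} as
$d\ =\ \frac{n\lambda}{2\sin\theta}\ =\ \frac{1.5\ \text{Å}}{2\sin 15^{\circ}}\ \approx\ 2.9\ \text{Å}$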
Bragg's Law applies similarly to neutron diffraction: the same relationship is used, the only difference being that instead of X-rays, neutrons are directed at the crystal and their scattering is examined.
Neutron Diffraction
Neutrons have been used to determine crystalline structures. The study of materials by neutron radiation has many advantages compared with the more commonly used X-rays and electrons. Neutrons are scattered by the nuclei of the atoms, whereas X-rays are scattered by their electrons. This generates several differences: the scattering of X-rays depends strongly on the atomic number of the atoms, whereas neutron scattering depends on the properties of the nucleus. This leads to a more accurate identification of the unknown sample when a neutron source is used, because the nucleus of every atom, and even of different isotopes of the same element, scatters differently. These distinct nuclear properties make neutron diffraction a great technique for identifying materials with similar elemental composition. In contrast, X-rays will not give an unambiguous answer if the materials share similar characteristics; since the diffraction will be similar for elements adjacent in the periodic table, further analysis is needed in order to determine the structure of the unknown. Also, if the sample contains light elements such as hydrogen, it is almost impossible to determine their exact locations by X-ray diffraction or any other technique alone. Neutron diffraction can reveal the number of light-element atoms and their exact positions in the structure.
Neutron Inventors
The neutron was discovered by James Chadwick in 1932 (Figure $5$) when he showed that the radiation he was studying contained uncharged particles. These particles had a mass similar to that of the proton but did not share its other characteristics. Chadwick followed some of the predictions of Rutherford, who first worked in this unknown field. Later, in 1936, Elsasser designed the first neutron diffraction experiment, and Halban and Preiswerk were responsible for actually carrying it out. Diffraction was first demonstrated for powders; later, Mitchell and Powers developed and demonstrated the single-crystal technique. All experiments performed in the early years used radium–beryllium sources, whose neutron flux was not sufficient for the characterization of materials. As the years passed, neutron reactors had to be constructed in order to increase the neutron flux and allow complete characterization of the material being examined.
Between the mid and late 1940s neutron sources began to appear in countries such as Canada, the UK and other parts of Europe. Later, in 1951, Shull and Wollan presented a paper that discussed the scattering lengths of 60 elements and isotopes, which opened neutron diffraction broadly to the structural information it can provide.
Neutron Sources
The first sources of neutrons for early experiments were radium–beryllium sources. The problem with these, as already mentioned, was that the flux was not sufficient to perform demanding experiments such as the determination of the structure of an unknown material. Nuclear reactors started to emerge in the early 1950s and had a great impact on the scientific field. In the 1960s neutron reactors were constructed according to the flux required for the production of neutron beams. In the USA the first one constructed was the High Flux Beam Reactor (HFBR). This was followed by one at Oak Ridge Laboratory (HFIR) (Figure $6$), which was also intended for isotope production, and a couple of years later the ILL was built. The latter, built through a collaboration between Germany and France, is the most powerful so far, and no better reactor has yet been constructed. It has been suggested that the best route to greater flux is to look to other approaches for the production of neutrons, such as accelerator-driven sources. These could greatly increase the flux of neutrons, and in addition other kinds of experiments could be performed. The key process in these devices is spallation, which increases the number of neutrons ejected per incident proton while the energy released is minimal. Currently there are several such sources around the world, and investigations continue into the best approach to neutron production.
Neutron Detectors
Although neutrons are excellent particles for determining the complete structures of materials, they have some disadvantages. Neutrons are scattered relatively weakly, especially by soft materials. This is a significant concern because problems associated with weak scattering can lead to misinterpretation in the analysis of the structure of the material.
Neutrons are able to penetrate well below the surface of the material being examined. This is primarily because they interact with the material through nuclear interactions with the nuclei, rather than through the electrostatic interaction experienced by electrons. The interaction between the neutron's magnetic moment and the electrons of the material cannot be neglected either. All of these interactions are of great advantage for structure determination, since neutrons interact with every single nucleus in the material. The problem comes at the detection stage: because neutrons are uncharged particles, they are difficult to detect directly. For this reason, neutrons must undergo nuclear reactions that generate charged particles (ions). Some of the reactions usually used for the detection of neutrons are:
$n\ +\ ^{3}He \rightarrow \ ^{3}H\ +\ ^{1}H\ +\ 0.764 MeV \label{3}$
$n\ +\ ^{10}B \rightarrow \ ^{7}Li\ +\ ^{4}He\ +\ \gamma \ +\ 2.3 MeV \label{4}$
$n\ +\ ^{6}Li \rightarrow \ ^{4}He\ +\ ^{3}H\ +\ 4.79 MeV \label{5}$
The first two reactions apply when the detection is performed in a gas environment, whereas the third is carried out in a solid. Each of these reactions has a large cross section, which makes them ideal for neutron capture. Neutron detection strongly depends on the velocity of the particles: as the velocity increases, shorter wavelengths are produced and the detection becomes less efficient. The reaction products need to be generated as close to the detecting element as possible in order to obtain an accurate signal from the detector. This signal must be transduced quickly, and the detector should be ready to take the next measurement.
In gas detectors the cylinder is filled with either 3He or BF3. The electrons produced by secondary ionization are collected at the positively charged anode wire. One disadvantage of gas detectors is that a desired thickness cannot be fixed, since it is very difficult to maintain a defined thickness with a gas. In contrast, in scintillator detectors the detection occurs in a solid, so any thickness can be obtained; the thinner the solid, the more efficient the results become. Usually the absorber is 6Li and the substrate that detects the products is a phosphor, which exhibits luminescence. The emission of light from the phosphor results from its excitation as the ions pass through the scintillator. The signal produced is then collected and transduced into an electrical signal in order to register that a neutron has been detected.
Neutron Scattering
One of the greatest features of neutron scattering is that neutrons are scattered by every single atomic nucleus in the material, whereas in X-ray studies the scattering is from the electron density. In addition, neutrons can be scattered by the magnetic moments of the atoms. The intensity of the scattered neutrons depends on the wavelength at which they are emitted from the source. Figure $7$ shows how a neutron is scattered by the target when the incident beam hits it.
The incident beam encounters the target, and the scattered wave produced by the collision is detected by a detector at a defined position given by the angles θ, ϕ, which define the solid angle dΩ. In this scenario it is assumed that no energy is transferred between the nuclei of the atoms and the ejected neutron, which leads to elastic scattering.
When one is interested in calculating diffracted intensities, the cross section needs to be separated into scattering and absorption contributions. With respect to energy, there is a moderately large range over which the scattering cross section is constant, and there is also a wide range of cross sections close to a nuclear resonance. When the energies applied are less than the resonance energy, the scattering length and scattering cross section can move to negative values depending on the structure being examined; this means that there is a phase shift in the scattering, so the scattered wave is no longer 180° out of phase. When the energies are higher than the resonance, the cross section becomes asymptotic to the area of the nucleus, as expected for spherical structures. There is also resonance scattering when different isotopes are present, because each produces different nuclear energy levels.
Coherent and Incoherent Scattering
Usually, in every material the atoms are arranged differently, and therefore neutrons will be scattered either coherently or incoherently. It is convenient to define the differential scattering cross section, which is given by \ref{6}, where $\bar{b}$ represents the mean scattering length of the atoms, $\overline{b^{2}}$ is the mean of the squared scattering lengths, k is the scattering vector, $r_{n}$ is the position vector of the nth atom, and N is the total number of atoms in the structure. This equation can be separated into two parts, one corresponding to coherent scattering and the other to incoherent scattering, as labeled below. Usually the scattering is predominantly coherent, which facilitates the solution of the cross section, but when there is a spread of scattering lengths about the mean, the additional (incoherent) term must be considered. Incoherent scattering usually arises from the distribution of isotopes and nuclear spins of the atoms in the structure.
$d\sigma /d\Omega \ =\ |\bar{b}|^{2}\ |\sum_{n} e^{i\vec{k}\cdot \vec{r}_{n}} |^{2}\ +\ N(\overline{b^{2}}\ -\ \bar{b}^{2}) \label{6}$
Coherent Exp: $|\bar{b}|^{2}\ |\sum_{n} e^{i\vec{k}\cdot \vec{r}_{n}} |^{2} \nonumber$
Incoherent Exp: $N(\overline{b^{2}}\ -\ \bar{b}^{2}) \nonumber$
The ability to distinguish atoms with similar atomic numbers, or isotopes of the same element, depends on the difference between their coherent scattering lengths. Several coherent scattering lengths of atoms that are very similar to each other are already tabulated, which makes it easier to identify the structure of a sample by neutrons. Neutrons can also locate ions of light elements, including very low atomic number elements such as hydrogen. The negative scattering length of hydrogen increases the contrast, leading to better identification of its position, although hydrogen also has a very large incoherent scattering cross section, which removes neutrons from the coherent signal and adds background.
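As a small worked illustration (using approximate, commonly tabulated values rather than numbers from the original text): the coherent contrast between two nuclei scales with the square of the difference in their scattering lengths, so for hydrogen (b ≈ −3.7 fm) and deuterium (b ≈ +6.7 fm)
$(b_{H}\ -\ b_{D})^{2}\ \approx\ (-3.7\ -\ 6.7)^{2}\ \text{fm}^{2}\ \approx\ 108\ \text{fm}^{2}$
which is why selective deuteration is such an effective way to generate contrast in neutron experiments.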
Magnetic Scattering
As previously mentioned, one of the greatest features of neutron diffraction is that neutrons, because of their magnetic moment, can interact with either the orbital or the spin magnetic moment of the material examined. Not every element in the periodic table exhibits a magnetic moment; the only elements that do are those with unpaired electron spins. When neutrons hit such a solid, scattering is produced from the magnetic moment vector as well as from the scattering vector of the neutron itself. Figure $8$ shows the different vectors produced when the incident beam hits the solid.
When considering magnetic scattering, the coherent magnetic diffraction peaks need to be taken into account; the magnetic contribution to the differential cross section is $p^{2}q^{2}$ for an unpolarized incident beam. The magnetic structure amplitude is given by \ref{9}, where qn is the magnetic interaction vector, pn is the magnetic scattering length, and the remaining terms describe the positions of the atoms in the unit cell. When the term $F_{\text{mag}}$ is squared, the result is the intensity of the magnetic contribution to the analyzed peak. This equation only applies to those atoms that possess a magnetic moment.
$F_{\text{mag}}\ =\ \Sigma p_{n}q_{n} e^{2\pi i(hx_{n}\ +\ ky_{n}\ +\ Iz_{n})} \label{9}$
Magnetic diffraction becomes very important because of its d-spacing dependence. Since the magnetic scattering arises from the electrons, which are spatially extended, the forward scattering is stronger than the backward scattering. As in the X-ray case, interference between the atoms can also develop, so a magnetic structure factor must be considered. These interference effects arise because the spatial extent of the electron distribution is comparable to the wavelength of the thermal neutrons. This magnetic form factor falls off with angle more quickly than in the X-ray case, because the beam interacts only with the outer (unpaired) electrons of the atoms.
Sample Preparation and Environment
In neutron diffraction there is no unique protocol for the experimental conditions that should be considered, such as temperature, electric field, and pressure, to name a few. The parameters are chosen depending on the type of material and the data required. Temperatures as high as 1800 K or as low as 4 K can be reached; to access these extremes, a special furnace or cryostat capable of reaching the desired temperature must be used. For example, a helium refrigerator is commonly used when working at very low temperatures. For high temperatures, vacuum furnaces are used in which a cylindrical heating element of vanadium (V), niobium (Nb), tantalum (Ta) or tungsten (W) is attached to copper bars that hold the sample; Figure $9$ shows the design of the vacuum furnaces used for such analyses. The metal that works best in the desired temperature range is chosen as the heating element. Vanadium is the metal most commonly used because its coherent scattering is almost completely suppressed, so it contributes very little to the diffraction pattern. Another important consideration with these furnaces is that the material being examined must not decompose under vacuum; the crystal needs to be as stable as possible while it is being analyzed. Samples that cannot withstand a vacuum environment are heated in the presence of gases such as nitrogen or argon.
To prepare samples for examination by neutron diffraction, large crystals are usually needed, rather than the small ones required for X-ray studies; this is one of the main disadvantages of the technique. Most experiments are carried out using a four-circle diffractometer, mainly because many experiments are performed at very low temperatures, where a He refrigerator is needed to obtain good data. First, the crystal being analyzed is mounted on a quartz slide, which needs to be a couple of millimeters in size, and is then inserted into the sample holder, which is chosen according to the temperatures to be reached. Powder samples can also be analyzed by neutrons; to prepare them, the material is ground into a very fine powder and then loaded onto the quartz slide in the same way as the crystal samples. The main concern with this method is that grinding the sample into a powder can alter its structure.
Summary
Neutron diffraction is a great technique for the complete characterization of molecules involving light elements, and it is also very useful for those that contain different isotopes. Because neutrons interact with the nuclei of the atoms rather than with the outer electrons, as X-rays do, the data obtained are more reliable for these problems. In addition, because of their magnetic moment, neutrons can be used to characterize magnetic compounds. There are several disadvantages as well; one of the most critical is that a reasonably large amount of sample is needed, and great amounts of energy are required to produce large fluxes of neutrons. Several powerful neutron sources have been developed in order to study larger molecules with smaller quantities of sample; however, there is still a need for devices that can produce greater flux in order to analyze more sophisticated samples. Neutron diffraction has been widely studied because it works together with X-ray studies for the characterization of crystalline samples. The usefulness of this technique would increase greatly if some of its disadvantages were solved. For example, molecules that exhibit weak intermolecular forces could be fully characterized, because neutrons can precisely locate hydrogen atoms in a sample. Neutrons give a better answer to the chemical interactions that are present in a molecule, whereas X-rays help to give an idea of the macromolecular structure of the samples being examined.
X-ray absorption fine structure (XAFS) spectroscopy includes both X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectroscopies. The difference between the two techniques is the energy region analyzed, as shown in Figure $1$, and the information each technique provides. The complete XAFS spectrum is collected across an energy range from around 200 eV before the absorption edge of interest to about 1000 eV after it (Figure $2$). The absorption edge is defined as the X-ray energy at which the absorption coefficient shows a pronounced increase; this energy is equal to the energy required to excite an electron to an unoccupied orbital.
X-ray absorption near edge structure (XANES) is used to determine the valence state and coordination geometry, whereas extended X-ray absorption fine structure (EXAFS) is used to determine the local molecular structure of a particular element in a sample.
X-Ray Absorption Near Edge Structure (XANES) spectra
XANES is the part of the absorption spectrum closest to an absorption edge. It covers from approximately -50 eV to +200 eV relative to the edge energy (Figure $2$).
Because the shape of the absorption edge is related to the density of states available for the excitation of the photoelectron, the binding geometry and the oxidation state of the atom affect the XANES part of the absorption spectrum.
Before the absorption edge there is a smooth, nearly linear region. The edge itself appears as a step, which can show additional features such as isolated peaks, shoulders, or a white line, which is a strong peak sitting on top of the edge. These shapes give information about the absorbing atom. For example, the presence of a white line indicates that after the electron is released, the atomic states of the element are confined by the potential it feels; this sharp peak would be smoothed out if the atom could enter into any kind of resonance. Important information is also given by the position of the absorption edge: atoms in a higher oxidation state have fewer electrons than protons, so the energy states of the remaining electrons are lowered slightly, which shifts the absorption edge by up to several eV toward higher X-ray energy.
Extended X-ray absorption fine structure (EXAFS) spectra
The EXAFS part of the spectrum is the oscillatory part of the absorption coefficient extending up to around 1000 eV above the absorption edge. This region is used to determine the molecular bonding environment of the element. EXAFS gives information about the types and numbers of atoms coordinating a specific atom and about their inter-atomic distances. The atoms at the same radial distance from a given atom form a shell, and the number of atoms in the shell is the coordination number (e.g., Figure $2$).
An EXAFS signal arises from the scattering of the photoelectron generated at the central (absorbing) atom. The phase of the signal is determined by the distance and the path the photoelectrons travel. A simple scheme of the different paths is shown in Figure $3$. In the case of two shells around the central atom, there is a degeneracy of four for the path from the central atom to the first shell, a degeneracy of four for the path from the central atom to the second shell, and a degeneracy of eight for the path from the central atom to the first shell, then to the second shell, and back to the central atom.
The analysis of EXAFS spectra is accomplished using Fourier transformation to fit the data to the EXAFS equation. The EXAFS equation is a sum of the contribution from all scattering paths of the photoelectrons \ref{1}, where each path is given by \ref{2}.
$\chi (k)\ =\ \sum_{i} \chi _{i}(k) \label{1}$
$\chi _{i} (k) \equiv \frac{(N_{i}S_{0}^{2})F_{eff_{i}}(k)}{kR^{2}_{i}} \sin[2kR_{i}\ +\ \phi _{i}(k)] e^{-2\sigma ^{2}_{i} k^{2}} e^{\frac{-2R_{i}}{\lambda (k)}} \label{2}$
The terms $F_{eff_{i}}(k)$, $\phi_{i}(k)$, and $\lambda_{i}(k)$ are the effective scattering amplitude of the photoelectron, the phase shift of the photoelectron, and the mean free path of the photoelectron, respectively. The term $R_{i}$ is the half path length of the photoelectron (the distance between the central atom and a coordinating atom for a single-scattering event), and the wavenumber $k$ is defined through \ref{3}. The remaining variables are frequently determined by modeling the EXAFS spectrum.
$k^{2}\ =\ \frac{2m_{e}(E-E_{0}\ +\ \Delta E_{0})}{\hbar ^{2}} \label{3}$
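To make the role of each term in the EXAFS equation concrete, the following short Python sketch evaluates the contribution of a single scattering path. The parameter values and the simplified forms of Feff(k), φ(k), and λ(k) are assumptions chosen only for illustration; in a real analysis these functions are taken from theoretical calculations (e.g., FEFF) and the parameters are refined against the data.

```python
import numpy as np

# Assumed, illustrative single-path parameters (not fitted values)
N = 4             # coordination number (path degeneracy)
S0_sq = 0.9       # amplitude reduction factor S0^2
R = 2.0           # half path length, angstrom
sigma_sq = 0.005  # mean-square disorder (Debye-Waller factor), angstrom^2

k = np.linspace(2, 12, 500)   # photoelectron wavenumber, 1/angstrom

# Simplified stand-ins for the scattering functions; real analyses use
# tabulated or calculated F_eff(k), phi(k) and lambda(k)
F_eff = np.exp(-0.1 * k)      # effective scattering amplitude
phi = -0.5 * k                # phase shift
lam = 5.0 + 0.5 * k           # photoelectron mean free path, angstrom

# Single-path contribution chi_i(k), following the EXAFS equation above
chi = (N * S0_sq * F_eff / (k * R**2)
       * np.sin(2 * k * R + phi)
       * np.exp(-2 * sigma_sq * k**2)
       * np.exp(-2 * R / lam))
```

Summing such contributions over all significant paths reproduces the total χ(k), which is what is Fourier transformed and fitted in practice.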
XAFS Analysis for Arsenic Adsorption onto Iron Oxides
The adsorption of arsenic species onto iron oxides offers an example of the information that can be obtained by EXAFS. Because of the huge impact that the presence of arsenic in water can have on societies, there has been a great deal of research into the adsorption of arsenic onto many kinds of materials, in particular nanomaterials. Among the most promising materials for this kind of application are iron oxides. The mechanism of arsenic coordination onto the surfaces of these materials has recently been elucidated using X-ray absorption spectroscopy.
There are several ways in which arsenate (AsO43−, Figure $4$) can be adsorbed onto a surface. Figure $5$ shows the three ways that Sherman proposes arsenate can be adsorbed onto goethite (α-FeOOH): bidentate corner-sharing (2C), bidentate edge-sharing (2E), and monodentate corner-sharing (1V) configurations. Figure $6$ shows that the bidentate corner-sharing (2C) configuration is the one that corresponds with the calculated parameters, not only for goethite but for several iron oxides.
Several studies have confirmed that the bidentate corner-sharing (2C) complex is the one present for arsenate adsorption, and that a similar complex, the tridentate corner-sharing complex (3C), is present for arsenite adsorption onto most iron oxides, as shown in Figure $7$. Table $1$ shows the coordination numbers and distances reported in the literature for As(III) and As(V) adsorbed onto goethite.
Table $1$ Coordination numbers (CN) and inter-atomic distances (R) reported in the literature for the As(III) and As(V) adsorption onto goethite.
As | CN As-O | R As-O (Å) | CN As-Fe | R As-Fe (Å)
III | 3.06±0.03 | 1.79±0.8 | 2.57±0.01 | 3.34±3
 | 3.19 | 1.77±1 | 1.4 | 3.34±5
 | 3 | 1.78 | 2 | 3.55±5
V | 1.03 | 1.631 | 2 | 3.30
 | 4.6 | 1.68 | -- | 3.55±5
Circular dichroism (CD) spectroscopy is one of the few structure assessment methods that can be utilized as an alternative and a complement to many conventional analysis techniques, with advantages such as rapid data collection and ease of use. Since most of the effort and time spent in the advancement of the chemical sciences is devoted to the elucidation and analysis of the structure and composition of synthesized molecules or isolated natural products, rather than their preparation, one should be aware of all the relevant techniques available and know which instrument can be employed as an alternative to any other technique.
The aim of this module is to introduce CD technique and discuss what kind of information one can collect using CD. Additionally, the advantages of CD compared to other analysis techniques and its limitations will be shown.
Optical Activity
As CD spectroscopy can analyze only optically active species, it is convenient to start the module with a brief introduction of optical activity. In nature almost every life form is handed, meaning that there is a certain degree of asymmetry, just like in our hands. One cannot superimpose the right hand on the left because they are non-identical mirror images of one another. The same is true of chiral (handed) molecules: they exist as enantiomers, which are mirror images of each other (Figure $1$). One interesting phenomenon related to chiral molecules is their ability to rotate the plane of polarized light. This optical activity is used to determine the specific rotation, [ α ]Tλ, of a pure enantiomer, and the same property is used in polarimetry to find the enantiomeric excess (ee) present in a sample.
Circular Dichroism
Circular dichroism (CD) spectroscopy is a powerful yet straightforward technique for examining different aspects of optically active organic and inorganic molecules. Circular dichroism has applications in a variety of modern research fields, ranging from biochemistry to inorganic chemistry. Such widespread use of the technique arises from its essential property of providing structural information that cannot be acquired by other means. Another laudable feature of CD is that it is a quick, easy technique that makes analysis a matter of minutes. Nevertheless, just like all methods, CD has a number of limitations, which will be discussed while comparing CD to other analysis techniques.
CD spectroscopy and related techniques were once considered esoteric analysis techniques, needed by and accessible to only a small group of professionals. In order to make the reader more familiar with the technique, the principle of operation of CD, its several types, and related techniques will first be presented. Afterwards, sample preparation and instrument use will be covered for a protein secondary structure study case.
Depending on the light source used for generation of circularly polarized light, there are:
• Far UV CD, used to study the secondary structure of proteins.
• Near UV CD, used to investigate tertiary structure of proteins.
• Visible CD, used for monitoring metal ion protein interactions.
Principle of Operation
In the CD spectrometer the sample is placed in a cuvette and a beam of light is passed through the sample. The light (in the present context all electromagnetic waves will be referred to as light) coming from the source is subjected to circular polarization, meaning that its plane of polarization is made to rotate either clockwise (right circular polarization) or anti-clockwise (left circular polarization) with time while propagating, see Figure $2$.
The sample is first irradiated with left circularly polarized light, and the absorption is determined by \ref{1}. A second irradiation is then performed with right circularly polarized light. Due to the intrinsic asymmetry of chiral molecules, they interact differently with circularly polarized light depending on its direction of rotation, so there is a tendency to absorb more strongly for one of the rotation directions. The difference between the absorption of left and right circularly polarized light is the measured quantity, obtained from \ref{2}, where εL and εR are the molar extinction coefficients for left and right circularly polarized light, c is the molar concentration, and l is the path length, i.e., the cuvette width (in cm). The difference in absorption can be related to the difference in extinction, Δε, by \ref{3}.
$A\ = \varepsilon c l \label{1}$
$\Delta A\ =\ A_{L}-A_{R}\ =\ (\varepsilon _{L}\ -\ \varepsilon _{R} ) c l \label{2}$
$\Delta \varepsilon \ =\ \varepsilon _{L} \ -\ \varepsilon _{R} \label{3}$
Usually, for historical reasons, CD is reported not only as a difference in absorption or extinction coefficients but also as the degree of ellipticity, [θ]. The relationship between [θ] and Δε is given by \ref{4}.
$[\theta ]\ =\ 3298 \Delta \varepsilon \label{4}$
Since the absorption is monitored in a range of wavelengths, the output is a plot of [θ] versus wavelength or Δε versus wavelength. Figure $3$ shows the CD spectrum of Δ–[Co(en)3]Cl3.
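As a simple illustration of these relationships, the short Python sketch below converts a hypothetical measured absorbance difference into Δε and molar ellipticity using the equations above. All numerical values are assumed and chosen only for illustration.

```python
# Convert a measured CD signal to Delta-epsilon and molar ellipticity.
# All numbers below are assumed, illustrative values.
A_left = 0.5023    # absorbance of left circularly polarized light
A_right = 0.5000   # absorbance of right circularly polarized light
c = 1.0e-4         # molar concentration, mol/L
l = 1.0            # path length (cuvette width), cm

delta_A = A_left - A_right               # difference in absorption
delta_epsilon = delta_A / (c * l)        # difference in extinction, L mol^-1 cm^-1
theta = 3298 * delta_epsilon             # degree of ellipticity

print(delta_A, delta_epsilon, theta)
```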
Related Techniques
Magnetic Circular Dichroism
Magnetic circular dichroism (MCD) is a sister technique to CD, but there are several distinctions:
• MCD does not require the sample to possess intrinsic asymmetry (i.e., chirality/optical activity), because optical activity is induced by applying a magnetic field parallel to the direction of light propagation.
• MCD and CD have different selection rules, thus the information obtained from these two sister techniques is different. CD is good for assessing the environment of the sample's absorbing part, while MCD is superior for obtaining detailed information about the electronic structure of the absorbing part.
MCD is a powerful method for studying the magnetic properties of materials and has recently been employed for the analysis of an iron-nitrogen compound, the strongest magnet known. Moreover, MCD and its variation, variable-temperature MCD, are complementary techniques to Mossbauer spectroscopy and electron paramagnetic resonance (EPR) spectroscopy. Hence, these techniques can provide a useful complement to the chapters on Mossbauer and EPR spectroscopy.
Linear Dichroism
Linear dichroism (LD) is also a very closely related technique to CD, in which the difference between the absorbance of perpendicularly and parallel polarized light is measured. In this technique the plane of polarization of the light does not rotate. LD is used to determine the orientation of absorbing parts in space.
Advantages and Limitations of CD
Just like any other instrument, CD has its strengths and limits. The comparison between CD and NMR shown in Table $1$ gives a good sense of the capabilities of CD.
CD | NMR
Molecules of any size can be studied | There is a size limitation
The experiments are quick to perform; single wavelength measurements require milliseconds | This is not the case all of the time
Unique sensitivity to asymmetry in the sample's structure | Special conditions are required to differentiate between enantiomers
Can work with very small concentrations, by lengthening the cuvette width until discernable absorption is achieved | There is a limit to the sensitivity of the instrument
Timescale is much shorter (UV), thus allowing the study of dynamic systems and kinetics | Timescale is long; use of radio waves gives an average of all dynamic systems
Only qualitative analysis of data is possible | Quantitative data analysis can be performed to estimate chemical composition
Does not provide atomic level structure analysis | Very powerful for atomic level analysis, providing essential information about chemical bonds in the system
The observed spectrum is not enough for claiming one and only one possible structure | The NMR spectrum is key information for assigning a unique structure
Table $1$: A comparison of CD spectroscopy to NMR spectroscopy.
What Kind of Data is Obtained from CD?
One effective way to demonstrate the capabilities of CD spectroscopy is to consider the protein secondary structure study case, since CD spectroscopy is a well-established technique for elucidation of the secondary structure of proteins, as well as of other macromolecules. By using CD one can estimate the degree of conformational order (what percentage of the sample proteins is in the α-helix and/or β-sheet conformation), see Figure $4$.
Key points for visual estimation of secondary structure by looking at a CD spectrum:
• α-helical proteins have negative bands at 222 nm and 208 nm and a positive band at 193 nm.
• β-sheets have a negative band at 218 nm and a positive band at 195 nm.
• Proteins lacking any ordered secondary structure will not have any peaks above 210 nm.
Since the CD spectra of proteins uniquely represent their conformation, CD can be used to monitor structural changes (due to complex formation, folding/unfolding, denaturation caused by a rise in temperature or by denaturants, changes in amino acid sequence/mutation, etc.) in dynamic systems and to study protein kinetics. In other words, CD can be used to perform stability investigations and interaction modeling.
CD Instrument
Figure $5$ shows a typical CD instrument.
Protocol for Collecting a CD Spectrum
Most proteins and peptides require the use of buffers in order to prevent denaturation. Care should be taken to avoid using any optically active buffers. Clear solutions are required. CD spectra are taken in high-transparency quartz cuvettes to ensure minimal interference. Cuvettes are available with path lengths ranging from 0.01 cm to 1 cm. Depending on the UV activity of the buffers used, one should choose a cuvette with a path length (the distance the beam of light passes through the sample) that compensates for the UV absorbance of the buffer. Solutions should be prepared according to the cuvette that will be used, see Table $2$.
Cuvette Path (cm) | Concentration of Sample (mg/mL)
0.01-0.02 | 0.2-1.0
0.1 | 0.05-0.2
1 | 0.005-0.01
Table $2$ Choosing the appropriate cuvette based upon the sample concentration.
In addition, just like the salts used to prepare pellets in FT-IR, the buffers used in CD show cutoffs at a certain point in the low-wavelength region, meaning that a buffer starts to absorb below a certain wavelength. The cutoff values for most common buffers are known and can be obtained from the manufacturer. Oxygen absorbs light below 200 nm; therefore, in order to remove this interference, buffers should be prepared from distilled water or the water should be degassed before use. Another important point is to accurately determine the concentration of the sample, because the concentration must be known for CD data analysis. The concentration of the sample can be determined from its extinction coefficient, if one is reported in the literature; for protein samples, quantitative amino acid analysis can also be used.
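When a literature extinction coefficient is available, the concentration follows directly from the Beer-Lambert law introduced above. The minimal sketch below uses assumed values for the absorbance and extinction coefficient.

```python
# Estimate sample concentration from a UV absorbance reading (Beer-Lambert law).
# The absorbance and extinction coefficient below are assumed, illustrative values.
A_280 = 0.42          # measured absorbance at 280 nm
epsilon = 43824.0     # molar extinction coefficient, L mol^-1 cm^-1 (from literature)
path_length = 1.0     # cuvette path length, cm

c_molar = A_280 / (epsilon * path_length)   # concentration in mol/L
print(c_molar)
```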
Many CD instruments come bundled with a sample compartment temperature control unit. This is very handy when doing stability and unfolding/denaturation studies of proteins. Check to make sure the heat sink is filled with water, then turn the temperature control unit on and set it to the chosen temperature.
The UV source in a CD instrument is a very powerful lamp that can generate large amounts of ozone in its chamber. Ozone significantly reduces the life of the lamp; therefore, oxygen should be removed before turning on the main lamp (otherwise it will be converted to ozone near the lamp). For this purpose nitrogen gas is constantly flushed into the lamp compartment. Let the nitrogen flush for at least 15 min before turning on the lamp.
Collecting Spectra for Blank, Water, Buffer Background, and Sample
1. Collect a spectrum of the air blank (Figure $6$). This will be essentially a flat line lying on the x-axis of the spectrum (zero absorbance).
2. Fill the cuvette with water and take a spectrum.
3. Water droplets left in the cuvette may change the concentration of your sample, especially when working with dilute samples; hence, it is important to thoroughly dry the cuvette. After drying the cuvette, collect a spectrum of buffer of exactly the same concentration as used for the sample (Figure $6$). This is the step where the buffer is confirmed to be suitable: the spectra of the buffer and water should overlap within experimental error, except in the low-wavelength region where the signal-to-noise ratio is low.
4. Clean the cuvette as described above and fill it with the sample solution. Collect the CD spectrum three times for better accuracy (Figure $6$). For proteins, the multiple scans should overlap and not drift with time.
Data Handling and Analysis
After saving the data, the spectra of both the sample and the blank are smoothed using built-in commands of the controller software. The smoothed baseline is then subtracted from the smoothed spectrum of the sample. The next step is to use software packages which have algorithms for estimating the secondary structure of proteins. Input the data into the software package of choice and process it. The output from the algorithms will be the percentage of a particular secondary structure conformation in the sample. The data shown in Figure $7$ lists commonly used methods and compares them for several proteins. The estimated secondary structure is compared to X-ray data, and one can see that it is best to use several methods for best accuracy.
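The details differ between software packages, but many of these algorithms amount to expressing the measured spectrum as a linear combination of reference ("basis") spectra for pure α-helix, β-sheet, and random coil. The sketch below illustrates this idea with a non-negative least-squares fit; the basis numbers here are placeholders, not real reference data, and real packages use far more sophisticated basis sets and constraints.

```python
import numpy as np
from scipy.optimize import nnls   # non-negative least squares

# Placeholder basis spectra sampled at three wavelengths (e.g., 222, 208, 193 nm).
# Each row is one conformation; these numbers are illustrative stand-ins only.
basis_by_conformation = np.array([
    [-11.0, -10.0,  7.0],   # alpha-helix-like shape
    [ -4.0,  -1.0,  5.0],   # beta-sheet-like shape
    [  0.5,   1.0, -4.0],   # random-coil-like shape
])

measured = np.array([-6.0, -5.5, 5.5])   # assumed measured spectrum at the same wavelengths

# Solve measured ~= basis.T @ weights with non-negative weights
weights, residual = nnls(basis_by_conformation.T, measured)
fractions = weights / weights.sum()      # fraction of each conformation
print(fractions)
```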
Conclusion
What advantages does CD have over other analysis methods? CD spectroscopy is an excellent, rapid method for assessing the secondary structure of proteins and for performing studies of dynamic systems like the folding and binding of proteins. It is worth noting that CD does not provide information about the position of those subunits with a specific conformation. However, CD outrivals other techniques in rapidly assessing the structure of unknown protein samples and in monitoring structural changes of known proteins caused by ligation and complex formation, temperature change, mutations, or denaturants. CD is also widely used to compare fused proteins with their wild-type counterparts, because CD spectra can tell whether the fused protein retained the structure of the wild type or underwent changes.
Electrospray ionization-mass spectrometry (ESI-MS) is an analytical method that focuses on macromolecular structural determination. The unique component of ESI-MS is the electrospray ionization. The development of electrospraying, the process of charging a liquid into a fine aerosol, was completed in the 1960’s when Malcolm Dole (Figure $1$) demonstrated the ability of chemical species to be separated through electrospray techniques. With this important turn of events, the combination of ESI and MS was feasible and was later developed by John B. Fenn (Figure $2$) as a functional analytical method that could provide beneficial information about the structure and size of a protein. Fenn shared the 2002 Nobel Prize in Chemistry with Koichi Tanaka (Figure $3$), recognized for related soft ionization mass spectrometry methods, and Kurt Wuthrich (Figure $4$), recognized for NMR studies of biological macromolecules.
ESI-MS is the process through which proteins, or macromolecules, in the liquid phase are charged and fragmented into smaller aerosol droplets. These aerosol droplets lose their solvent and propel the charged fragments into the gas phase in several components that vary by charge. These components can then be detected by a mass spectrometer. The recent boom and development of ESI-MS is attributed to its benefits in characterizing and analyzing macromolecules, specifically biologically important macromolecules such as proteins.
How does ESI-MS Function?
ESI-MS is a process that requires the sample to be in liquid solution, so that tiny droplets may be ionized and analyzed individually by a mass spectrometer. The following delineates the processes that occur as relevant to Figure $5$:
• Spray needle/capillary - The liquid solution of the desired macromolecule is introduced into the system through this needle. The needle is held at high voltage by an outside power source that maintains a constant potential across the needle; the typical voltage applied to the needle is approximately 2.5 to 4 kV. The voltage causes the large droplets to fragment into smaller droplets according to the charge accumulated on the protein's constituent parts, and the liquid is dispersed as a fine aerosol.
• Droplet formation - The droplets that are expelled from the needle are smaller than the initial ones, and as a result the solvent evaporates. As the volume of a droplet decreases, the charge density on its surface increases. When a droplet nears the Rayleigh limit, the point at which the Coulombic repulsion within the droplet equals its surface tension, a Coulombic explosion occurs that further breaks the droplet into minute fractions, including the isolated, charged analyte (a sketch for estimating this limit is given after this list).
• Vacuum interface/cone - This portion of the device allows for the droplets to align in a small trail and pass through to the mass spectrometer. Alignment occurs because of the similarity and differences in charges amongst all the droplets. All the droplets are ionized to positive charges through addition of protons to varying basic sites on the droplets, yet all the charges vary in magnitude dependent upon the number of basic sites available for protonation. The receiving end or the cone has the opposite charge of the spray needle, causing an attraction between the cone and the droplets.
• Mass spectrometer- The charged particles then reach the mass spectrometer and are deflected based on the charge of each particle. Deflection occurs by the quadrupole magnet of the mass spectrometer. The different deflection paths of the ions occur due to the strength of the interaction with the magnetic field. This leads to various paths based on a mass/charge (m/z) ratio. The particles are then read by the ion detector, as they arrive, providing a spectrum based on m/z ratio.
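The Rayleigh limit mentioned in the droplet-formation step can be estimated for a given droplet size. The short Python sketch below evaluates the standard Rayleigh charge limit, q = 8π(ε0 γ r³)^1/2; the droplet radius and the water-like surface tension are assumed values for illustration.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C

def rayleigh_limit(radius_m, surface_tension=0.072):
    """Maximum charge a droplet can hold before Coulombic explosion.

    Uses the Rayleigh limit q = 8*pi*sqrt(eps0 * gamma * r^3).
    The default surface tension is that of water (assumed solvent).
    """
    return 8 * math.pi * math.sqrt(EPS0 * surface_tension * radius_m**3)

q = rayleigh_limit(1e-6)          # a 1 micrometre droplet (assumed size)
print(q, q / E_CHARGE)            # charge in coulombs and in elementary charges
```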
What Data is Provided by ESI-MS?
As implied by the name, the data produced from this technique is a mass spectrometry spectrum. Without delving too deeply into the topic of mass spectrometry, which is out of the true scope of this module, a slight explanation will be provided here. The mass spectrometer separates particles based on a magnetic field created by a quadrupole magnet. The strength of the interaction varies on the charge the particles carry. The amount of deflection or strength of interaction is determined by the ion detector and quantified into a mass/charge (m/z) ratio. Because of this information, determination of chemical composition or peptide structure can easily be managed as is explained in greater detail in the following section.
Interpretation of a Typical MS Spectrum
Interpreting the mass spectrometry data involves understanding the m/z ratio. The knowledge necessary to understanding the interpretation of the spectrum is that the peaks correspond to portions of the whole molecule. That is to say, hypothetically, if you put a human body in the mass spectrometer, one peak would coincide with one arm, another peak would coincide with the arm and the abdomen, etc. The general idea behind these peaks, is that an overlay would paint the entire picture, or in the case of the hypothetical example, provide the image of the human body. The m/z ratio defines these portions based on the charges carried by them; thus the terminology of the mass/charge ratio. The more charges a portion of the macromolecule or protein holds, the smaller the m/z ratio will be and the farther left it will appear on the spectrum. The fundamental concept behind interpretation involves understanding that the peaks are interrelated, and thus the math calculations may be carried out to provide relevant information of the protein or macromolecule being analyzed.
Calculations of m/z of the MS Spectrum Peaks
As mentioned above, the pertinent information to be obtained from the ESI-MS data is extrapolated from the understanding that the peaks are interrelated. The steps for calculating the data are as follow:
• Determine which two neighboring peaks will be analyzed.
• Establish the first peak (the one farthest left) as the peak with the lowest m/z ratio, i.e., the highest charge state. This is mathematically defined as our z+1 peak.
• Establish the adjacent peak to the right of our first peak as the peak with the lower m/z ratio. This is mathematically our z peak.
• Our z+1 peak will also be our m+1 peak as the difference between the two peaks is the charge of one proton. Consequently, our z peak will be defined as our m peak.
• Solve both equations for m to allow for substitution. Both sides of the resulting equation are then in terms of z and can be solved.
• Determine the charge of the z peak and subsequently, the charge of the z+1 peak.
• Subtract one from the m/z ratio of each peak and multiply by the corresponding charge determined previously to obtain the mass of the protein or macromolecule.
• Average the results to determine the average mass of the macromolecule or protein.
1. Determine which two neighboring peaks will be analyzed from the MS (Figure $6$) as the m/z = 5 and m/z = 10 peaks.
2. Establish the first peak (the one farthest left in Figure $6$) as the z + 1 peak (i.e., the peak at m/z = 5).
3. Establish the adjacent peak to the right of the first peak as the z peak (i.e., the peak at m/z = 10).
4. Establish the peak ratios, \ref{1} and \ref{2}.
$\frac{m+1}{z+1} =\ 5 \label{1}$
$\frac{m}{z} = 10 \label{2}$
5. Solve the ratios for m: \ref{3} and \ref{4}.
$m\ =\ 5z\ +\ 4 \label{3}$
$m\ =\ 10z \label{4}$
6. Substitute one equation for m: \ref{5}.
$5z\ +\ 4\ =\ 10z \label{5}$
7. Solve for z: \ref{6}.
$z\ = 4/5 \label{6}$
8. Find z+1: \ref{7}.
$z\ +\ 1\ =\ 9/5 \label{7}$
9. Find the average molecular mass by subtracting 1 from the m/z value of each peak and multiplying by the corresponding charge: \ref{8} and \ref{9}. Hence, the average mass = 7.2.
$(10\ -\ 1)(4/5)\ =\ 7.2 \label{8}$
$(5\ -\ 1)(9/5)\ =\ 7.2 \label{9}$
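The same algebra can be wrapped in a short function. The Python sketch below reproduces the toy calculation above; it assumes the two peaks are adjacent charge states that differ by exactly one proton, with the proton mass taken as 1 for simplicity, as in the treatment above.

```python
def mass_from_adjacent_peaks(mz_low, mz_high):
    """Estimate charge and neutral mass from two adjacent charge-state peaks.

    mz_low is the peak at lower m/z (charge z+1); mz_high is the neighbouring
    peak at higher m/z (charge z). Adjacent peaks are assumed to differ by
    one proton (mass taken as 1, as in the simplified treatment above).
    """
    # From (m + 1)/(z + 1) = mz_low and m/z = mz_high:
    z = (mz_low - 1) / (mz_high - mz_low)
    mass_from_high = (mz_high - 1) * z          # (m/z - 1) * z for the z peak
    mass_from_low = (mz_low - 1) * (z + 1)      # (m/z - 1) * (z+1) for the z+1 peak
    return z, 0.5 * (mass_from_high + mass_from_low)

print(mass_from_adjacent_peaks(5, 10))   # toy values from the text -> z = 4/5, mass = 7.2
```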
Sample Preparation
Samples for ESI-MS must be in a liquid state. This requirement provides the necessary medium to easily charge the macromolecules or proteins into a fine aerosol state that can be easily fragmented to provide the desired outcomes. The benefit of this technique is that solid proteins that were once difficult to analyze, like metallothionein, can be dissolved in an appropriate solvent that allows analysis through ESI-MS. Because the sample is delivered into the system as a liquid, the capillary can easily charge the solution to begin fragmentation of the protein into smaller fractions. The maximum voltage applied to the capillary is approximately 4 kV; however, this much is not necessary for every macromolecule. The appropriate voltage depends on the characteristics of the solvent and on the size of each individual macromolecule. This has allowed the removal of the molecular weight limit that was once held true for simple mass spectrometry analysis of proteins. Large proteins and macromolecules can now easily be detected and analyzed through ESI-MS due to the facility with which the molecules can fragment.
Related Techniques
A related technique that was developed at approximately the same time as ESI-MS is matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). This technique, also developed in the late 1980’s, serves the same fundamental purpose: allowing analysis of large macromolecules via mass spectrometry through an alternative route of generating the necessary gas phase for analysis. In MALDI-MS, a matrix, usually comprised of crystallized 3,5-dimethoxy-4-hydroxycinnamic acid (Figure $7$), water, and an organic solvent, is used to mix the analyte, and a laser is used to charge the matrix.
The matrix then co-crystallizes the analyte and pulses of the laser are then used to cause desorption of the matrix and some of the analyte crystals with it, leading to ionization of the crystals and the phase change into the gaseous state. The analytes are then read by the tandem mass spectrometer. Table $1$ directly compares some attributes between ESI-MS and MALDI-MS. It should be noted that there are several variations of both ESI-MS and MALDI-MS, with the methods of data collection varying and the piggy-backing of several other methods (liquid chromatography, capillary electrophoresis, inductively coupled plasma mass spectrometry, etc.), yet all of them have the same fundamental principles as these basic two methods.
Table $1$ Comparison of the general experimental details of ESI-MS and MALDI-MS.
Experimental Details | ESI-MS | MALDI-MS
Starting analyte state | Liquid | Liquid/solid
Method of ionization | Charged capillary needle | Matrix laser desorption
Final analyte state | Gas | Gas
Quantity of protein needed | 1 μL | 1 μL
Spectrum method | Mass spectrometry | Mass spectrometry
Problems with ESI-MS
ESI-MS has proven to be useful in the determination of tertiary structure and in molecular weight calculations for large macromolecules; however, there are still several problems associated with the technique for macromolecule analysis. One problem is the isolation of the desired protein for analysis. If the protein cannot be extracted from the cell (this is usually done through gel electrophoresis), there is a limit to which proteins can be analyzed. Cytochrome c (Figure $7$) is an example of a protein that can be isolated and analyzed, but it also illustrates how the technique falls short of a completely effective protein analysis. The problem with cytochrome c is that even if the protein is in its native conformation, it can still show different charge distributions. This occurs due to the availability of basic sites for protonation that are consistently exposed to the solvent. Any slight change from the native conformation may cause basic sites, such as those in cytochrome c, to be blocked, causing different m/z ratios to be seen. Another interesting limitation is seen when inorganic elements, such as the zinc in metallothionein proteins, are analyzed using ESI-MS. Metallothioneins have several isoforms that show no consistent trend in ESI-MS data between the varied isoforms. The marked differences occur because the metallation of each isoform is different, which causes the electrospraying, and as a result the protonation of the protein, to differ. Thus, the incorporation of metal atoms in proteins can have various effects on ESI-MS data due to the unexpected interactions between the metal center and the protein itself.
Liquid Crystal Phases
Liquid crystals are a state of matter with properties between those of a solid crystal and a common liquid. There are basically three different types of liquid crystal phases:
• Thermotropic liquid crystal phases are dependent on temperature.
• Lyotropic liquid crystal phases are dependent on temperature and the concentration of LCs in the solvent.
• Metallotropic LC phases are composed of organic and inorganic molecules; their phase transitions depend not only on temperature and concentration, but also on the ratio between the organic and inorganic molecules.
Thermotropic LCs are the most widely used one, which can be divided into five categories:
• Nematic phase in which rod-shaped molecules have no positional order, but they self-align to have long-range directional order with their long axes roughly parallel (Figure \(1\)a).
• Smectic phase, where the molecules are positionally ordered along one direction in well-defined layers oriented either along the layer normal (smectic A) or tilted away from the layer normal (smectic C), see Figure \(1\)b.
• Chiral phase which exhibits a twisting of the molecules perpendicular to the director, with the molecular axis parallel to the director Figure \(1\) c.
• Blue phase having a regular three-dimensional cubic structure of defects with lattice periods of several hundred nanometers, and thus they exhibit selective Bragg reflections in the wavelength range of light Figure \(2\).
• Discotic phase in which disk-shaped LC molecules can orient themselves in a layer-like fashion Figure \(3\).
Thermotropic LCs are very sensitive to temperature. If the temperature is too high, thermal motion destroys the ordering of the LC and pushes it into a conventional liquid phase. If the temperature is too low, thermal motion is insufficient to sustain the LC phase, and the material becomes a crystalline solid.
The existence of a liquid crystal phase can be detected using polarized optical microscopy, since a liquid crystal phase exhibits a unique texture under the microscope. The contrasting areas in the texture correspond to domains where the LCs are oriented in different directions.
Polarized Optical Microscopy
Polarized optical microscopy is typically used to detect the existence of liquid crystal phases in a solution. The technique relies on the polarization of light. A polarizer is a filter that only permits light oriented along its polarizing direction to pass through. There are two polarizers in a polarizing optical microscope (POM) (Figure \(4\)), and they are oriented at right angles to each other, which is termed crossed polars. The principle of crossed polars is illustrated in Figure \(5\): the polarizing direction of the first polarizer is oriented vertically to the incident beam, so only waves with vertical polarization can pass through it. The passed wave is subsequently blocked by the second polarizer, since this polarizer is oriented horizontally to the incident wave.
Theory of Birefringence
A birefringent, or doubly-refracting, sample has the unique property of producing two individual wave components from a single wave passing through it; these two components are termed the ordinary and extraordinary waves. Figure \(6\) is an illustration of a typical construction of a Nicol polarizing prism; as can be seen, the non-polarized white light is split into two rays as it passes through the prism. The ray that travels out of the prism is called the ordinary ray, and the other is called the extraordinary ray. So if a birefringent specimen is located between the polarizer and analyzer, the initial light is separated into two waves as it passes through the specimen. After exiting the specimen, the light components are out of phase, but are recombined with constructive and destructive interference when they pass through the analyzer. The combined wave is then elliptically or circularly polarized, see Figure \(7\); image contrast arises from the interaction of plane-polarized light with a birefringent specimen, so some amount of the wave passes through the analyzer and gives a bright domain on the specimen.
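The brightness observed between crossed polars can be rationalized with the standard expression for a birefringent plate between crossed polarizers, I = I0 sin²(2φ) sin²(πΔn d/λ), where φ is the angle between the specimen's optic axis and the polarizer axis and Δn d is the optical retardance. The short sketch below evaluates this expression for assumed, illustrative values of the film thickness and birefringence.

```python
import numpy as np

def crossed_polar_intensity(phi_deg, delta_n, thickness_nm, wavelength_nm, I0=1.0):
    """Transmitted intensity of a birefringent plate between crossed polarizers.

    phi_deg is the angle between the specimen optic axis and the polarizer axis;
    delta_n * thickness gives the optical retardance of the specimen.
    """
    phi = np.radians(phi_deg)
    retardance = np.pi * delta_n * thickness_nm / wavelength_nm
    return I0 * np.sin(2 * phi) ** 2 * np.sin(retardance) ** 2

# Assumed values: a 5 micrometre thick film with delta-n = 0.05, viewed at 550 nm
print(crossed_polar_intensity(45, 0.05, 5000, 550))   # brightest when the axis is at 45 degrees
print(crossed_polar_intensity(0, 0.05, 5000, 550))    # dark when aligned with a polarizer
```

This is why rotating the specimen (or the polarizer) between crossed polars makes birefringent domains brighten and darken, which is the basis of the textures discussed below.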
Liquid Crystal Display
The most common application of LCs is in liquid crystal displays (LCDs). Figure \(8\) is a simple demonstration of how an LCD works in a digital calculator. There are two crossed polarizers in this system, and a liquid crystal (cholesteric spiral pattern) sandwich that can be electrically charged is located between them. When the liquid crystal is charged, the light waves pass through without changing orientation. When the liquid crystal is uncharged, the waves are rotated 90° as they pass through the LC, so they can pass through the second polarizer. There are seven separately charged electrodes in the LCD, so the LCD can display the digits 0 to 9 by addressing different combinations of electrodes. For example, when the upper right and lower left electrodes are charged, we can get a 2 on the display.
Microscope Images of Liquid Crystal Phase
The first-order retardation plate is frequently utilized to determine the optical sign of a birefringent specimen in polarized light microscopy. The optical sign can be positive or negative, depending on whether the ordinary or the extraordinary wavefront travels faster through the specimen (Figure \(9\) a). When a first-order retardation plate is added, the structure of the cell becomes much more apparent compared with the image taken without the retardation plate (Figure \(9\) b).
Images of Liquid Crystal Phases
Figure \(10\) shows images of liquid crystal phases from different specimens. First-order retardation plates are utilized in all of these images. Clear contrast is detected in each image, which corresponds to the existence of a liquid crystal phase within the specimen.
The Effect of Rotation of the Polarizer
The effect of the angle between the horizontal direction and the polarizer transmission axis on the appearance of the liquid crystal phase may also be analyzed. Figure \(11\) shows images of an ascorbic acid (Figure \(12\)) sample under crossed polars. When the polarizer is rotated from 0° to 90°, large variations in the shape of the bright domains and in the domain colors appear due to the change in the wave vibration directions. By rotating the polarizer, we can obtain a comprehensive understanding of the overall texture.
• 8.1: Microparticle Characterization via Confocal Microscopy
Confocal microscopy was invented by Marvin Minsky in 1955, and subsequently patented in 1961. Minsky was trying to study neural networks to understand how brains learn, and needed a way to image these connections in their natural state (in three dimensions). However, the utility of his invention was not fully realized until technology could catch up. In 1973 Egger published the first recognizable cells, and the first commercial microscopes were produced in 1987.
• 8.2: Transmission Electron Microscopy
TEMs provide images with significantly higher resolution than visible-light microscopes (VLMs) do because of the smaller de Broglie wavelength of electrons. These electrons allow for the examination of finer details, several thousand times finer than the highest resolution attainable in a VLM. Nevertheless, the contrast in a TEM image arises from the absorption of the electrons in the material, which is primarily due to the thickness and composition of the material.
• 8.3: Scanning Tunneling Microscopy
Scanning tunneling microscopy (STM) is a powerful instrument that allows one to image the sample surface at the atomic level. As the first generation of scanning probe microscopy (SPM), STM paves the way for the study of nano-science and nano-materials.
• 8.4: Magnetic Force Microscopy
Magnetic force microscopy (MFM) is a natural extension of scanning tunneling microscopy (STM), whereby both the physical topology of a sample surface and the magnetic topology may be seen. Scanning tunneling microscopy was developed in 1982 by Gerd Binnig and Heinrich Rohrer, and the two shared the 1986 Nobel prize for their innovation.
• 8.5: Spectroscopic Characterization of Nanoparticles
Quantum dots (QDs) are small semiconductor nanoparticles generally composed of two elements that have extremely high quantum efficiencies when light is shined on them.
• 8.6: Measuring the Specific Surface Area of Nanoparticle Suspensions using NMR
Surface area is a property of immense importance in the nano-world, especially in the area of heterogeneous catalysis. A solid catalyst works with its active sites binding to the reactants, and hence for a given active site reactivity, the higher the number of active sites available, the faster the reaction will occur.
• 8.7: Characterization of Graphene by Raman Spectroscopy
Graphene is a quasi-two-dimensional material, which comprises layers of carbon atoms arranged in six-member rings. Since being discovered by Andre Geim and co-workers at the University of Manchester, graphene has become one of the most exciting topics of research because of its distinctive band structure and physical properties, such as the observation of a quantum hall effect at room temperature, a tunable band gap, and a high carrier mobility.
• 8.8: Characterization of Covalently Functionalized Single-Walled Carbon Nanotubes
Characterization of nanoparticles in general, and carbon nanotubes in particular, remains a technical challenge even though the chemistry of covalent functionalization has been studied for more than a decade. It has been noted by several researchers that the characterization of products represents a constant problem in nanotube chemistry.
• 8.9: Characterization of Bionanoparticles by Electrospray-Differential Mobility Analysis
Electrospray-differential mobility analysis (ES-DMA) is an analytical technique that uses first an electrospray to aerosolize particles and then DMA to characterize their electrical mobility at ambient conditions. This versatile tool can be used to quantitatively characterize biomolecules and nanoparticles from 0.7 to 800 nm. In the 1980s, it was discovered that ES could be used for producing aerosols of biomacromolecules.
08: Structure at the Nano Scale
A Brief History of Confocal Microscopy
Confocal microscopy was invented by Marvin Minsky (Figure \(1\)) in 1955, and subsequently patented in 1961. Minsky was trying to study neural networks to understand how brains learn, and needed a way to image these connections in their natural state (in three dimensions). However, the utility of his invention was not fully realized until technology could catch up. In 1973 Egger published the first recognizable cells, and the first commercial microscopes were produced in 1987.
In the 1990's confocal microscopy became nearly routine due to advances in laser technology, fiber optics, photodetectors, thin film dielectric coatings, computer processors, data storage, displays, and fluorophores. Today, confocal microscopy is widely used in the life sciences to study cells and tissues.
The Basics of Fluorescence
Fluorescence is the emission of a secondary photon upon absorption of a photon of shorter wavelength (higher energy). Most molecules at normal temperatures are in the lowest energy state, the so-called 'ground state'. Occasionally, a molecule may absorb a photon and be promoted to an excited state. From there it can very quickly transfer some of that energy to other molecules through collisions; however, if it cannot transfer enough energy it spontaneously emits a photon with a longer wavelength (Figure \(2\)). This is fluorescence.
In fluorescence microscopy, fluorescent molecules are designed to attach to specific parts of a sample, thus identifying them when imaged. Multiple fluorophores can be used to simultaneously identify different parts of a sample. There are two options when using multiple fluorophores:
• Fluorophores can be chosen that respond to different wavelengths of a multi-line laser.
• Fluorophores can be chosen that respond to the same excitation wavelength but emit at different wavelengths.
In order to increase the signal, more fluorophores can be attached to a sample. However, there is a limit, as high fluorophore concentrations result in them quenching each other, and too many fluorophores near the surface of the sample may absorb enough light to limit the light available to the rest of the sample. While the intensity of incident radiation can be increased, fluorophores may become saturated if the intensity is too high.
Photobleaching is another consideration in fluorescent microscopy. Fluorophores irreversibly fade when exposed to excitation light. This may be due to reaction of the molecules’ excited state with oxygen or oxygen radicals. There has been some success in limiting photobleaching by reducing the oxygen available or by using free-radical scavengers. Some fluorophores are more robust than others, so choice of fluorophore is very important. Fluorophores today are available that emit photons with wavelengths ranging 400 - 750 nm.
How Confocal Microscopy is Different from Optical Microscopy
A microscope’s lenses project the sample plane onto an image plane. An image can be formed at many image planes; however, we only consider one of these planes to be the ‘focal plane’ (when the sample image is in focus). When a pinhole screen in placed at the image focal point, it allows in-focus light to pass while effectively blocking light from out-of-focus locations Figure \(3\). This pinhole is placed at the conjugate image plane to the focal plane, thus the name "confocal". The size of this pinhole determines the depth-of-focus; a bigger pinhole collects light from a larger volume. The pinhole can only practically be made as small as approximately the radius of the Airy disk, which is the best possible light spot from a circular aperture Figure \(4\), because beyond that more signal is blocked, resulting in a decreased signal-to-noise ratio.
In optics, the Airy disk and Airy pattern are descriptions of the best focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light.
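For a sense of scale, the Airy disk radius at the specimen follows from the Rayleigh criterion, r = 0.61 λ / NA, where NA is the numerical aperture of the objective. The sketch below uses assumed values for the excitation wavelength and objective; the result must still be scaled by the system magnification to size the physical pinhole.

```python
def airy_radius_nm(wavelength_nm, numerical_aperture):
    """Radius of the Airy disk at the sample plane (Rayleigh criterion)."""
    return 0.61 * wavelength_nm / numerical_aperture

# Assumed example: 488 nm excitation with a 1.4 NA oil-immersion objective
r = airy_radius_nm(488, 1.4)
print(r)   # about 213 nm at the specimen
```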
To further reduce the effect of scattering due to light from other parts of the sample, the sample is only illuminated at a tiny point through the use of a pinhole in front of the light source. This greatly reduces the interference of scattered light from other parts of the sample. The combination of a pinhole in front of both the light source and detector is what makes confocal unique.
Parts of a Confocal Microscope
A simple confocal microscope generally consists of a laser, pinhole aperture, dichromatic mirror, scanning mirrors, microscope objectives, a photomultiplier tube, and computing software used to reconstruct the image Figure \(5\). Because a relatively small volume of the sample is being illuminated at any given time, a very bright light source must be used to produce a detectable signal. Early confocal microscopes used zirconium arc lamps, but recent advances in laser technology have made lasers in the UV-visible and infrared more stable and affordable. A laser allows for a monochromatic (narrow wavelength range) light source that can be used to selectively excite fluorophores to emit photons of a different wavelength. Sometimes filters are used to further screen for single wavelengths.
The light passes through a dichromatic (or "dichroic") mirror (Figure \(6\)), which allows the shorter-wavelength excitation light (from the laser) to pass but reflects the longer-wavelength emitted light (from the sample) toward the detector. This allows the light to travel the same path through the majority of the instrument, and eliminates signal due to reflection of the incident light.
The light then reflects off a pair of mirrors or crystals, one each for the x and y directions, which enable the beam to scan across the sample (Figure \(6\)). The speed of the scan is usually the limiting factor in the speed of image acquisition. Most confocal microscopes can create an image in 0.1 - 1 second. Usually the sample is raster scanned quickly in the x-direction and slowly in the y-direction (like reading a paragraph left to right, Figure \(6\)).
The rastering is controlled by galvanometers that move the mirrors back and forth in a sawtooth motion. The disadvantage to scanning with the light beam is that the angle of light hitting the sample changes. Fortunately, this change is small. Interestingly, Minsky's original design moved the stage instead of the beam, as it was difficult to maintain alignment of the sensitive optics. Despite the obvious disadvantages of moving a bulky specimen, there are some advantages of moving the stage and keeping the optics stationary:
• The light illuminates the specimen axially everywhere, circumventing optical aberrations, and
• The field of view can be made much larger by controlling the amplitude of the stage movements.
An alternative to light-reflecting mirrors is the acousto-optic deflector (AOD). The AOD allows for fast x-direction scans by creating a diffraction grating from high-frequency standing sound (pressure) waves which locally change the refractive index of a crystal. The disadvantage to AODs is that the amount of deflection depends on the wavelength, so the emission light cannot be descanned (travel back through the same path as the excitation light). The solution to this is to descan only in the y direction controlled by the slow galvanometer and collect the light in a slit instead of a pinhole. This results in reduced optical sectioning and slight distortion due to the loss of radial symmetry, but good images can still be formed. Keep in mind this is not a problem for reflected light microscopy which has the same wavelength for incident and reflected light!
Another alternative is the Nipkow disk, which has a spiral array of pinholes that create the simultaneous sampling of many points in the sample. A single rotation covers the entire specimen several times over (at 40 revolutions per second, that's over 600 frames per second). This allows descanning, but only about 1% of the excitation light passes through. This is okay for reflected light microscopy, but the signal is relatively weak and signal-to-noise ratio is low. The pinholes could be made bigger to increase light transmission but then the optical sectioning is less effective (remember depth of field is dependent on the diameter of the pinhole) and xy resolution is poorer. Highly responsive, efficient fluorophores are needed with this method.
Returning to the confocal microscope (Figure \(5\)), light then passes through the objective which acts as a well-corrected condenser and objective combination. The illuminated fluorophores fluoresce and the emitted light travels up the objective back to the dichromatic mirror. This is known as epifluorescence, since the incident light has the same path as the detected light. Because the emitted light now has a longer wavelength than the incident light, it cannot pass back through the dichromatic mirror and is reflected to the detector. When using reflected light, a beamsplitter is used instead of a dichromatic mirror. Fluorescence microscopy, when used properly, can be more sensitive than reflected light microscopy.
Though the signal’s position is well-defined according to the position of the xy mirrors, the signal from fluorescence is relatively weak after passing through the pinhole, so a photomultiplier tube is used to detect emitted photons. Detecting all photons without regard to spatial position increases the signal, and the photomultiplier tube further increases the detection signal by propagating an electron cascade resulting from the photoelectric effect (incident photons kicking off electrons). The resulting signal is an analog electrical signal with continuously varying voltage that corresponds to the emission intensity. This is periodically sampled by an analog-to-digital converter.
It is important to understand that the image is a reconstruction of many points sampled across the specimen. At any given time the microscope is only looking at a tiny point, and no complete image exists that can be viewed at an instantaneous point in time. Software is used to recombine these points to form an image plane, and combine image planes to form a 3-D representation of the sample volume.
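Conceptually, the reconstruction is just a matter of re-ordering the stream of sampled intensities by scan position. A minimal sketch, assuming the scan dimensions below and a placeholder data stream in place of real digitized photomultiplier output, is:

```python
import numpy as np

# Assumed scan dimensions: nx x ny pixels per optical section, nz sections
nx, ny, nz = 512, 512, 40

# 'samples' stands in for the digitized photomultiplier output, one value per
# sampled point, acquired fast in x, slowly in y, and section by section in z.
samples = np.random.rand(nx * ny * nz)

volume = samples.reshape(nz, ny, nx)   # 3-D stack: volume[z, y, x]
section = volume[10]                   # one reconstructed image plane
```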
Two-photon Microscopy
Two-photon microscopy is a technique whereby two beams of lower intensity are directed to intersect at the focal point. Two photons can excite a fluorophore if they hit it at the same time, but alone they do not have enough energy to excite any molecules. The probability of two photons hitting a fluorophore at nearly the same time (within less than 10^-16 s) is very low, but is much higher at the focal point. This creates a bright point of light in the sample without the usual cone of light above and below the focal plane, since there are almost no excitations away from the focal point.
To increase the chance of absorption, an ultra-fast pulsed laser is used to create quick, intense light pulses. Since the hourglass shape is replaced by a point source, the pinhole near the detector (used to reduce the signal from light originating from outside the focal plane) can be eliminated. This also increases the signal-to-noise ratio (here is very little noise now that the light source is so focused, but the signal is also small). These lasers have lower average incident power than normal lasers, which helps reduce damage to the surrounding specimen. This technique can image deeper into the specimen (~400 μm), but these lasers are still very expensive, difficult to set up, require a stronger power supply, intensive cooling, and must be aligned in the same optical table because pulses can be distorted in optical fibers.
Microparticle Characterization
Confocal microscopy is very useful for determining the relative positions of particles in three dimensions Figure \(8\). Software allows measurement of distances in the 3D reconstructions so that information about spacing can be ascertained (such as packing density, porosity, long range order or alignment, etc.).
Figure \(8\) A reconstruction of a colloidal suspension of poly(methyl methacrylate) (PMMA) microparticles approximately 2 microns in diameter. Adapted from Confocal Microscopy of Colloids, Eric Weeks.
If imaging in fluorescence mode, remember that the signal will only represent the locations of the individual fluorophores. There is no guarantee fluorophores will completely attach to the structures of interest or that there will not be stray fluorophores away from those structures. For microparticles it is often possible to attach the fluorophores to the shell of the particle, creating hollow spheres of fluorophores. It is possible to tell if a sample sphere is hollow or solid but it would depend on the transparency of the material.
Dispersions of microparticles have been used to study nucleation and crystal growth, since colloids are much larger than atoms and can be imaged in real-time. Crystalline regions are determined from the order of spheres arranged in a lattice, and regions can be distinguished from one another by noting lattice defects.
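Once particle centers have been located in the 3-D reconstruction, quantities such as nearest-neighbour spacing (and from it packing density or lattice order) reduce to simple geometry on the coordinate list. A minimal sketch, assuming the coordinates have already been extracted from the image stack, is:

```python
import numpy as np
from scipy.spatial import cKDTree

# 'centers' stands in for particle coordinates (in micrometres) obtained from
# the confocal reconstruction, e.g. by locating intensity maxima.
centers = np.random.rand(500, 3) * 20.0     # assumed: 500 particles in a 20 um box

tree = cKDTree(centers)
dists, _ = tree.query(centers, k=2)         # k=2: the first neighbour is the particle itself
nearest = dists[:, 1]
print(nearest.mean(), nearest.std())        # average nearest-neighbour spacing
```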
Self-assembly is another application where time-dependent, 3-D studies can help elucidate the assembly process and determine the position of various structures or materials. Because confocal is popular for biological specimens, the position of nanoparticles such as quantum dots in a cell or tissue can be observed. This can be useful for determining toxicity, drug-delivery effectiveness, diffusion limitations, etc.
A Summary of Confocal Microscopy's Strengths and Weaknesses
Strengths
• Less haze, better contrast than ordinary microscopes.
• 3-D capability.
• Illuminates a small volume.
• Excludes most of the light from the sample not in the focal plane.
• Depth of field may be adjusted with pinhole size.
• Has both reflected light and fluorescence modes.
• Can image living cells and tissues.
• Fluorescence microscopy can identify several different structures simultaneously.
• Accommodates samples with thickness up to 100 μm.
• Can use with two-photon microscopy.
• Allows for optical sectioning (no artifacts from physical sectioning) 0.5 - 1.5 μm.
Weaknesses
• Images are scanned slowly (one complete image every 0.1 - 1 second).
• Must raster scan the sample; no complete image exists at any given time.
• There is an inherent resolution limit because of diffraction (based on numerical aperture, ~200 nm).
• Sample should be relatively transparent for good signal.
• High fluorophore concentrations can quench the fluorescent signal.
• Fluorophores irreversibly photobleach.
• Lasers are expensive.
• Angle of incident light changes slightly, introducing slight distortion.
TEM: An Overview
Transmission electron microscopy (TEM) is a form of microscopy in which a beam of electrons is transmitted through an extremely thin specimen and interacts with the specimen as it passes through. The formation of images in a TEM can be explained by the optical electron beam diagram in Figure $1$. TEMs provide images with significantly higher resolution than visible-light microscopes (VLMs) do because of the smaller de Broglie wavelength of electrons. These electrons allow for the examination of finer details, several thousand times finer than the highest resolution attainable in a VLM. Nevertheless, the contrast in a TEM image arises from the absorption of the electrons in the material, which is primarily due to the thickness and composition of the material.
When a crystal lattice spacing (d) is investigated with electrons with wavelength λ, diffracted waves will be formed at specific angles 2θ, satisfying the Bragg condition, \ref{1}.
$2dsin\theta \ =\ \lambda \label{1}$
The regular arrangement of the diffraction spots, the so-called diffraction pattern (DP), can be observed. When the transmitted and the diffracted beams interfere on the image plane, a magnified image (the electron microscope image) appears. The plane where the DP forms is called reciprocal space, while the image plane is called real space. A Fourier transform mathematically relates the real space to the reciprocal space.
By adjusting the lenses (changing their focal lengths), both electron microscope images and DP can be observed. Thus, both observation modes can be successfully combined in the analysis of the microstructures of materials. For instance, during investigation of DPs, an electron microscope image is observed. Then, by inserting an aperture (selected area aperture), adjusting the lenses, and focusing on a specific area that we are interested in, we will get a DP of the area. This kind of observation mode is called a selected area diffraction. In order to investigate an electron microscope image, we first observe the DP. Then by passing the transmitted beam or one of the diffracted beams through a selected aperture and changing to the imaging mode, we can get the image with enhanced contrast, and precipitates and lattice defects can easily be identified.
The resolution of a TEM can be described in terms of the classic Rayleigh criterion for VLMs, which states that the smallest distance that can be resolved, δ, is given approximately by \ref{2}, where λ is the wavelength of the electrons, µ is the refractive index of the viewing medium, and β is the semi-angle of collection of the magnifying lens.
$\delta \ = \frac{0.61 \lambda }{\mu \ sin \beta} \label{2}$
According to de Broglie’s ideas of wave-particle duality, the particle momentum p is related to its wavelength λ through Planck’s constant h, \ref{3}.
$\lambda = \frac{h}{p} \label{3}$
Momentum is given to the electron by accelerating it through a potential drop, V, giving it a kinetic energy, eV. This potential energy is equal to the kinetic energy of the electron, \ref{4}.
$eV\ =\ \frac{m_{o} u ^{2}}{2} \label{4}$
Based upon the foregoing, we can equate the momentum (p) to the electron mass (mo) multiplied by the velocity (v); substituting for v from \ref{4} gives \ref{5}.
$p\ =\ m_{o} u \ =\ (2m_{o}eV)^{\frac{1}{2}} \label{5}$
These equations define the relationship between the electron wavelength, λ, and the accelerating voltage of the electron microscope, V, as given by \ref{6}. However, relativistic effects must be considered when the electron energy is more than 100 keV, so in order to be exact we must modify \ref{6} to give \ref{7}.
$\lambda \ =\frac{h}{(2m_{o}eV)^{\frac{1}{2}} } \label{6}$
$\lambda \ =\frac{h}{[2m_{o}eV(1\ +\ \frac{eV}{2m_{o}c^{2}})]^{\frac{1}{2}}} \label{7}$
From \ref{2} and \ref{3}, if a higher resolution is desired, a decrease in the electron wavelength is required, which is accomplished by increasing the accelerating voltage of the electron microscope. In other words, the higher the accelerating voltage used, the better the resolution obtained.
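The dependence of wavelength on accelerating voltage in \ref{6} and \ref{7} is easy to evaluate numerically. The following Python sketch (constants rounded; a simple illustration, not part of the original text) compares the non-relativistic and relativistic wavelengths:

```python
import numpy as np

h  = 6.626e-34   # Planck constant, J s
m0 = 9.109e-31   # electron rest mass, kg
e  = 1.602e-19   # elementary charge, C
c  = 2.998e8     # speed of light, m/s

def electron_wavelength(V, relativistic=True):
    """Electron wavelength (m) for accelerating voltage V (volts)."""
    if relativistic:
        # Eq. (7): lambda = h / sqrt(2 m0 e V (1 + eV / (2 m0 c^2)))
        return h / np.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))
    # Eq. (6): non-relativistic approximation
    return h / np.sqrt(2 * m0 * e * V)

for kV in (100, 200, 300, 1000):
    lam_nr = electron_wavelength(kV * 1e3, relativistic=False)
    lam_r  = electron_wavelength(kV * 1e3)
    print(f"{kV:5d} kV: non-relativistic {lam_nr*1e12:5.2f} pm, "
          f"relativistic {lam_r*1e12:5.2f} pm")
```

Computed this way, 100 kV electrons have a wavelength of roughly 3.7 pm, orders of magnitude shorter than visible light, which is the origin of the resolution advantage discussed above.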
Why the Specimen Should be Thin
The scattering of the electron beam through the material under study can produce different angular distributions (Figure $2$), and the scattering can be either forward scattering or back scattering. If an electron is scattered by less than 90°, it is forward scattered; otherwise, it is backscattered. If the specimen is thicker, fewer electrons are forward scattered and more are backscattered. Incoherent, backscattered electrons are the only remnants of the incident beam for bulk, non-transparent specimens. The reason that electrons can be scattered through different angles is that an electron can be scattered more than once. Generally, the more scattering events that occur, the greater the angle of scattering.
All scattering in the TEM specimen is often approximated as a single scattering event, since this is the simplest process. If the specimen is very thin, this assumption is reasonable. If an electron is scattered more than once, it is called ‘plural scattering.’ It is generally safe to assume single scattering occurs unless the specimen is particularly thick. As the number of scattering events increases, it becomes difficult to predict what will happen to the electron and to interpret the images and DPs. So, the principle is ‘thinner is better’: if specimens are made thin enough that the single-scattering assumption is plausible, the TEM analysis will be much easier.
In fact, forward scattering includes the direct beam, most elastic scattering, refraction, diffraction (particularly Bragg diffraction), and inelastic scattering. Because of forward scattering through the thin specimen, a DP or an image is displayed on the viewing screen, and an X-ray spectrum or an electron energy-loss spectrum can be detected outside the TEM column. However, backscattering still cannot be ignored; it is an important imaging mode in the SEM.
Limitations of TEM
Interpreting Transmission Images
One significant problem that one might encounter when TEM images are analyzed is that the TEM presents us with 2D images of a 3D specimen, viewed in transmission. This problem can be illustrated by a picture of two rhinos side by side such that the head of one appears attached to the rear of the other (Figure $3$).
One aspect of this particular drawback is that a single TEM image has no depth sensitivity. There often is information about the top and bottom surfaces of the specimen, but this is not immediately apparent. There has been progress in overcoming this limitation through the development of electron tomography, which uses a sequence of images taken at different angles. In addition, there has been improvement in specimen-holder design to permit full 360° rotation; in combination with easy data storage and manipulation, nanotechnologists have begun to use this technique to look at complex 3D inorganic structures such as porous materials containing catalyst particles.
Electron Beam Damage
A detrimental effect of ionizing radiation is that it can damage the specimen, particularly polymers (and most organics) as well as certain minerals and ceramics. Some aspects of beam damage are made worse at higher voltages. Figure $4$ shows an area of a specimen damaged by high-energy electrons. However, the combination of more intense electron sources with more sensitive electron detectors, and the use of computer enhancement of noisy images, can be used to minimize the total energy received by the sample.
Sample Preparation
The specimens under study have to be thin if any information is to be obtained using transmitted electrons in the TEM. For a sample to be transparent to electrons, the sample must be thin enough to transmit sufficient electrons such that enough intensity falls on the screen to give an image. This is a function of the electron energy and the average atomic number of the elements in the sample. Typically for 100 keV electrons, a specimen of aluminum alloy up to ~ 1 µm would be thin, while steel would be thin up to about several hundred nanometers. However, thinner is better and specimens < 100 nm should be used wherever possible.
The method to prepare the specimens for TEM depends on what information is required. In order to observe TEM images with high resolution, it is necessary to prepare thin films without introducing contamination or defects. For this purpose, it is important to select an appropriate specimen preparation method for each material, and to find an optimum condition for each method.
Crushing
A specimen can be crushed with an agate mortar and pestle. The flakes obtained are suspended in an organic solvent (e.g., acetone), and dispersed with a sonic bath or simply by stirring with a glass stick. Finally, the solvent containing the specimen flakes is dropped onto a grid. This method is limited to materials which tend to cleave (e.g., mica).
Electropolishing
A bulk specimen is sliced into wafer plates of about 0.3 mm thickness with a fine cutter or a multi-wire saw. The wafer is further thinned mechanically down to about 0.1 mm in thickness. Electropolishing is performed in a specific electrolyte by supplying a direct current with the positive pole at the thin plate and the negative pole at a stainless steel plate. In order to avoid preferential polishing at the edge of the specimen, all the edges are covered with insulating paint. This is called the window method. The electropolishing is finished when there is a small hole in the plate with very thin regions around it (Figure $5$). This method is mainly used to prepare thin films of metals and alloys.
Chemical Polishing
Thinning is performed chemically, i.e., by dipping the specimen in a specific solution. As for electropolishing, a thin plate of 0.1~0.2 mm in thickness should be prepared in advance. If a small dimple is made in the center of the plate with a dimple grinder, a hole can be made by etching around the center while keeping the edge of the specimen relatively thick. This method is frequently used for thinning semiconductors such as silicon. As with electro-polishing, if the specimen is not washed properly after chemical etching, contamination such as an oxide layer forms on the surface.
Ultramicrotomy
Specimens of thin films or powders are usually fixed in an acrylic or epoxy resin and trimmed with a glass knife before being sliced with a diamond knife. This process is necessary so that the specimens in the resin can be sliced easily by the diamond knife. Acrylic resins are easily sliced and can be removed with chloroform after slicing. When using an acrylic resin, a gelatin capsule is used as a vessel. Epoxy resins take less time to solidify than acrylic resins, and they remain strong under electron irradiation. This method has been used for preparing thin sections of biological specimens and sometimes for thin films of inorganic materials which are not too hard to cut.
Ion Milling
A thin plate (less than 0.1 mm) is prepared from a bulk specimen by using a diamond cutter and by mechanical thinning. Then, a disk 3 mm in diameter is made from the plate using a diamond knife or an ultrasonic cutter, and a dimple is formed in the center of the surface with a dimple grinder. If it is possible to thin the disk directly to 0.03 mm in thickness by mechanical thinning without using a dimple grinder, the disk should be strengthened by covering the edge with a metal ring. Ar ions are usually used for the sputtering, and the incidence angle against the disk specimen and the accelerating voltage are set to 10 - 20° and a few kilovolts, respectively. This method is widely used to obtain thin regions of ceramics and semiconductors in particular, and also for cross sections of various multilayer films.
Focused Ion Beam (FIB)
This method was originally developed for the purpose of repairing semiconductor devices. In principle, ion beams are sharply focused on a small area, and the specimen is thinned very rapidly by sputtering. Usually Ga ions are used, with an accelerating voltage of about 30 kV and a current density of about 10 A/cm2. The probe size is several tens of nanometers. This method is useful for specimens containing a boundary between different materials, where it may be difficult to homogeneously thin the boundary region by other methods such as ion milling.
Vacuum Evaporation
The specimen to be studied is set in a tungsten coil or basket. Resistance heating is applied by passing an electric current through the coil or basket, and the specimen is melted, then evaporated (or sublimed), and finally deposited onto a substrate. The deposition process is usually carried out under a pressure of $10^{-3}$ - $10^{-4}$ Pa, but in order to avoid surface contamination, a very high vacuum is necessary. A collodion film or cleaved rock salt is used as the substrate. Rock salt is especially useful in forming single crystals with a special orientation relationship between each crystal and the substrate. The salt is easily dissolved in water, and the deposited films can then be fixed on a grid. Recently, as an alternative to resistance heating, electron beam heating or an ion beam sputtering method has been used to prepare thin films of various alloys. This method is used for preparing homogeneous thin films of metals and alloys, and is also used for coating a specimen with a metal or alloy.
The Characteristics of the Grid
The types of TEM specimens that are prepared depend on what information is needed. For example, a self-supporting specimen is one where the whole specimen consists of one material (which may be a composite). Other specimens are supported on a grid or on a Cu washer with a single slot. Some grids are shown in Figure $6$. Usually the specimen or grid will be 3 mm in diameter.
TEM specimen stage designs include airlocks to allow for insertion of the specimen holder into the vacuum with minimal increase in pressure in other areas of the microscope. The specimen holders are adapted to hold a standard size of grid upon which the sample is placed, or a standard size of self-supporting specimen. The standard TEM grid size is a 3.05 mm diameter ring, with a thickness and mesh size ranging from a few to 100 µm. The sample is placed onto the inner meshed area, which has a diameter of approximately 2.5 mm. The grid materials are usually copper, molybdenum, gold or platinum. This grid is placed into the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are sometimes, if rarely, used. These grids were particularly used in the mineral sciences where a large degree of tilt can be required and where specimen material may be extremely rare. Electron transparent specimens have a thickness of around 100 nm, but this value depends on the accelerating voltage.
Once inserted into a TEM, the sample is manipulated to allow study of the region of interest. To accommodate this, the TEM stage includes mechanisms for the translation of the sample in the XY plane of the sample, for Z height adjustment of the sample holder, and usually at least one rotation degree of freedom. Most TEMs provide the ability for two orthogonal rotation angles of movement with specialized holder designs called double-tilt sample holders.
A TEM stage is required to have the ability to hold a specimen and be manipulated to bring the region of interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the stage must simultaneously be highly resistant to mechanical drift as low as a few nm/minute while being able to move several µm/minute, with repositioning accuracy on the order of nanometers.
Transmission Electron Microscopy Image for Multilayer-Nanomaterials
Although TEMs can only provide 2D analysis for a 3D specimen, magnifications of 300,000 times can be routinely obtained for many materials, making it an ideal method for the study of nanomaterials. In TEM images, darker areas show that the sample is thicker or denser in those regions, so we can observe the different components and structures of the specimen from the differences in contrast. For investigating multilayer nanomaterials, TEM is usually the first choice, because not only does it provide a high resolution image of nanomaterials but it can also distinguish each layer within a nanostructured material.
Observations of Multilayer-nanomaterials
TEM has been used to analyze depth-graded W/Si multilayer films. Multilayer films were grown on polished, 100 mm Si wafers by magnetron sputtering in argon gas. The individual tungsten and silicon layer thicknesses in periodic and depth-graded multilayers are adjusted by varying the computer-controlled rotational velocity of the substrate platen. The deposition times required to produce specific layer thicknesses were determined from detailed rate calibrations. Samples for TEM were prepared by focused ion beam milling at liquid N2 temperature to prevent any beam heating which might result in re-crystallization and/or re-growth of any amorphous or fine grained polycrystalline layers in the film.
TEM measurements were made using a JEOL-4000 high-resolution transmission electron microscope operating at 400 keV; this instrument has a point-to-point resolution of 0.16 nm. Large area cross-sectional images of a depth-graded multilayer film obtained under medium magnification (~100 kX) were acquired at high resolution. A cross-sectional TEM image showed a 150-layer W/Si film with layer thicknesses in the range of 3.33 ~ 29.6 nm (Figure $7$ shows a portion of the layers). The dark layers are tungsten and the light layers are silicon, and they are separated by thin amorphous W–Si interlayers (gray bands). Owing to the high resolution of the TEM and the inherent characteristics of the materials, each layer can be distinguished clearly by its different darkness.
Not all kinds of multilayer nanomaterials can be observed clearly under TEM. A material consisting of pc-Si:H multilayers was prepared by photo-assisted chemical vapor deposition (photo-CVD) using a low-pressure mercury lamp as a UV light source to dissociate the gases. The pc-Si:H multilayer included low H2-diluted a-Si:H sublayers (SL’s) and highly H2-diluted a-Si:H sublayers (SH’s). Control of the CVD gas flow (H2|SiH4) under continuous UV irradiation resulted in the deposition of multilayer films layer by layer.
For the TEM measurement, a 20 nm thick undiluted a-Si:H film was deposited on a c-Si wafer before the deposition of the multilayer to prevent any epitaxial growth. Figure $8$ shows a cross-sectional TEM image of a six-cycled pc-Si:H multilayer specimen. The white dotted lines are used to emphasize the horizontal stripes, which have periodicity in the TEM image. As can be seen, no significant boundaries between SL and SH could be observed because all sublayers are prepared in H2 gas. In order to obtain a more accurate thickness of each sublayer, other measurements may be necessary.
TEM Imaging of Carbon Nanomaterials
Transmission electron microscopy (TEM) is a form of microscopy that uses a high energy electron beam (rather than optical light). A beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through. The image (formed from the interaction of the electrons with the sample) is magnified and focused onto an imaging device, such as a photographic film or a fluorescent screen, or detected by a CCD camera. In order to let the electrons pass through the specimen, the specimen has to be ultra thin, usually thinner than 10 nm.
The resolution of TEM is significantly higher than that of light microscopes. This is because the electrons have a much smaller de Broglie wavelength than visible light (wavelength of 400~700 nm). Theoretically, the maximum resolution, d, is limited by λ, the wavelength of the detecting source (light or electrons), and NA, the numerical aperture of the system, \ref{8}.
$d\ = \frac{\lambda }{2n\ sin \alpha} \approx \frac{\lambda }{2NA} \label{8}$
For high speed electrons (in TEM, the electron velocity is close to the speed of light, c, so that the special theory of relativity has to be considered), the electron wavelength λe is given by \ref{9}.
$\lambda _{e} =\ \frac{h}{\sqrt{2m_{0}E(1+E/2m_{0}c^{2})}} \label{9}$
According to this formula, if we increase the energy of the detecting source, its wavelength will decrease, and we can obtain higher resolution. Today, the energy of the electrons used can easily reach 200 keV, and sometimes as high as 1 MeV, which means the resolution is good enough to investigate structure on the sub-nanometer scale. Because the electrons are focused by several electrostatic and electromagnetic lenses, the image resolution is also limited by aberration, much as in an optical camera, especially the spherical aberration, Cs. Equipped with a new generation of aberration correctors, the transmission electron aberration-corrected microscope (TEAM) can overcome spherical aberration and reach half-angstrom resolution.
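To put \ref{8} into numbers, the short Python sketch below estimates the diffraction-limited resolution for assumed, representative values of wavelength and collection semi-angle (these numbers are illustrative choices, not taken from the text):

```python
import numpy as np

lam  = 2.51e-12   # electron wavelength at 200 kV, m
beta = 10e-3      # assumed semi-angle of collection, rad (~10 mrad aperture)
n    = 1.0        # refractive index of vacuum

# Eq. (8): d ~ lambda / (2 NA), with NA = n sin(beta)
d = lam / (2 * n * np.sin(beta))
print(f"diffraction-limited resolution ~ {d * 1e9:.3f} nm")
# In practice spherical aberration (Cs) degrades this ideal value unless
# an aberration corrector is used, as noted above.
```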
Although TEAM can easily reach atomic resolution, the first TEM, invented by Ruska in April 1932, could hardly compete with an optical microscope, with a magnification of only about 14.4×. The primary problem was electron irradiation damage to the sample in the poor vacuum system. After World War II, Ruska resumed his work in developing high resolution TEM. Finally, this work brought him the Nobel Prize in Physics in 1986. Since then, the general structure of the TEM has not changed much, as shown in Figure $9$. The basic components of a TEM are: the electron gun, condenser system, objective lens (the most important lens in a TEM, which determines the final resolution), diffraction lens, projective lenses (all lenses are inside the equipment column, between apertures), image recording system (formerly photographic film, now a CCD camera) and vacuum system.
The Family of Carbon Allotropes and Carbon Nanomaterials
Common carbon allotropes include diamond, graphite, amorphous carbon (a-C), fullerene (also known as the buckyball), carbon nanotubes (CNTs, including single wall and multi wall CNTs), and graphene. Most of them are chemically inert and have been found in nature. We can also classify carbon as sp2 carbon (graphite), sp3 carbon (diamond), or hybrids of sp2 and sp3 carbon. As shown in Figure $10$, (a) is the structure of diamond, (b) is the structure of graphite, (c) graphene is a single sheet of graphite, (d) is amorphous carbon, (e) is C60, and (f) is a single wall nanotube. As for carbon nanomaterials, fullerenes, CNTs and graphene are the three most well investigated, due to their unique mechanical and electronic properties. Under TEM, these carbon nanomaterials display three different projected images.
Atomic Structure of Carbon Nanomaterials under TEM
All carbon nanomaterials can be investigated under TEM. However, because of their differences in structure and shape, specific regions must be focused on in order to obtain their atomic structure.
For C60, which has a diameter of only 1 nm, it is relatively difficult to suspend a sample over a lacey carbon grid (a common kind of TEM grid usually used for nanoparticles). Even if the C60 sits on a thin a-C film, it also has some focus problems since the surface profile variation might be larger than 1 nm. One way to solve this problem is to encapsulate the C60 into single wall CNTs, which is known as nano peapods. This method has two benefits:
First, the CNT helps in focusing on the C60. The single wall CNT is aligned over a long distance (relative to C60); once it is suspended on the lacey carbon film, it is much easier to focus on, and therefore the C60 inside can also be captured by minor focus changes.
Second, the CNT can protect the C60 from electron irradiation. Intense high energy electrons can permanently change the structure of the CNT. C60, which is more reactive than CNTs, cannot survive exposure to a high dose of fast electrons.
In studying CNT cages, C92 is observed as a small circle inside the walls of the CNT. While a majority of the electron energy is absorbed by the CNT, the sample is still not irradiation-proof. Thus, as seen in Figure $11$, after a 123 s exposure, defects can be generated and two C92 molecules fuse into one new, larger fullerene.
The discovery of C60 was first confirmed by mass spectrometry rather than TEM. When it came to the discovery of CNTs, however, mass spectrometry was no longer useful, because CNTs show no individual peak in mass spectra since any sample contains a range of CNTs with different lengths and diameters. On the other hand, HRTEM can provide clear image evidence of their existence. An example is shown in Figure $12$.
Graphene is a planar fullerene sheet. Until recently, Raman, AFM and optical microscopy (graphene on a 300 nm SiO2 wafer) were the most convenient methods to characterize samples. However, in order to confirm graphene’s atomic structure and determine the difference between a mono-layer and a bi-layer, TEM is still a good option. In Figure $13$, a monolayer of suspended graphene is observed with its atomic structure clearly shown. The inset is the FFT of the TEM image, which can be used as a filter to obtain an optimized structure image. A high angle annular dark field (HAADF) image usually gives better contrast for different particles on it. It is also sensitive to changes in thickness, which allows a determination of the number of graphene layers.
Graphene Stacking and Edges Direction
As with CNTs, a TEM image of graphene is a projected image. Therefore, even with an exact count of edge lines, it is not possible to conclude whether a sample is single-layer or multi-layer graphene. If folded graphene has AA stacking (one layer superposed directly on the other), with a projected direction of [001], a single image cannot tell the thickness of the graphene. In order to distinguish such a bilayer of graphene from a single layer, a series of tilting experiments must be done. Different stacking structures of graphene are shown in Figure $13$ a.
Theoretically, graphene has the potential for interesting edge effects. Based upon its sp2 structure, its edge can have either a zigzag or armchair configuration. Each of these possesses different electronic properties, similar to what is observed for CNTs. For both research and potential applications, it is important to control the growth or cutting of graphene with one specific edge. But before testing its electronic properties, all the edges have to be identified, either by direct imaging with STM or by TEM. Detailed information on graphene edges can be obtained with HRTEM, simulated with a fast Fourier transform (FFT). In Figure $14$ b, the armchair directions are marked with red arrows. A clear model in Figure $14$ c shows a 30° angle between the zigzag edge and the armchair edge.
Transmission Electron Energy Loss Spectroscopy
Electron energy loss spectroscopy (EELS) is a technique that measures electronic excitations within solid-state materials. When an electron beam with a narrow range of kinetic energy is directed at a material, some electrons will be inelastically scattered, resulting in a kinetic energy loss. Electrons can be inelastically scattered from phonon excitations, plasmon excitations, interband transitions, or inner shell ionization. EELS measures the energy loss of these inelastically scattered electrons and can yield information on atomic composition, bonding, electronic properties of valence and conduction bands, and surface properties. An example of atomic level composition mapping is shown in Figure $15$ a. EELS has even been used to measure pressure and temperature within materials.
The EEL Spectrum
An idealized EEL spectrum is shown in Figure $16$. The most prominent feature of any EEL spectrum is the zero loss peak (ZLP). The ZLP is due to those electrons from the electron beam that do not inelastically scatter and reach the detector with their original kinetic energy, typically 100-300 keV. By definition, the ZLP is set to 0 eV for further analysis, and all signals arising from inelastically scattered electrons occur at >0 eV. The second largest feature is often the plasmon resonance - the collective excitation of conduction band electrons within a material. The plasmon resonance and other peaks attributed to weakly bound, or outer shell, electrons occur in the “low-loss” region of the spectrum. The low-loss regime is typically thought of as energy loss <50 eV, but this cut-off from low-loss to high-loss is arbitrary. Shown in the inset of Figure $16$ is an atomic core-loss edge with its fine structure. Inner shell ionizations, represented by the core-loss peaks, are useful in determining elemental compositions, as these peaks can act as fingerprints for specific elements. For example, if there is a peak at 532 eV in an EEL spectrum, there is a high probability that the sample contains a considerable amount of oxygen, because this is known to be the energy needed to remove an inner shell electron from oxygen. This idea is further explored by looking at sudden changes in the bulk plasmon for aluminum in different chemical environments as shown in Figure $16$.
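The "fingerprint" use of core-loss edges can be mimicked with a small lookup, sketched below in Python. The onset energies in the dictionary are approximate, illustrative values (only the 532 eV oxygen K-edge is quoted in the text); real analyses use full tabulated edge libraries.

```python
# Approximate, illustrative ionization-edge onsets (eV); a real analysis
# would use a complete reference table from EELS software.
EDGE_ONSETS_EV = {"C K": 284, "N K": 401, "O K": 532, "Fe L3": 708}

def identify_edge(onset_ev, tolerance_ev=5.0):
    """Return candidate edges whose tabulated onset lies within tolerance."""
    return [name for name, e in EDGE_ONSETS_EV.items()
            if abs(e - onset_ev) <= tolerance_ev]

print(identify_edge(532))   # -> ['O K'], matching the oxygen example above
```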
Of course, there are several other techniques available for probing atomic compositions, many of which are covered in this text. These include energy dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and Auger electron spectroscopy. Please refer to those chapters for a thorough introduction to these techniques.
Electron Energy Loss Spectroscopy Versus Energy Dispersive X-ray Spectroscopy
As a technique, EELS is most frequently compared to energy dispersive X-ray spectroscopy (EDX), also known as energy dispersive spectroscopy (EDS). Energy dispersive X-ray detectors are commonly found as analytical probes on both scanning and transmission electron microscopes. The popularity of EDS can be understood by recognizing the simplicity of compositional analysis using this technique. However, EELS data can offer complementary compositional analysis while also generally yielding further insight into the solid-state physics and chemistry of a system, at the cost of a steeper learning curve. EDS and EELS spectra are both derived from the electronic excitations of materials; however, EELS probes the initial excitation while EDS looks at X-ray emissions from the decay of this excited state. As a result, EEL spectra investigate energy ranges from 0-3 keV while EDS spectra analyze a wider energy range from 1-40 keV. The difference in ranges makes EDS particularly well suited for heavy elements, while EELS complements it for the measurement of elements lighter than Zn.
History and Implementation
In the early 1940s, James Hillier (Figure $18$) and R.F. Baker were looking to develop a method for pairing the size, shape, and structure available from electron microscopes with a convenient method for “determining the composition of individual particles in a mixed specimen”. Their instrument, shown in Figure $19$, reported in the Journal of Applied Physics in September 1944, was the first electron-optical instrument used to measure the velocity distribution in an electron beam transmitted through a sample.
The instrument was built from a repurposed transmission electron microscope (TEM). It consisted of an electron source and three electromagnetic focusing lenses, standard for TEMs at the time, but also incorporated a magnetic deflecting lens which, when turned on, would redirect the electrons 180° onto a photographic plate. The electrons with varying kinetic energies dispersed across the photographic plate, and the energy loss of each peak could be correlated with its position. In this groundbreaking work, Hillier and Baker were able to find the discrete energy losses corresponding to the K levels of both carbon and oxygen.
The vast majority of EEL spectrometers are found as secondary analyzers in transmission electron microscopes. It wasn’t until the 1990s that EELS became a widely used research tool, thanks to advances in electron beam aberration correction and vacuum technologies. Today, EELS is capable of spatial resolutions down to the single atom level, and if the electron beam is monochromated, the energy resolution can be as low as 0.01 eV. Figure $20$ depicts the typical layout of an EEL spectrometer at the base of a TEM.
Scanning tunneling microscopy (STM) is a powerful instrument that allows one to image the sample surface at the atomic level. As the first generation of scanning probe microscopy (SPM), STM paved the way for the study of nano-science and nano-materials. For the first time, researchers could obtain atom-resolution images of electrically conductive surfaces as well as their local electronic structures. Because of this milestone invention, Gerd Binnig (Figure $1$) and Heinrich Rohrer (Figure $2$) won the Nobel Prize in Physics in 1986.
Principles of Scanning Tunneling Microscopy
The key physical principle behind STM is the tunneling effect. In terms of their wave nature, the electrons in the surface atoms are not as tightly bound to the nuclei as the electrons in the atoms of the bulk. More specifically, the electron density is not zero in the space outside the surface, though it decreases exponentially as the distance between the electron and the surface increases (Figure $3$ a). So, when a metal tip approaches a conductive surface within a very short distance, normally just a few Å, their respective electron clouds start to overlap, and a tunneling current is generated if a small voltage is applied between them, as shown in Figure $3$ b. When we treat the separation between the tip and the surface as an ideal one-dimensional tunneling barrier, the tunneling probability, or the tunneling current I, depends largely on s, the distance between the tip and surface, \ref{1}, where m is the electron mass, e the electron charge, ħ the reduced Planck constant, <ϕ> the averaged work function of the tip and the sample, and V the bias voltage.

$I \propto e^{-2s\ [2m(<\phi >\ -\ e|V|/2)/\hbar ^{2}]^{1/2}} \label{1}$

A simple calculation shows how strongly the tunneling current is affected by the distance (s). If s is increased by ∆s = 1 Å, \ref{2} and \ref{3}.

$\Delta I/I\ =\ e^{-2k_{0} \Delta s} \label{2}$

$k_{0}\ =\ [2m(<\phi >\ -\ e|V|/2)/\hbar ^{2}]^{1/2} \label{3}$

Usually (<ϕ> - e|V|/2) is about 5 eV, which makes k0 about 1 Å-1, and then ∆I/I ≈ 1/8. That means that if s changes by 1 Å, the current changes by nearly one order of magnitude. That is the reason why atom-level images can be obtained by measuring the tunneling current between the tip and the sample.

In a typical STM operation process, the tip scans across the surface of the sample in the x-y plane; the instrument records the x-y position of the tip, measures the tunneling current, and controls the height of the tip via a feedback circuit. The movements of the tip in the x, y and z directions are all controlled by piezo ceramics, which can be elongated or shortened according to the voltage applied to them.

Normally, there are two modes of operation for STM: constant height mode and constant current mode. In constant height mode, the tip stays at a constant height as it scans over the sample, and the tunneling current is measured at each (x, y) position (Figure $4$ b). This mode can be applied when the surface of the sample is very smooth. But if the sample is rough, or has some large particles on the surface, the tip may contact the sample and damage the surface. In this case, the constant current mode is applied. During this scanning process, the tunneling current, and thereby the distance between the tip and the sample, is kept at an unchanged target value. If the tunneling current is higher than the target value, the height of the sample surface is increasing and the distance between the tip and sample is decreasing. In this situation, the feedback control system responds quickly and retracts the tip. Conversely, if the tunneling current drops below the target value, the feedback control moves the tip closer to the surface. From the output signal of the feedback control, the surface of the sample can be imaged.
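The exponential sensitivity in \ref{2} and \ref{3} can be checked with a few lines of Python. This is a minimal sketch using the ~5 eV effective barrier quoted above; the exact prefactor of the current is ignored since only the ratio matters:

```python
import numpy as np

hbar = 1.055e-34   # reduced Planck constant, J s
m    = 9.109e-31   # electron mass, kg
e    = 1.602e-19   # elementary charge, C

phi_eff = 5.0 * e                        # (<phi> - e|V|/2) ~ 5 eV, as in the text
k0 = np.sqrt(2 * m * phi_eff) / hbar     # decay constant, ~1.1 per angstrom

for ds in (0.1, 0.5, 1.0):               # change in tip-sample distance, angstrom
    ratio = np.exp(-2 * k0 * ds * 1e-10) # I(s + ds) / I(s)
    print(f"ds = {ds:3.1f} angstrom -> current ratio = {ratio:.3f}")
# For ds = 1 angstrom the current drops by roughly an order of magnitude,
# consistent with the estimate above.
```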
Comparison of Atomic Force Microscopy (AFM) and Scanning Tunneling Microscopy (STM)
Both AFM and STM are widely used in nano-science. Because of their different working principles, however, they have their own advantages and disadvantages when measuring specific properties of a sample (Table $1$). STM requires an electric circuit including the tip and sample to let the tunneling current pass through; that means the sample for STM must be conducting. AFM, in contrast, just measures the deflection of the cantilever caused by the van der Waals forces between the tip and sample, so in general any kind of sample can be used for AFM. But, because of the exponential relation between the tunneling current and distance, STM has better resolution than AFM. In an STM image one can actually “see” an individual atom, while in AFM this is almost impossible, and the quality of an AFM image depends largely on the shape and contact force of the tip. In some cases, the measured signal can be rather complicated to interpret into morphology or other properties of the sample. On the other hand, STM gives straightforward electrical properties of the sample surface.
| | AFM | STM |
|---|---|---|
| Sample requirement | - | Conducting |
| Work environment | Air, liquid | Vacuum |
| Lateral resolution | ~1 nm | ~0.1 nm |
| Vertical resolution | ~0.05 nm | ~0.05 nm |
| Working mode | Tapping, contact | Constant current, constant height |
Table $1$ Comparison of AFM and STM
Applications of Scanning Tunneling Microscopy in Nanoscience
STM provides a powerful method to probe the surface of conducting and semi-conducting materials. Recently, STM has also been applied to the imaging of insulators and superlattice assemblies, and even to the manipulation of molecules on surfaces. More importantly, STM can provide the surface structure and electronic properties of a surface at atomic resolution, a true breakthrough in the development of nano-science. In this sense, the data collected from STM can reflect the local properties even of single molecules and atoms. With these valuable measurement data, one can gain a deeper understanding of structure-property relations in nanomaterials.
An excellent example is the STM imaging of graphene on Ru(0001), as shown in Figure $5$. Clearly seen is the superstructure with a periodicity of ~30 Å, arising from the lattice mismatch of 12 unit cells of the graphene and 11 unit cells of the underlying Ru(0001) substrate. This so-called moiré structure can also be seen in other systems when the adsorbed layers have strong chemical bonds within the layer and weak interactions with the underlying surface. In this case, the periodic superstructure seen in graphene tells us that the graphene formed is well crystallized and expected to be of high quality.
Another good example shows that measurements from STM can provide bonding information at the single-molecule level. In the thiol- and thiophene-functionalization of single-wall carbon nanotubes (SWNTs), the use of Au nanoparticles as chemical markers for AFM gives misleading results, while STM imaging gives correct information on substituent location. The AFM image of Au-thiol-SWNT (Figure $6$ a) shows that most of the sidewalls are unfunctionalized, while Au-thiophene-SWNT (Figure $6$ c) shows long bands of continuously functionalized regions on the SWNT. This could lead to the estimation that thiophene is better functionalized to the SWNT than thiol. Yet, if we look at the STM images (Figure $6$ b and d), in thiol-SWNTs the multiple functional groups are tightly bunched within about 5 - 25 nm, while in thiophene-SWNTs the functionalization is spread out uniformly along the whole length of the SWNT. This information indicates that the functionalization levels of thiol- and thiophene-SWNTs are actually comparable. The difference is that, in thiol-SWNTs, functional groups are grouped together and each group is bonded to a single gold nanoparticle, while in thiophene-SWNTs, every individual functional group is bonded to a nanoparticle.
Adaptations to Scanning Tunneling Microscopy
Scanning tunneling microscopy (STM) is a relatively recent imaging technology that has proven very useful for determining the topography of conducting and semiconducting samples with angstrom (Å) level precision. STM was invented by Gerd Binnig (Figure $7$) and Heinrich Rohrer (Figure $8$), who both won the 1986 Nobel Prize in physics for their technological advances.
The main component of a scanning tunneling microscope is a rigid metallic probe tip, typically composed of tungsten, connected to a piezodrive containing three perpendicular piezoelectric transducers (Figure $9$). The tip is brought within a fraction of a nanometer of an electrically conducting sample. At close distances, the electron clouds of the metal tip overlap with the electron clouds of the surface atoms (Figure $9$ inset). If a small voltage is applied between the tip and the sample, a tunneling current is generated. The magnitude of this tunneling current is dependent on the bias voltage applied and the distance between the tip and the surface. A current amplifier can convert the generated tunneling current into a voltage. The magnitude of the resulting voltage as compared to the initial voltage can then be used to control the piezodrive, which controls the distance between the tip and the surface (i.e., the z direction). By scanning the tip in the x and y directions, the tunneling current can be measured across the entire sample. The STM system can operate in either of two modes: constant height or constant current.
In constant height mode, the tip is fixed in the z direction, and the change in tunneling current as the tip moves in the x,y directions is collected and plotted to describe the change in topography of the sample. This method is dangerous for use on samples with large fluctuations in height, as the fixed tip might contact and destroy raised areas of the sample. A more common method for non-uniformly smooth samples is constant current mode. In this mode, a target current value, called the set point, is selected, and the tunneling current gathered from the sample is compared to the target value. If the measured current deviates from the set point, the tip is moved in the z direction and the current is measured again until the set point is reached. The change in the z direction required to reach the set point is recorded across the entire sample and plotted as a representation of the topography of the sample. The height data is typically displayed as a gray scale image of the topography of the sample, where lighter areas typically indicate raised sample areas and darker spots indicate depressions. These images are typically colored for better contrast.
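The logic of constant current mode can be illustrated with a toy feedback loop in Python. This is a sketch under simplifying assumptions (an exponential current-distance law, an assumed prefactor and gain, and a sinusoidal "surface"); it is not the control algorithm of any real instrument:

```python
import numpy as np

# Assumed, illustrative parameters.
k0 = 1.1e10          # decay constant, 1/m (work function ~5 eV)
I0 = 1e-6            # assumed current prefactor at zero gap, A
set_point = 1e-9     # target tunneling current, A
gain = 2e-11         # feedback gain, m per unit of log(I / I_set)

surface = 1e-10 * np.sin(np.linspace(0, 4 * np.pi, 200))  # sample topography, m
z_tip = 5e-10                                             # initial tip height, m
trace = []

for h in surface:
    gap = z_tip - h                          # tip-sample separation
    current = I0 * np.exp(-2 * k0 * gap)     # I ~ exp(-2 k0 s)
    # If current > set point the gap is too small, so z increases (tip retracts).
    z_tip += gain * np.log(current / set_point)
    trace.append(z_tip)

trace = np.array(trace)
print(f"tip height excursion    : {np.ptp(trace[50:]):.2e} m")
print(f"surface height excursion: {np.ptp(surface[50:]):.2e} m")
# After the initial settling, the recorded tip height follows the surface
# profile offset by a constant gap -- this trace is the constant-current image.
```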
The standard method of STM, described above, is useful for many substances (including high precision optical components, disk drive surfaces, and buckyballs) and is typically used under ultrahigh vacuum to avoid contamination of the samples from the surrounding systems. Other sample types, such as semiconductor interfaces or biological samples, need some enhancements to the traditional STM apparatus to yield more detailed sample information. Three such modifications, spin-polarized STM (SP-STM), ballistic electron emission microscopy (BEEM) and photon STM (PSTM), are summarized in Table $2$ and described in detail below.
| Name | Alterations to conventional STM | Sample types | Limitations |
|---|---|---|---|
| STM | None | Conducting surface | Rigidity of probe |
| SP-STM | Magnetized STM tip | Magnetic | Needs to be overlaid with STM, magnetized tip type |
| BEEM | Three-terminal with base electrode and current collector | Interfaces | Voltage, changes due to barrier height |
| PSTM | Optical fiber tip | Biological | Optical tip and prism manufacture |
Table $2$ Comparison of conventional and altered STM types
Spin Polarized STM
Spin-polarized scanning tunneling microscopy (SP-STM) can be used to provide detailed information of magnetic phenomena on the single-atom scale. This imaging technique is particularly important for accurate measurement of superconductivity and high-density magnetic data storage devices. In addition, SP-STM, while sensitive to the partial magnetic moments of the sample, is not a field-sensitive technique and so can be applied in a variety of different magnetic fields.
Device Setup and Sample Preparation
In SP-STM, the STM tip is coated with a thin layer of magnetic material. As with STM, voltage is then applied between tip and sample resulting in tunneling current. Atoms with partial magnetic moments that are aligned in the same direction as the partial magnetic moment of the atom at the very tip of the STM tip show a higher magnitude of tunneling current due to the interactions between the magnetic moments. Likewise, atoms with partial magnetic moments opposite that of the atom at the tip of the STM tip demonstrate a reduced tunneling current (Figure $10$). A computer program can then translate the change in tunneling current to a topographical map, showing the spin density on the surface of the sample.
The sensitivity to magnetic moments depends greatly upon the direction of the magnetic moment of the tip, which can be controlled by the magnetic properties of the material used to coat the outermost layer of the tungsten STM probe. A wide variety of magnetic materials have been studied as possible coatings, including both ferromagnetic materials, such as a thin coat of iron or of gadolinium, and antiferromagnetic materials such as chromium. Another method that has been used to make a magnetically sensitive probe tip is irradiation of a semiconducting GaAs tip with high energy circularly polarized light. This irradiation causes a splitting of electrons in the GaAs valence band and population of the conduction band with spin-polarized electrons. These spin-polarized electrons then provide partial magnetic moments which in turn influence the tunneling current generated by the sample surface.
Sample preparation for SP-STM is essentially the same as for STM. SP-STM has been used to image samples such as thin films and nanoparticle constructs as well as determining the magnetic topography of thin metallic sheets such as in Figure $11$. The upper image is a traditional STM image of a thin layer of cobalt, which shows the topography of the sample. The second image is an SP-STM image of the same layer of cobalt, which shows the magnetic domain of the sample. The two images, when combined provide useful information about the exact location of the partial magnetic moments within the sample.
Limitations
One of the major limitations with SP-STM is that both distance and partial magnetic moment yield the same contrast in a SP-STM image. This can be corrected by combination with conventional STM to get multi-domain structures and/or topographical information, which can then be overlaid on top of the SP-STM image, correcting for differences in sample height as opposed to magnetization.
The properties of the magnetic tip dictate much of the properties of the technique itself. If the outermost atom of the tip is not properly magnetized, the technique will yield no more information than a traditional STM. The direction of the magnetization vector of the tip is also of great importance. If the magnetization vector of the tip is perpendicular to the magnetization vector of the sample, there will be no spin contrast. It is therefore important to carefully choose the coating applied to the tungsten STM tip in order to align appropriately with the expected magnetic moments of the sample. Also, the coating makes the magnetic tips more expensive to produce than standard STM tips. In addition, these tips are often made of mechanically soft materials, causing them to wear quickly and require a high cost of maintenance.
Ballistic Electron Emission Microscopy
Ballistic electron emission microscopy (BEEM) is a technique commonly used to image semiconductor interfaces. Conventional surface probe techniques can provide detailed information on the formation of interfaces, but lack the ability to study fully formed interfaces due to inaccessibility to the surface. BEEM allows for the ability to obtain a quantitative measure of electron transport across fully formed interfaces, something necessary for many industrial applications.
Device Setup and Sample Preparation
BEEM utilizes STM with a three-electrode configuration, as seen in Figure $12$. In this technique, ballistic electrons are first injected from a STM tip into the sample, traditionally composed of at least two layers separated by an interface, which rests on three indium contact pads that provide a connection to a base electrode (Figure $12$). As the voltage is applied to the sample, electrons tunnel across the vacuum and through the first layer of the sample, reaching the interface, and then scatter. Depending on the magnitude of the voltage, some percentage of the electrons tunnel through the interface, and can be collected and measured as a current at a collector attached to the other side of the sample. The voltage from the STM tip is then varied, allowing for measurement of the barrier height. The barrier height is defined as the threshold at which electrons will cross the interface and are measurable as a current in the far collector. At a metal/n-type semiconductor interface this is the difference between the conduction band minimum and the Fermi level. At a metal/p-type semiconductor interface this is the difference between the valence band maximum of the semiconductor and the metal Fermi level. If the voltage is less than the barrier height, no electrons will cross the interface and the collector will read zero. If the voltage is greater than the barrier height, useful information can be gathered about the magnitude of the current at the collector as opposed to the initial voltage.
Samples are prepared from semiconductor wafers by chemical oxide growth-strip cycles, ending with the growth of a protective oxide layer. Immediately prior to imaging, the sample is spin-etched in an inert environment to remove the oxide layer and then transferred directly to the ultra-high vacuum without air exposure. The BEEM apparatus itself is operated in a glove box under inert atmosphere and shielded from light.
Nearly any type of semiconductor interface can be imaged with BEEM. This includes both simple binary interfaces such as Au/n-Si(100) and more chemically complex interfaces such as Au/n-GaAs(100), such as seen in Figure $13$.
Limitations
Expected barrier height matters a great deal in the desired setup of the BEEM apparatus. If it is necessary to measure small collector currents, such as with an interface of high-barrier-height, a high-gain, low-noise current preamplifier can be added to the system. If the interface is of low-barrier-height, the BEEM apparatus can be operated at very low temperatures, accomplished by immersion of the STM tip in liquid nitrogen and enclosure of the BEEM apparatus in a nitrogen-purged glove box.
Photon STM
Photon scanning tunneling microscopy (PSTM) measures light to determine more information about characteristic sample topography. It has primarily been used as a technique to measure the electromagnetic interaction of two metallic objects in close proximity to one another and biological samples, which are both difficult to measure using many other common surface analysis techniques.
Device Setup and Sample Preparation
This technique works by measuring the tunneling of photons to an optical tip. The source of these photons is the evanescent field generated by the total internal reflection (TIR) of a light beam from the surface of the sample (Figure $14$). This field is characteristic of the sample material on the TIR surface, and can be measured by a sharpened optical fiber probe tip where the light intensity is converted to an electrical signal (Figure $15$). Much as in conventional STM, the magnitude of this electrical signal modifies the location of the tip in relation to the sample. By mapping these modifications across the entire sample, the topography can be determined to a very accurate degree, and calculations of polarization, emission direction and emission time are also possible.
In PSTM, the vertical resolution is governed only by the noise, as opposed to conventional STM where the vertical resolution is limited by the tip dimensions. Therefore, this technique provides advantages over more conventional STM apparatus for samples where subwavelength resolution in the vertical dimension is a critical measurement, including fractal metal colloid clusters, nanostructured materials and simple organic molecules.
Samples are prepared by placement on a quartz or glass slide coupled to the TIR face of a triangular prism containing a laser beam, making the sample surface into the TIR surface (Figure $16$). The optical fiber probe tips are constructed from UV grade quartz optical fibers by etching in HF acid to have nominal end diameters of 200 nm or less and resemble either a truncated cone or a paraboloid of revolution (Figure $16$).
PSTM shows much promise in the imaging of biological materials due to the increase in vertical resolution and the ability to measure a sample within a liquid environment with a high index TIR substrate and probe tip. This would provide much more detailed information about small organisms than is currently available.
Limitations
The majority of the limitations in this technique come from the materials and construction of the optical fibers and the prism used in the sample collection. The sample needs to be kept at low temperatures, typically around 100K, for the duration of the imaging and therefore cannot decompose or be otherwise negatively impacted by drastic temperature changes.
Conclusion
Scanning tunneling microscopy can provide a great deal of information into the topography of a sample when used without adaptations, but with adaptations, the information gained is nearly limitless. Depending on the likely properties of your sample surface, SP-STM, BEEM and PSTM can provide much more accurate topographical pictures than conventional forms of STM (Table $2$). All of these adaptations to STM have their limitations and all work within relatively specialized categories and subsets of substances, but they are very strong tools that are constantly improving to provide more useful information about materials to the nanometer scale.
Scanning Transmission Electron Microscope- Electron Energy Loss Spectroscopy (STEM-EELS)
History
STEM-EELS is an abbreviation for scanning transmission electron microscopy (STEM) coupled with electron energy loss spectroscopy (EELS). It works by combining two instruments, obtaining an image through STEM and applying EELS to detect signals from a specific selected area of the image. Therefore, it can be applied in many kinds of research, such as characterizing morphology, detecting different elements, and distinguishing different valence states. The first STEM was built by Baron Manfred von Ardenne (Figure $17$) in around 1938; since it was just a prototype, it was not as good as the transmission electron microscopes (TEM) of that time. Development of STEM was stagnant until the field emission gun was invented by Albert Crewe (Figure $18$) in the 1970s; he also came up with the idea of an annular dark field detector to detect atoms. In 1997, the resolution of STEM increased to 1.9 Å, and it further increased to 1.36 Å in 2000. 4D STEM-EELS was developed recently; this type of 4D STEM-EELS has a high brightness STEM equipped with a high acquisition rate EELS detector and a rotation holder. The rotation holder plays quite an important role in achieving this 4D aim, because it makes observation of the sample over 360° possible; the sample can be rotated to acquire its thickness. The high acquisition rate EELS enables the acquisition of the pixel spectra in a few minutes.
Basics of STEM-EELS
Interaction between Electrons and Sample
When electrons interact with the sample, the interaction between the two can be classified into two types, namely elastic and inelastic interactions (Figure $19$). In an elastic interaction, if the electrons do not interact with the sample and pass straight through it, these electrons contribute to the direct beam. The direct beam can be applied in STEM. In another case, the electron’s direction of travel in the sample is deflected by the Coulombic force; the strength of the force is decided by the charge and the distance between the electron and the core. In both cases, there is no energy transfer from the electrons to the sample, which is why the interaction is called elastic. In an inelastic interaction, energy transfers from the incident electrons to the sample, and the electrons thereby lose energy. The lost energy can be measured, and the number of electrons corresponding to each energy loss can also be measured; these data yield the electron energy loss spectrum (EELS).
How do TEM, STEM and STEM-EELS work?
In transmission electron microscopy (TEM), a beam of electrons is emitted from a tungsten source and then accelerated by an electromagnetic field. Then, with the aid of the condenser lenses, the beam focuses on and passes through the sample. Finally, the electrons are detected by a charge-coupled device (CCD) and produce images (Figure $20$). STEM works differently from TEM: the electron beam focuses on a specific spot of the sample and then raster-scans the sample pixel by pixel, and the detector collects the transmitted electrons and visualizes the sample. Moreover, STEM-EELS allows these electrons to be analyzed: the transmitted electrons can be dispersed by adding a magnetic prism, and the more energy the electrons lose, the more they are deflected. Therefore, STEM-EELS can be used to characterize the chemical properties of thin samples.
Principles of STEM-EELS
A brief illustration of STEM-EELS is displayed in Figure $21$. The electron source provides the electrons; it usually consists of a tungsten source located in a strong electrical field, which gives the electrons high energy. The condenser and the objective lens form the electrons into a fine probe that then raster-scans the specimen. The diameter of the probe influences the spatial resolution of STEM, which is limited by lens aberrations. Lens aberration results from the difference in refraction between rays striking the edge and the center point of the lens, and it can also arise when the rays pass through with different energies. Based on this, an aberration corrector is applied to increase the objective aperture; the incident probe converges, the resolution increases, and sensitivity to single atoms is promoted. For the annular electron detectors, the installation sequence is a bright field detector, a dark field detector and a high angle annular dark field detector. The bright field detector detects the direct beam that transmits through the specimen. The annular dark field detector collects the scattered electrons that pass through an annular aperture; the advantage of this is that it does not interfere with the EELS detection of signals from the direct beam. The high angle annular dark field detector collects electrons that undergo Rutherford scattering (elastic scattering of charged electrons), and its signal intensity is related to the square of the atomic number (Z), so it is also called a Z-contrast image. A unique feature of STEM image acquisition is that the pixels in the image are obtained point by point by scanning the probe. EELS analysis is based on the energy loss of the transmitted electrons, so the thickness of the specimen influences the detected signal. In other words, if the specimen is too thick, the intensity of the plasmon signal decreases and it may become difficult to distinguish these signals from the background.
Typical features of EELS Spectra
As shown in Figure $22$, a significant peak appears at zero energy loss in an EELS spectrum and is therefore called the zero-loss peak. The zero-loss peak represents the electrons that undergo only elastic scattering during their interaction with the specimen. The zero-loss peak can be used to determine the thickness of the specimen according to \ref{4}, where t is the thickness, λinel is the inelastic mean free path, It is the total intensity of the spectrum, and IZLP is the intensity of the zero-loss peak.
$t\ =\ \lambda _{inel}\ ln[I_{t}/I_{ZLP}] \label{4}$
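As a minimal illustration of this log-ratio method, the Python sketch below evaluates \ref{4} for assumed example values of the inelastic mean free path and the integrated intensities; the numbers are hypothetical and serve only to show how the relation is applied.

```python
import numpy as np

def thickness_log_ratio(lambda_inel_nm, I_total, I_zlp):
    """Estimate specimen thickness t = lambda_inel * ln(I_t / I_ZLP)."""
    return lambda_inel_nm * np.log(I_total / I_zlp)

# Hypothetical values: 100 nm inelastic mean free path, with total and
# zero-loss integrated intensities given in arbitrary units.
t = thickness_log_ratio(lambda_inel_nm=100.0, I_total=1.5e6, I_zlp=1.0e6)
print(f"Estimated thickness: {t:.1f} nm")   # ~40.5 nm
```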
The low loss region is also called the valence EELS region. In this region, valence electrons are excited to the conduction band, so valence EELS can provide information about the band structure, band gap, and optical properties. Within the low loss region, the plasmon peak is the most important feature. A plasmon is a collective oscillation of weakly bound electrons. The thickness of the sample influences the plasmon peak: in a very thick sample the incident electrons undergo inelastic scattering several times, resulting in convoluted plasmon peaks. This is also the reason why STEM-EELS favors samples of low thickness (usually less than 100 nm).
The high loss region is characterized by edges in which the intensity rises rapidly and then falls gradually; these are called ionization edges. The onset of an ionization edge equals the energy needed to excite an inner shell electron from its ground state to the lowest unoccupied state. This amount of energy is unique to each shell and element, so this information helps in understanding the bonding, valence state, composition, and coordination.
Energy resolution affects the signal to background ratio in the low loss region and is used to evaluate the quality of an EELS spectrum. Energy resolution is defined by the full width at half maximum (FWHM) of the zero-loss peak.
Background signal in the core-loss region is caused by plasmon peaks and core-loss edges, and can be described by the following power law, \ref{5}, where IBG stands for the background signal, E is the energy loss, A is the scaling constant and r is the slope exponent:
$I_{BG}\ =\ AE^{-r} \label{5}$
Therefore, when quantifying the spectral data, the background signal can be removed by fitting the pre-edge region with the above equation and extrapolating it into the post-edge region.
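A minimal sketch of this pre-edge fitting procedure follows. The power law \ref{5} becomes a straight line in log-log space, so A and r can be obtained by linear regression over the pre-edge window and the extrapolated background subtracted from the post-edge signal. The energy window, edge position, and intensities below are synthetic placeholders, not measured data.

```python
import numpy as np

# Synthetic energy-loss axis (eV) and intensity with a power-law background.
E = np.linspace(250.0, 350.0, 500)
rng = np.random.default_rng(0)
background = 1.0e7 * E**-3.0
edge = np.where(E > 285.0, 2.0e3, 0.0)            # toy ionization edge at 285 eV
I = background + edge + rng.normal(0.0, 50.0, E.size)

# Fit I_BG = A * E**(-r) over the pre-edge window via a log-log straight-line fit.
pre = (E > 255.0) & (E < 280.0)
slope, intercept = np.polyfit(np.log(E[pre]), np.log(I[pre]), 1)
A, r = np.exp(intercept), -slope

# Extrapolate the fitted background under the edge and subtract it.
I_bg = A * E**-r
I_signal = I - I_bg
print(f"Fitted A = {A:.3g}, r = {r:.2f}")
```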
Advantages and Disadvantages of STEM-EELS
STEM-EELS has advantages over other techniques, such as the acquisition of high resolution images. For example, TEM images of some samples suffer from blurring and low contrast because of chromatic aberration; a STEM-EELS instrument equipped with an aberration corrector reduces this aberration and can produce high quality images even at atomic resolution. It is also a direct and convenient way to probe the electron distribution at a surface and the bonding information. In addition, STEM-EELS offers good control over the energy spread of the beam, which makes it much easier to study the ionization edges of different materials.
Even though STEM-EELS brings a lot of convenience to research at the atomic level, it still has limitations to overcome. One of the main limitations is controlling the thickness of the sample. As discussed above, EELS detects the energy lost by electrons as they interact with the specimen, so the sample thickness affects the energy loss detection. Simply put, if the sample is too thick, most of the electrons interact with it multiple times, the signal to background ratio and edge visibility decrease, and it becomes hard to determine the chemical state of the elements. Another limitation arises because EELS must characterize electrons with small energy losses, for which a high vacuum environment is essential; achieving such a vacuum requires high voltage. STEM-EELS also requires the sample substrates to be conductive and flat.
Application of STEM-EELS
STEM-EELS can be used to detect the size and distribution of nanoparticles on a surface. For example, CoO nanoparticles on an MgO catalyst support may be prepared by hydrothermal methods. The size and distribution of the nanoparticles greatly influence the catalytic properties, so understanding the distribution and morphology of CoO nanoparticles on MgO is important. The Co L3/L2 ratio is uniformly around 2.9, suggesting that Co2+ dominates the electronic state of Co. The results show that the O:(Co+Mg) and Mg:(Co+Mg) ratios are not consistent across the sample, indicating that these three elements are randomly distributed. STEM-EELS mapping further confirms the non-uniformity of the elemental distribution, consistent with a random distribution of CoO on the MgO surface (Figure $23$).
Figure $24$ shows the carbon K-edge absorption, from which transition state information can be deduced. Typical carbon based materials show 1s to π* and 1s to σ* transitions located at 285 and 292 eV, respectively; these two transitions correspond to the excitation of C 1s core electrons into unoccupied π* and σ* states. Epoxy exhibits a sharp peak around 285.3 eV compared to GO and GNPs. Meanwhile, GNPs have the sharpest peak around 292 eV, suggesting that most C atoms in GNPs undergo the 1s to σ* transition. Even though GO is oxidized, part of its carbon still shows the 1s to π* transition.
The annular dark field (ADF) mode of STEM provides information about the atomic number of the elements in a sample. For example, the ADF image of La1.2Sr1.8Mn2O7 (Figure $25$ a and b) along the [010] direction shows bright spots and dark spots, and even the bright spots (p and r) display different levels of brightness. This contrast is caused by the difference in atomic numbers: the bright spots are La and Sr, respectively, the dark spots are Mn, and O is too light to show up in the image. The EELS results show the core-loss edges of La, Mn and O (Figure $25$ c), but the researchers did not report the core-loss edges of Sr; Sr has an N2,3 edge at 29 eV, an L3 edge at 1930 eV, and an L2 edge at 2010 eV.
Magnetic force microscopy (MFM) is a natural extension of scanning tunneling microscopy (STM), whereby both the physical topology of a sample surface and the magnetic topology may be seen. Scanning tunneling microscopy was developed in 1982 by Gerd Binnig and Heinrich Rohrer, and the two shared the 1986 Nobel prize for their innovation. Binnig later went on to develop the first atomic force microscope (AFM) along with Calvin Quate and Christoph Gerber (Figure \(1\)). Magnetic force microscopy was not far behind, with the first report of its use in 1987 by Yves Martin and H. Kumar Wickramasinge (Figure \(2\)). An AFM with a magnetic tip was used to perform these early experiments, which proved to be useful in imaging both static and dynamic magnetic fields.
MFM, AFM, and STM all have similar instrumental setups, all of which are based on the early scanning tunneling microscopes. In essence, STM uses a very small conductive tip attached to a piezoelectric cylinder to carefully scan across a small sample area. The tunneling current between the conducting sample and the tip is measured, and the output is a picture that shows the surface of the sample. AFM and MFM are essentially derivatives of STM, which explains why a typical MFM device is very similar to an STM, with a piezoelectric driver and a magnetized tip as seen in Figure \(3\) and Figure \(4\).
One may notice that this MFM instrument very closely resembles an atomic force microscope, and this is for good reason. The simplest MFM instruments are no more than AFM instruments with a magnetic tip. The differences between AFM and MFM lie in the data collected and its processing. Where AFM gives topological data through tapping, noncontact, or contact mode, MFM gives both topological (tapping) and magnetic topological (non-contact) data through a two-scan process known as interleave scanning. The relationships between basic STM, AFM, and MFM are summarized in Table \(1\).
| Technique | Samples | Qualities Observed | Modes | Benefits | Limitations |
|---|---|---|---|---|---|
| MFM | Any film or powder surface; magnetic | Electrostatic interactions; magnetic forces/domains; van der Waals' interactions; topology; morphology | Tapping; non-contact | Magnetic and physical properties; high resolution | Resolution depends on tip size; different tips for various applications; complicated data processing and analysis |
| STM | Only conductive surfaces | Topology; morphology | Constant height; constant current | Simplest instrumental setup; many variations | Resolution depends on tip size; tips wear out easily; rare technique |
| AFM | Any film or powder surface | Particle size; topology; morphology | Tapping; contact; non-contact | Common, standardized; often do not need special tip; ease of data analysis | Resolution depends on tip size; easy to break tips; slow process |

Table \(1\) A summary of the capabilities of MFM, STM, and AFM instrumentation.
Data Collection
Interleave scanning, also known as two-pass scanning, is a process typically used in an MFM experiment. The magnetized tip is first passed across the sample in tapping mode, similar to an AFM experiment, and this gives the surface topology of the sample. Then, a second scan is taken in non-contact mode, where the magnetic force exerted on the tip by the sample is measured. These two types of scans are shown in Figure \(5\).
In non-contact mode (also called dynamic or AC mode), the magnetic force gradient from the sample shifts the resonance frequency of the MFM cantilever (a minimal numerical sketch of this frequency-shift relation follows the list below), and this shift can be measured in three different ways:
• Phase detection: the phase difference between the oscillation of the cantilever and the piezoelectric source is measured.
• Amplitude detection: the changes in the amplitude of the cantilever's oscillation are measured.
• Frequency modulation: the piezoelectric source's oscillation frequency is changed to maintain a 90° phase lag between the cantilever and the piezoelectric actuator, and the frequency change needed to maintain this lag is measured.
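The sketch below illustrates the small-gradient approximation commonly used to relate a force gradient to the cantilever resonance shift, Δf ≈ −(f0/2k)(∂F/∂z). This relation is not stated in the text above, and the cantilever parameters used are hypothetical example values chosen only to show the order of magnitude of the shift.

```python
def frequency_shift(f0_hz, k_newton_per_m, dF_dz):
    """Small-gradient approximation: df ~ -(f0 / 2k) * dF/dz."""
    return -(f0_hz / (2.0 * k_newton_per_m)) * dF_dz

# Hypothetical cantilever: 75 kHz resonance, 3 N/m spring constant,
# experiencing a magnetic force gradient of 1e-4 N/m from the sample.
df = frequency_shift(f0_hz=75e3, k_newton_per_m=3.0, dF_dz=1e-4)
print(f"Resonance shift: {df:.2f} Hz")   # ~ -1.25 Hz
```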
Regardless of the method used in determining the magnetic force gradient from the sample, a MFM interleave scan will always give the user information about both the surface and magnetic topology of the sample. A typical sample size is 100x100 μm, and the entire sample is scanned by rastering from one line to another. In this way, the MFM data processor can compose an image of the surface by combining lines of data from either the surface or magnetic scan. The output of an MFM scan is two images, one showing the surface and the other showing magnetic qualities of the sample. An idealized example is shown in Figure \(6\).
Types of MFM Tips
Any suitable magnetic material or coating can be used to make an MFM tip. Some of the most commonly used standard tips are coated with FeNi, CoCr, and NiP, while many research applications call for individualized tips such as carbon nanotubes. The resolution of the end image in MFM is dependent directly on the size of the tip, therefore MFM tips must come to a sharp point on the angstrom scale in order to function at high resolution. This leads to tips being costly, an issue exacerbated by the fact that coatings are often soft or brittle, leading to wear and tear. The best materials for MFM tips, therefore, depend on the desired resolution and application. For example, a high coercivity coating such as CoCr may be favored for analyzing bulk or strongly magnetic samples, whereas a low coercivity material such as FeNi might be preferred for more fine and sensitive applications.
Data Output and Applications
The product of an MFM scan is a 2D image of the sample surface, either the physical or the magnetic topographical image. Importantly, the resolution depends on the size of the probe tip: the smaller the tip, the higher the number of data points per square micrometer and therefore the higher the resolution of the resulting image. MFM can be extremely useful in determining the properties of new materials, as in Figure \(7\), or in analyzing the magnetic landscapes of already known materials. This makes MFM particularly useful for the analysis of hard drives. As people store more and more information on magnetic storage devices, higher storage capacities need to be developed, along with emergency backup procedures for this data. MFM is an ideal technique for characterizing the fine magnetic surfaces of hard drives for use in research and development, and it can also reveal the magnetic surfaces of already-used hard drives for data recovery in the event of a hard drive malfunction. This is useful both in forensics and in researching new magnetic storage materials.
MFM has also found applications on the frontiers of research, most notably in the field of spintronics. In general, spintronics is the study of the spin and magnetic moment of solid-state materials, and the manipulation of these properties to create novel electronic devices. One example of this is quantum computing, which is promising as a fast and efficient alternative to traditional transistor-based computing. With regard to spintronics, MFM can be used to characterize non-homogenous magnetic materials and unique samples such as dilute magnetic semiconductors (DMS). This is useful for research in magnetic storage such as MRAM, semiconductors, and magnetoresistive materials.
MFM for Characterization of Magnetic Storage Devices
In device manufacturing, the smoothness and/or roughness of the magnetic coatings of hard drive disks is significant in their ability to operate. Smoother coatings provide a low magnetic noise level, but stick to read/write heads, whereas rough surfaces have the opposite qualities. Therefore, fine tuning not only of the magnetic properties but the surface qualities of a given magnetic film is extremely important in the development of new hard drive technology. Magnetic force microscopy allows the manufacturers of hard drives to analyze disks for magnetic and surface topology, making it easier to control the quality of drives and determine which materials are suitable for further research. Industrial competition for higher bit density (bits per square millimeter), which means faster processing and increased storage capability, means that MFM is very important for characterizing films to very high resolution.
Conclusion
Magnetic force microscopy is a powerful surface technique used to deduce both the magnetic and surface topology of a given sample. In general, MFM offers high resolution, which depends on the size of the tip, and straightforward data once processed. The images output by the MFM raster scan are clear and show structural and magnetic features of a 100x100 μm square of the given sample. This information can be used not only to examine surface properties, morphology, and particle size, but also to determine the bit density of hard drives, features of magnetic computing materials, and exotic magnetic phenomena at the atomic level. As MFM evolves, thinner and thinner magnetic tips are being fabricated for finer applications, such as the use of carbon nanotubes as tips to give high atomic resolution in MFM images. The customizability of magnetic coatings and tips, as well as the use of AFM equipment for MFM, make MFM an important technique in the electronics industry, making it possible to see magnetic domains and structures that otherwise would remain hidden.
Using UV-vis for the Detection and Characterization of Silicon Quantum Dots
What are Quantum Dots?
Quantum dots (QDs) are small semiconductor nanoparticles generally composed of two elements that have extremely high quantum efficiencies when light is shined on them. The most common quantum dots are CdSe, PbS, and ZnSe, but there are many other varieties of these particles that contain other elements as well. QDs can also be made of three elements, or of just one element, such as silicon.
Synthesis of Silicon Quantum Dots
Silicon quantum dots are synthesized in inverse micelles. SiCl4 is reduced using a two-fold excess of LiAlH4 (Figure $1$). After the silicon has been fully reduced and the excess reducing agent quenched, the particles are capped with hydrogen and are hydrophobic. A platinum-catalyzed ligand exchange of hydrogen for allylamine will produce hydrophilic particles (Figure $2$). All the reactions used in making these particles are extremely air sensitive, and silica forms readily, so the reactions should be performed in a highly controlled atmosphere, such as a glove box. The particles are then washed in DMF, and finally filtered and stored in deionized water. This keeps the Si QDs pure in water, and the particles are ready for analysis. This technique yields Si QDs of 1 - 2 nm in size.
Sample Preparation of Silicon Quantum Dots
The reported absorption wavelength for 1 - 2 nm Si QDs is 300 nm. For the hydrophobic Si QDs, UV-vis absorbance analysis in toluene does not yield an acceptable spectrum because the absorbance cutoff of toluene is 287 nm, which is too close to 300 nm for the peaks to be resolvable. A better hydrophobic solvent would be hexanes. All measurements of these particles require a quartz cuvette, since the glass absorbance cutoff (300 nm) is exactly where the particles would be observed. Hydrophilic substituted particles do not need to be transferred to another solvent because water's absorbance cutoff is much lower. There is usually a slight impurity of DMF in the water due to residue on the particles after drying. If there is a DMF peak in the spectrum with the Si QDs, the wavelengths are far enough apart to be resolved.
What Information can be Obtained from UV-Visible Spectra?
Quantum dots are especially interesting when it comes to UV-vis spectroscopy because the size of the quantum dot can be determined from the position of the absorbtion peak in the UV-vis spectrum. Quantum dots absorb different wavelengths depending on the size of the particles (e.g., Figure $3$). Many calibration curves would need to be done to determine the exact size and concentration of the quantum dots, but it is entirely possible and very useful to be able to determine size and concentration of quantum dots in this way since other ways of determining size are much more expensive and extensive (electron microscopy is most widely used for this data).
An example of silicon quantum dot data can be seen in Figure $4$. The wider the absorbance peak is, the less monodispersed the sample is.
Why is Knowing the Size of Quantum Dots Important?
Different size (different excitation) quantum dots can be used for different applications. The absorbance of the QDs can also reveal how monodispersed the sample is; more monodispersity in a sample is better and more useful in future applications. Silicon quantum dots in particular are currently being researched for making more efficient solar cells. The monodispersity of these quantum dots is particularly important for getting optimal absorbance of photons from the sun or other light source. Different sized quantum dots will absorb light differently, and a more exact energy absorption is important in the efficiency of solar cells. UV-vis absorbance is a quick, easy, and cheap way to determine the monodispersity of the silicon quantum dot sample. The peak width of the absorbance data can give that information. The other important information for future applications is to get an idea about the size of the quantum dots. Different size QDs absorb at different wavelengths; therefore, specific size Si QDs will be required for different cells in tandem solar cells.
UV-Visible Spectrocopy of Noble Metal Nanoparticles
Noble metal nanoparticles have been used for centuries to color stained glass windows and provide many opportunities for novel sensing and optical technologies due to their intense scattering (deflection) and absorption of light. One of the most interesting and important properties of noble metal nanoparticles is their localized surface plasmon resonance (LSPR). The LSPR of noble metal nanoparticles arises when photons of a certain frequency induce the collective oscillation of conduction electrons on the nanoparticles’ surface. This causes selective photon absorption, efficient scattering, and enhanced electromagnetic field strength around the nanoparticles. More information about the properties and potential applications of noble metal nanoparticles can be found in Silver Nanoparticles: A Case Study in Cutting Edge Research
Synthesis of Noble Metal Nanoparticles
Noble metal nanoparticles can be synthesized via the reduction of metal salts. Spherical metal nanoparticle “seeds” are first synthesized by reducing metal salts in water with a strong reducing agent such as sodium borohydride (Figure $5$). The seeds are then "capped" to prevent aggregation with a surface group such as citrate (Figure $5$).
Adjusting the Geometry of Metal Nanoparticles
After small nanoparticle seeds have been synthesized, the seeds can be grown into nanoparticles of various sizes and shapes. Seeds are added to a solution of additional metal salt and a structure-directing agent, and are then reduced with a weak reducing agent such as ascorbic acid (see Figure $6$). The structure-directing agent will determine the geometry of the nanoparticles produced. For example, cetyltrimethylammonium bromide (CTAB) is often used to produce nanorods (Figure $6$).
Assemblies of Metal Nanoparticles
Once synthesized, noble metal nanoparticles can be assembled into various higher-order nanostructures. Nanoparticle dimers, linear chains of two nanoparticles, can be assembled using a linker molecule that binds the two nanoparticles together (Figure $7$). Less-organized nanoparticle assemblies can be formed through the addition of counterions. Counterions react with the surface groups on nanoparticles, causing the nanoparticles to be stripped of their protective surface coating and inducing their aggregation.
UV-Visible Spectroscopy of Noble Metal Nanoparticles
UV-visible absorbance spectroscopy is a powerful tool for detecting noble metal nanoparticles, because the LSPR of metal nanoparticles allows for highly selective absorption of photons. UV-visible absorbance spectroscopy can also be used to detect various factors that affect the LSPR of noble metal nanoparticles. More information about the theory and instrumentation of UV-visible absorbance spectroscopy can be found in the section related to UV-Vis Spectroscopy.
Mie Theory
Mie theory, a theory that describes the interaction of light with a homogenous sphere, can be used to predict the UV-visible absorbance spectrum of spherical metallic nanoparticles. One equation that can be obtained using Mie theory is \ref{1}, which describes the extinction, the sum of absorption and scattering of light, of spherical nanoparticles. In \ref{1}, E(λ) is the extinction, NA is the areal density of the nanoparticles, a is the radius of the nanoparticles, εm is the dielectric constant of the environment surrounding the nanoparticles, λ is the wavelength of the incident light, and εr and εi are the real and imaginary parts of the nanoparticles’ dielectric function. From this relation, we can see that the UV-visible absorbance spectrum of a solution of nanoparticles is dependent on the radius of the nanoparticles, the composition of the nanoparticles, and the environment surrounding the nanoparticles.
$E(\lambda )\ = \frac{24\pi N_{A}a^{3}\varepsilon _{m}^{3/2}}{\lambda \ln(10)} \left[\frac{\varepsilon _{i}}{(\varepsilon_{r}\ +\ 2\varepsilon _{m})^{2}\ +\ \varepsilon _{i}^{2}}\right] \label{1}$
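As a sketch of how \ref{1} might be evaluated numerically, the function below computes the extinction for user-supplied values of the real and imaginary parts of the dielectric function. No material data are assumed; the example numbers are placeholders chosen only to show that the bracketed term peaks when εr ≈ −2εm, the plasmon resonance condition.

```python
import numpy as np

def mie_extinction(wavelength_nm, radius_nm, eps_m, eps_r, eps_i, N_areal=1.0):
    """Extinction of spherical nanoparticles per the Mie-theory expression above."""
    prefactor = (24.0 * np.pi * N_areal * radius_nm**3 * eps_m**1.5) / (wavelength_nm * np.log(10.0))
    lorentzian = eps_i / ((eps_r + 2.0 * eps_m)**2 + eps_i**2)
    return prefactor * lorentzian

# Placeholder dielectric values: the extinction is largest when eps_r = -2 * eps_m.
eps_m = 1.77                       # e.g., a water-like medium (n ~ 1.33)
for eps_r in (-2.0, -3.54, -5.0):  # -3.54 satisfies the resonance condition here
    E = mie_extinction(wavelength_nm=400.0, radius_nm=10.0, eps_m=eps_m, eps_r=eps_r, eps_i=0.3)
    print(f"eps_r = {eps_r:5.2f} -> relative extinction {E:.3g}")
```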
More Advanced Theoretical Techniques
Mie theory is limited to spherical nanoparticles, but there are other theoretical techniques that can be used to predict the UV-visible spectrum of more complex noble metal nanostructures. These techniques include surface-based methods such as the generalized multipole technique and T-matrix method, as well as volume-based techniques such as the discrete dipole approximation and the finite difference time domain method.
Using UV-Vis Spectroscopy to Predict Nanoparticle Geometry
Just as the theoretical techniques described above can use nanoparticle geometry to predict the UV-visible absorbance spectrum of noble metal nanoparticles, nanoparticles’ UV-visible absorbance spectrum can be used to predict their geometry. As shown in Figure $8$ below, the UV-visible absorbance spectrum is highly dependent on nanoparticle geometry. The shapes of the two spectra are quite different despite the two types of nanoparticles having similar dimensions and being composed of the same material (Figure $8$).
Using UV-Visible Spectroscopy to Determine Nanoparticle Aggregation States
The UV-visible absorbance spectrum is also dependent on the aggregation state of the nanoparticles. When nanoparticles are in close proximity to each other, their plasmons couple, which affects their LSPR and thus their absorption of light. Dimerization of nanospheres causes a “red shift,” a shift to longer wavelengths, in the UV-visible absorbance spectrum as well as a slight increase in absorption at higher wavelengths (see Figure $9$). Unlike dimerization, aggregation of nanoparticles causes a decrease in the intensity of the peak absorbance without shifting the wavelength at which the peak occurs (λmax). Information about the calculation of λmax can be found in the earlier section about silver nanoparticles. Figure $9$ illustrates the increase in nanoparticle aggregation with increased salt concentrations based on the decreased absorbance peak intensity.
Using UV-Visible Spectroscopy to Determine Nanoparticle Surface Composition
The λmax of the UV-visible absorbance spectrum of noble metal nanoparticles is highly dependent on the environment surrounding the nanoparticles. Because of this, shifts in λmax can be used to detect changes in the surface composition of the nanoparticles. One potential application of this phenomenon is using UV-visible absorbance spectroscopy to detect the binding of biomolecules to the surface of noble metal nanoparticles. The red shift in the λmax of the UV-visible absorbance spectrum in Figure $10$ below with the addition of human serum albumin protein indicates that the protein is binding to the surface of the nanoparticles.
Optical Properties of Group 12-16 (II-VI) Semiconductor Nanoparticles
What are Group 12-16 semiconductors?
Semiconductor materials are generally classified on the basis of the periodic table group that their constituent elements belong to. Thus, Group 12-16 semiconductors, formerly called II-VI semiconductors, are materials whose cations are from the Group 12 and anions are from Group 16 in the periodic table (Figure $11$). Some examples of Group 12-16 semiconductor materials are cadmium selenide (CdSe), zinc sulfide (ZnS), cadmium teluride (CdTe), zinc oxide (ZnO), and mercuric selenide (HgSe) among others.
The new IUPAC (International Union of Pure and Applied Chemistry) convention is being followed in this document, to avoid any confusion with regard to conventions used earlier. In the old IUPAC convention, Group 12 was known as Group IIB, with the roman numeral 'II' referring to the number of electrons in the outer electronic shells and B referring to being on the right part of the table. However, in the CAS (Chemical Abstracts Service) convention, the letter B refers to transition elements as opposed to main group elements, though the roman numeral has the same meaning. Similarly, Group 16 was earlier known as Group VI because all the elements in this group have 6 valence shell electrons.
What are Group 12-16 (II-VI) Semiconductor Nanoparticles?
From the Greek word nanos - meaning "dwarf" this prefix is used in the metric system to mean 10-9 or one billionth (1/1,000,000,000). Thus a nanometer is 10-9 or one billionth of a meter, and a nanojoule is 10-9 or one billionth of a Joule, etc. A nanoparticle is ordinarily defined as any particle with at least one of its dimensions in the 1 - 100 nm range.
Nanoscale materials often show behavior which is intermediate between that of a bulk solid and that of an individual molecule or atom. An inorganic nanocrystal can be imagined to be comprised of a few atoms or molecules. It thus will behave differently from a single atom; however, it is still smaller than a macroscopic solid, and hence will show different properties. For example, if one would compare the chemical reactivity of a bulk solid and a nanoparticle, the latter would have a higher reactivity due to a significant fraction of the total number of atoms being on the surface of the particle. Properties such as boiling point, melting point, optical properties, chemical stability, electronic properties, etc. are all different in a nanoparticle as compared to its bulk counterpart. In the case of Group 12-16 semiconductors, this reduction in size from bulk to the nanoscale results in many size dependent properties such as varying band gap energy, optical and electronic properties.
Optical Properties of Semiconductor Quantum Nanoparticles
In the case of semiconductor nanocrystals, the effect of the size on the optical properties of the particles is very interesting. Consider a Group 12-16 semiconductor, cadmium selenide (CdSe). A 2 nm sized CdSe crystal has a blue color fluorescence whereas a larger nanocrystal of CdSe of about 6 nm has a dark red fluorescence (Figure $12$). In order to understand the size dependent optical properties of semiconductor nanoparticles, it is important to know the physics behind what is happening at the nano level.
Energy Levels in a Semiconductor
The electronic structure of any material is given by a solution of Schrödinger equations with boundary conditions, depending on the physical situation. The electronic structure of a semiconductor (Figure $13$) can be described by the following terms:
Energy Level
By the solution of Schrödinger's equations, the electrons in a semiconductor can have only certain allowable energies, which are associated with energy levels. No electrons can exist between these levels, or in other words they cannot have energies between the allowed energies. In addition, from Pauli's exclusion principle, only 2 electrons with opposite spin can exist at any one energy level. Thus, the electrons start filling from the lowest energy levels. The greater the number of atoms in a crystal, the smaller the difference between allowable energies becomes, and thus the distance between energy levels decreases; however, this distance can never be zero. For a bulk semiconductor, due to the large number of atoms, the distance between energy levels is very small and for all practical purposes the energy levels can be described as continuous (Figure $13$).
Band Gap
From the solution of Schrödinger’s equations, there are a set of energies which is not allowable, and thus no energy levels can exist in this region. This region is called the band gap and is a quantum mechanical phenomenon (Figure $13$). In a bulk semiconductor the bandgap is fixed; whereas in a quantum dot nanoparticle the bandgap varies with the size of the nanoparticle.
Conduction Band
The conduction band consists of energy levels from the upper edge of the bandgap and higher (Figure $13$). To reach the conduction band, the electrons in the valence band should have enough energy to cross the band gap. Once the electrons are excited, they subsequently relax back to the valence band (either radiatively or non-radiatively) followed by a subsequent emission of radiation. This property is responsible for most of the applications of quantum dots.
Exciton and Exciton Bohr Radius
When an electron is excited from the valence band to the conduction band, a hole (the absence of an electron) is formed in the valence band corresponding to the electron in the conduction band. This electron-hole pair is called an exciton. Excitons have a natural separation distance between the electron and hole that is characteristic of the material; this average distance is called the exciton Bohr radius. In a bulk semiconductor, the size of the crystal is much larger than the exciton Bohr radius and hence the exciton is free to move throughout the crystal.
Energy Levels in a Quantum Dot Semiconductor
Before understanding the electronic structure of a quantum dot semiconductor, it is important to understand what a quantum dot nanoparticle is. We earlier studied that a nanoparticle is any particle with one of its dimensions in the 1 - 100 nm range. A quantum dot is a nanoparticle with its diameter on the order of the material's exciton Bohr radius. Quantum dots are typically 2 - 10 nm wide, corresponding to approximately 10 to 50 atoms across. With this understanding of a quantum dot semiconductor, the electronic structure of a quantum dot semiconductor can be described by the following terms.
Quantum Confinement
When the size of the semiconductor crystal becomes comparable to or smaller than the exciton Bohr radius, the quantum dots are in a state of quantum confinement. As a result of quantum confinement, the energy levels in a quantum dot are discrete (Figure $14$) as opposed to being continuous in a bulk crystal (Figure $13$).
Discrete Energy Levels
In materials that have a small number of atoms and are quantum confined, the energy levels are separated by an appreciable amount of energy, such that they are not continuous but discrete (see Figure $13$). The energy associated with an electron (equivalent to a conduction band energy level) is given by \ref{2}, where h is Planck's constant, me is the effective mass of the electron, d is the diameter of the particle, and n is the quantum number for the conduction band states, which can take the values 1, 2, 3 and so on. Similarly, the energy associated with the hole (equivalent to a valence band energy level) is given by \ref{3}, where mh is the effective mass of the hole and n' is the quantum number for the valence states, which can take the values 1, 2, 3, and so on. The energy increases with increasing quantum number. Since the electron mass is much smaller than that of the hole, the electron levels are separated more widely than the hole levels.
$E^{e}\ =\ \frac{h^{2}n^{2}}{8\pi ^{2}m_e d^2 } \label{2}$
$E^h \ =\ \frac{h^{2}n'^{2}}{8\pi ^{2}m_h d^2 } \label{3}$
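The sketch below evaluates \ref{2} and \ref{3} for the lowest levels (n = n' = 1) at two particle diameters, illustrating the 1/d² widening of the gap discussed in the following section. The effective masses used are illustrative placeholder fractions of the free-electron mass, not values taken from the text.

```python
import numpy as np

h = 6.626e-34          # Planck's constant (J s)
m0 = 9.109e-31         # free electron mass (kg)

def confinement_energy(n, m_eff, d_m):
    """E = h^2 n^2 / (8 pi^2 m_eff d^2), following the expressions above (result in joules)."""
    return (h**2 * n**2) / (8.0 * np.pi**2 * m_eff * d_m**2)

# Illustrative effective masses (fractions of m0) and two particle diameters.
m_e, m_h = 0.13 * m0, 0.45 * m0
for d_nm in (2.0, 6.0):
    dE = confinement_energy(1, m_e, d_nm * 1e-9) + confinement_energy(1, m_h, d_nm * 1e-9)
    print(f"d = {d_nm} nm -> confinement widening of the gap ~ {dE / 1.602e-19:.3f} eV")
```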
Tunable Band Gap
As seen from \ref{2} and \ref{3}, the energy levels are affected by the diameter of the semiconductor particles. If the diameter is very small, since the energy is dependent on inverse of diameter squared, the energy levels of the upper edge of the band gap (lowest conduction band level) and lower edge of the band gap (highest valence band level) change significantly with the diameter of the particle and the effective mass of the electron and the hole, resulting in a size dependent tunable band gap. This also results in the discretization of the energy levels.
Qualitatively, this can be understood in the following way. In a bulk semiconductor, the addition or removal of an atom is insignificant compared to the size of the bulk semiconductor, which consists of a large number of atoms. The large size of bulk semiconductors makes the changes in band gap so negligible on the addition of an atom, that it is considered as a fixed band gap. In a quantum dot, addition of an atom does make a difference, resulting in the tunability of band gap.
UV-Visible Absorbance
Due to the presence of discrete energy levels in a QD, there is a widening of the energy gap between the highest occupied electronic states and the lowest unoccupied states as compared to the bulk material. As a consequence, the optical properties of the semiconductor nanoparticles also become size dependent.
The minimum energy required to create an exciton is defined by the band gap of the material, i.e., the energy required to excite an electron from the highest level of the valence energy states to the lowest level of the conduction energy states. For a quantum dot, the band gap varies with the size of the particle. From \ref{2} and \ref{3}, it can be inferred that the band gap becomes larger as the particle becomes smaller. This means that for a smaller particle, the energy required to excite an electron is higher. The relation between energy and wavelength is given by \ref{4}, where h is Planck's constant, c is the speed of light, and λ is the wavelength of light. Therefore, from \ref{4}, to cross a band gap of greater energy, shorter wavelengths are absorbed, i.e., a blue shift is seen.
$E\ =\ \frac{hc}{\lambda } \label{4}$
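As a quick numerical check of \ref{4}, the short sketch below converts a band gap energy into the corresponding absorption wavelength; it reproduces the 1.74 eV / 712 nm values for bulk CdSe quoted later in this section, while the 2.50 eV input is simply an arbitrary example of a blue-shifted quantum dot.

```python
h = 6.626e-34        # Planck's constant (J s)
c = 2.998e8          # speed of light (m/s)
eV = 1.602e-19       # joules per electron volt

def bandgap_to_wavelength_nm(E_gap_eV):
    """lambda = h c / E, from the relation above."""
    return h * c / (E_gap_eV * eV) * 1e9

print(f"{bandgap_to_wavelength_nm(1.74):.0f} nm")   # ~712 nm, bulk CdSe
print(f"{bandgap_to_wavelength_nm(2.50):.0f} nm")   # ~496 nm, a smaller (blue-shifted) particle
```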
For Group 12-16 semiconductors, the bandgap energy falls in the UV-visible range. That is ultraviolet light or visible light can be used to excite an electron from the ground valence states to the excited conduction states. In a bulk semiconductor the band gap is fixed, and the energy states are continuous. This results in a rather uniform absorption spectrum (Figure $15$ a).
In the case of Group 12-16 quantum dots, since the band gap can be changed with the size, these materials can absorb over a range of wavelengths. The peaks seen in the absorption spectrum (Figure $15$ b) correspond to the optical transitions between the electron and hole levels. The minimum energy, and thus the maximum wavelength, peak corresponds to the first exciton peak, i.e., the energy for an electron to be excited from the highest valence state to the lowest conduction state. The quantum dot will not absorb wavelengths longer than this wavelength. This is known as the absorption onset.
Fluorescence
Fluorescence is the emission of electromagnetic radiation in the form of light by a material that has absorbed a photon. When a semiconductor quantum dot (QD) absorbs a photon/energy equal to or greater than its band gap, the electrons in the QD’s get excited to the conduction state. This excited state is however not stable. The electron can relax back to its ground state by either emitting a photon or lose energy via heat losses. These processes can be divided into two categories – radiative decay and non-radiative decay. Radiative decay is the loss of energy through the emission of a photon or radiation. Non-radiative decay involves the loss of heat through lattice vibrations and this usually occurs when the energy difference between the levels is small. Non-radiative decay occurs much faster than radiative decay.
Usually the electron relaxes to the ground state through a combination of both radiative and non-radiative decays. The electron moves quickly through the conduction energy levels through small non-radiative decays, and the final transition across the band gap is a radiative decay. Large non-radiative decays do not occur across the band gap because the crystal structure cannot withstand large vibrations without breaking the bonds of the crystal. Since some of the energy is lost through non-radiative decay, the energy of the photon emitted through the radiative decay is lower than the absorbed energy. As a result, the wavelength of the emitted photon, or fluorescence, is longer than the wavelength of the absorbed light. This energy difference is called the Stokes shift. Due to this Stokes shift, the emission peak corresponding to the absorption band edge peak is shifted towards a longer wavelength (lower energy), i.e., Figure $16$.
Intensity of emission versus wavelength is a bell-shaped Gaussian curve. As long as the excitation wavelength is shorter than the absorption onset, the maximum emission wavelength is independent of the excitation wavelength. Figure $16$ shows a combined absorption and emission spectrum for a typical CdSe tetrapod.
Factors Affecting the Optical Properties of NPs
There are various factors that affect the absorption and emission spectra for Group 12-16 semiconductor quantum crystals. Fluorescence is much more sensitive to the background, environment, presence of traps and the surface of the QDs than UV-visible absorption. Some of the major factors influencing the optical properties of quantum nanoparticles include:
• Surface defects, imperfection of lattice, surface charges- The surface defects and imperfections in the lattice structure of semiconductor quantum dots occur in the form of unsatisfied valencies. Similar to surface charges, unsatisfied valencies provide a sink for the charge carriers, resulting in unwanted recombinations.
• Surface ligands - The presence of surface ligands is another factor that affects the optical properties. If the surface ligand coverage is 100%, there is a smaller chance of surface recombination occurring.
• Solvent polarity - The polarity of the solvent is very important for the optical properties of the nanoparticles. If the quantum dots are prepared in an organic solvent and have an organic surface ligand, the more non-polar the solvent, the better dispersed the particles are. This again reduces the loss of excited electrons through recombination, since particles coming into close proximity to each other increases the number of non-radiative decay events.
Applications of the Optical Properties of Group 12-16 Semiconductor NPs
The size dependent optical properties of NP’s have many applications from biomedical applications to solar cell technology, from photocatalysis to chemical sensing. Most of these applications use the following unique properties.
For applications in the field of nanoelectronics, the sizes of the quantum dots can be tuned to be comparable to the scattering lengths, reducing the scattering rate and hence improving the signal to noise ratio. For Group 12-16 QDs to be used in solar cells, the band gap of the particles can be tuned so as to absorb energy over a large range of the solar spectrum, resulting in a greater number of excitons and hence more electricity. Since the nanoparticles are so small, most of the atoms are on the surface, so the surface to volume ratio is very large for quantum dots. In addition to a high surface to volume ratio, Group 12-16 QDs respond to light energy, so quantum dots have very good photocatalytic properties. Quantum dots show fluorescence properties and emit visible light when excited. This property can be used for applications as biomarkers: quantum dots can be tagged to drugs to monitor the path of the drugs. Specially shaped Group 12-16 nanoparticles such as hollow shells can be used as drug delivery agents. Another use for the fluorescence properties of Group 12-16 semiconductor QDs is in color-changing paints, which can change colors according to the light source used.
Characterization of Group 12-16 (II-VI) Semiconductor Nanoparticles by UV-Visible Spectroscopy
Quantum dots (QDs) as a general term refer to nanocrystals of semiconductor materials, in which the size of the particles are comparable to the natural characteristic separation of an electron-hole pair, otherwise known as the exciton Bohr radius of the material. When the size of the semiconductor nanocrystal becomes this small, the electronic structure of the crystal is governed by the laws of quantum physics. Very small Group 12-16 (II-VI) semiconductor nanoparticle quantum dots, in the order of 2 - 10 nm, exhibit significantly different optical and electronic properties from their bulk counterparts. The characterization of size dependent optical properties of Group 12-16 semiconductor particles provide a lot of qualitative and quantitative information about them – size, quantum yield, monodispersity, shape and presence of surface defects. A combination of information from both the UV-visible absorption and fluorescence, complete the analysis of the optical properties.
UV-Visible Absorbance Spectroscopy
Absorption spectroscopy, in general, refers to characterization techniques that measure the absorption of radiation by a material, as a function of the wavelength. Depending on the source of light used, absorption spectroscopy can be broadly divided into infrared and UV-visible spectroscopy. The band gap of Group 12-16 semiconductors is in the UV-visible region. This means the minimum energy required to excite an electron from the valence states of the Group 12-16 semiconductor QDs to its conduction states, lies in the UV-visible region. This is also a reason why most of the Group 12-16 semiconductor quantum dot solutions are colored.
This technique is complementary to fluorescence spectroscopy, in that UV-visible spectroscopy measures electronic transitions from the ground state to the excited state, whereas fluorescence deals with the transitions from the excited state to the ground state. In order to characterize the optical properties of a quantum dot, it is important to characterize the sample with both these techniques.
In quantum dots, due to the very small number of atoms, the addition or removal of one atom to the molecule changes the electronic structure of the quantum dot dramatically. Taking advantage of this property in Group 12-16 semiconductor quantum dots, it is possible to change the band gap of the material by just changing the size of the quantum dot. A quantum dot can absorb energy in the form of light over a range of wavelengths, to excite an electron from the ground state to its excited state. The minimum energy that is required to excite an electron, is dependent on the band gap of the quantum dot. Thus, by making accurate measurements of light absorption at different wavelengths in the ultraviolet and visible spectrum, a correlation can be made between the band gap and size of the quantum dot. Group 12-16 semiconductor quantum dots are of particular interest, since their band gap lies in the visible region of the solar spectrum.
The UV-visible absorbance spectroscopy is a characterization technique in which the absorbance of the material is studied as a function of wavelength. The visible region of the spectrum is in the wavelength range of 380 nm (violet) to 740 nm (red) and the near ultraviolet region extends to wavelengths of about 200 nm. The UV-visible spectrophotometer analyzes over the wavelength range 200 – 900 nm.
When the Group 12-16 semiconductor nanocrystals are exposed to light having an energy that matches a possible electronic transition as dictated by laws of quantum physics, the light is absorbed and an exciton pair is formed. The UV-visible spectrophotometer records the wavelength at which the absorption occurs along with the intensity of the absorption at each wavelength. This is recorded in a graph of absorbance of the nanocrystal versus wavelength.
Instrumentation
A working schematic of the UV-visible spectrophotometer is show in Figure $17$.
The Light Source
Since it is a UV-vis spectrophotometer, the light source (Figure $17$) needs to cover the entire visible and the near ultra-violet region (200 - 900 nm). Since it is not possible to get this range of wavelengths from a single lamp, a combination of a deuterium lamp for the UV region of the spectrum and tungsten or halogen lamp for the visible region is used. This output is then sent through a diffraction grating as shown in the schematic.
The Diffraction Grating and the Slit
The beam of light from the visible and/or UV light source is then separated into its component wavelengths (like a very efficient prism) by a diffraction grating (Figure $17$). Following the diffraction grating is a slit that sends a monochromatic beam into the next section of the spectrophotometer.
Rotating Discs
Light from the slit then falls onto a rotating disc (Figure $17$). Each disc consists of different segments - an opaque black section, a transparent section and a mirrored section. If the light hits the transparent section, it goes straight through the sample cell, is reflected by a mirror, hits the mirrored section of a second rotating disc, and is then collected by the detector. If the light hits the mirrored section, it is reflected by a mirror, passes through the reference cell, hits the transparent section of the second rotating disc, and is then collected by the detector. Finally, if the light hits the black opaque section, it is blocked and no light passes through the instrument, enabling the system to make corrections for any current generated by the detector in the absence of light.
Sample Cell, Reference Cell and Sample Preparation
For liquid samples, a square cross section tube sealed at one end is used. The choice of cuvette depends on the following factors:
• Type of solvent - For aqueous samples, specially designed rectangular quartz, glass or plastic cuvettes are used. For organic samples glass and quartz cuvettes are used.
• Excitation wavelength – Depending on the size, and thus the band gap, of the 12-16 semiconductor nanoparticles, different excitation wavelengths of light are used. Depending on the excitation wavelength, different cuvette materials are used (Table $1$).
Table $1$ Cuvette materials and their wavelengths.

| Cuvette | Wavelength (nm) |
|---|---|
| Visible only glass | 380 - 780 |
| Visible only plastic | 380 - 780 |
| UV plastic | 220 - 780 |
| Quartz | 200 - 900 |
• Cost - Plastic cuvettes are the least expensive and can be discarded after use. Though quartz cuvettes have the greatest utility, they are the most expensive and need to be reused. Generally, disposable plastic cuvettes are used when speed is more important than high accuracy.
The best cuvettes need to be very clear and have no impurities that might affect the spectroscopic reading. Defects on the cuvette such as scratches, can scatter light and hence should be avoided. Some cuvettes are clear only on two sides, and can be used in the UV-Visible spectrophotometer, but cannot be used for fluorescence spectroscopy measurements. For Group 12-16 semiconductor nanoparticles prepared in organic solvents, the quartz cuvette is chosen.
In the sample cell the quantum dots are dispersed in a solvent, whereas in the reference cell the pure solvent is taken. It is important that the sample be very dilute (maximum first exciton absorbance should not exceed 1 au) and the solvent is not UV-visible active. For these measurements, it is required that the solvent does not have characteristic absorption or emission in the region of interest. Solution phase experiments are preferred, though it is possible to measure the spectra in the solid state also using thin films, powders, etc. The instrumentation for solid state UV-visible absorption spectroscopy is slightly different from the solution phase experiments and is beyond the scope of discussion.
Detector
The detector converts the light into a current signal that is read by a computer: the higher the current signal, the greater the intensity of the light. The computer then calculates the absorbance using \ref{5}, where A denotes the absorbance, I is the sample cell intensity and I0 is the reference cell intensity.
$A\ =\ log_{10}(I_{0}/I) \label{5}$
The following cases are possible:
Where I > I0 and A < 0. This usually occurs when the solvent absorbs in the wavelength range. Preferably the solvent should be changed, to get an accurate reading of the actual reference cell intensity.
Where I = I0 and A= 0. This occurs when pure solvent is put in both reference and sample cells. This test should always be done before testing the sample, to check for the cleanliness of the cuvettes.
When A = 1. This occurs when 90% of the light at a particular wavelength has been absorbed, which means that only 10% reaches the detector. So I0/I becomes 100/10 = 10, and log10 of 10 is 1.
When A > 1. This occurs in extreme case where more than 90% of the light is absorbed.
Output
The output is the form of a plot of absorbance against wavelength, e.g., Figure $18$.
Beer-Lambert Law
In order to make comparisons between different samples, it is important that all the factors affecting absorbance should be constant except the sample itself.
Effect of Concentration on Absorbance
The extent of absorption depends on the number of absorbing nanoparticles or in other words the concentration of the sample. If it is a reasonably concentrated solution, it will have a high absorbance since there are lots of nanoparticles to interact with the light. Similarly in an extremely dilute solution, the absorbance is very low. In order to compare two solutions, it is important that we should make some allowance for the concentration.
Effect of Container Shape
Even if we had the same concentration of solutions, if we compare two solutions - one in a rectangular shaped container (e.g., Figure $19$) such that the light travels 1 cm through it, and another in which the light travels 100 cm through it - the absorbance would be different. This is because if the length the light travels is greater, the light interacts with a greater number of nanocrystals and thus the absorbance is higher. Again, in order to compare two solutions, it is important that we make some allowance for the path length.
The Law
The Beer-Lambert law addresses the effect of concentration and container shape as shown in \ref{5}, \ref{6} and \ref{7}, where A denotes absorbance; ε is the molar absorptivity or molar absorption coefficient; l is the path length of light (in cm); and c is the concentration of the solution (mol/dm3).
$log_{10}(I_{0}/I)\ =\ \varepsilon l c \label{6}$
$A\ =\ \varepsilon l c \label{7}$
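A minimal worked example of \ref{7} with hypothetical numbers is sketched below: a solution of molar absorptivity 1 × 10⁵ dm³ mol⁻¹ cm⁻¹ at 1 × 10⁻⁶ mol/dm³ in a 1 cm cuvette gives an absorbance of 0.1. None of these values refer to a specific material; they only illustrate the arithmetic.

```python
def absorbance(epsilon, path_cm, conc_mol_per_dm3):
    """Beer-Lambert law: A = epsilon * l * c."""
    return epsilon * path_cm * conc_mol_per_dm3

# Hypothetical values: epsilon = 1e5 dm^3 mol^-1 cm^-1, 1 cm path, 1e-6 mol/dm^3.
print(absorbance(1e5, 1.0, 1e-6))   # 0.1
```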
Molar Absorptivity
From the Beer-Lambert law, the concentration can be expressed in terms of the absorbance and the molar absorptivity 'ε' as shown in \ref{8}.
$c\ =\ \frac{A}{\varepsilon l} \label{8}$
Molar absorptivity corrects for the variation in concentration and length of the solution that the light passes through. It is the value of absorbance when light passes through 1 cm of a 1 mol/dm3 solution.
Limitations of Beer-Lambert Law
The linearity of the Beer-Lambert law is limited by chemical and instrumental factors.
• At high concentrations (> 0.01 M), the relation between absorptivity coefficient and absorbance is no longer linear. This is due to the electrostatic interactions between the quantum dots in close proximity.
• If the concentration of the solution is high, another effect that is seen is the scattering of light from the large number of quantum dots.
• The spectrophotometer performs calculations assuming that the refractive index of the solvent does not change significantly with the presence of the quantum dots. This assumption only works at low concentrations of the analyte (quantum dots).
• Presence of stray light.
Analysis of Data
The data obtained from the spectrophotometer is a plot of absorbance as a function of wavelength. Quantitative and qualitative data can be obtained by analysing this information.
Quantitative Information
The band gap of semiconductor quantum dots can be tuned with the size of the particles. The minimum energy for an electron to be excited from the ground state is the energy to cross the band gap. In an absorption spectrum, this is given by the first exciton peak at the maximum wavelength (λmax).
Size of the Quantum Dots
The size of quantum dots can be approximated from the wavelength of the first exciton peak. Empirical relationships have been determined relating the diameter of the quantum dot to the wavelength of the first exciton peak for the Group 12-16 semiconductor quantum dots cadmium selenide (CdSe), cadmium telluride (CdTe) and cadmium sulfide (CdS). The empirical relationships are determined by fitting experimental data of absorbance versus wavelength for particles of known sizes. The empirical equations are given for CdTe, CdSe, and CdS in \ref{9}, \ref{10} and \ref{11} respectively, where D is the diameter and λ is the wavelength corresponding to the first exciton peak. For example, if the first exciton peak of a CdSe quantum dot is at 500 nm, the corresponding diameter of the quantum dot is 2.345 nm, and for a wavelength of 609 nm the corresponding diameter is 5.008 nm.
$D\ =\ (9.8127\ x\ 10^{-7})\lambda ^{3}\ -\ (1.7147\ x\ 10^{-3})\lambda ^{2}\ +\ (1.0064)\lambda \ -\ 194.84 \label{9}$
$D\ =\ (1.6122\ x\ 10^{-9})\lambda ^{4}\ -\ (2.6575\ x\ 10^{-6})\lambda ^{3}\ +\ (1.6242\ x\ 10^{-3})\lambda ^{2}\ -\ (0.4277)\lambda \ +\ 41.57 \label{10}$
$D\ =\ (-6.6521\ x\ 10^{-8})\lambda ^{3}\ +\ (1.9557\ x\ 10^{-4})\lambda ^{2}\ -\ (9.2352\ x\ 10^{-2})\lambda \ +\ 13.29 \label{11}$
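A sketch of how the CdSe sizing relation \ref{10} can be evaluated is given below; it reproduces the two worked values quoted above (2.345 nm at 500 nm and 5.008 nm at 609 nm), which serves as a check on the polynomial coefficients.

```python
def cdse_diameter_nm(first_exciton_nm):
    """CdSe sizing polynomial from the empirical relation above (diameter in nm)."""
    L = first_exciton_nm
    return (1.6122e-9 * L**4 - 2.6575e-6 * L**3
            + 1.6242e-3 * L**2 - 0.4277 * L + 41.57)

for wavelength in (500.0, 609.0):
    print(f"First exciton at {wavelength:.0f} nm -> D = {cdse_diameter_nm(wavelength):.3f} nm")
```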
Concentration of Sample
Using the Beer-Lambert law, it is possible to calculate the concentration of the sample if the molar absorptivity for the sample is known. The molar absorptivity can be calculated by recording the absorbance of a standard solution of 1 mol/dm3 concentration in a standard cuvette where the light travels a constant distance of 1 cm. Once the molar absorptivity and the absorbance of the sample are known, with the length the light travels being fixed, it is possible to determine the concentration of the sample solution.
Empirical equations can be determined by fitting experimental data of the extinction coefficient per mole of Group 12-16 semiconductor quantum dots, at 25 °C, to the diameter of the quantum dot, \ref{12}, \ref{13}, and \ref{14} for CdTe, CdSe, and CdS, respectively.
$\varepsilon \ =\ 10043 x D^{2.12} \label{12}$
$\varepsilon \ =\ 5857\ x\ D^{2.65} \label{13}$
$\varepsilon \ =\ 21536\ x\ D^{2.3} \label{14}$
The concentration of the quantum dots can then be determined by rearranging the Beer-Lambert law, \ref{8}, to give c = A/εl.
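A minimal sketch of this workflow is given below, assuming a CdSe sample, the extinction relation of \ref{13}, and a standard 1 cm cuvette; the absorbance and diameter values are hypothetical placeholders, not measured data.

```python
# Minimal sketch: concentration of a CdSe quantum-dot solution from the
# Beer-Lambert law, A = epsilon * l * c, i.e., c = A / (epsilon * l).
A = 0.45                       # absorbance at the first exciton peak (hypothetical)
l = 1.0                        # path length, cm (standard cuvette)
D = 2.35                       # QD diameter, nm (e.g., from the sizing polynomial)

epsilon = 5857.0 * D**2.65     # empirical CdSe extinction coefficient, L/(mol cm)
c = A / (epsilon * l)          # concentration, mol/L
print(f"epsilon ~ {epsilon:.3e} L/(mol cm), c ~ {c:.3e} mol/L")
```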
Qualitative Information
Apart from quantitative data such as the size of the quantum dots and concentration of the quantum dots, a lot of qualitative information can be derived from the absorption spectra.
Size Distribution
If there is a very narrow size distribution, the first exciton peak will be very sharp (Figure $20$). This is because, with a narrow size distribution, the differences in band gap between different sized particles will be very small and hence most of the electrons will be excited over a smaller range of wavelengths. In addition, if there is a narrow size distribution, the higher exciton peaks are also seen clearly.
Shaped Particles
In the case of a spherical quantum dot, in all dimensions, the particle is quantum confined (Figure $21$). In the case of a nanorod, whose length is not in the quantum regime, the quantum effects are determined by the width of the nanorod. Similar is the case in tetrapods or four legged structures. The quantum effects are determined by the thickness of the arms. During the synthesis of the shaped particles, the thickness of the rod or the arm of the tetrapod does not vary among the different particles, as much as the length of the rods or arms changes. Since the thickness of the rod or tetrapod is responsible for the quantum effects, the absorption spectrum of rods and tetrapods has sharper features as compared to a quantum dot. Hence, qualitatively it is possible to differentiate between quantum dots and other shaped particles.
Crystal Lattice Information
In the case of CdSe semiconductor quantum dots, it has been shown that it is possible to estimate the crystal lattice of the quantum dot from the absorption spectrum (Figure $22$), and hence determine if the structure is zinc blende or wurtzite.
UV-Vis Absorption Spectra of Group 12-16 Semiconductor Nanoparticles
Cadmium Selenide (CdSe)
Cadmium selenide (CdSe) is one of the most popular Group 12-16 semiconductors. This is mainly because the bulk band gap energy of CdSe (1.74 eV, or 712 nm) lies at the red edge of the visible range. Thus, nanoparticles of CdSe can be engineered to have a range of band gaps throughout the visible region, corresponding to the major part of the energy that comes from the solar spectrum. This property of CdSe, along with its fluorescing properties, is used in a variety of applications such as solar cells and light emitting diodes. Though cadmium and selenium are known carcinogens, the harmful biological effects of CdSe can be overcome by coating the CdSe with a layer of zinc sulfide. Thus CdSe can also be used in bio-markers, drug-delivery agents, paints and other applications.
A typical absorption spectrum of a narrow size distribution wurtzite CdSe quantum dot sample is shown in Figure $23$. Size-evolving absorption spectra are shown in Figure $24$. However, a complete analysis of the sample is possible only by also studying the fluorescence properties of CdSe.
Cadmium Telluride (CdTe)
Cadmium telluride has a band gap of 1.44 eV (860 nm) and as such it absorbs in the infrared region. Like CdSe, the size of CdTe particles can be engineered to give different band edges and thus different absorption spectra as a function of wavelength. A typical CdTe spectrum is shown in Figure $25$. Due to the small band gap energy of CdTe, it can be used in tandem with CdSe to absorb a greater part of the solar spectrum.
Other Group 12-16 Semiconductor Systems
Table $1$ shows the bulk band gap of other Group 12-16 semiconductor systems. The band gap of ZnS falls in the UV region, while those of ZnSe, CdS, and ZnTe fall in the visible region.
Material Band Gap (eV) Wavelength (nm)
ZnS 3.61 343.2
ZnSe 2.69 460.5
ZnTe 2.39 518.4
CdS 2.49 497.5
CdSe 1.74 712.1
CdTe 1.44 860.3
Table $1$ Bulk band gaps of different Group 12-16 semiconductors.
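The two numerical columns of Table $1$ are related by λ = hc/E. The short sketch below (an illustration only, not part of the original data) converts each bulk band gap to its wavelength; small differences from the tabulated values reflect rounding of the conversion constant.

```python
# Minimal sketch: convert a band gap in eV to the corresponding wavelength in nm.
HC_EV_NM = 1239.84   # h*c expressed in eV nm

def gap_to_wavelength(e_gap_ev):
    return HC_EV_NM / e_gap_ev

for material, gap in [("ZnS", 3.61), ("ZnSe", 2.69), ("ZnTe", 2.39),
                      ("CdS", 2.49), ("CdSe", 1.74), ("CdTe", 1.44)]:
    print(material, round(gap_to_wavelength(gap), 1), "nm")
```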
Heterostructures of Group 12-16 Semiconductor Systems
It is often desirable to have a combination of two Group 12-16 semiconductor systems in a single quantum heterostructure, or heterostructures of different shapes such as dots and tetrapods, for applications in solar cells, bio-markers, etc. Some of the most interesting systems are the CdSe core-ZnS shell system and CdSe/CdS rods and tetrapods.
Figure $26$ shows a typical absorption spectrum of a CdSe-ZnS core-shell system. This system is important because the addition of a ZnS shell, which has a wider band gap than the CdSe core, drastically improves the fluorescence properties. In addition, with a ZnS shell, CdSe becomes bio-compatible.
A CdSe seed, CdS arm nanorods system is also interesting. Combining CdSe and CdS in a single nanostructure creates a material with a mixed dimensionality where holes are confined to CdSe while electrons can move freely between CdSe and CdS phases.
Optical Characterization of Group 12-16 (II-VI) Semiconductor Nanoparticles by Fluorescence Spectroscopy
Group 12-16 semiconductor nanocrystals when exposed to light of a particular energy absorb light to excite electrons from the ground state to the excited state, resulting in the formation of an electron-hole pair (also known as excitons). The excited electrons relax back to the ground state, mainly through radiative emission of energy in the form of photons.
Quantum dots (QD) refer to nanocrystals of semiconductor materials where the size of the particles is comparable to the natural characteristic separation of an electron-hole pair, otherwise known as the exciton Bohr radius of the material. In quantum dots, the phenomenon of emission of photons associated with the transition of electrons from the excited state to the ground state is called fluorescence.
Fluorescence Spectroscopy
Emission spectroscopy, in general, refers to a characterization technique that measures the emission of radiation by a material that has been excited. Fluorescence spectroscopy is one type of emission spectroscopy which records the intensity of light radiated from the material as a function of wavelength. It is a nondestructive characterization technique.
After an electron is excited from the ground state, it needs to relax back to the ground state. This relaxation or loss of energy to return to the ground state, can be achieved by a combination of non-radiative decay (loss of energy through heat) and radiative decay (loss of energy through light). Non-radiative decay by vibrational modes typically occurs between energy levels that are close to each other. Radiative decay by the emission of light occurs when the energy levels are far apart like in the case of the band gap. This is because loss of energy through vibrational modes across the band gap can result in breaking the bonds of the crystal. This phenomenon is shown in Figure $27$.
The band gap of Group 12-16 semiconductors is in the UV-visible region. Thus, the wavelength of the emitted light as a result of radiative decay is also in the visible region, resulting in fascinating fluorescence properties.
A fluorimeter is a device that records the fluorescence intensity as a function of wavelength. The fluorescence quantum yield can then be calculated as the ratio of photons emitted to photons absorbed by the system. The quantum yield gives the probability of the excited state being relaxed via fluorescence rather than by any other non-radiative decay.
Difference between Fluorescence and Phosphorescence
Photoluminescence is the emission of light from any material due to the loss of energy from the excited state to the ground state. There are two main types of luminescence – fluorescence and phosphorescence. Fluorescence is a fast decay process, where the emission rate is around 10^8 s^-1 and the lifetime is around 10^-9 - 10^-7 s. Fluorescence occurs when the excited state electron has an opposite spin compared to the ground state electrons. From the laws of quantum mechanics, this is an allowed transition, and occurs rapidly by emission of a photon. Fluorescence disappears as soon as the exciting light source is removed.
Phosphorescence is the emission of light in which the excited state electron has the same spin orientation as the ground state electrons. This transition is forbidden and hence the emission rates are slow (10^3 - 10^0 s^-1). Phosphorescence lifetimes are thus longer, typically seconds to several minutes, while the excited phosphors slowly return to the ground state. Phosphorescence is still seen even after the exciting light source is removed. Group 12-16 semiconductor quantum dots exhibit fluorescence properties when excited with ultraviolet light.
Instrumentation
The working schematic for the fluorometer is shown in Figure $28$.
The Light Source
The excitation energy is provided by a light source that can emit wavelengths of light over the ultraviolet and the visible range. Different light sources can be used as excitation sources, such as lasers, xenon arcs and mercury-vapor lamps. The choice of the light source depends on the sample. A laser source emits light of a high irradiance over a very narrow wavelength interval. This makes a filter unnecessary, but the wavelength of the laser cannot be altered significantly. The mercury vapor lamp is a discrete line source. The xenon arc has a continuous emission spectrum between 300 - 800 nm.
The Diffraction Grating and Primary Filter
The diffraction grating splits the incoming light source into its component wavelengths (Figure $29$). The monochromator can then be adjusted to choose which wavelengths pass through. Following the primary filter, specific wavelengths of light are irradiated onto the sample.
Sample Cell and Sample Preparation
A proportion of the light from the primary filter is absorbed by the sample. After the sample is excited, the fluorescent substance returns to the ground state by emitting light of a longer wavelength in all directions (Figure $28$). Some of this light passes through a secondary filter. For liquid samples, a tube of square cross section, sealed at one end and with all four sides clear, is used as a sample cell. The choice of cuvette depends on three factors:
1. Type of Solvent - For aqueous samples, specially designed rectangular quartz, glass or plastic cuvettes are used. For organic samples glass and quartz cuvettes are used.
2. Excitation Wavelength - Depending on the size and thus, bandgap of the Group 12-16 semiconductor nanoparticles, different excitation wavelengths of light are used. Depending on the excitation wavelength, different materials are used (Table $2$).
Cuvette Wavelength (nm)
Visible only glass 380-780
Visible only plastic 380-780
UV plastic 220-780
Quartz 200-900
Table $2$ Cuvette Materials and their wavelengths.
3. Cost - Plastic cuvettes are the least expensive and can be discarded after use. Though quartz cuvettes have the maximum utility, they are the most expensive and need to be reused. Generally, disposable plastic cuvettes are used when speed is more important than high accuracy.
The cuvettes have a 1 cm path length for the light (Figure $29$). The best cuvettes need to be very clear and have no impurities that might affect the spectroscopic reading. Defects on the cuvette, such as scratches, can scatter light and hence should be avoided. Since the specifications of a cuvette are the same for both the UV-visible spectrophotometer and the fluorimeter, the same cuvette that is used to measure absorbance can be used to measure the fluorescence. For Group 12-16 semiconductor nanoparticles prepared in organic solvents, the clear four-sided quartz cuvette is used. The sample solution should be dilute (absorbance < 1 a.u.) to avoid a very high signal from the sample saturating the detector. The solvent used to disperse the nanoparticles should not absorb at the excitation wavelength.
Secondary Filter
The secondary filter is placed at a 90° angle (Figure $28$) to the original light path to minimize the risk of transmitted or reflected incident light reaching the detector. Also this minimizes the amount of stray light, and results in a better signal-to-noise ratio. From the secondary filter, wavelengths specific to the sample are passed onto the detector.
Detector
The detector can either be single-channeled or multichanneled (Figure $28$). The single-channeled detector can only detect the intensity of one wavelength at a time, while the multichanneled detects the intensity at all wavelengths simultaneously, making the emission monochromator or filter unnecessary. The different types of detectors have both advantages and disadvantages.
Output
The output is in the form of a plot of the intensity of emitted light as a function of wavelength, as shown in Figure $30$.
Analysis of Data
The data obtained from fluorimeter is a plot of fluorescence intensity as a function of wavelength. Quantitative and qualitative data can be obtained by analysing this information.
Quantitative Information
From the fluorescence intensity versus wavelength data, the quantum yield (ΦF) of the sample can be determined. Quantum yield is a measure of the ratio of the photons emitted with respect to the photons absorbed. It is important for the application of Group 12-16 semiconductor quantum dots using their fluorescence properties, e.g., as bio-markers.
The most well-known method for recording quantum yield is the comparative method which involves the use of well characterized standard solutions. If a test sample and a standard sample have similar absorbance values at the same excitation wavelength, it can be assumed that the number of photons being absorbed by both the samples is the same. This means that a ratio of the integrated fluorescence intensities of the test and standard sample measured at the same excitation wavelength will give a ratio of quantum yields. Since the quantum yield of the standard solution is known, the quantum yield for the unknown sample can be calculated.
A plot of integrated fluorescence intensity versus absorbance at the excitation wavelength is shown in Figure $31$. The slopes of the graphs shown in Figure $31$ are proportional to the quantum yields of the different examples. The quantum yield is then calculated using \ref{15}, where the subscript ST denotes the standard sample and X denotes the test sample; QY is the quantum yield and RI is the refractive index of the solvent.
$\frac{QY_{X}}{QY_{ST}}\ =\ \frac{slope_{X} (RI_{X})^{2}}{slope_{ST} (RI_{ST})^{2}} \label{15}$
Take the example of Figure $32$. If the same solvent is used in both the sample and the standard solution, the ratio of quantum yields of the sample to the standard is given by \ref{16}. If the quantum yield of the standard is known to be 0.95, then the quantum yield of the test sample is 0.523 or 52.3%.
$\frac{QY_{X}}{QY_{ST}}\ =\ \frac{1.41}{2.56} \label{16}$
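A minimal sketch of this comparative calculation is given below, using the slopes quoted for Figure $32$; the refractive index value is a hypothetical placeholder, entered only to show where the (RI)^2 correction of \ref{15} would appear (it cancels when the same solvent is used).

```python
# Minimal sketch of the comparative quantum-yield calculation of \ref{15}.
slope_sample   = 1.41
slope_standard = 2.56
ri_sample      = 1.49    # solvent refractive index (hypothetical)
ri_standard    = 1.49    # same solvent, so the RI ratio is 1
qy_standard    = 0.95    # known quantum yield of the standard

qy_sample = qy_standard * (slope_sample / slope_standard) * (ri_sample / ri_standard) ** 2
print(f"QY of test sample ~ {qy_sample:.3f}")   # ~0.523
```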
The assumption used in the comparative method is valid only in the Beer-Lambert law linear regime. Beer-Lambert law states that absorbance is directly proportional to the path length of light travelled within the sample, and concentration of the sample. The factors that affect the quantum yield measurements are the following:
• Concentration - Low concentrations should be used (absorbance < 0.2 a.u.) to avoid effects such as self quenching.
• Solvent - It is important to take into account the solvents used for the test and standard solutions. If the solvents used for both are the same then the comparison is trivial. However, if the solvents in the test and standard solutions are different, this difference needs to be accounted for. This is done by incorporating the solvent refractive indices in the ratio calculation.
• Standard Samples - The standard samples should be characterized thoroughly. In addition, the standard sample used should absorb at the excitation wavelength of the test sample.
• Sample Preparation - It is important that the cuvettes used are clean, scratch free and clear on all four sides. The solvents used must be of spectroscopic grade and should not absorb in the wavelength range.
• Slit Width - The slit widths for all measurements must be kept constant.
The quantum yield of the Group 12-16 semiconductor nanoparticles are affected by many factors such as the following.
• Surface Defects - The surface defects of semiconductor quantum dots occur in the form of unsatisfied valencies, resulting in unwanted (non-radiative) recombinations. These unwanted recombinations reduce the amount of energy released through radiative decay, and thus reduce the fluorescence.
• Surface Ligands - If the surface ligand coverage is 100%, there is a smaller chance of surface recombination occurring.
• Solvent Polarity - If the solvent and the ligand have similar polarities, the nanoparticles are better dispersed, reducing the loss of electrons through recombination.
Qualitative Information
Apart from quantum yield information, the relationship between the intensity of fluorescence emission and wavelength provides other useful qualitative information, such as the size distribution, the shape of the particles and the presence of surface defects.
As shown in Figure $32$, the shape of the plot of intensity versus wavelength is a Gaussian distribution. In Figure $32$, the full width at half maximum (FWHM) is given by the difference between the two extreme values of the wavelength at which the photoluminescence intensity is equal to half its maximum value. From the FWHM of the fluorescence intensity Gaussian distribution, it is possible to determine qualitatively the size distribution of the sample. For a Group 12-16 quantum dot sample, if the FWHM is greater than 30 nm, the system is very polydisperse and has a large size distribution. It is desirable for all practical applications for the FWHM to be less than 30 nm.
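A simple way to extract the FWHM from emission data is sketched below; the Gaussian spectrum generated here is synthetic and purely illustrative.

```python
# Minimal sketch: FWHM of an emission peak from intensity vs. wavelength data,
# taken as the separation of the two wavelengths at half the maximum intensity.
import numpy as np

wavelength = np.linspace(450, 650, 2001)                      # nm
intensity = np.exp(-0.5 * ((wavelength - 550.0) / 12.0)**2)   # synthetic peak

half_max = intensity.max() / 2.0
above = wavelength[intensity >= half_max]
fwhm = above.max() - above.min()
print(f"FWHM ~ {fwhm:.1f} nm")   # ~28 nm for a Gaussian with sigma = 12 nm
```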
From the FWHM of the emission spectra, it is also possible to qualitatively get an idea if the particles are spherical or shaped. During the synthesis of the shaped particles, the thickness of the rod or the arm of the tetrapod does not vary among the different particles, as much as the length of the rods or arms changes. The thickness of the arm or rod is responsible for the quantum effects in shaped particles. In the case of quantum dots, the particle is quantum confined in all dimensions. Thus, any size distribution during the synthesis of quantum dots greatly affects the emission spectra. As a result the FWHM of rods and tetrapods is much smaller as compared to a quantum dot. Hence, qualitatively it is possible to differentiate between quantum dots and other shaped particles.
Another indication of branched structures is the decrease in the intensity of fluorescence peaks. Quantum dots have very high fluorescence values as compared to branched particles, since they are quantum confined in all dimensions as compared to just 1 or 2 dimensions in the case of branched particles.
Fluorescence Spectra of Different Group 12-16 Semiconductor Nanoparticles
The emission spectra of all Group 12-16 semiconductor nanoparticles are Gaussian curves as shown in Figure $30$ and Figure $32$. The only difference between them is the band gap energy, and hence each of the Group 12-16 semiconductor nanoparticles fluoresce over different ranges of wavelengths.
Cadmium Selenide
Since its bulk band gap (1.74 eV, 712 nm) falls in the visible region, cadmium selenide (CdSe) is used in various applications such as solar cells, light emitting diodes, etc. Size-evolving emission spectra of cadmium selenide are shown in Figure $33$. Different sized CdSe particles have different colored fluorescence spectra. Since cadmium and selenium are known carcinogens, and nanoparticles are easily absorbed into the human body, there is some concern regarding these particles. However, CdSe coated with ZnS can overcome the harmful biological effects, making cadmium selenide nanoparticles one of the most popular Group 12-16 semiconductor nanoparticles.
A combination of the absorbance and emission spectra is shown in Figure $34$ for four different sized particles emitting green, yellow, orange, and red fluorescence.
Cadmium Telluride
Cadmium telluride (CdTe) has a band gap of 1.44 eV and thus absorbs in the infrared region. Size-evolving CdTe emission spectra are shown in Figure $35$.
Adding Shells to QDs
Capping a core quantum dot with a semiconductor material with a wider band gap than the core reduces the nonradiative recombination and results in brighter fluorescence emission. Quantum yields are affected by the presence of free surface charges, surface defects and crystal defects, which result in unwanted recombinations. The addition of a shell reduces the nonradiative transitions so that the majority of the electrons relax radiatively to the valence band. In addition, the shell also overcomes some of the surface defects.
CdSe-core/ZnS-shell systems exhibit a much higher quantum yield as compared to core-only CdSe quantum dots, as seen in Figure $36$.
Band Gap Measurement
In solid state physics a band gap, also called an energy gap, is an energy range in an ideal solid where no electron states can exist. As shown in Figure $37$, for an insulator or semiconductor the band gap generally refers to the energy difference between the top of the valence band and the bottom of the conduction band. This is equivalent to the energy required to free an outer shell electron from its orbit about the nucleus to become a mobile charge carrier, able to move freely within the solid material.
The band gap is a major factor determining the electrical conductivity of a solid. Substances with large band gaps are generally insulators (i.e., dielectrics), those with smaller band gaps are semiconductors, while conductors either have very small band gaps or no band gap (because the valence and conduction bands overlap, as shown in Figure $38$).
The theory of bands in solids is one of the most important steps in the comprehension of the properties of solid matter. The existence of a forbidden energy gap in semiconductors is an essential concept in order to be able to explain the physics of semiconductor devices. For example, the magnitude of the band gap of a solid determines the frequency or wavelength of the light which will be absorbed. Such a value is useful for photocatalysts and for the performance of a dye sensitized solar cell.
Nanocomposite materials are of interest to researchers the world over for various reasons. One driver for such research is the potential application in next-generation electronic and photonic devices. Particles of nanometer size exhibit unique properties such as quantum effects, short interface migration distances (and times) for photoinduced holes and electrons in photochemical and photocatalytic systems, and increased sensitivity in thin film sensors.
Measurement Methods
Electrical measurement method
For a p-n junction, the essential electrical characteristic is that it constitutes a rectifier, which allows the easy flow of a charge in one direction but restrains the flow in the opposite direction. The voltage-current characteristic of such a device can be described by the Shockley equation, \ref{17}, in which, I0 is the reverse bias saturation current, q the charge of the electron, k is Boltzmann’s constant, and T is the temperature in Kelvin.
$I\ =\ I_{0}(e^{qV/kT} - 1) \label{17}$
When the reverse bias is very large, the current I is saturated and equal to I0. This saturation current is the sum of several different contributions. They are diffusion current, generation current inside the depletion zone, surface leakage effects and tunneling of carriers between states in the band gap. In a first approximation at a certain condition, I0 can be interpreted as being solely due to minority carriers accelerated by the depletion zone field plus the applied potential difference. Therefore it can be shown that, \ref{18}, where A is a constant, Eg the energy gap (slightly temperature dependent), and γ an integer depending on the temperature dependence of the carrier mobility µ.
$I_{0}\ =\ AT^{(3\ +\ \gamma /2)}e^{-E_{g}(T)/kT} \label{18}$
A more advanced treatment shows that γ is defined by the relation \ref{19}.
$T \mu^{2} \ =\ T^{\gamma } \label{19}$
After substituting the value of I0 given by \ref{18} into \ref{17}, we take the natural (Napierian) logarithm of both sides and multiply by kT for large forward bias (qV > 3kT); thus, rearranging, we have \ref{20}.
$qV\ =\ E_{g}(T)\ +\ T[k\ ln(I/A)]\ -\ (3+\gamma /2)kT\ lnT \label{20}$
As ln T can be considered a slowly varying function in the 200 - 400 K interval, for a constant current, I, flowing through the junction a plot of qV versus temperature should approximate a straight line, and the intercept of this line with the qV axis is the required value of the band gap Eg extrapolated to 0 K. By plotting qVc, given by \ref{21}, instead of qV, we can get a more precise value of Eg.
$qV_{c}\ =\ qV\ +\ (3+\gamma /2)kT\ lnT \label{21}$
The value of γ depends on the temperature dependence of the carrier mobility µ, which is a very complex function of the particular material, doping and processing. In the 200 - 400 K range, one can estimate that the variation ΔEg produced by a change Δγ in the value of γ is given by \ref{22}. So a rough value of γ is sufficient for evaluating the correction. By taking the experimental data for the temperature dependence of the mobility µ, a mean value for γ can be found. Then the band gap energy Eg can be determined.
$\Delta E_{g}\ =\ 10^{-2}eV\Delta \gamma \label{22}$
The electrical circuit required for the measurement is very simple and the constant current can be provided by a voltage regulator mounted as a constant current source (see Figure $39$). The potential difference across the junction can be measured with a voltmeter. Five temperature baths were used: around 90 °C with hot water, room temperature water, a water-ice mixture, an ice-salt-water mixture and a mixture of dry ice and acetone. The result for GaAs is shown in Figure $40$. The plot of the corrected qV (qVc) versus temperature gives Eg = 1.56 ± 0.02 eV for GaAs. This may be compared with the literature value of 1.53 eV.
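The numerical part of this procedure reduces to a straight-line fit, sketched below. The qV(T) data points and the value of γ are hypothetical placeholders, not the measured GaAs data; the correction term is that of \ref{21}.

```python
# Minimal sketch of the electrical method: correct qV with the (3 + gamma/2) k T ln T
# term, fit qVc versus T to a straight line, and read the band gap from the
# intercept extrapolated to T = 0 K.
import numpy as np

k_eV = 8.617e-5                   # Boltzmann constant, eV/K
gamma = 2.0                       # rough estimate from mobility data (assumption)

T  = np.array([200.0, 250.0, 273.0, 300.0, 360.0])   # K
qV = np.array([1.38, 1.33, 1.31, 1.28, 1.22])        # eV, hypothetical readings

qVc = qV + (3 + gamma / 2.0) * k_eV * T * np.log(T)
slope, intercept = np.polyfit(T, qVc, 1)
print(f"Extrapolated band gap Eg(0 K) ~ {intercept:.2f} eV")
```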
Optical Measurement Method
The optical method can be described using the measurement of a specific example, e.g., hexagonal boron nitride (h-BN, Figure $41$). A UV-visible absorption spectrum was collected to investigate the optical energy gap of the h-BN film based on its optically induced transition.
For this study, a sample of h-BN was first transferred onto an optical quartz plate, and a blank quartz plate was used for the background as the reference substrate. Tauc's equation, \ref{23}, was used to determine the optical band gap Eg, where ε is the optical absorbance, λ is the wavelength and ω = 2πc/λ is the angular frequency of the incident radiation.
$\omega ^{2} \varepsilon \ =\ (\hbar \omega \ -\ E_{g})^{2} \label{23}$
As Figure $42$a shows, the absorption spectrum has one sharp absorption peak at 201 - 204 nm. On the basis of Tauc's formulation, the plot of ε^1/2/λ versus 1/λ should be a straight line in the absorption range; therefore, the intersection point with the x axis is 1/λg (λg is defined as the gap wavelength). The optical band gap can then be calculated from Eg = hc/λg. The plot in Figure $42$b shows the ε^1/2/λ versus 1/λ curve acquired from the thin h-BN film. For a sample of more than 10 layers, the calculated gap wavelength λg is about 223 nm, which corresponds to an optical band gap of 5.56 eV.
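The Tauc analysis itself is a simple linear fit, sketched below on a synthetic absorption edge constructed to have a gap wavelength of 223 nm; the data are placeholders, not the measured h-BN spectrum.

```python
# Minimal sketch of the Tauc plot: fit eps**0.5 / lambda against 1/lambda on the
# absorption edge, take the x-intercept as 1/lambda_g, and use Eg = h*c/lambda_g.
import numpy as np

wavelength = np.linspace(200.0, 230.0, 31)                # nm
lambda_g_true = 223.0
eps = (np.clip(1.0/wavelength - 1.0/lambda_g_true, 0, None) * wavelength)**2

x = 1.0 / wavelength
y = np.sqrt(eps) / wavelength
mask = y > 0                                              # rising absorption edge only
slope, intercept = np.polyfit(x[mask], y[mask], 1)

lambda_g = -slope / intercept                             # x-intercept is 1/lambda_g
E_g = 1239.84 / lambda_g                                  # eV
print(f"lambda_g ~ {lambda_g:.0f} nm, Eg ~ {E_g:.2f} eV")  # ~223 nm, ~5.56 eV
```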
Previous theoretical calculations of a single layer of h-BN give a band gap of about 6 eV. For 1 layer, 5 layer and thick (>10 layer) h-BN films, the measured gaps are about 6.0, 5.8 and 5.6 eV, respectively, which is consistent with the theoretical gap value. For thicker samples, the layer-layer interaction increases the dispersion of the electronic bands and tends to reduce the gap. From this example, we can see that the band gap is related to the size (here, the thickness) of the material; this is one of the most important features of nanomaterials.
Band Gap Measurements of Quantum Dots
A semiconductor is a material that has unique properties in the way it reacts to electrical current. A semiconductor’s ability to conduct an electrical current is intermediate between that of an insulator (such as rubber or glass) and a conductor (such as copper). However, the conductivity of a semiconductor material increases with increasing temperature, a behavior opposite to that of a metal. Semiconductors may also have a lower resistance to the flow of current in one direction than in the other.
Band Theory
The properties of semiconductors can best be understood by band theory, where the difference between conductors, semiconductors, and insulators can be understood by increasing separations between a valence band and a conduction band, as shown in Figure $43$. In semiconductors a small energy gap separates the valence band and the conduction band. This energy gap is smaller than that of insulators – which is too large for essentially any electrons from the valence band to enter the conduction band – and larger than that of conductors, where the valence and conduction bands overlap. At 0 K all of the electrons in a semiconductor lie in the valence band, but at higher temperatures some electrons will have enough energy to be promoted to the conduction band.
Carrier Generation and Recombination
In addition to the band structure of solids, the concept of carrier generation and recombination is very important to the understanding of semiconducting materials. Carrier generation and recombination is the process by which mobile charge carriers (electrons and electron holes) are created and eliminated. The valence band in semiconductors is normally very full and its electrons immobile, resulting in no flow as electrical current. However, if an electron in the valence band acquires enough energy to reach the conduction band, it can flow freely in the nearly empty conduction band. Furthermore, it will leave behind an electron hole that can flow as current exactly like a physical charged particle. The energy of an electron-electron hole pair is quantified in the form of a neutrally-charged quasiparticle called an exciton. For semiconducting materials, there is a characteristic separation distance between the electron and the hole in an exciton called the exciton Bohr radius. The exciton Bohr radius has large implications for the properties of quantum dots.
The process by which electrons gain energy and move from the valence to the conduction band is termed carrier generation, while recombination describes the process by which electrons lose energy and re-occupy the energy state of an electron hole in the valence band. Carrier generation is accompanied by the absorption of radiation, while recombination is accompanied by the emission of radiation.
Quantum Dots
In the 1980s, a new nanoscale (~1-10 nm) semiconducting structure was developed that exhibits properties intermediate between bulk semiconductors and discrete molecules. These semiconducting nanocrystals, called quantum dots, are small enough to be subjected to quantum effects, which gives them interesting properties and the potential to be useful in a wide variety of applications. The most important characteristic of quantum dots (QDs) is that they are highly tunable, meaning that the optoelectronic properties are dependent on the particle's size and shape. As Figure $44$ illustrates, the band gap in a QD is inversely related to its size, which produces a blue shift in emitted light as the particle size decreases. The highly tunable nature of QDs results not only from the inverse relationship between band gap size and particle size, but also from the ability to set the size of QDs and make QDs out of a wide variety of materials. The potential to produce QDs with properties tailored to fulfill a specific function has produced an enormous amount of interest in quantum dots (see the section on Optical Properties of Group 12-16 (II-VI) Semiconductor Nanoparticles).
Band Gap Measurements of QDs
As previously mentioned, QDs are small enough that quantum effects influence their properties. At sizes under approximately 10 nm, quantum confinement effects dominate the optoelectronic properties of a material. Quantum confinement results from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. As explained above, the distance between the electron and the hole within an exciton is called the exciton Bohr radius. In bulk semiconductors the exciton can move freely in all directions, but when the size of a semiconductor is reduced to only a few nanometers, quantum confinement effects occur and the band gap properties are changed. Confinement of the exciton in one dimension produces a quantum well, confinement in two dimensions produces a quantum wire, and confinement in all three dimensions produces a quantum dot.
Recombination occurs when an electron from a higher energy level relaxes to a lower energy level and recombines with an electron hole. This process is accompanied by the emission of radiation, which can be measured to give the band gap size of a semiconductor. The energy of the emitted photon in a recombination process of a QD can be modeled as the sum of the band gap energy, the confinement energies of the excited electron and the electron hole, and the bound energy of the exciton as show in \ref{24}.
$E\ =\ E_{bandgap}\ +\ E_{confinement}\ +\ E_{exciton} \label{24}$
The confinement energy can be modeled as a simple particle in a one-dimensional box problem and the energy levels of the exciton can be represented as the solutions to the equation at the ground level (n = 1) with the mass replaced by the reduced mass. The confinement energy is given by \ref{25}, where ħ is the reduced Plank’s constant, µ is the reduced mass, and a is the particle radius. me and mh are the effective masses of the electron and the hole, respectively.
$E_{confinement}\ =\ \frac{\hbar ^{2} \pi ^{2}}{2a^{2}}(\frac{1}{m_{e}}+\frac{1}{m_{h}})=\ \frac{\hbar ^{2}\pi ^{2}}{2\mu a^{2}} \label{25}$
The bound exciton energy can be modeled by using the Coulomb interaction between the electron and the positively charged electron-hole, as shown in \ref{26}. The negative energy is proportional to Rydberg’s energy (Ry) (13.6 eV) and inversely proportional to the square of the size-dependent dielectric constant, εr. µ and me are the reduced mass and the effective mass of the electron, respectively.
$E_{exciton}\ =\ -R_{y} \frac{1}{\varepsilon _{r}^{2}}\frac{\mu }{m_{e}}\ =\ -R_{y}^{*} \label{26}$
Using these models and spectroscopic measurements of the emitted photon energy (E), it is possible to measure the band gap of QDs.
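As a rough numerical illustration of \ref{24}, \ref{25} and \ref{26}, the sketch below evaluates the emitted photon energy for a CdSe-like dot. The effective masses, dielectric constant and radius are assumed illustrative values, the mass ratio in the Rydberg term is taken relative to the free-electron rest mass, and the simple particle-in-a-box confinement term is known to overestimate the shift for very small dots.

```python
# Minimal sketch: E = E_bandgap + E_confinement + E_exciton for a quantum dot.
import numpy as np

hbar = 1.0546e-34     # J s
m0   = 9.109e-31      # free-electron rest mass, kg
eV   = 1.602e-19      # J per eV

E_bulk = 1.74                          # bulk band gap of CdSe, eV
m_e, m_h = 0.13 * m0, 0.45 * m0        # effective masses (assumed values)
eps_r = 10.0                           # dielectric constant (assumed value)
a = 2.0e-9                             # particle radius, m

mu = m_e * m_h / (m_e + m_h)                                  # reduced mass
E_conf = (hbar**2 * np.pi**2) / (2 * a**2) * (1/m_e + 1/m_h) / eV
E_exc  = -13.6 * (1.0 / eps_r**2) * (mu / m0)                 # scaled Rydberg, eV
print(f"Emitted photon energy ~ {E_bulk + E_conf + E_exc:.2f} eV")
```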
Photoluminescence Spectroscopy
Photoluminescence (PL) Spectroscopy is perhaps the best way to measure the band gap of QDs. PL spectroscopy is a contactless, nondestructive method that is extremely useful in measuring the separation between different energy levels. PL spectroscopy works by directing light onto a sample, where energy is absorbed by electrons in the sample and elevated to a higher energy-state through a process known as photo-excitation. Photo-excitation produces the electron-electron hole pair. The recombination of the electron-electron hole pair then occurs with the emission of radiation (light). The energy of the emitted light (photoluminescence) relates to the difference in energy levels between the lower (ground) electronic state and the higher (excited) electronic state. This amount of energy is measured by PL spectroscopy to give the band gap size.
PL spectroscopy can be divided into two different categories: fluorescence and phosphorescence. It is fluorescent PL spectroscopy that is most relevant to QDs. In fluorescent PL spectroscopy, an electron is raised from the ground state to some elevated excited state. The electron then relaxes (loses energy) to the lowest electronic excited state via a non-radiative process. This non-radiative relaxation can occur by a variety of mechanisms, but QDs typically dissipate this energy via vibrational relaxation. This form of relaxation causes vibrations in the material, which effectively heat the QD without emitting light. The electron then decays from the lowest excited state to the ground state with the emission of light. This means that the energy of light absorbed is greater than the energy of the light emitted. The process of fluorescence is schematically summarized in the Jablonski diagram in Figure $45$.
Instrumentation
A schematic of a basic design for measuring fluorescence is shown in Figure $46$. The requirements for PL spectroscopy are a source of radiation, a means of selecting a narrow band of radiation, and a detector. Unlike optical absorbance spectroscopy, the detector must not be placed along the axis of the sample, but rather at 90º to the source. This is done to minimize the intensity of transmitted source radiation (light scattered by the sample) reaching the detector. Figure $46$ shows two different ways of selecting the appropriate wavelength for excitation: a monochromator and a filter. In a fluorimeter the excitation and emission wavelengths are selected using absorbance or interference filters. In a spectrofluorimeter, the excitation and emission wavelengths are selected by a monochromator.
Excitation vs. Emission Spectra
PL spectra can be recorded in two ways: by measuring the intensity of emitted radiation as a function of the excitation wavelength, or by measuring the emitted radiation as a function of the emission wavelength. In an excitation spectrum, a fixed wavelength is used to monitor emission while the excitation wavelength is varied. An excitation spectrum is nearly identical to a sample’s absorbance spectrum. In an emission spectrum, a fixed wavelength is used to excite the sample and the intensity of the emitted radiation is monitored as a function of wavelength.
Optical Absorbance Spectroscopy
PL spectroscopy data is frequently combined with optical absorbance spectroscopy data to produce a more detailed description of the band gap size of QDs. UV-visible spectroscopy is a specific kind of optical absorbance spectroscopy that measures the transitions from ground state to excited state. This is the opposite of PL spectroscopy, which measures the transitions from excited states to ground states. UV-visible spectroscopy uses light in the visible or ultraviolet range to excite electrons and measures the absorbance of radiation verses wavelength. A sharp peak in the UV-visible spectrum indicates the wavelength at which the sample best absorbs radiation.
As mentioned before, an excitation spectrum is a graph of emission intensity versus excitation wavelength. This spectrum often looks very similar to the absorbance spectrum and in some instances they are the exact same. However, slight differences in the theory behind these techniques do exist. Broadly speaking, an absorption spectrum measures wavelengths at which a molecule absorbs lights, while an excitation spectrum determines the wavelength of light necessary to produce emission or fluorescence from the sample, as monitored at a particular wavelength. It is quite possible then for peaks to appear in the absorbance spectrum that would not occur on the PL excitation spectrum.
Instrumentation
A schematic diagram for a UV-vis spectrometer is shown in Figure $47$. Like PL spectroscopy, the instrument requires a source of radiation, a means of selecting a narrow band of radiation (monochromator), and a detector. Unlike PL spectroscopy, the detector is placed along the same axis as the sample, rather than being directed 90º away from it.
Sample Spectra
A UV-Vis spectrum, such as the one shown in Figure $48$, can be used not only to determine the band gap of QDs, but to also determine QD size. Because QDs absorb different wavelengths of light based on the size of the particles, UV-Vis (and PL) spectroscopy can provide a convenient and inexpensive way to determine the band gap and/or size of the particle by using the peaks on the spectrum.
The highly tunable nature of QDs, as well as their high extinction coefficient, makes QDs well-suited to a large variety of applications and new technologies. QDs may find use as inorganic fluorophores in biological imaging, as tools to improve efficiency in photovoltaic devices, and even as implementations of qubits in quantum computers. Knowing the band gap of QDs is essential to understanding how QDs may be used in these technologies. PL and optical absorbance spectroscopies provide ideal ways of obtaining this information.
Surface area is a property of immense importance in the nano-world, especially in the area of heterogeneous catalysis. A solid catalyst works with its active sites binding to the reactants, and hence for a given active site reactivity, the higher the number of active sites available, the faster the reaction will occur. In heterogeneous catalysis, if the catalyst is in the form of spherical nanoparticles, most of the active sites are believed to be present on the outer surface. Thus it is very important to know the catalyst surface area in order to get a measure of the reaction time. One expresses this in terms of volume specific surface area, i.e., surface area/volume although in industry it is quite common to express it as surface area per unit mass of catalyst, e.g., m2/g.
Overview of NMR
Nuclear magnetic resonance (NMR) is the study of the response of atomic nuclei to an external magnetic field. Many nuclei have a net magnetic moment (I ≠ 0) along with an angular momentum, where I is the spin quantum number of the nucleus. In the presence of an external magnetic field, such a nucleus precesses around the field, and the collective precession of all the nuclei produces a measurable signal. NMR can be used on any nucleus with an odd number of protons or neutrons or both, such as the nuclei of hydrogen (1H), carbon (13C), phosphorus (31P), etc. Hydrogen has a relatively large magnetic moment (μ = 14.1 x 10^-27 J/T) and hence it is used in NMR logging and NMR rock studies. The hydrogen nucleus consists of a single positively charged proton that can be seen as a loop of current generating a magnetic field. It may be considered as a tiny bar magnet with the magnetic axis along the spin axis itself, as shown in Figure $1$. In the absence of any external forces, a sample containing hydrogen alone will have the individual magnetic moments randomly aligned, as shown in Figure $2$.
Advantages of NMR over BET Technique
BET measurements follow the BET (Brunauer-Emmett-Teller) adsorption isotherm of a gas on a solid surface. Adsorption experiments with a gas of known composition can help determine the specific surface area of the solid particle. This technique has long been the main method of surface area analysis used industrially. However, BET requires a lengthy gas-adsorption step, while, as shown in this module, NMR can give results in times averaging around 30 minutes, depending on the sample. BET also requires careful sample preparation with the sample in dry powder form, whereas NMR can accept samples in the liquid state as well.
Calculations
From an atomic standpoint, T1 relaxation occurs when a precessing proton transfers energy to its surroundings as the proton relaxes back from its higher energy state to its lower energy state. With T2 relaxation, in addition to this energy transfer there is also dephasing, and hence T2 is less than T1 in general. For solid suspensions, there are three independent relaxation mechanisms involved:
1. Bulk fluid relaxation which affects both T1 and T2 relaxation.
2. Surface relaxation, which affects both T1 and T2 relaxation.
3. Diffusion in the presence of the magnetic field gradients, which affects only T2 relaxation
These mechanisms act in parallel so that the net effects are given by \ref{1} and \ref{2}.
$\frac{1}{T_{2}}=\frac{1}{T_{2, bulk}}\ +\ \frac{1}{T_{2,surface}}+\frac{1}{T_{2,diffusion}} \label{1}$
$\frac{1}{T_{1}} = \frac{1}{T_{1, bulk}}\ +\ \frac{1}{T_{1,surface}} \label{2}$
The relative importance of each of these terms depends on the specific scenario. For the case of most solid suspensions in liquid, the diffusion term can be ignored by having a relatively uniform external magnetic field that eliminates magnetic gradients. Theoretical analysis has shown that the surface relaxation terms can be written as \ref{3} and \ref{4}.
$\frac{1}{T_{1,surface}} = \rho _{1} (\frac{S}{V})_{particle} \label{3}$
$\frac{1}{T_{2,surface}} = \rho_{2} (\frac{S}{V})_{particle} \label{4}$
Thus one can use a T1 or T2 relaxation experiment to determine the specific surface area. We shall consider the T2 technique further; combining the bulk and surface terms gives \ref{5}.
$\frac{1}{T_{2}} = \frac{1}{T_{2,bulk}}+ \rho_{2}(\frac{S}{V})_{particle} \label{5}$
One can determine T2 by spin-echo measurements for a series of samples of known S/V values and prepare a calibration chart as shown in Figure $3$. With the intercept as 1/T2,bulk and the slope as ρ2, one can then find the specific surface area of an unknown sample of the same material.
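The calibration step amounts to a linear fit of 1/T2 against S/V, as sketched below; all numbers are hypothetical placeholders used only to show the shape of the calculation.

```python
# Minimal sketch: fit 1/T2 vs. S/V for samples of known surface-to-volume ratio,
# giving 1/T2,bulk (intercept) and the surface relaxivity rho2 (slope), then
# invert the fit for an unknown sample of the same material.
import numpy as np

S_over_V = np.array([1.0e7, 2.0e7, 4.0e7, 8.0e7])    # 1/m, known samples
T2       = np.array([1.89, 1.32, 0.82, 0.47])        # s, measured (hypothetical)

rho2, inv_T2_bulk = np.polyfit(S_over_V, 1.0 / T2, 1)
print(f"rho2 ~ {rho2:.2e} m/s, T2,bulk ~ {1.0/inv_T2_bulk:.2f} s")

T2_unknown = 2.117                                    # s
S_over_V_unknown = (1.0 / T2_unknown - inv_T2_bulk) / rho2
print(f"S/V of unknown sample ~ {S_over_V_unknown:.2e} 1/m")
```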
Sample Preparation and Experimental Setup
The sample must be soluble in the solvent. For proton NMR, about 0.25-1.00 mg/mL are needed depending on the sensitivity of the instrument.
The solvent properties will have an impact on some or all of the spectrum. Solvent viscosity affects the obtainable resolution, while solvents such as water or ethanol have exchangeable protons that will prevent the observation of such exchangeable protons present in the solute itself. Solvents must be chosen such that the temperature dependence of solute solubility is low in the operating temperature range. Solvents containing aromatic groups, like benzene, can cause shifts in the observed spectrum compared to non-aromatic solvents.
NMR tubes are available in a wide range of specifications depending on the specific scenario. The tube specifications need to be extremely narrow while operating with high strength magnetic fields. The tube needs to be kept extremely clean and free from dust and scratches to obtain good results, irrespective of the quality of the tube. Tubes can be cleaned without scratching by rinsing out the contents and soaking them in a degreasing solution, and by avoiding regular glassware cleaning brushes. After soaking for a while, rinse with distilled water and acetone and dry the tube by blowing filtered nitrogen gas through a pipette or by using a swab of cotton wool.
Filter the sample solution by using a Pasteur pipette stuffed with a piece of cotton wool at the neck. Any suspended material like dust can cause changes in the spectrum. When working with dilute aqueous solutions, sweat itself can have a major effect and so gloves are recommended at all times.
Sweat contains mainly water, minerals (sodium 0.9 g/L, potassium 0.2 g/L, calcium 0.015 g/L, magnesium 0.0013 g/L and other trace elements like iron, nickel, zinc, copper, lead and chromium), as well as lactate and urea. In presence of a dilute solution of the sample, the proton-containing substances in sweat (e.g., lactate and urea) can result in a large signal that can mask the signal of the sample.
The NMR probe is the most critical piece of equipment as it contains the apparatus that must detect the small NMR signals from the sample without adding a lot of noise. The size of the probe is given by the diameter of the NMR tube it can accommodate with common sizes 5, 10 and 15 mm. A larger size probe can be used in the case of less sensitive samples in order to get as much solute into the active zone as possible. When the sample is available in less quantity, use a smaller size tube to get an intrinsically higher sensitivity.
NMR Analysis
A result sheet of T2 relaxation has the plot of magnetization versus time, which will be linear in a semi-log plot as shown in Figure $4$. Fitting the decay to $M(t)\ =\ M_{0}e^{-t/T_{2}}$, we can find T2, and one can thus prepare a calibration plot of 1/T2 versus S/V for known samples.
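Extracting T2 from the decay is a one-line fit on the semi-log data, as in the sketch below; the decay curve here is synthetic, generated with T2 set to 2.117 s.

```python
# Minimal sketch: extract T2 from an echo-decay curve M(t) = M0 * exp(-t/T2)
# by a linear fit of ln(M) versus time (the straight line of the semi-log plot).
import numpy as np

t = np.linspace(0.0, 6.0, 25)            # s
M = 100.0 * np.exp(-t / 2.117)           # synthetic magnetization decay

slope, intercept = np.polyfit(t, np.log(M), 1)
T2_fit = -1.0 / slope
print(f"T2 ~ {T2_fit:.3f} s")            # recovers ~2.117 s
```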
Limitations of the T2 Technique
The following are a few of the limitations of the T2 technique:
• One cannot always guarantee the absence of magnetic field gradients, in which case the T1 relaxation technique has to be used. However, this takes much longer to perform than the T2 relaxation.
• The sample or solvent must contain nuclei with an odd number of protons or neutrons (i.e., non-zero spin).
• The solid suspension should not have any para- or ferromagnetic substance (for instance, organics like hexane tend to have dissolved O2 which is paramagnetic).
• The need to prepare a calibration chart of the material with known specific surface area.
Example of Usage
A study of colloidal silica dispersed in water provides a useful example. Figure $5$ shows a representation of an individual silica particle.
A series of dispersions in DI water at different concentrations was made and the surface area calculated. The T2 relaxation technique was performed on all of them, with a typical T2 plot shown in Figure $6$; T2 was recorded at 2117 milliseconds for this sample.
A calibration plot was prepared with 1/T2 – 1/T2,bulk as ordinate (the y-axis coordinate) and S/V as abscissa (the x-axis coordinate). This is called the surface relaxivity plot and is illustrated in Figure $7$.
Accordingly, for the colloidal dispersion of silica in DI water, the best fit resulted in \ref{6}, from which one can see that the value of the surface relaxivity, 2.3 x 10^-8, is in close accordance with values reported in the literature.
$\frac{1}{T_{2}}\ -\ \frac{1}{T_{2,bulk}}\ =\ 2.3 \times 10^{-8} (\frac{S}{V})\ -\ 0.0051 \label{6}$
The T2 technique has been used to find the pore-size distribution of water-wet rocks. Information of the pore size distribution helps petroleum engineers model the permeability of rocks from the same area and hence determine the extractable content of fluid within the rocks.
Usage of NMR for surface area determination has begun to take shape, with a company, Xigo nanotools, having developed an instrument called the Acorn Area™ to measure the surface area of a suspension of aluminum oxide. The results obtained from the instrument match closely with results reported by other techniques in the literature. Thus the T2 NMR technique presents a strong case for obtaining specific surface areas of nanoparticle suspensions.
Graphene is a quasi-two-dimensional material, which comprises layers of carbon atoms arranged in six-member rings (Figure \(1\)). Since being discovered by Andre Geim and co-workers at the University of Manchester, graphene has become one of the most exciting topics of research because of its distinctive band structure and physical properties, such as the observation of a quantum Hall effect at room temperature, a tunable band gap, and a high carrier mobility.
Graphene can be characterized by many techniques including atomic force microscopy (AFM), transmission electron microscopy (TEM) and Raman spectroscopy. AFM can be used to determine the number of layers of graphene, and TEM images can show the structure and morphology of the graphene sheets. In many ways, however, Raman spectroscopy is a much more important tool for the characterization of graphene. First of all, Raman spectroscopy is a simple tool and requires little sample preparation. What’s more, Raman spectroscopy can not only be used to determine the number of layers, but can also identify whether the structure of graphene is perfect, and whether nitrogen, hydrogen or other functionalization is successful.
Raman Spectrum of Graphene
Raman spectroscopy is a useful technique for characterizing sp2- and sp3-hybridized carbon atoms, including those in graphite, fullerenes, carbon nanotubes, and graphene. Single, double, and multi-layer graphenes have also been differentiated by their Raman fingerprints.
Figure \(2\) shows a typical Raman spectrum of N-doped single-layer graphene. The D-mode, appears at approximately 1350 cm-1, and the G-mode appears at approximately 1583 cm-1. The other Raman modes are at 1620 cm-1 (D’- mode), 2680 cm-1 (2D-mode), and 2947 cm-1 (D+G-mode).
The G-band
The G-mode is at about 1583 cm-1, and is due to the E2g mode at the Γ-point. The G-band arises from the stretching of the C-C bond in graphitic materials, and is common to all sp2 carbon systems. The G-band is highly sensitive to strain effects in sp2 systems, and thus can be used to probe modification on the flat surface of graphene.
Disorder-induced D-band and D'-band
The D-mode is caused by a disordered structure in graphene. The presence of disorder in sp2-hybridized carbon systems results in resonance Raman spectra, and thus makes Raman spectroscopy one of the most sensitive techniques to characterize disorder in sp2 carbon materials. As is shown by a comparison of Figure \(2\) and Figure \(3\), there is no D peak in the Raman spectrum of graphene with a perfect structure.
If there are some randomly distributed impurities or surface charges in the graphene, the G-peak can split into two peaks, G-peak (1583 cm-1) and D’-peak (1620 cm-1). The main reason is that the localized vibrational modes of the impurities can interact with the extended phonon modes of graphene resulting in the observed splitting.
The 2D-band
All kinds of sp2 carbon materials exhibit a strong peak in the range 2500 - 2800 cm-1 in their Raman spectra. Together with the G-band, this peak is a Raman signature of graphitic sp2 materials and is called the 2D-band. The 2D-band is a second-order two-phonon process and exhibits a strong frequency dependence on the excitation laser energy.
What’s more, the 2D band can be used to determine the number of layers of graphene. This is mainly because in multi-layer graphene the shape of the 2D band is quite different from that in single-layer graphene. As shown in Figure \(4\), the 2D band in single-layer graphene is much more intense and sharper as compared to the 2D band in multi-layer graphene.
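In practice, judging the layer number from a spectrum comes down to comparing the 2D and G peak heights and the width of the 2D band. The sketch below does this on a synthetic single-layer-like spectrum; the window positions are typical literature values, not fitted constants, and the spectrum itself is a placeholder.

```python
# Minimal sketch: pull the G and 2D peak intensities and the 2D-band FWHM out of
# a Raman spectrum of graphene.
import numpy as np

shift = np.linspace(1200, 3000, 3601)                         # Raman shift, cm^-1
spectrum = (1.0 * np.exp(-0.5 * ((shift - 1583) / 12)**2)     # G band
            + 2.0 * np.exp(-0.5 * ((shift - 2680) / 15)**2))  # 2D band

def peak_height(lo, hi):
    window = (shift >= lo) & (shift <= hi)
    return spectrum[window].max()

I_G, I_2D = peak_height(1500, 1650), peak_height(2600, 2800)

window = (shift >= 2600) & (shift <= 2800)
half = spectrum[window] >= I_2D / 2.0
fwhm_2D = shift[window][half].max() - shift[window][half].min()
print(f"I_2D/I_G ~ {I_2D/I_G:.1f}, 2D FWHM ~ {fwhm_2D:.0f} cm^-1")
```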
Characterization of nanoparticles in general, and carbon nanotubes in particular, remains a technical challenge even though the chemistry of covalent functionalization has been studied for more than a decade. It has been noted by several researchers that the characterization of products represents a constant problem in nanotube chemistry. A systematic tool or suite of tools is needed for adequate characterization of chemically functionalized single-walled carbon nanotubes (SWNTs), and is necessary for declaring success or failure in functionalization trials.
So far, a wide range of techniques have been applied to characterize functionalized SWNTs: infrared (IR), Raman, and UV/visible spectroscopies, thermogravimetric analysis (TGA), atomic force microscopy (AFM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), etc. A summary of the attributes of each characterization method is given in Table \(1\).
Table \(1\) Common characterization methodology for functionalized SWNTs.
Method | Sample | Information | Limitations
TGA | solid | functionalization ratio | no evidence for covalent functionalization, not specific
XPS | solid | elements, functionalization ratio | no evidence of covalent functionalization, not specific, quantification complicated
Raman | solid | sp3 indicated by D mode | not specific, quantification not reliable
Infrared (IR) | solid for ATR-IR, or solution | substituent groups | no direct evidence for covalent functionalization, quantification not possible
UV/Visible | solution | sidewall functionalization | not specific or quantitative, needs highly dispersed sample
Solution NMR | solution | substituents | no evidence of covalent functionalization, requires high solubility of sample
Solid state NMR | solid | substituents, sp3 carbons, molecular motions, quantification at high level of functionalization | high functionalization needed, long time for signal acquisition, quantification not available for samples with protons on side chains
AFM | solid on substrate | topography | only a small portion of sample characterized, no evidence of covalent functionalization, no chemical identity
TEM | solid on substrate | image of sample distribution and dispersion | only a small portion of sample characterized, no evidence of covalent functionalization, no chemical identity, dispersion information complicated
STM | solid on substrate | distribution | no chemical identity of functional groups, small portion of sample, conductive sample only
Elemental and Physical Analysis
Thermogravimetric Analysis (TGA)
Thermogravimetric analysis (TGA) is the most widely used method to determine the level of sidewall functionalization. Most functional groups are labile or decompose upon heating, while the SWNTs are stable up to 1200 °C under an Ar atmosphere; the weight loss at 800 °C under Ar is therefore often used to determine the functionalization ratio by this indirect method. Unfortunately, quantification can be complicated by the presence of multiple functional groups. Also, TGA does not provide direct evidence for covalent functionalization since it cannot differentiate between covalent attachment and physical adsorption.
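As a simple illustration of this indirect calculation, the sketch below converts a TGA weight loss into an approximate number of sidewall carbons per functional group, assuming a single type of substituent of known molar mass and that all of the mass lost comes from those groups. The numbers used are hypothetical, not taken from any specific experiment.

```python
# Hypothetical example: estimate the degree of sidewall functionalization from
# a TGA weight loss, assuming one kind of substituent and that all of the
# weight lost by 800 degrees C under Ar comes from those groups.
M_C = 12.011          # g/mol, carbon of the nanotube framework
M_group = 155.0       # g/mol, molar mass of the (assumed) functional group
weight_loss = 0.18    # fraction of total mass lost (18 %)

moles_group = weight_loss / M_group          # mol of groups per g of sample
moles_C = (1.0 - weight_loss) / M_C          # mol of sidewall C per g of sample
carbons_per_group = moles_C / moles_group

print(f"Approximately 1 functional group per {carbons_per_group:.0f} sidewall carbons")
```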
X-ray Photoelectron Spectroscopy (XPS)
XPS confirms the presence of different elements in functionalized SWNTs. This is useful for identification of heteroatom elements such as F and N, and XPS can then be used indirectly for quantification when simple substituent groups are present. Deconvolution of XPS spectra is useful for studying fine structures on SWNTs. However, the overlapping of binding energies in the spectrum complicates quantification.
Spectroscopy
Raman Spectroscopy
Raman spectroscopy is very informative and important for characterizing functionalized SWNTs. The tangential G mode (ca. 1550 – 1600 cm-1) is characteristic of sp2 carbons on the hexagonal graphene network. The D-band, the so-called disorder mode (found at ca. 1295 cm-1), appears due to disruption of the hexagonal sp2 network of SWNTs. The D-band has been widely used to characterize functionalized SWNTs and to argue that functionalization is covalent and occurs at the sidewalls. However, the observation of a D band in Raman can also be related to the presence of defects such as vacancies, 5-7 pairs, or dopants. Thus, using Raman to provide evidence of covalent functionalization needs to be done with caution. In particular, the use of Raman spectroscopy for a determination of the degree of functionalization is not reliable.
It has been shown that quantification with Raman is complicated by the distribution of functional groups on the sidewall of SWNTs. For example, when fluorinated SWNTs (F-SWNTs) are functionalized with thiol- or thiophene-terminated moieties, TGA shows that they have similar levels of functionalization. However, their relative D:G intensity ratios in the Raman spectrum are quite different. The use of sulfur substituents allows gold nanoparticles of 5 nm diameter to be attached as a “chemical marker” for direct imaging of the distribution of functional groups. AFM and STM suggest that the functional groups of the thiol-SWNTs are grouped together, while the thiophene groups are widely distributed on the sidewall of the SWNTs. Thus the difference in the D:G ratio is due not to a significant difference in substituent concentration but to the substituent distribution.
Infrared Spectroscopy
IR spectroscopy is useful in characterizing functional groups bound to SWNTs. A variety of organic functional groups on the sidewalls of SWNTs have been identified by IR, such as COOH(R), -CH2, -CH3, -NH2, -OH, etc. However, it is difficult to get direct functionalization information from IR spectroscopy. The C-F group has been identified by IR in F-SWNTs, but the C-C, C-N, and C-O groups associated with sidewall functionalization have not been observed in the appropriately functionalized SWNTs.
UV/Visible Spectroscopy
UV/visible spectroscopy is perhaps the most accessible technique that provides information about the electronic states of SWNTs, and hence functionalization. The absorption spectrum shows bands at ca. 1400 nm and 1800 nm for pristine SWNTs. A complete loss of such structure is observed after chemical alteration of the SWNT sidewalls. However, such information is not quantitative and also does not show what type of functional moiety is on the sidewall of the SWNTs.
Nuclear Magnetic Resonance
NMR can be considered a “new” characterization technique as far as SWNTs are concerned. Solution state NMR is of limited use for SWNT characterization because low solubility and slow tumbling of the SWNTs result in broad spectra. Despite this issue, solution 1H NMR spectra have been reported for SWNTs functionalized by carbenes, nitrenes, and azomethine ylides because of the high solubility of the derivatized SWNTs. However, proof of covalent functionalization cannot be obtained from 1H NMR. As an alternative, solid state 13C NMR has been employed to characterize several functionalized SWNTs and has allowed the observation of sidewall organic functional groups, such as carboxylic and alkyl groups. However, direct evidence of sp3 carbons on the sidewall of SWNTs, which would provide information on covalent functionalization, has been lacking.
Solid state 13C NMR has been successfully employed in the characterization of F-SWNTs through the direct observation of the sp3 C-F carbons on the sidewall of SWNTs. This methodology has been transferred to more complicated systems; however, it has been found that a longer side chain length increases the ease of observing sp3 C-X sidewall carbons.
Solid state NMR is a potentially powerful technique for characterizing functionalized SWNTs because molecular dynamics information can also be obtained. The observation that higher side chain mobility is achieved with a longer side chain length offers a method of exploring functional group conformation. In fact, there have been reports using solid state NMR to study the molecular mobility of functionalized multi-walled carbon nanotubes.
Microscopy
AFM, TEM and STM are useful imaging techniques to characterize functionalized SWNTs. As techniques, they are routinely used to provide an “image” of an individual nanoparticle, as opposed to an average of all the particles.
Atomic Force Microscopy
AFM shows the morphology of the surface of SWNTs. The height profile in AFM is often used to show the presence of functional groups on the sidewall of SWNTs. Individual SWNTs can be probed by AFM, which sometimes provides information on the dispersion and exfoliation of bundles. Measurements of heights along an individual SWNT can be correlated with the substituent group, i.e., the larger the alkyl chain of a sidewall substituent, the greater the height measured. AFM does not distinguish whether those functional groups are covalently attached or physically adsorbed on the surface of the SWNTs.
Transmission Electron Microscopy
TEM can be used to directly image SWNTs and at high resolution clearly shows the sidewalls of individual SWNTs. However, the resolution of TEM is not sufficient to directly observe covalent attachment of chemical modification moieties, i.e., to differentiate between sp2 and sp3 carbon atoms. TEM can be used to provide information on the effect of functionalization on the dispersion and exfoliation of ropes.
Samples are usually prepared from very dilute suspensions of SWNTs, and the sample needs to be very homogeneous to give reliable data. As with AFM, TEM shows only a very small portion of the sample, so using these techniques to characterize functionalized SWNTs and evaluate the dispersion of samples in solvents needs to be done with caution.
Scanning Tunneling Microscopy
STM offers much insight into the structure and surface of functionalized SWNTs. STM measures the electronic structure, while topographical information can sometimes be inferred indirectly from STM images. STM has been used to characterize F-SWNTs, gold-marked SWNTs, and organically functionalized SWNTs. The distribution of functional groups can be inferred from STM images since the location of a substituent alters the localized electronic structure of the tube. STM images the position/location of chemical changes to the SWNT structure. The band-like structure of F-SWNTs was first disclosed by STM.
STM has the same problem that is inherent with AFM and TEM: when a small sample size is examined, the result may not be statistically relevant. Also, the chemical identity of the features on SWNTs cannot be determined by STM; rather, they have to be identified by spectroscopic methods such as IR or NMR. A difficulty with STM imaging is that the sample has to be conductive, thus deposition of the SWNTs onto a gold (or similar) surface is necessary.
Electrospray-differential mobility analysis (ES-DMA) is an analytical technique that first uses an electrospray to aerosolize particles and then uses DMA to characterize their electrical mobility at ambient conditions. This versatile tool can be used to quantitatively characterize biomolecules and nanoparticles from 0.7 to 800 nm. In the 1980s, it was discovered that ES could be used for producing aerosols of biomacromolecules. The predecessor of the DMA was developed by Hewitt in 1957 to analyze the charging of small particles. The modified DMA, which is a type of ion mobility analyzer, was developed by Knutson and Whitby (Figure $1$) in 1975 and was later commercialized. Among the several designs, the cylindrical DMA has become the standard design and has been used for the generation of monodisperse aerosols, as well as for the classification of polydisperse aerosols.
The first integration of ES with DMA occurred in 1996 when this technique was used to determine the size of different globular proteins. DMA was refined over the past decade to be used in a wide range of applications for the characterization of polymers, viruses, bacteriophages, and nanoparticle-biomolecule conjugates. Although numerous publications have reported the use of ES-DMA in medicinal and pharmaceutical applications, this module describes the general principles of the technique and its application in the analysis of gold nanoparticles.
How Does ES-DMA Function?
ES-DMA consists of an electrospray source (ES) that aerosolizes bionanoparticles and a class of ion mobility analyzer (DMA) that measures their electrical mobility by balancing electrical and drag forces on the particles. DMA continuously separates particles based on their charge to size ratio. A schematic of the experimental setup for ES-DMA is shown in Figure $2$ for the analysis of gold nanoparticles.
The process of analyzing particles with ES-DMA involves four steps:
First, the analyte dissolved in a volatile buffer such as ammonium acetate [NH4][O2CCH3] is placed inside a pressure chamber. Then, the solution is delivered to the nozzle through a fused silica capillary to generate multiply charged droplets. ES nebulizers produce droplets of 100-400 nm in diameter but they are highly charged.
In the next step, the droplets are mixed with air and carbon dioxide (CO2) and are passed through the charge reducer or neutralizer where the solvent continues to evaporate and charge distribution decreases. The charge reducer is an ionizing α radiation source such as Po210 that ionizes the carrier gas and reduces the net charges on the particles to a Fuchs’-Boltzmann distribution. As a result, the majority of the droplets contain single net charge particles that pass directly to the DMA. DMA separates positively or negatively charged particles by applying a negative or positive potential. Figure $3$ shows a single channel design of cylindrical DMA that is composed of two concentric electrodes between which a voltage is applied. The inner electrode is maintained at a controlled voltage from 1V to 10 kV, whereas the outer electrode is electrically grounded.
In the third step, the aerosol flow (Qa) enters through a slit that is adjacent to one electrode, and the sheath air (air or N2) flow (Qs) is introduced to separate the aerosol flow from the other electrode. After a voltage is applied between the inner and outer electrodes, an electric field is formed and charged particles with a specific electrical mobility are attracted to a charged collector rod. The position of a charged particle along the length of the collector depends on its electrical mobility (Zp), the fluid flow rate, and the DMA geometry. Particles with a high electrical mobility are collected in the upper part of the rod (particles a and b, Figure $4$), while particles with a low electrical mobility are collected in the lower part of the rod (particle d, Figure $3$). The electrical mobility selected is given by \ref{1}, where R1 and R2 are the inner and outer radii of the annular space between the electrodes, V is the applied voltage, and L is the axial distance along the collector rod at which the particle is collected.
$Z_{p} = \frac{(Q_{s}\ +\ Q_{a})\ ln(R_{2}/R_{1})}{2 \pi V L} \label{1}$
With the value of the electrical mobility, the particle diameter (dp) can be determined by using Stokes’ law as described by \ref{2}, where n is the number of charge units, e is the elementary unit of charge (1.60 x 10-19 C), Cc is the Cunningham slip correction factor, and µ is the gas viscosity. Cc, given by \ref{3}, accounts for noncontinuum flow effects when dp is similar to or smaller than the mean free path (λ) of the carrier gas.
$d_{p} \ =\ \frac{n e C_{c}}{3\pi \mu Z_{p}} \label{2}$
$C_{c} = 1\ +\ \frac{2\lambda }{d_{p}} [1.257\ +\ 0.4e^{-\frac{1.10 d_{p}}{2\lambda }}] \label{3}$
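Because Cc itself depends on dp, \ref{2} and \ref{3} are normally solved together by iteration. The sketch below shows one simple fixed-point iteration for a singly charged particle; the mobility value and gas properties used here are illustrative assumptions, not values from the text.

```python
# Iteratively solve dp = n*e*Cc(dp) / (3*pi*mu*Zp) for a singly charged
# particle; Zp, mu and the mean free path are illustrative values for air
# near ambient conditions.
import math

e = 1.602e-19        # C, elementary charge
mu = 1.81e-5         # Pa*s, gas (air) viscosity
mfp = 66e-9          # m, mean free path of the carrier gas
n = 1                # number of elementary charges on the particle
Zp = 4.0e-8          # m^2 V^-1 s^-1, measured electrical mobility (assumed)

def slip_correction(dp):
    # Cunningham slip correction factor, eq. (3)
    return 1 + (2 * mfp / dp) * (1.257 + 0.4 * math.exp(-1.10 * dp / (2 * mfp)))

dp = 50e-9                               # initial guess, 50 nm
for _ in range(50):                      # fixed-point iteration, eq. (2)
    dp = n * e * slip_correction(dp) / (3 * math.pi * mu * Zp)

print(f"Mobility diameter: {dp * 1e9:.1f} nm")
```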
In the last step, the size-selected particles are detected with a condensation particle counter (CPC) or an aerosol electrometer (AE) that determines the particle number concentration. The CPC has lower detection and quantitation limits and is the most sensitive detector available. The AE is used when the particle concentrations are high or when the particles are so small that they cannot be detected by the CPC. Figure $4$ shows the operation of the CPC, in which the aerosol is mixed with butanol (C4H9OH) or water vapor (the working fluid) that condenses on the particles to produce supersaturation. Hence, large particles (around 10 μm) are obtained, detected optically, and counted. Since each droplet is approximately the same size, the count is not biased. The particle size distribution is obtained by changing the applied voltage. Generally, the performance of the CPC is evaluated in terms of the minimum size that is counted with 50% efficiency.
What Type of Information is Obtained by ES-DMA?
ES-DMA provides information on the mobility diameter of particles and their concentration in number of particles per unit volume of analyzed gas, so that the particle size distribution is obtained as shown in Figure $10$. Another form of data representation is the differential distribution plot of ΔN/Δlogdp vs dp (Figure $11$). This presentation has a logarithmic size axis that is usually more convenient because particles are often distributed over a wide range of sizes.
How Data from ES-DMA is processed?
To obtain the actual particle size distribution (Figure), the raw data acquired with the ES-DMA are corrected for charging, for the transfer function of the DMA, and for the collection efficiency of the CPC. Figure $6$ illustrates the charge correction, in which a charge reducer or neutralizer is necessary to reduce the problem of multiple charging and simplify the size distribution. The charge reduction depends on the particle size, and multiple charging becomes more likely as the particle size increases. For instance, for 10 nm particles, the percentage of singly charged particles is lower than that of neutral particles; after a negative voltage is applied, only the positively charged particles are collected. Conversely, for 100 nm particles, the percentage of singly charged particles increases and multiple charges are present, so after a negative bias is applied, +1 and +2 particles are collected. The presence of more charges on a particle corresponds to a higher electrical mobility.
The transfer function of the DMA modifies the input particle size distribution and affects the resolution, as shown in Figure $7$. This transfer function depends on the operating conditions, such as the flow rates and the geometry of the DMA. Furthermore, the transfer function can be broadened by Brownian diffusion, and this effect produces the actual size distribution. The theoretical resolution is measured by the ratio of the sheath flow to the aerosol inlet flow under balanced flow conditions (sheath flow equals excess flow, and aerosol flow in equals monodisperse aerosol flow out).
The CPC has a size detection limit of 2.5 nm because small particles are difficult to activate at the supersaturation of the working fluid. Therefore, a CPC collection efficiency correction is required, which consists of calibrating the CPC against an electrometer.
Applications of ES-DMA
• Determination of molecular weight of polymers and proteins in the range of 3.5 kDa to 2 MDa by correlating molecular weight and mobility diameter.
• Determination of absolute number concentration of nanoparticles in solution by obtaining the ES droplet size distributions and using statistical analysis to find the original monomer concentration. Dimers or trimers can be formed in the electrospray process due to droplet induced aggregation and are observed in the spectrum.
• Kinetics of aggregation of nanoparticles in solution by analysis of multimodal mobility distributions from which distinct types of aggregation states can be identified.
• Quantification of ligand adsorption to bionanoparticles by measuring the reduction in electrical mobility of a complex particle (particle-protein) that corresponds to an increase in mobility diameter.
Characterization of SAM-functionalized Gold Nanoparticles by ES-DMA
Citrate-stabilized gold nanoparticles (AuNPs) (Figure $8$) with diameters in the range 10-60 nm and conjugated AuNPs are analyzed by ES-DMA. This investigation shows that the formation of salt particles on the surface of AuNPs can interfere with the mobility analysis because of the reduction in analyte signal. Since sodium citrate is a non-volatile soluble salt, ES produces two types of droplets. One droplet consists of AuNPs and salt, and the other droplet contains only salt. Thus, samples must be cleaned by centrifugation prior to determining the size of bare AuNPs. Figure $9$ presents the size distribution of AuNPs of distinct diameters along with peaks corresponding to salt residues.
The mobility size of bare AuNPs (dp0) can be obtained by using \ref{4}, where dp,m and ds are the mobility sizes of the AuNPs encrusted with salts and of the salt NP, respectively. However, the presence of a self-assembled monolayer (SAM) produces a difference in electrical mobility between conjugated and bare AuNPs. Hence, the determination of the (salt-free) diameter of the AuNPs is critical to distinguish the increase in size after functionalization with a SAM. The coating thickness of the SAM, which corresponds to the change in particle size (ΔL), is calculated by using \ref{5}, where dp and dp0 are the coated and uncoated particle mobility diameters, respectively.
$d_{p0} =\ \sqrt[3]{d_{p,m}^{3}\ -\ d^{3}_{s}} \label{4}$
$\Delta L\ =\ d_{p}\ -\ d_{p0} \label{5}$
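As a short numerical illustration of \ref{4} and \ref{5}, the sketch below first corrects a measured mobility size for the salt shell and then extracts the apparent SAM coating thickness; all input values are hypothetical.

```python
# Hypothetical worked example of eqs. (4) and (5).
dp_m = 31.0   # nm, mobility size of the salt-encrusted AuNP
ds   = 9.0    # nm, mobility size of salt-only particles from the same solution
dp   = 31.9   # nm, mobility size measured after SAM functionalization

dp0 = (dp_m**3 - ds**3) ** (1.0 / 3.0)    # eq. (4): salt-free bare AuNP size
delta_L = dp - dp0                         # eq. (5): apparent coating thickness

print(f"Bare AuNP diameter: {dp0:.1f} nm, coating thickness: {delta_L:.1f} nm")
```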
In addition, the change in particle size can be estimated with a simple rigid core-shell model, which gives theoretical values of ΔL1 higher than the experimental ones (ΔL). A modified core-shell model has therefore been proposed in which a size dependent effect on ΔL2 is observed over a range of particle sizes. AuNPs of 10 nm and 60 nm were coated with MUA (Figure $10$), a charged alkanethiol, and the particle size distributions of bare and coated AuNPs are presented in Figure. The increase in average particle size is 1.2 ± 0.1 nm for 10 nm AuNPs and 2.0 ± 0.3 nm for 60 nm AuNPs, so ΔL depends on particle size.
Advantages of ES-DMA
• ES-DMA does not need prior information about particle type.
• It characterizes a broad particle size range and operates under ambient pressure conditions.
• A few µL or less of sample volume is required and total time of analysis is 2-4 min.
• Data interpretation is straightforward, and mobility spectra are simpler to analyze than ES-MS spectra, where several charge states are present.
Limitations of ES-DMA
• Analysis requires the following solution conditions: concentrations of a few hundred µg/mL, low ionic strength (<100 mM) and volatile buffers.
• Uncertainty is usually ± 0.3 nm from a size range of a few nm to around 100 nm. This is not appropriate to distinguish proteins with slight differences in molecular weight.
Related Techniques
A tandem technique is ES-DMA-APM that determines mass of ligands adsorbed to nanoparticles after size selection with DMA. APM is an aerosol particle mass analyzer that measures mass of particles by balancing electrical and centrifugal forces. DMA-APM has been used to analyze the density of carbon nanotubes, the porosity of nanoparticles and the mass and density differences of metal nanoparticles that undergo oxidation.
• 9.1: Interferometry
The processes which occur at the surfaces of crystals depend on many external and internal factors such as crystal structure and composition, conditions of a medium where the crystal surface exists and others. The appearance of a crystal surface is the result of complexity of interactions between the crystal surface and the environment.
• 9.2: Atomic Force Microscopy (AFM)
Atomic force microscopy (AFM) is a high-resolution form of scanning probe microscopy, also known as scanning force microscopy (SFM).
• 9.3: SEM and its Applications for Polymer Science
The scanning electron microscope (SEM) is a very useful imaging technique that utilized a beam of electrons to acquire high magnification images of specimens. Very similar to the transmission electron microscope (TEM), the SEM maps the reflected electrons and allows imaging of thick (~mm) samples, whereas the TEM requires extremely thin specimens for imaging; however, the SEM has lower magnifications.
• 9.4: Catalyst Characterization Using Thermal Conductivity Detector
Metal dispersion is a common term within the catalyst industry. The term refers to the amount of metal that is active for a specific reaction. Let’s assume a catalyst material has a composition of 1 wt% palladium and 99% alumina (Al2O3) (Figure 9.4.1). Even though the catalyst material has 1 wt% of palladium, not all the palladium is active.
• 9.5: Nanoparticle Deposition Studies Using a Quartz Crystal Microbalance
09: Surface Morphology and Structure
The Application of Vertical Scanning Interferometry to the Study of Crystal Surface Processes
The processes which occur at the surfaces of crystals depend on many external and internal factors such as crystal structure and composition, the conditions of the medium in which the crystal surface exists, and others. The appearance of a crystal surface is the result of the complexity of interactions between the crystal surface and the environment. The mechanisms of surface processes such as dissolution or growth are studied by the physical chemistry of surfaces. There are many computational techniques which allow us to predict changes in the surface morphology of different minerals under the influence of different conditions such as temperature, pressure, pH, and the chemical composition of the solution reacting with the surface. For example, the Monte Carlo method is widely used to simulate the dissolution or growth of crystals. However, theoretical models of surface processes need to be verified by natural observations. We can extract a lot of useful information about surface processes by studying how the crystal surface structure changes under the influence of environmental conditions. These changes in surface structure can be studied through observation of the crystal surface topography. The topography can be observed directly at the macroscopic scale or by using microscopic techniques. Microscopic observation allows us to study even very small changes and to estimate the rate of processes by observing how the crystal surface topography changes over time.
Much laboratory work has been devoted to the reconstruction of surface changes and the interpretation of the dissolution and precipitation kinetics of crystals. The invention of AFM made it possible to monitor changes of surface structure during dissolution or growth. However, to detect and quantify the results of dissolution or growth it is necessary to determine surface area changes over a significantly larger field of view than AFM can provide. More recently, vertical scanning interferometry (VSI) has been developed as a new tool to distinguish and trace the reactive parts of crystal surfaces. VSI and AFM are complementary techniques, each well suited to detect surface changes.
The VSI technique provides a method for quantification of surface topography at the angstrom to nanometer level. Time-dependent VSI measurements can be used to study the surface-normal retreat across crystal and other solid surfaces during the dissolution process. Therefore, VSI can be used to measure mineral dissolution rates, both directly and indirectly, with high precision. Analogously, VSI can be used to study the kinetics of crystal growth.
Physical Principles of Optical Interferometry
Optical interferometry allows us to make extremely accurate measurements and has been used as a laboratory technique for almost a hundred years. Thomas Young observed interference of light and measured the wavelength of light in an experiment performed around 1801. This experiment gave evidence for Young's argument for the wave model of light. The discovery of interference provided the basis for the development of interferometry techniques, which have been used successfully in microscopic as well as astronomical investigations.
The physical principles of optical interferometry exploit the wave properties of light. Light can be thought of as an electromagnetic wave propagating through space. If we assume that we are dealing with a linearly polarized wave propagating in a vacuum in the z direction, the electric field E can be represented by a sinusoidal function of distance and time.
$E(x,y,z,t)\ =\ a \cos[2\pi (vt\ -\ z/\lambda )] \label{1}$
Where a is the amplitude of the light wave, v is the frequency, and λ is its wavelength. The term within the square brackets is called the phase of the wave. Let’s rewrite this equation in more compact form,
$E(x,y,z,t)\ =\ a \cos(\omega t\ -\ kz) \label{2}$
where ω=2πv is the circular frequency, and k=2π/λ is the propagation constant. Let’s also transform this second equation into a complex exponential form,
$E(x,y,z,t)\ =\ Re(a\ e^{i(\omega t\ -\ \phi )})\ =\ Re(A\ e^{i\omega t}) \label{3}$
where ϕ=2πz/λ and A=ae−iϕ is known as the complex amplitude. If n is the refractive index of the medium in which the light propagates and the light wave traverses a distance d in that medium, the equivalent optical path in this case is
$p\ =\ n\ \cdot \ d \label{4}$
When two light waves are superposed, the resulting intensity at any point depends on whether they reinforce or cancel each other (Figure $1$). This is the well known phenomenon of interference. We will assume that the two waves are propagating in the same direction and are polarized with their field vectors in the same plane. We will also assume that they have the same frequency. The complex amplitude at any point in the interference pattern is then the sum of the complex amplitudes of the two waves, so that we can write,
$A\ =\ A_{1}\ +\ A_{2} \label{5}$
where A1=a1exp(−iϕ1) and A2=a2exp(−iϕ2) are the complex amplitudes of two waves. The resultant intensity is, therefore,
$I\ =\ |A|^{2}\ =\ I_{1}\ +\ I_{2}\ +\ 2(I_{1}I_{2})^{1/2} \cos (\Delta \phi) \label{6}$
where I1 and I2 are the intensities of two waves acting separately, and Δϕ=ϕ1−ϕ2 is the phase difference between them. If the two waves are derived from a common source, the phase difference corresponds to an optical path difference,
$\Delta p\ =\ (\lambda /2 \pi) \Delta \phi \label{7}$
If Δϕ, the phase difference between the beams, varies linearly across the field of view, the intensity varies cosinusoidally, giving rise to alternating light and dark bands or fringes (Figure $1$). The intensity of an interference pattern has its maximum value:
$I_{max}\ =\ I_{1}\ +\ I_{2}\ +\ 2(I_{1}I_{2})^{1/2} \label{8}$
when Δϕ=2mπ, where m is an integer, and its minimum value is determined by:
$I_{min}\ =\ I_{1}\ +\ I_{2}\ -\ 2(I_{1}I_{2})^{1/2} \label{9}$
when Δϕ=(2m+1)π. The principle of interferometry is widely used to develop many types of interferometric setups. One of the earliest setups is the Michelson interferometer. The idea of this interferometer is quite simple: interference fringes are produced by splitting a beam of monochromatic light so that one beam strikes a fixed mirror and the other a movable mirror. An interference pattern results when the reflected beams are brought back together. The Michelson interferometric scheme is shown in Figure $2$.
The difference in path length between the two beams is 2x because the beams traverse the designated distance twice. Interference maxima occur when the path difference is equal to an integer number of wavelengths,
$\Delta p\ =\ 2x\ =\ m\lambda \ \ \ (m= 0, \pm 1, \pm 2, ... ) \label{10}$
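A short numerical check of \ref{6} - \ref{10}: for two beams of equal intensity, the total intensity swings between its maximum and zero as the mirror displacement x changes the path difference through successive multiples of the wavelength. All values in the sketch below are arbitrary and purely illustrative.

```python
import numpy as np

lam = 632.8e-9                 # m, He-Ne laser wavelength (illustrative choice)
I1 = I2 = 1.0                  # equal beam intensities, arbitrary units
x = np.linspace(0, lam, 9)     # mirror displacements over one wavelength

dp = 2 * x                                           # path difference, eq. (10)
dphi = 2 * np.pi * dp / lam                          # corresponding phase difference
I = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(dphi)    # eq. (6)

# Maxima (4.0 here, eq. 8) appear whenever 2x is an integer number of
# wavelengths; minima (0.0, eq. 9) appear halfway between them.
print(np.round(I, 2))
```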
Modern interferometric systems are more complicated. Using special phase-measurement techniques, they are capable of performing much more accurate height measurements than can be obtained by simply looking at the interference fringes and measuring how they depart from being straight and equally spaced. A typical interferometric system consists of a light source, a beamsplitter, an objective system, a system for registering the signals and converting them into digital format, and a computer that processes the data. A vertical scanning interferometer contains all of these parts. Figure $3$ shows a configuration of a VSI interferometric system.
Many modern interferometric systems use a Mirau objective in their construction. The Mirau objective is based on a Michelson interferometer. This objective consists of a lens, a reference mirror, and a beamsplitter. The idea of obtaining interfering beams is simple: two beams (red lines) travel along the optical axis. They are then reflected from the reference surface and the sample surface, respectively (blue lines). After this, the beams are recombined to interfere with each other. An illumination or light source system is used to direct light onto the sample surface through a cube beamsplitter and the Mirau objective. The sample surface within the field of view of the objective is uniformly illuminated by beams with different incidence angles. Any point on the sample surface reflects those incident beams in the form of a divergent cone. Similarly, the point on the reference surface symmetrical with that on the sample surface also reflects the illuminating beams in the same form.
The Mirau objective directs the beams reflected from the reference and the sample surface onto a CCD (charge-coupled device) sensor through a tube lens. The CCD sensor is an analog shift register that enables the transport of analog signals (electric charges) through successive stages (capacitors), controlled by a clock signal. The resulting interference fringe pattern is detected by the CCD sensor, and the corresponding signal is digitized by a frame grabber for further processing with a computer.
The distance between a minimum and a maximum of the interferogram produced by the two beams reflected from the reference and sample surfaces is known: it is exactly half the wavelength of the light source. Therefore, with a simple interferogram the vertical resolution of the technique would also be limited to λ/2. If we used laser light with a wavelength of 300 nm as the source, the resolution would be only 150 nm. This resolution is not good enough for a detailed near-atomic scale investigation of crystal surfaces. Fortunately, the vertical resolution of the technique can be improved significantly by moving either the reference or the sample by a fraction of the wavelength of the light. In this way, several interferograms are produced. They are then all overlaid, and their phase shifts compared by the computer software (Figure). This method is widely known as phase shift interferometry (PSI).
Most optical testing interferometers now use phase-shifting techniques not only because of the high resolution but also because phase-shifting is a rapid, high-accuracy way of getting the interferogram information into the computer. Use of this technique also makes the inherent noise in the data-taking process very low. As a result, in a good environment, angstrom or sub-angstrom surface height measurements can be performed. As noted above, in phase-shifting interferometry the phase difference between the interfering beams is changed at a constant rate as the detector is read out. Once the phase is determined across the interference field, the corresponding height distribution on the sample surface can be determined. The phase distribution $φ(x, y)$ is recorded by using the CCD camera.
Let’s assign $A(x, y)$, $B(x, y)$, $C(x, y)$ and $D(x, y)$ to the resulting interference light intensities which correspond to phase-shifting steps of 0, π/2, π and 3π/2. These intensities can be obtained by moving the reference mirror through displacements of λ/8, λ/4 and 3λ/8, respectively. The equations for the resulting intensities are:
$A(x,y)\ =\ I_{1}(x,y)\ +\ I_{2}(x,y) \cos(\alpha (x,y)) \label{11}$
$B(x,y)\ =\ I_{1}(x,y)\ -\ I_{2}(x,y) \sin(\alpha (x,y)) \label{12}$
$C(x,y)\ =\ I_{1}(x,y)\ -\ I_{2}(x,y) \cos(\alpha (x,y)) \label{13}$
$D(x,y)\ =\ I_{1}(x,y)\ +\ I_{2}(x,y) \sin(\alpha (x,y)) \label{14}$
where $I_1(x,y)$ and $I_2(x,y)$ correspond to the two overlapping beams from two symmetric points on the test surface and the reference, respectively. Solving Equations \ref{11} - \ref{14}, the phase map $φ(x, y)$ of the sample surface is given by the relation:
$\phi (x,y)\ =\ \arctan \left[ \frac{D(x,y)\ -\ B(x,y)}{A(x,y)\ -\ C(x,y)} \right] \label{15}$
Once the phase is determined across the interference field pixel by pixel on a two-dimensional CCD array, the local height distribution/contour, $h(x, y)$, on the test surface is given by
$h(x,y)\ =\ \frac{\lambda}{4\pi } \phi (x,y) \label{16}$
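The phase extraction of \ref{11} - \ref{16} can be written compactly in a few lines. The sketch below evaluates \ref{15} with the four-quadrant arctangent and converts the wrapped phase to heights via \ref{16}; the wavelength and the synthetic test surface are illustrative assumptions, and real data would additionally require phase unwrapping for steps larger than λ/4.

```python
import numpy as np

lam = 600e-9                      # m, illumination wavelength (assumed)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
h_true = 20e-9 * np.sin(2 * np.pi * x)    # synthetic 20 nm test surface
alpha = 4 * np.pi * h_true / lam          # phase encoded by the surface height
I1, I2 = 1.0, 0.5                         # background and modulation terms

# Four phase-shifted interferograms, eqs. (11)-(14)
A = I1 + I2 * np.cos(alpha)
B = I1 - I2 * np.sin(alpha)
C = I1 - I2 * np.cos(alpha)
D = I1 + I2 * np.sin(alpha)

phi = np.arctan2(D - B, A - C)    # wrapped phase map, eq. (15)
h = lam / (4 * np.pi) * phi       # height map, eq. (16)

print(f"reconstructed peak-to-valley: {(h.max() - h.min()) * 1e9:.1f} nm")
```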
Normally the resulting fringes can be brought into the form of a linear fringe pattern by adjusting the relative position between the reference mirror and the sample surface. Any distortion of the interference fringes then indicates a local profile/contour of the test surface.
It is important to note that the Mirau objective is mounted on a capacitive closed-loop controlled PZT (piezoelectric actuator) so that phase shifting can be accurately implemented. The PZT is based on the piezoelectric effect, which refers to the electric potential generated by applying pressure to a piezoelectric material. This type of material is used to convert electrical energy to mechanical energy and vice versa. The precise motion that results when an electric potential is applied to a piezoelectric material is important for nanopositioning. Actuators using the piezo effect have been commercially available for 35 years and in that time have transformed the world of precision positioning and motion control.
Vertical scanning interferometry is also known as white-light interferometry (WLI) because it uses white light as the light source. With this type of source a separate fringe system is produced for each wavelength, and the resultant intensity at any point of the examined surface is obtained by summing these individual patterns. Due to the broad bandwidth of the source, the coherence length L of the source is short:
$L\ =\ \frac{\lambda ^{2}}{n \Delta \lambda} \label{17}$
where λ is the center wavelength, n is the refractive index of the medium, and ∆λ is the spectral width of the source. In this way, good contrast fringes can be obtained only when the path lengths of the interfering beams are close to each other. If we vary the path length of the beam reflected from the sample, the height of the sample can be determined by looking for the position at which the fringe contrast is a maximum; in this case the interference pattern exists only over a very shallow depth of the surface. As we vary the path length of the sample-reflected beam, we also move the sample in the vertical direction in order to find the position at which the maximum fringe intensity is achieved, and this position is converted into the height of a point on the sample surface.
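A quick numerical illustration of \ref{17}: with the assumed values below (a 600 nm center wavelength, a 200 nm bandwidth, in air), the coherence length is only a couple of microns, which is why white-light fringes are localized to a shallow depth around focus.

```python
lam = 600e-9      # m, center wavelength of the white-light source (assumed)
dlam = 200e-9     # m, spectral width of the source (assumed)
n = 1.0           # refractive index of air

L = lam**2 / (n * dlam)          # eq. (17)
print(f"coherence length: {L * 1e6:.1f} um")   # ~1.8 um
```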
The combination of phase-shifting technology with a white-light source provides a very powerful tool to measure the topography of quite rough surfaces, with height ranges up to the millimeter scale and a precision of 1-2 nm. Through a software package for quantitatively evaluating the resulting interferogram, such a system can retrieve the surface profile and topography of the sample objects (Figure $5$).
A Comparison of Common Methods to Determine Surface Topography: SEM, AFM and VSI
Besides the interferometric methods described above, there are several other microscopic techniques for studying crystal surface topography. The most common are scanning electron microscopy (SEM) and atomic force microscopy (AFM). All these techniques are used to obtain information about the surface structure, but they differ from each other in the physical principles on which they are based.
Scanning Electron Microscopy
SEM allows us to obtain images of surface topography with a resolution much higher than that of conventional light microscopes. It is also able to provide information about other surface characteristics such as chemical composition and electrical conductivity (see Figure $6$). All types of data are generated by reflecting accelerated electron beams from the sample surface. When electrons strike the sample surface, they lose their energy by repeated random scattering and absorption within an outer layer to a depth varying from 100 nm to 5 microns.
The thickness of this outer layer, also known as the interaction layer, depends on the energy of the electrons in the beam and the composition and density of the sample. The interaction between the electron beam and the surface produces several types of signals. The main type is secondary, or inelastically scattered, electrons. They are produced as a result of interaction between the beam of electrons and weakly bound electrons in the conduction band of the sample. Secondary electrons are ejected from the k-orbitals of atoms within a surface layer only a few nanometers thick. This is because secondary electrons are low energy electrons (<50 eV), so only those formed within the first few nanometers of the sample surface have enough energy to escape and be detected. Secondary electrons provide the most common signal for investigating surface topography, with a lateral resolution of up to 0.4 - 0.7 nm.
High energy beam electrons are elastically scattered back from the surface. This type of signal gives information about the chemical composition of the surface because the energy of the backscattered electrons depends on the mass of the atoms within the interaction layer. These electrons can also generate secondary electrons and escape from the surface or travel farther into the sample than the secondary electrons. The SEM image formed is the result of the intensity of the secondary electron emission from the sample at each x,y data point during the scanning of the surface.
Atomic Force Microscopy
AFM is a very popular tool for studying surface dissolution. An AFM setup consists of a sharp tip on the end of a flexible cantilever which is scanned across a sample surface. The tips typically have an end radius of 2 to 20 nm, depending on the tip type. When the tip touches the surface, the forces of these interactions lead to deflection of the cantilever. The interaction between tip and sample surface involves mechanical contact forces, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces, etc. The deflection of the cantilever is usually measured by reflecting a laser beam off the back of the cantilever into a split photodiode detector. A schematic drawing of an AFM can be seen in Figure $7$. The two most commonly used modes of operation are contact mode AFM and tapping mode AFM, which are conducted in air or liquid environments.
In contact mode, the AFM scans the sample while monitoring the change in cantilever deflection with the split photodiode detector. A feedback loop maintains a constant cantilever deflection by vertically moving the scanner to keep the signal constant. The distance the scanner moves vertically at each x,y data point is stored by the computer to form the topographic image of the sample surface. In tapping mode, the AFM oscillates the cantilever at its resonance frequency (typically ~300 kHz) and lightly “taps” the tip on the surface during scanning. The tip-sample interaction forces increase as the tip approaches the sample surface, and therefore the amplitude of the oscillation decreases. The laser deflection method is used to detect the amplitude of cantilever oscillation. As in contact mode, a feedback loop maintains a constant oscillation amplitude by moving the scanner vertically at every x,y data point, and recording this movement forms the topographical image. The advantage of tapping mode over contact mode is that it eliminates the lateral shear forces present in contact mode. This enables tapping mode to image soft, fragile, and adhesive surfaces without damaging them, whereas working in contact mode can cause such damage.
Comparison of Techniques
All the techniques described above are widely used in the study of surface nano- and micromorphology. However, each method has its own limitations, and the proper choice of analytical technique depends on the features of the analyzed surface and the primary goals of the research.
All these techniques are capable of obtaining an image of a sample surface with quite good resolution. The lateral resolution of VSI is much lower than for the other techniques: 150 nm for VSI versus 0.5 nm for AFM and SEM. The vertical resolution of AFM (0.5 Å) is better than that of VSI (1 - 2 nm); however, VSI is capable of measuring a large vertical range of heights (1 mm), which makes it possible to study even very rough surfaces. In contrast, AFM allows us to measure only quite smooth surfaces because of its relatively small vertical scan range (7 µm). SEM has lower resolution than AFM because it requires a coating of conductive material with a thickness of several nm.
The significant advantage of VSI is that it can provide a large field of view (845 × 630 µm with a 10x objective) of the tested surface. Recent studies of surface roughness characteristics showed that surface roughness parameters increase with increasing field of view until a critical size of 250,000 µm2 is reached. This value is larger than the maximum field of view produced by AFM (100 × 100 µm) but can easily be obtained by VSI. SEM is also capable of producing images with a large field of view. However, SEM provides only 2D images from one scan, while AFM and VSI provide 3D images. This makes quantitative analysis of surface topography with SEM more complicated; for example, the topography of membranes is studied by cross-section and top-view images.
 | VSI | AFM | SEM
Lateral resolution | 0.5 - 1.2 µm | 0.5 nm | 0.5 - 1 nm
Vertical resolution | 2 nm | 0.5 Å | only 2D images
Field of view | 845 x 630 µm (10x objective) | 100 x 100 µm | 1 - 2 mm
Vertical range of scan | 1 mm | 10 µm | -
Preparation of sample | - | - | required coating of a conductive material
Required environment | air | air, liquid | vacuum
Table $1$ A comparison of the capabilities and resolution of VSI with those of AFM and SEM.
The Experimental Studying of Surface Processes Using Microscopic Techniques
The limitations of each technique described above are critically important for choosing the appropriate technique to study surface processes. Let’s explore the application of these techniques to the study of crystal dissolution.
When crystalline matter dissolves, the changes of the crystal surface topography can be observed using microscopic techniques. If we apply an unreactive mask (silicon, for example) to the crystal surface and place the crystalline sample into the experimental reactor, we obtain two types of surface: dissolving, and unreacted (remaining the same). After some period of time the unmasked crystal surface starts to dissolve and change its z-level. In order to study these changes ex situ, we can remove the sample from the reaction cell, remove the mask, and measure the average height difference $\Delta \bar{h}$ between the unreacted and dissolved areas. The average heights of dissolved and unreacted areas are obtained through digital processing of the data obtained by the microscopes. The velocity of normal surface retreat, vSNR, during the time interval ∆t is defined by \ref{18}
$\nu _{SNR}\ =\ \frac{\Delta \bar{h}}{\Delta t} \label{18}$
Dividing this velocity by the molar volume (cm3/mol) gives a global dissolution rate in the familiar units of moles per unit area per unit time:
$R\ =\ \frac{\nu _{SNR}}{\bar{V}} \label{19}$
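A minimal numerical illustration of \ref{18} and \ref{19}: converting a measured mean height difference into a global dissolution rate, here using the molar volume of calcite as an example mineral. All input values are hypothetical.

```python
# Hypothetical example of eqs. (18) and (19) for a masked dissolution experiment.
delta_h = 150e-9          # m, mean height difference between masked and reacted areas
delta_t = 2 * 3600.0      # s, reaction time (2 h)
V_molar = 36.9            # cm^3/mol, molar volume of calcite (example mineral)

v_snr = delta_h / delta_t                 # m/s, surface-normal retreat velocity, eq. (18)
v_snr_cm = v_snr * 100.0                  # cm/s
R = v_snr_cm / V_molar                    # mol cm^-2 s^-1, global dissolution rate, eq. (19)

print(f"v_SNR = {v_snr * 1e9:.3f} nm/s, R = {R:.2e} mol cm^-2 s^-1")
```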
This method allows us to obtain experimental values of dissolution rates simply by precisely measuring average surface heights. Moreover, using this method we can measure local dissolution rates at etch pits by monitoring changes in the volume and density of etch pits across the surface over time. The VSI technique is capable of performing these measurements because of its large vertical scanning range. In order to obtain precise rate values that do not depend on the particular location observed on the crystal surface, we need to measure sufficiently large areas. The VSI technique provides data from areas which are large enough to study surfaces with heterogeneous dissolution dynamics and to obtain average dissolution rates. Therefore, VSI makes it possible to measure rates of normal surface retreat during dissolution and to observe the formation, growth, and distribution of etch pits on the surface.
However, if the mechanism of dissolution is controlled by the dynamics of atomic steps and kink sites within a smooth atomic surface area, observation of the dissolution process requires a more precise technique. AFM is capable of providing information about changes in step morphology in situ as dissolution occurs. For example, the immediate response of the dissolving surface to changes in environmental conditions (concentrations of ions in the solution, pH, etc.) can be studied by using AFM.
SEM is also used to examine the micro- and nanotexture of solid surfaces and to study dissolution processes. This method allows us to observe large areas of the crystal surface with high resolution, which makes it possible to measure a wide variety of surfaces. The significant disadvantage of this method is the requirement to cover the examined sample with a conductive substance, which limits the resolution of SEM. Another disadvantage of SEM is that the analysis is conducted in vacuum, although a more recent technique, environmental SEM (ESEM), overcomes these requirements and makes it possible to examine even liquids and biological materials. A third disadvantage of this technique is that it produces only 2D images, which creates some difficulty in measuring $\Delta \bar{h}$ within the dissolving area. One advantage of this technique is that it is able to measure not only surface topography but also the chemical composition and other characteristics of the surface. This fact is used to monitor changes in chemical composition during dissolution.
Dual Polarization Interferometry for Thin Film Characterization
Overview
As research interests begin to focus on progressively smaller dimensions, the need for nanoscale characterization techniques has seen a steep rise in demand. In addition, the wide scope of nanotechnology across all fields of science has extended the application of characterization techniques to a multitude of disciplines. Dual polarization interferometry (DPI) is an example of a technique developed to solve a specific problem, but which was expanded and utilized to characterize fields ranging from surface science to protein studies and crystallography. With a simple optical instrument, DPI can perform label-free sensing of refractive index and layer thickness in real time, which provides vital information about a system on the nanoscale, including the elucidation of structure-function relationships.
History
DPI was conceived in 1996 by Dr. Neville Freeman and Dr. Graham Cross (Figure $8$) when they recognized a need to measure refractive index and adlayer thickness simultaneously in protein membranes to gain a true understanding of the dynamics of the system. They patented the technique in 1998, and the instrument was commercialized by Farfield Group Ltd. in 2000.
Freeman and Cross unveiled the first full publication describing the technique in 2003, where they chose to measure well-known protein systems and compare their data to X-ray crystallography and neutron reflection data. The first system they studied was sulpho-NHS-LC-biotin coated with streptavidin and a biotinylated peptide capture antibody, and the second system was BS3 coated with anti-HSA. Molecular structures are shown in Figure $9$. Their results showed good agreement with known layer thicknesses, and the method showed clear advantages over neutron reflection and surface plasmon resonance. A schematic and picture of the instrument used by Freeman and Cross in this publication is shown in Figure $10$ and Figure $11$, respectively.
Instrumentation
Theory
The optical power of DPI comes from the ability to measure two different interference fringe patterns simultaneously in real time. Phase changes in these fringe patterns result from changes in refractive index and layer thickness that can be detected by the waveguide interferometer, and resolving these interference patterns provides refractive index and layer thickness values.
Optics
A representation of the interferometer is shown in Figure $12$. The interferometer is composed of a simplified slab waveguide, which guides a wave of light in one transverse direction without scattering. A broad laser light is shone on the side facet of stacked waveguides separated with a cladding layer, where the waveguides act as a sensing layer and a reference layer that produce an interference pattern in a decaying (evanescent) electric field.
A full representation of DPI theory and instrumentation is shown in Figure $13$ and Figure $14$, respectively. The layer thickness and refractive index measurements are determined by measuring two phase changes in the system simultaneously because both transverse-electric and transverse-magnetic polarizations are allowed through the waveguides. Phase changes in each polarization of the light wave are lateral shifts of the wave peak from a given reference peak. The phase shifts are caused by changes in refractive index and layer thickness that result from molecular fluctuations in the sample. Switching between transverse-electric and transverse-magnetic polarizations happens very rapidly at 2 ms, where the switching mechanism is performed by a liquid crystal wave plate. This enables real-time measurements of the parameters to be obtained simultaneously.
Comparison of DPI with Other Techniques
Initial DPI Evaluations
The first techniques rigorously compared to DPI were neutron reflection (NR) and X-ray diffraction. These studies demonstrated that DPI had a very high precision of 40 pm, which is comparable to NR and superior to X-ray diffraction. Additionally, DPI can provide real time information and conditions similar to an in-vivo environment, and the instrumental requirements are far simpler than those for NR. However, NR and X-ray diffraction are able to provide structural information that DPI cannot determine.
DPI Comparison with orthogonal Analytical Techniques
Comparisons between DPI and alternative techniques have been performed since the initial evaluations, with techniques including surface plasmon resonance (SPR), atomic force microscopy (AFM), and quartz crystal microbalance with dissipation monitoring (QCM-D).
SPR is well-established for characterizing protein adsorption and was in use before DPI was developed. These techniques are very similar in that they both use an optical element based on an evanescent field, but they differ greatly in the method of calculating the mass of adsorbed protein. Rigorous testing showed that both techniques give very accurate results, but their strengths differ. Because SPR uses spot-testing with an area of 0.26 mm2 while DPI averages measurements over the length of the entire 15 mm chip, SPR is recommended for use in kinetic studies where diffusion is involved. However, DPI shows much greater accuracy than SPR when measuring refractive index and layer thickness.
Atomic force microscopy is a very different analytical technique from DPI because it is a type of microscopy used for high-resolution surface characterization. Hence, AFM and DPI are well-suited to be used in conjunction because AFM can provide accurate molecular structures and surface mapping while DPI provides layer thickness that AFM cannot determine.
QCM-D is a technique that can be used with DPI to provide complementary data. QCM-D differs from DPI by measuring both the mass of the solvent and the mass of the adsorbed protein layer. These techniques can be used together to determine the amount of hydration in the adsorbed layer. QCM-D can also quantify the supramolecular conformation of the adlayer using energy dissipation calculations, while DPI can detect these conformational changes using birefringence, thus making these techniques orthogonal. One way in which DPI is superior to QCM-D is that the latter technique loses accuracy as the film becomes very thin, while DPI retains accuracy throughout the angstrom scale.
A tabulated comparison of these techniques and their ability to provide structural detail, near in-vivo conditions, and real time data is shown in Table $2$.
Technique | Real Time | Close to In-vivo | Structural Details
QCM-D | Yes | Yes | Medium
SPR | Yes | Yes | Low
X-ray | No | No | Very high
AFM | No | No | High
NR | No | Yes | High
DPI | Yes | Yes | Medium
Table $2$: Comparison of DPI with other analytical techniques. Data reproduced from J. Escorihuela, M. A. Gonzalez-Martinez, J. L. Lopez-Paz, R. Puchades, A. Maquieira, and D. Gimenez-Romero, Chem. Rev., 2015, 115, 265. “Close to in-vivo” means that the sensor can provide information similar to that obtained under in-vivo conditions. Copyright: Chemical Reviews (2015).
Applications of DPI
Protein Studies
DPI has been most heavily applied to protein studies. It has been used to elucidate membrane crystallization, protein orientation in a membrane, and conformational changes. It has also been used to study protein-protein interactions, protein-antibody interactions, and the stoichiometry of binding events.
Thin Film Studies
Since its establishment in protein interaction studies, DPI has seen its applications expanded to include thin film studies. DPI was compared to ellipsometry and QCM-D studies to show that it can be applied to heterogeneous thin films by using revised analytical formulas to estimate the thickness, refractive index, and extinction coefficient of heterogeneous films that absorb light. A non-uniform density distribution model was developed and tested on polyethylenimine deposited onto silica and compared to QCM-D measurements. Additionally, this revised model was able to calculate the mass of multiple species of molecules in composite films, even if the molecules absorbed different amounts of light. This information is valuable for determining surface composition. The structure of polyethylenimine used to form an adsorbing film is shown in Figure $15$.
A challenge of measuring layer thickness in thin films such as polyethylenimine is that DPI’s evanescent field will create inaccurate measurements in inhomogeneous films as the film thickness increases. An error of approximately 5% was seen when layer thickness was increased to 90 nm. Data from this study determining the densities throughout the polyethylenimine film are shown in Figure $16$.
Thin Layer Adsorption Studies
Similar to thin film characterization studies, thin layers of adsorbed polymers have also been elucidated using DPI. It has been demonstrated that two different adsorption conformations of polyacrylamide form on resin, which provides useful information about adsorption behaviors of the polymer. This information is industrially important because polyacrylamide is widely used throughout the oil industry, and the adsorption of polyacrylamide onto resin is known to affect the oil/water interfacial stability.
Initial adsorption kinetics and conformations were also illuminated using DPI on bottlebrush polyelectrolytes. Bottlebrush polyelectrolytes are shown in Figure $17$. It was shown that polyelectrolytes with high charge density initially adsorbed in layers that were parallel to the surface, but as polyelectrolytes were replaced with low charge density species, the alignment changed to prefer a perpendicular arrangement to the surface.
Hg2+ Biosensing Studies
In 2009, it was shown by Wang et al. that DPI could be used for small molecule sensing. In their first study describing this use of DPI, they used single stranded DNA that was rich in thymine to complex Hg2+ ions. When DNA complexed with Hg2+, the DNA transformed from a random coil structure to a hairpin structure. This change in structure could be detected by DPI at Hg2+ concentrations smaller than the threshold concentration allowed in drinking water, indicating the sensitivity of this label-free method for Hg2+ detection. High selectivity was indicated when the authors did not observe similar structural changes for Mg2+, Ca2+, Mn2+, Fe3+, Cd2+, Co2+, Ni2+, Zn2+ or Pb2+ ions. A graphical description of this experiment is shown in Figure. Wang et al. later demonstrated that biosensing of small molecules and other metal cations can be achieved using other forms of functionalized DNA that specifically bind the desired analytes. Examples of molecules detected in this manner are shown in Figure $18$.
Atomic force microscopy (AFM) is a high-resolution form of scanning probe microscopy, also known as scanning force microscopy (SFM). The instrument uses a cantilever with a sharp tip at the end to scan over the sample surface (Figure $1$). As the probe scans over the sample surface, attractive or repulsive forces between the tip and sample, usually van der Waals forces but also other interactions such as electrostatic and hydrophobic/hydrophilic forces, cause a deflection of the cantilever. The deflection is measured by a laser (Figure $2$) which is reflected off the cantilever into photodiodes. As one of the photodiodes collects more light, it creates an output signal that is processed and provides information about the vertical bending of the cantilever. This data is then sent to a scanner that controls the height of the probe as it moves across the surface. The variance in height applied by the scanner can then be used to produce a three-dimensional topographical representation of the sample.
Modes of Operation
Contact Mode
The contact mode method utilizes a constant force for tip-sample interactions by maintaining a constant tip deflection (Figure $2$). The tip communicates the nature of the interactions that the probe is having at the surface via feedback loops, and the scanner moves the entire probe in order to maintain the original deflection of the cantilever. The constant force is calculated and maintained by using Hooke's Law, \ref{1}. This equation relates the force (F), spring constant (k), and cantilever deflection (x). Force constants typically range from 0.01 to 1.0 N/m. Contact mode usually has the fastest scanning times but can deform the sample surface. It is also the only mode that can attain "atomic resolution."
$F\ =\ -kx \label{1}$
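As a quick worked illustration of this relationship, the short Python sketch below estimates the tip-sample force for a hypothetical cantilever; the spring constant and deflection are illustrative values chosen within the typical contact-mode range, not measurements from a specific instrument.

```python
# Tip-sample force from Hooke's law, F = -kx (illustrative values only).
k = 0.1        # spring constant, N/m (typical contact-mode range is 0.01 - 1.0 N/m)
x = 10e-9      # cantilever deflection, m (10 nm)

F = -k * x     # restoring force, N
print(f"Force = {F:.1e} N ({F * 1e9:.2f} nN)")   # -1.0e-09 N (-1.00 nN)
```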
Tapping Mode
In the tapping mode the cantilever is externally oscillated at its fundamental resonance frequency (Figure $3$). A piezoelectric on top of the cantilever is used to adjust the amplitude of oscillation as the probe scans across the surface. The deviations in the oscillation frequency or amplitude due to interactions between the probe and surface are measured, and provide information about the surface or types of material present in the sample. This method is gentler than contact AFM since the tip is not dragged across the surface, but it does require longer scanning times. It also tends to provide higher lateral resolution than contact AFM.
Noncontact Mode
For noncontact mode the cantilever is oscillated just above its resonance frequency and this frequency is decreased as the tip approaches the surface and experiences the forces associated with the material (Figure $4$). The average tip-to-sample distance is measured as the oscillation frequency or amplitude is kept constant, which then can be used to image the surface. This method exerts very little force on the sample, which extends the lifetime of the tip. However, it usually does not provide very good resolution unless placed under a strong vacuum.
Experimental Limitations
A common problem seen in AFM images is the presence of artifacts which are distortions of the actual topography, usually either due to issues with the probe, scanner, or image processing. The AFM scans slowly which makes it more susceptible to external temperature fluctuations leading to thermal drift. This leads to artifacts and inaccurate distances between topographical features.
It is also important to consider that the tip is not perfectly sharp and therefore may not provide the best aspect ratio, which leads to a convolution of the true topography. This leads to features appearing too large or too small since the width of the probe cannot precisely move around the particles and holes on the surface. It is for this reason that tips with smaller radii of curvature provide better resolution in imaging. The tip can also produce false images and poorly contrasted images if it is blunt or broken.
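A rough sense of how much a finite tip radius broadens features can be obtained from simple geometry: a spherical tip of radius R first touches a feature of height h at a lateral distance of roughly √(2Rh − h²) from the feature's edge. The Python sketch below applies this approximation; it is a geometric estimate only, not a deconvolution algorithm, and the tip radius and feature dimensions are hypothetical.

```python
import math

# Geometric estimate of tip-convolution broadening (hypothetical values).
R = 10e-9           # tip radius of curvature, m
h = 2e-9            # feature height, m (e.g., a small nanoparticle)
true_width = 5e-9   # true lateral width of the feature, m

broadening_per_side = math.sqrt(2 * R * h - h**2)
apparent_width = true_width + 2 * broadening_per_side
print(f"Apparent width ≈ {apparent_width * 1e9:.0f} nm "
      f"(true width {true_width * 1e9:.0f} nm)")
```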
The movement of particles on the surface due to the movement of the cantilever can cause noise, which forms streaks or bands in the image. Artifacts can also be made by the tip being of inadequate proportions compared to the surface being scanned. It is for this reason that it is important to use the ideal probe for the particular application.
Sample Size and Preparation
The sample size varies with the instrument but a typical size is 8 mm by 8 mm with a typical height of 1 mm. Loose powder samples present a problem for AFM since the tip can shift the material as it scans the surface. Solutions or dispersions are best for applying as uniform a layer of material as possible in order to get the most accurate value of particles' heights. This is usually done by spin-coating the solution onto freshly cleaved mica, which allows the particles to stick to the surface once it has dried.
Applications of AFM
AFM is particularly versatile in its applications since it can be used in ambient temperatures and many different environments. It can be used in many different areas to analyze different kinds of samples such as semiconductors, polymers, nanoparticles, biotechnology, and cells amongst others. The most common application of AFM is for morphological studies in order to attain an understanding of the topography of the sample. Since it is common for the material to be in solution, AFM can also give the user an idea of the ability of the material to be dispersed as well as the homogeneity of the particles within that dispersion. It also can provide a lot of information about the particles being studied such as particle size, surface area, electrical properties, and chemical composition. Certain tips are capable of determining the principal mechanical, magnetic, and electrical properties of the material. For example, in magnetic force microscopy (MFM) the probe has a magnetic coating that senses magnetic, electrostatic, and atomic interactions with the surface. This type of scanning can be performed in static or dynamic mode and depicts the magnetic structure of the surface.
AFM of Carbon Nanotubes
Atomic force microscopy is usually used to study the topographical morphology of these materials. By measuring the thickness of the material it is possible to determine if bundling occurred and to what degree. Other dimensions of the sample can also be measured such as the length and width of the tubes or bundles. It is also possible to detect impurities, functional groups (Figure $5$), or remaining catalyst by studying the images. Various methods of producing nanotubes have been found and each demonstrates a slightly different profile of homogeneity and purity. These impurities can be carbon coated metal, amorphous carbon, or other allotropes of carbon such as fullerenes and graphite. These facts can be utilized to compare the purity and homogeneity of the samples made from different processes, as well as monitor these characteristics as different steps or reactions are performed on the material. The distance between the tip and the surface has proven itself to be an important parameter in noncontact mode AFM and has shown that if the tip is moved past the threshold distance, approximately 30 μm, it can move or damage the nanotubes. If this occurs, a useful characterization cannot be performed due to these distortions of the image.
AFM of Fullerenes
Atomic force microscopy is best applied to aggregates of fullerenes rather than individual ones. While the AFM can accurately perform height analysis of individual fullerene molecules, it has poor lateral resolution and it is difficult to accurately depict the width of an individual molecule. Another common issue that arises with contact AFM and fullerene deposited films is that the tip shifts clusters of fullerenes which can lead to discontinuities in sample images.
A Practical Guide to Using the Nanoscope Atomic Force Microscope
The following is intended as a guide for use of the Nanoscope AFM system within the Shared Equipment Authority at Rice University (http://sea.rice.edu/). However, it can be adapted for similar AFM instruments.
Please familiarize yourself with the Figures. All relevant parts of the AFM setup are shown.
Initial Setup
Sign in.
Turn on each component shown in Figure $6$.
1. The controller that powers the scope (the switch is at the back of the box)
2. The camera monitor
3. The white light source
Select the imaging mode using the mode selector switch located on the left hand side of the atomic force microscope (AFM) base (Figure $7$); there are three modes:
1. Scanning tunneling microscopy (STM)
2. Atomic force microscopy/lateral force microscopy (AFM/LFM)
3. Tapping mode atomic force microscopy (TM-AFM)
Sample Preparation
Most particulate samples are imaged by immobilizing them onto a mica sheet, which is fixed to a metal puck (Figure $6$). Samples that are in a solvent are easily deposited. To make a sample holder, a sheet of mica is punched out and stuck to double-sided carbon tape on a metal puck. In order to ensure a pristine surface, the mica sheet is cleaved by removing the top sheet with Scotch™ tape to reveal a pristine layer underneath. The sample can be spin coated onto the mica or air dried.
The spin coating method:
• Use double-sided carbon sticky tape to secure the puck on the spin coater.
• Load the sample by drop casting the sample solution onto the mica surface.
• The sample must be dry to ensure that the tip remains clean.
Puck Mounting
1. Place the sample puck in the magnetic sample holder, and center the sample.
2. Verify that the AFM head is sufficiently raised to clear the sample with the probe. The sample plane is lower than the plane defined by the three balls. The sample should sit below the nubs. Use the lever on the right side of the J-scanner to adjust the height. (N.B. the labels up and down refer to the tip. “Tip up” moves the sample holder down to safety, and tip down moves the sample up. Use caution when moving the sample up.)
3. Select the appropriate cantilever for the desired imaging mode. The tips are fragile and expensive (ca.\$20 per tip) so handle with care.
• Contact AFM use a silicon nitride tip (NP).
• Tapping AFM use a silicon tip (TESP).
Tip Mounting and Alignment
1. Mount a tip using the appropriate fine tweezers. Use the tweezers carefully to avoid possible misalignment. Work on a white surface (a piece of paper or a paper towel) so that the cantilever can be easily seen. The delicate part of the tip, the cantilever, is located at the beveled end and should not be handled at that end (shown in Figure $8$). The tips are stored on a tacky gel tape. Use care, as dropping the tip will break the cantilever. Think carefully about how you approach the tip with the tweezers; generally gripping it from the side is the best option. Once the tip is being held by the tweezers it needs to be placed in the tip holder clamp. With one hand holding the tweezers, use the other hand to open the clip by pressing down on the whole holder while it is lying on a flat hard surface. Once the clip is raised by the downward pressure, insert the tip (Figure $9$a). Make sure the tip is seated firmly and that the back end is in contact with the end of the probe groove; there is a circular hole in the clamp, and when the clamp holds the tip the hole should look like a half moon, with half filled by the straight back end of the tip. The groove is larger than the tip, so try to put the tip in the same place each time you replace it to improve reproducibility.
2. Carefully place the tip holder onto the three nubs to gently hold it in place. Bring the holder in at angle to avoid scraping it against the sample (Figure $9$ b).
3. Tighten the clamping screw located on the back of the AFM head to secure the cantilever holder and to guarantee electrical contact. The screw is on the back of the laser head, at the center.
4. Find the cantilever on the video display. Move the translational stage to find it.
5. Adjust the focusing knob of the optical microscope (located above AFM head) to focus on the cantilever tip. Tightening the focus knob moves the camera up. Focus on the dark blob on the right hand side of the screen as that is the cantilever.
6. Focus on the top mica surface, being careful not to focus on the bottom surface between the top of the double-sided carbon tape and the mica surface. Generally you will see a bubble trapped between the carbon tape and the mica surface. If you are focused on the top surface you can frequently see the reflection of the tip on the mica surface. The real focus is half way between the two cantilever focus points.
7. Slowly lower the tip down to the surface; if the camera is focused properly onto the surface, the cantilever tip will gradually come into view. Keep lowering until the two tip images converge into one. Please note that you can crash the tip into the surface if you go past this point. This damages the tip, it may then not be possible to obtain an image, and the tip may have to be replaced. You will know if this happens when looking at the cantilever tip: it goes from black to bright white. At this point the tip is in contact with the surface and turns white because it is no longer reflecting light back into the photo-diode, but instead into the camera.
8. Find the laser spot; if the spot is not visible on the camera screen, look at the cantilever holder to see if it is visible there. It helps to lower the brightness of the white light. Use the translational stage again to search for it.
9. Once the laser spot has been located use the X and Y laser adjustment knobs to align the laser spot roughly onto the tip of the cantilever.
10. Maximize the sum signal using the photo-detector mirror lever located on the back of the head and the laser X and Y translation. As long as the sum signal value is above 3.6 V, the instrument will work, but keep adjusting the X and Y directions of the laser until the sum signal is as high as possible.
11. To ensure that the laser is centered on the photodiode, zero the detector signals using the mirror adjustment knobs located on the top and back of the head. The knob on the top of the head adjusts TMAFM mode, and the knob at the rear of the head adjusts AFM/LFM mode. The range is -9.9 V to 9.9 V in both modes. The number will change slowly at the extremes of the range and quickly around 0 V. Ideally, the zeroed signal should be between ±0.1 V. Do this first in TMAFM mode, then switch to AFM/LFM mode and try to zero the detector. Flip back and forth between the two modes a few times (adjusting each time) until the value in both modes is as close to 0 V as possible. It will fluctuate during the experiment. If there is steady drift, you can adjust it during the experiment. If the number won’t settle down, the laser could be at a bad position on the cantilever. Move the laser spot and repeat (Figure $10$). Always end this step in TMAFM mode.
12. Focus again on the sample surface.
13. The sample surface can still be moved with respect to the camera via the sample stage. In choosing a place to image nanoparticles, avoid anything that you can see on the sample surface. The scale on the screen is 18 µm per cm.
Tip Tuning
1. Log onto computer.
2. The software is called Nanoscope. Close the version dialog box. Typically the screen on the left will allow adjustment of software parameters, and the screen on the right will show the data.
3. On the adjustment screen, the two icons are to adjust the microscope (a picture of a microscope) and to perform data analysis (a picture of a rainbow). Click the microscope icon.
4. Under the microscope pull-down menu, choose profile and select tapping AFM. Don't use another user's profile; use the "tapping" AFM profile.
5. Before beginning tapping mode, the cantilever must be tuned to ensure correct operation. Each tip has its own resonance frequency. The cantilever can be blindly auto-tuned or manually tuned. However the auto-tuning scheme can drive the amplitude so high as to damage the tip.
Auto Tuning
1. Click on the cantilever tune icon.
2. Click the auto-tune button. The computer will enter the tuning procedure, automatically entering such parameters as set point and drive amplitude. If tuned correctly, the drive frequency will be approximately 300 kHz.
Manually Tuning
1. Click on the cantilever tune icon.
2. Select manual tuning under the sweep controls menu.
3. The plot is of amplitude (white) and phase (yellow) versus frequency. The sweep width is the X-range. The central frequency is the driving frequency, which should be between 270-310 kHz. Typically the initial plot will not show any peaks, and the X and Y settings will need to be adjusted in order to see the resonance plot.
4. Widen the spectral window to about 100 kHz. The 270 - 310 kHz window where the driving frequency will be set needs to be visible.
5. To zoom in use the green line (this software is not click and drag!):
1. Left click separation
2. Left click position
3. Right click to do something
4. Right click to clear lines
6. If a peak is clipped, change the drive amplitude. Ideally this will be between 15 and 20 mV, and should be below 500 mV. If a white line is not visible (there should be a white line along the bottom of the graph), the drive amplitude must be increased.
7. Ideally the peak will have a regular shape and only small shoulders. If there is a lot of noise, re-install the tip and things could improve. (Be careful as the auto-tuning scheme can drive the amplitude so high as to damage the tip.)
8. At this point, auto-tuning is okay. We can see that the parameters are reasonable. To continue the manual process, continue following these steps.
9. Adjust the drive amplitude so that the peak is at 2.0 V.
10. The amplitude set point while tuning corresponds to the vertical offset. If it is set to 0, the green line is 0.
11. Position the drive frequency not at the center of the peak, but instead at 5% toward the low energy (left) of the peak value. This offset is about 4/10th of a division. Right click three times to execute this change. This accounts for the damping that occurs when the tip approaches the sample surface.
12. Left monitor - channel 2 dialogue box - click zero phase.
Image Acquisition
1. Click the eyeball icon for image mode.
2. Parameter adjustments.
1. Other controls.
2. Microscope mode: tapping.
3. Z-limit max height: 5.064 µm. This can be reduced if limited in Z-resolution.
4. Color table: 2.
5. Engage set point: 1.00.
6. Serial number of this scanner (double check since this has the factory parameter and is different from the other AFM).
7. Parameter update retract; disabled.
3. Scan Controls
1. Scan size: 2 µm. Be careful when changing this value – it will automatically switch between µm and nm (reasonable values are from 200 nm to 100 µm).
2. Aspect ratio: 1 to 1.
3. X and Y offset: 0.
4. Scan angle (like scan rotation): raster on the diagonal.
5. Scan rate: 1.97 Hz is fast, and 100 Hz is slow.
4. Feedback Control:
1. SPM: amplitude.
2. Integral gain: 0.5 (this parameter and the next parameter may be changed to improve image).
3. Proportional gain: 0.7.
4. Amplitude set point: 1 V.
5. Drive frequency: from tuning.
6. Drive amplitude: from tuning.
Once all parameters are set, click engage (the icon with the green down arrow) to start engaging the cantilever to the sample surface and to begin image acquisition. The bottom of the screen should read "tip secured". When the tip reaches the surface it automatically begins imaging.
If the amplitude set point is high, the cantilever moves far away from the surface, since the oscillation is damped as it approaches. While in free oscillation (set amplitude set point to 3), adjust drive amplitude so that the output voltage (seen on the scope) is 2 V. Big changes in this value while an experiment is running indicate that something is on the tip. Once the output voltage is at 2 V, bring the amplitude set point back down to a value that puts the z outer position line white and in the center of the bar on the software (1 V is very close).
Select channel 1 data type – height. Select channel 2 data type - amplitude. Amplitude looks like a 3D image and is an excellent visualization tool or for a presentation. However the real data is the height data.
Bring the tip down (begin with the amplitude set point at 2). The goal is to tap hard enough to get a good image, but not so hard as to damage the surface or the tip. Set it to 3 clicks below just touching by further lowering the amplitude set point with 3 left arrow clicks on the keyboard. The tip Z-center position scale on the right hand screen shows the extension of the piezo scanner. When the tip is properly adjusted, expect this value to be near the center.
Select view/scope mode (the scope icon). Check to see if trace and retrace are tracking each other. If so, the lines should look the same, but they probably will not overlap each other vertically or horizontally. If they are tracking well, then your tip is scanning the sample surface and you may return to view/image mode (the image icon). If they are not tracking well, adjust the scan rate, gains, and/or set point to improve the tracking. If trace and retrace look completely different, you may need to decrease the set point one or two clicks with the left arrow key until they start having common features in both directions. Then reduce the scan rate: a reasonable value for scan sizes of 1-3 µm would be 2 Hz. Next try increasing the integral gain. As you increase the integral gain, the tracking should improve, although you will reach a value beyond which the noise will increase as the feedback loop starts to oscillate. If this happens, reduce the gains; if trace and retrace still do not track satisfactorily, reduce the set point again. Once the tip is tracking the surface, choose view/image mode.
Integral gain controls the amount of the integrated error signal used in the feedback calculation. The higher this parameter is set, the better the tip will track the sample topography. However, if it is set too high, noise due to feedback oscillation will be introduced into the scan.
Proportional gain controls the amount of the proportional error signal used in the feedback calculation.
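To make the roles of the two gains more concrete, the toy Python sketch below runs a crude proportional-integral (PI) feedback loop that moves the z-piezo to hold a modeled amplitude signal at the set point while the surface height steps upward. It is a deliberately simplified model with made-up numbers, not the Nanoscope's actual control algorithm; its only purpose is to show that larger gains track topography faster but eventually make the loop oscillate.

```python
# Toy PI feedback loop for the z-piezo (all numbers are made up).
setpoint = 1.0              # target amplitude signal, V
k_p, k_i = 0.3, 0.5         # proportional and integral gains
z = 0.0                     # piezo extension, arbitrary units

def measured_amplitude(z, surface):
    """Crude model: amplitude falls linearly as the tip-sample gap closes."""
    return max(0.0, min(2.0, 1.0 + (z - surface)))

surface_profile = [0.0] * 20 + [0.5] * 20   # surface with a step halfway along the line
error_prev = 0.0
trajectory = []
for surface in surface_profile:
    error = measured_amplitude(z, surface) - setpoint
    # incremental (velocity-form) PI update of the piezo position
    z -= k_p * (error - error_prev) + k_i * error
    error_prev = error
    trajectory.append(round(z, 3))

print(trajectory[18:26])    # the piezo climbs to follow the 0.5-unit step within a few iterations
```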
Once the amplitude set point is adjusted with the phase data, change channel 2 to amplitude. The data scale can be changed (this only affects the display, not the data). In the amplitude image, lowering the voltage increases the contrast.
Move small amounts on the image surface with X and Y offset to avoid large, uninteresting objects. For example, setting the Y offset to -2 will remove features at the bottom of the image, thus shifting the image up. Changing it to -3 will then move the image one more unit up. Make sure you are using µm and not nm if you expect to see a real change.
To move further, disengage the tip (click the red up arrow icon so that the tip moves up 25 µm and secures). Move upper translational stage to keep the tip in view in the light camera. Re-engage the tip.
If the shadow in the image is drawn out, the amplitude set point should be lowered even further. The area on the image that is being drawn is controlled by the frame pull-down menu (and the up and down arrows). Lower the set point and redraw the same neighborhood to see if there is improvement. The proportional and integral gain can also be adjusted.
The frame window allows you to restart from the top, bottom, or a particular line.
Another way to adjust the amplitude set point value is to click on signal scope to ensure trace and retrace overlap. To stop Y rastering, disable the slow scan axis.
To take a better image, increase the number of lines (512 is max), decrease the speed (1 Hz), and lower the amplitude set point. The resolution is about 10 nm in the X and Y directions due to the size of the tip. The resolution in the Z direction is less than 1 nm.
Changing the scan size allows us to zoom in on features. You can zoom in on a center point by using zoom in box (left clicking to toggle between box position and size), or you can manually enter a scan size on the left hand screen.
Click on capture (the camera icon) to grab images. To speed things up, restart the scan at an edge to grab a new image after making any changes in the scan and feedback parameters. When parameters are changed, the capture option will toggle to "next". There is a forced capture option, which allows you to collect an image even if parameters have been switched during the capture. It is not completely reliable.
To change the file name, select capture filename under the capture menu. The file will be saved in the capture directory, which is d:\capture. To save the picture, select TIFF export under the utility pull-down menu. The zip drive is G:.
Image Analysis
Analysis involves flattening the image and measuring various particle dimensions; to begin, click the spectrum button.
Select the height data (image pull-down menu, select left or right image). The new icons in the “analysis” menu are:
• Thumbnails
• Top view
• Side view
• Section analysis
• Roughness
• Rolling pin (flattening)
• Plane auto-fit
To remove the bands (striping) in the image, select the rolling pin. The order of flattening is the order of the baseline correction. A raw offset is 0 order, a straight sloping line is order 1. Typically a second order correction is chosen to remove “scanner bow” which are the dark troughs on the image plane.
To remove more shadows, draw exclusion boxes over large objects and then re-flatten. Be sure to save the file under a new name; the default is to overwrite it.
In section analysis, use the multiple cursor option to measure a particle in all dimensions. Select fixed cursor. You can save pictures of this information, but things must be written down! There is also a particle analysis menu.
Disengage the cantilever and make sure that the cantilever is in secure mode before you move the cantilever to the other spots or change to another sample.
Loosen the clamp to remove the tip and holder.
Remove the tip and replace it onto the gel sticky tape using the fine tweezers.
Recover the sample with tweezers.
Close the program.
Log out of the instrument.
After the experiment, turn off the monitor and the power of the light source. Leave the controller on.
Sign out in the log book.
AFM - Scanning Probe Microscopy
Atomic force microscopy (AFM) has become a powerful tool to investigate 2D materials such as graphene, both for nano-scale imaging and for the measurement and analysis of frictional properties.
The basic structure and function of the typical Nanoscope AFM system is discussed in the section on the practical guide.
For the contact mode of AFM, a schematic is shown in Figure $11$. As the tip scans the surface of the sample, the cantilever will have a deflection Δz, which is a function of the position of the tip. If we know the mechanical constant of the tip C, the interaction force, or normal load, between the tip and sample can be calculated by \ref{2}, where C is determined by the material and intrinsic properties of the tip and cantilever. As shown in Figure $11$ a, we usually treat the back side of the cantilever as a mirror to reflect the laser, so a change of position changes the path length of the laser, which is then detected by the quadrant detector.
$F\ =\ C \cdot \Delta z \label{2}$
Topography, height profile, phase, and lateral force channels can all be acquired while measuring in contact mode AFM. Compared to tapping mode, the lateral force channel, also known as the friction, is particularly important. The signal acquired directly is the current change caused by the lateral force between the sample and the tip, so the unit is usually nA. To calculate the real friction force in Newtons (N) or nano-Newtons (nN), this current signal must be multiplied by a lateral force coefficient, which is also determined by the intrinsic properties of the material that makes up the tip.
A typical AFM is shown in Figure $11$ b. The sample stage is inside the bottom chamber. Gas can be blown into the chamber, or a vacuum can be pumped, as needed for testing under different ambient conditions. This is especially important when testing the frictional properties of materials.
For sample preparation, fixing the sample on mica as mentioned earlier in the guide is appropriate for synthesized chemical powders. Graphene can simply be placed on any flat substrate, such as mica, SiC, sapphire, silica, etc. The solid-state sample on its substrate is then placed onto the sample stage and further work can be conducted.
Data Collection
For data collection, the topography and height profile are acquired using the same method as in tapping mode. However, two additional pieces of information are necessary in order to determine the frictional properties of the material. The first is the normal load. The normal load is described in \ref{2}; however, what we directly measure, which is proportional to the normal load, is the set point we give for the tip against the sample, and this is a current. We therefore need a vertical force coefficient (CVF) to obtain the normal load applied to the material, as illustrated in \ref{3}.
$F\ =\ I_{setpoint} \cdot C_{VF} \label{3}$
The vertical force coefficient is obtained from \ref{4}, where K is the stiffness of the cantilever, which can be determined from the vibrational model of the cantilever and is usually provided with a commercial AFM tip, and L is the optical coefficient of the cantilever, which can be acquired by calibrating the force-displacement curve of the tip, as shown in Figure $12$. L is then obtained from the slope of process 1 or 6 in Figure $13$.
$C_{VF} \ =\ \frac{K}{L} \label{4}$
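A minimal numerical sketch of this calibration is given below; the cantilever stiffness, optical coefficient, and set-point current are hypothetical example values, chosen only to show how C_VF converts a set-point current into a normal load.

```python
# Converting a set-point current into a normal load (hypothetical example values).
K = 0.2              # cantilever stiffness, N/m (usually supplied with a commercial tip)
L = 0.4              # optical coefficient, A/m (i.e., ~0.4 nA of signal per nm of deflection)
I_setpoint = 2.0e-9  # set-point current, A (2 nA)

C_VF = K / L                    # vertical force coefficient, N/A
F_normal = I_setpoint * C_VF    # normal load, N
print(f"C_VF = {C_VF:.2f} N/A, normal load = {F_normal * 1e9:.2f} nN")  # 0.50 N/A, 1.00 nN
```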
Figure $13$ shows a typical friction image, which is composed of n × n scan lines. Each point is the friction force value corresponding to that point. All we need to do is obtain the average friction for the area we are interested in, then multiply this current signal by the lateral force coefficient to obtain the actual friction force.
During the collection of the original lateral force (friction) data, the friction information for every line in the image is actually composed of two data lines: trace and retrace (see Figure $13$). Half of the difference between the trace (Figure $13$, black line) and retrace (Figure $13$, red line) signals is taken as the friction signal at each point on the line. That is to say, the actual friction is determined from \ref{5}, where Iforward and Ibackward are the data points derived from the trace and retrace of the friction image, and CLF is the lateral force coefficient.
$F_{f}\ =\ \frac{I_{forward}\ -\ I_{backward}}{2} \cdot C_{LF} \label{5}$
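The sketch below applies this half-difference formula to one trace/retrace pair and averages the result along the line; the current values and the lateral force coefficient are hypothetical, since the coefficient depends on the particular tip.

```python
# Friction along one scan line from trace and retrace signals (hypothetical values).
I_forward  = [1.8, 1.9, 2.0, 2.1, 1.9]   # trace lateral-force signal, nA
I_backward = [0.6, 0.5, 0.4, 0.5, 0.5]   # retrace lateral-force signal, nA
C_LF = 0.8                               # lateral force coefficient, nN per nA

friction = [(f - b) / 2 * C_LF for f, b in zip(I_forward, I_backward)]
average_friction = sum(friction) / len(friction)
print(f"Average friction along the line = {average_friction:.2f} nN")   # 0.58 nN
```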
Data Analysis
There are several ways to compare the details of the frictional properties at the nanoscale. Figure $14$ is an example comparing the friction on the sample (in this case, few-layer graphene) and the friction on the substrate (SiO2). As illustrated in Figure $14$, we can easily see qualitatively that the friction on the graphene is much smaller than that on the SiO2 substrate. Since graphene is a good lubricant with low friction, the raw data readily confirm this.
Figure $15$ shows multiple layers of graphene on mica. By selecting a certain cross-section line and comparing both the height profile and the friction profile, we obtain information about how the friction relates to the structure underlying that section. The friction-distance curve is thus an important route for data analysis.
We can also take the average friction signal for an area and compare it from region to region. Figure $16$ shows a region of graphene with layer numbers from 1 to 4; Figure $16$ a and b are the topography and the friction image, respectively. By comparing the average friction from area to area, we can clearly see that the friction on graphene decreases as the number of layers increases. Through Figure $16$ c and d we can see this change in average friction on the surface from 1 to 4 layers of graphene. For a more general statistical comparison, normalizing the average friction signal and comparing the normalized values is more straightforward.
Another way to compare frictional properties is to apply different normal loads and observe how the friction changes, yielding a friction-normal load curve. This is important because too large a normal load can easily break or wear the material. Examples and details are discussed below.
The effect of H2O: a cautionary tale
When the tip approaches graphene and the normal load is gradually increased (the loading process) and then the tip is gradually withdrawn (decreasing normal load, the unloading process), the friction on graphene exhibits hysteresis, meaning the friction is considerably larger as the tip is dragged off. This process can be analyzed from the friction-normal load curve, as shown in Figure $17$. It was thought that this effect might be due to the details of the interaction in the contact area between the tip and graphene. However, if this is tested under different ambient conditions, for example with nitrogen blown into the chamber while testing occurred, the hysteresis disappears.
Figure $17$ Friction hysteresis on the surface of graphene/Cu. Adapted from P. Egberts, G. H. Han, X. Z. Liu, A. T. C. Johnson, and R. W. Carpick, ACS Nano, 2014, 8, 5012. Copyright: American Chemical Society (2014).
In order to explore the mechanism of this phenomenon, a series of friction tests was performed under different conditions. A key factor here is the humidity of the testing environment. Figure $18$ shows a typical friction measurement on monolayer and 3-layer graphene on SiOx. From Figure $19$ we can see that the friction hysteresis is very different under dry nitrogen gas (0.1% humidity) and under ambient conditions (24% humidity).
Simulations of this system suggest that the friction hysteresis on the surface of graphene is due to water interacting with the graphene surface. The contact angle at the tip/water molecule-graphene interface is the key component. Further study suggests that once graphene samples are exposed to air for a long period of time (several days), the chemical bonding at the surface can change due to water molecules in the air, so that the frictional properties at the nanoscale can be very different.
The bond between the material under investigation and the substrate can be vital to the friction behavior at the nanoscale. Studies over the years suggest that the friction of graphene decreases as the number of layers increases. This holds for suspended graphene (with nothing to support it) and for graphene on most substrates (such as SiOx, Cu foil, and so on). However, if the graphene is supported by a freshly cleaved mica surface, there is no difference in the frictional properties of graphene with different layer numbers; this is due to the large surface dissipation energy, so the graphene is very firmly fixed to the mica.
On the other hand, the surface of mica is also hydrophilic, which leads to water distributing on the surface of mica and water intercalating between the graphene and the mica. Through friction measurements of graphene on mica, we can analyze this system quantitatively, as shown in Figure $18$.
Summary
This case study gives an example of how contact-mode atomic force microscopy, or frictional force microscopy, is a powerful tool to investigate the frictional properties of materials, both in scientific research and in the chemical industry.
The most important lesson for researchers is that in analyzing any literature data it is important to know what the relative humidity conditions are for the particular experiment, such that various experiments may be compared directly.
SEM and its Applications for Polymer Science
Introduction
The scanning electron microscope (SEM) is a very useful imaging technique that utilizes a beam of electrons to acquire high magnification images of specimens. Very similar to the transmission electron microscope (TEM), the SEM maps the reflected electrons and allows imaging of thick (~mm) samples, whereas the TEM requires extremely thin specimens for imaging; however, the SEM has lower magnification. Although both SEM and TEM use an electron beam, the image is formed very differently and users should be aware of when each microscope is advantageous.
Microscopy Physics
Image Formation
All microscopes serve to enlarge the size of an object and allow people to view smaller regions within the sample. Microscopes form optical images and although instruments like the SEM have extremely high magnifications, the physics of the image formation are very basic. The simplest magnification lens can be seen in Figure $1$. The formula for magnification is shown in \ref{1}, where M is magnification, f is focal length, u is the distance between object and lens, and v is distance from lens to the image.
$M\ =\ \frac{f}{u-f}\ = \frac{v-f}{f} \label{1}$
Multistage microscopes can amplify the magnification of the original object even further, as shown in Figure $2$. The magnification is now calculated from \ref{2}, where f1 and f2 are the focal distances of the first and second lenses, and v1 and v2 are the distances from the first and second lenses to their magnified images, respectively.
$M \ =\ \frac{(v_{1}\ -\ f_{1})(v_{2}\ -\ f_{2})}{f_{1}f_{2}} \label{2}$
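As a quick worked example of \ref{2}, with round, purely illustrative numbers: a two-stage instrument with f1 = f2 = 10 mm and image distances v1 = 110 mm and v2 = 60 mm gives

$M \ =\ \frac{(110\ -\ 10)(60\ -\ 10)}{10\ \times\ 10}\ =\ 50$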
In reality, the objects we wish to magnify need to be illuminated. Whether or not the sample is thin enough to transmit light divides the microscope into two arenas. SEM is used for samples that do not transmit light, whereas the TEM (transmission electron microscope) requires transparent samples. Due to the many frequencies of light from the introduced source, a condenser system is added to control the brightness and narrow the range of viewing to reduce aberrations, which distort the magnified image.
Electron Microscopes
Microscope images can be formed instantaneously (as in the optical microscope or TEM) or by rastering (scanning) a beam across the sample and forming the image point-by-point. The latter is how SEM images are formed. It is important to understand the basic principles behind SEM that define the properties and limitations of the image.
Resolution
The resolution of a microscope is defined as the smallest distance between two features that can be uniquely identified (also called resolving power). There are many limits to the maximum resolution of the SEM and other microscopes, such as imperfect lenses and diffraction effects. Each single beam of light, once passed through a lens, forms a series of cones called an Airy ring (see Figure $3$). For a given wavelength of light, the central spot size is inversely proportional to the aperture size (i.e., a large aperture yields a small spot size) and high resolution demands a small spot size.
Aberrations distort the image and we try to minimize the effect as much as possible. Chromatic aberrations are caused by the multiple wavelengths present in white light. Spherical aberrations are formed by focusing inside and outside the ideal focal length and caused by the imperfections within the objective lenses. Astigmatism is because of further distortions in the lens. All aberrations decrease the overall resolution of the microscope.
Electrons
Electrons are charged particles and interact with air molecules, therefore the SEM and TEM instruments require extremely high vacuum to obtain images (~10^-7 atm). High vacuum ensures that very few air molecules are in the electron beam column. If the electron beam interacts with an air molecule, the air will become ionized and damage the beam filament, which is very costly to repair. The charge of the electron allows scanning and also inherently has a very small deflection angle off the source of the beam.
The electrons are generated with a thermionic filament. A tungsten (W) or LaB6 filament is chosen based on the needs of the user. LaB6 is much more expensive and tungsten filaments meet the needs of the average user. The microscope can also be operated with a field emission source (a fine tungsten tip).
Electron Scattering
To accurately interpret electron microscopy images, the user must be familiar with how high energy electrons can interact with the sample and how these interactions affect the image. The probability that a particular electron will be scattered in a certain way is either described by the cross section, σ, or mean free path, λ, which is the average distance which an electron travels before being scattered.
Elastic Scatter
Elastic scatter, or Rutherford scattering, is defined as a process which deflects an electron but does not decrease its energy. The wavelength of the scattered electron can be detected and is proportional to the atomic number. Elastically scattered electrons have significantly more energy than other types and provide mass contrast imaging. The mean free path, λ, is larger for smaller atoms, meaning that the electron travels farther.
Inelastic Scatter
Any process that causes the incoming electron to lose a detectable amount of energy is considered inelastic scattering. The two most common types of inelastic scatter are phonon scattering and plasmon scattering. Phonon scattering occurs when a primary electron loses energy by exciting a phonon, an atomic vibration in a solid, and heats the sample a small amount. A plasmon is an oscillation of the bulk electrons in the conduction band of a metal. Plasmon scattering occurs when an electron interacts with the sample and produces plasmons, which typically have 5 - 30 eV energy loss and small λ.
Secondary Effects
A secondary effect is a term describing any event which may be detected outside the specimen and is essentially how images are formed. To form an image, the electron must interact with the sample in one of the aforementioned ways and escape from the sample and be detected. Secondary electrons (SE) are the most common electrons used for imaging due to high abundance and are defined, rather arbitrarily, as electrons with less than 50 eV energy after exiting the sample. Backscattered electrons (BSE) leave the sample quickly and retain a high amount of energy; however there is a much lower yield of BSE. Backscattered electrons are used in many different imaging modes. Refer to Figure $4$ for a diagram of interaction depths corresponding to various electron interactions.
SEM Construction
The SEM is made of several main components: electron gun, condenser lens, scan coils, detectors, specimen, and lenses (see Figure $5$). Today, portable SEMs are available but the typical size is about 6 feet tall and contains the microscope column and the control console.
A special feature of the SEM and TEM is the depth of focus, the range of positions (depths) at which the image can be viewed with good focus; it is related to the longitudinal magnification dv/du, see \ref{3}. This allows the user to see more than a single plane of a specified height in focus and essentially allows a degree of three-dimensional imaging.
$\frac{dv}{du}\ =\ \frac{-v^{2}}{u^{2}}\ =\ -M^{2} \label{3}$
Electron Detectors (image formation)
The secondary electron detector (SED) is the main source of SEM images since a large majority of the electrons emitted from the sample are less than 50 eV. These electrons form textural images but cannot determine composition. The SEM may also be equipped with a backscatter electron detector (BSED) which collects the higher energy BSE’s. Backscattered electrons are very sensitive to atomic number and can determine qualitative information about nuclei present (i.e., how much Fe is in the sample). Topographic images are taken by tilting the specimen 20 - 40° toward the detector. With the sample tilted, electrons are more likely to scatter off the top of the sample rather than interact within it, thus yielding information about the surface.
Sample Preparation
The most effective SEM sample will be at least as thick as the interaction volume; depending on the image technique you are using (typically at least 2 µm). For the best contrast, the sample must be conductive or the sample can be sputter-coated with a metal (such as Au, Pt, W, and Ti). Metals and other materials that are naturally conductive do not need to be coated and need very little sample preparation.
SEM of Polymers
As previously discussed, to view features that are smaller than the wavelength of light, an electron microscope must be used. The electron beam requires extremely high vacuum to protect the filament and electrons must be able to adequately interact with the sample. Polymers are typically long chains of repeating units composed primarily of “lighter” (low atomic number) elements such as carbon, hydrogen, nitrogen, and oxygen. These lighter elements have fewer interactions with the electron beam which yields poor contrast, so often times a stain or coating is required to view polymer samples. SEM imaging requires a conductive surface, so a large majority of polymer samples are sputter coated with metals, such as gold.
The decision to view a polymer sample with an SEM (versus a TEM for example) should be evaluated based on the feature size you expect the sample to have. Generally, if you expect the polymer sample to have features, or even individual molecules, over 100 nm in size you can safely choose SEM to view your sample. For much smaller features, the TEM may yield better results, but requires much different sample preparation than will be described here.
Polymer Sample Preparation Techniques
Sputter Coating
A sputter coater may be purchased that deposits single layers of gold, gold-palladium, tungsten, chromium, platinum, titanium, or other metals in a very controlled thickness pattern. It is possible, and desirable, to coat only a few nanometers of metal onto the sample surface.
Spin Coating
Many polymer films are deposited via a spin coater, which spins a substrate (often ITO glass) while drops of polymer solution are dispensed, spreading into a layer of even thickness on top of the substrate.
Staining
Another option for polymer sample preparation is staining the sample. Stains act in different ways, but typical stains for polymers are osmium tetroxide (OsO4), ruthenium tetroxide (RuO4) phosphotungstic acid (H3PW12O40), hydrazine (N2H4), and silver sulfide (Ag2S).
Examples
Comp-block Copolymer (Microstructure of Cast Film)
• Cast polymer film (see Figure $6$).
• To view interior structure, the film was cut with a microtome or razor blade after the film was frozen in liquid N2 and fractured.
• Stained with RuO4 vapor (after cutting).
• Structure measurements were averaged over a minimum of 25 measurements.
Polystyrene-polylactide Bottlebrush Copolymers (Lamellar Spacing)
• Pressed polymer samples into disks and annealed for 16 h at 170 °C.
• To determine ordered morphologies, the disk was fractured (see Figure $7$).
• Used SEM to verify lamellar spacing from USAXS.
SWNTs in Ultrahigh Molecular Weight Polyethylene
• Dispersed SWNTs in interactive polymer.
• Samples were sputter-coated in gold to enhance contrast.
• The films were solution-crystallized and the cross-section was imaged.
• Environmental SEM (ESEM) was used to show morphologies of composite materials.
• WD = 7 mm.
• Study was conducted to image sample before and after drawing of film.
• Images confirmed the uniform distribution of SWNT in PE (Figure $8$).
• MW = 10,000 Dalton.
• Study performed to compare transparency before and after UV irradiation.
Nanostructures in Conjugated Polymers (Nanoporous Films)
• Polymer and NP were processed into thin films and heated to crosslink.
• SEM was used to characterize morphology and crystalline structure (Figure $9$).
• SEM was used to determine porosity and pore size.
• Magnified orders of 200 nm - 1 μm.
• WD = 8 mm.
• MW = 23,000 Daltons
• Sample prep: spin coating a solution of poly-(thiophene ester) with copper NPs suspended onto ITO-coated glass slides. Zeiss Supra 35.
Cryo-SEM Colloid Polystyrene Latex Particles (Fracture Patterns)
• Used cryogenic SEM (cryo-SEM) to visualize the microstructure of particles (Figure $10$)
• Particles were immobilized by fast-freezing in liquid N2 at –196 °C.
• Sample is fractured (-196 °C) to expose cross section.
• 3 nm sputter coated with platinum.
• Shapes of the nanoparticles after fracture were evaluated as a function of crosslink density.
Introduction
A catalyst is a "substance that accelerates the rate of chemical reactions without being consumed". Some reactions, such as the hydrodechlorination of TCE, \ref{1}, proceed at a negligible rate on their own, but occur readily in the presence of a catalyst.
$C_{2}Cl_{3}H\ +\ 4H_{2} \underset{Pd}\rightarrow C_{2}H_{6}\ +\ 3HCl \label{1}$
Metal dispersion is a common term within the catalyst industry. The term refers to the amount of metal that is active for a specific reaction. Let's assume a catalyst material has a composition of 1 wt% palladium and 99 wt% alumina (Al2O3) (Figure $1$). Even though the catalyst material has 1 wt% of palladium, not all the palladium is active. The material might be oxidized due to air exposure, or some of the material may not be exposed at the surface (Figure $2$), hence it can't participate in the reaction. For this reason it is important to characterize the material.
In order for Pd to react according to \ref{1}, it needs to be in the metallic form. Any oxidized palladium will be inactive. Thus, it is important to determine the oxidation state of the Pd atoms on the surface of the material. This can be accomplished using an experiment called temperature programmed reduction (TPR). Subsequently, the percentage of active palladium can be determined by hydrogen chemisorption. The percentage of active metal is an important parameter when comparing the performance of multiple catalyst. Usually the rate of reaction is normalized by the amount of active catalyst.
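For example, once the dispersion is known (see the chemisorption analysis below), a measured rate can be divided by the moles of active Pd rather than total Pd. The short Python sketch below illustrates this normalization; the reaction rate used is purely hypothetical.

```python
# Normalizing a reaction rate by the amount of *active* metal (rate value is hypothetical).
sample_mass = 0.1289   # g of 1 wt% Pd/Al2O3 catalyst
pd_fraction = 0.01     # weight fraction of Pd
dispersion  = 0.0603   # fraction of Pd atoms that are active (6.03%, from the analysis below)
M_pd = 106.42          # g/mol

total_pd_mol  = sample_mass * pd_fraction / M_pd
active_pd_mol = total_pd_mol * dispersion

rate = 2.0e-7          # hypothetical measured rate, mol TCE converted per second
print(f"Rate per mole of active Pd = {rate / active_pd_mol:.2f} s^-1")
```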
Principle of Thermal Conductivity
Thermal conductivity is the ability of a chemical species to conduct heat. Each gas has a different thermal conductivity. The units of thermal conductivity in the international system of units are W/m·K. Table $1$ shows the thermal conductivity of some common gases.
The thermal conductivity detector (TCD) is part of a typical commercial instrument such as a Micromeritics AutoChem 2920 (Figure $4$). This instrument is an automated analyzer with the ability to perform chemical adsorption and temperature-programmed reactions on a catalyst, catalyst support, or other materials.
Temperature Programmed Reduction (TPR)
TPR will determine the number of reducible species on a catalyst and will tell at what temperature each of these species was reduced. For example palladium is ordinarily found as Pd(0) or Pd(II), i.e., oxidation states 0 and +2. Pd(II) can be reduced at very low temperatures (5 - 10 °C) to Pd(0) following \ref{2}.
$PdO\ +\ H_{2} \rightarrow Pd(0)\ +\ H_{2}O \label{2}$
A 128.9 mg sample of 1 wt% Pd/Al2O3 is used for the experiment, Figure $5$. Since we want to study the oxidation state of the commercial catalyst, no pre-treatment of the sample is needed. A 10% hydrogen-argon mixture is used as analysis and reference gas. Argon has a low thermal conductivity and hydrogen has a much higher thermal conductivity. All gases will flow at 50 cm3/min. The TPR experiment will start at an initial temperature of 200 K, with a temperature ramp of 10 K/min, and a final temperature of 400 K. The H2/Ar mixture is flowed through the sample and past the detector in the analysis port, while in the reference port the mixture does not come into contact with the sample. When the analysis gas starts flowing over the sample, a baseline reading is established by the detector. The baseline is established at the initial temperature to ensure there is no reduction. While this gas is flowing, the temperature of the sample is increased linearly with time and the consumption of hydrogen is recorded. Hydrogen atoms react with oxygen atoms to form H2O.
Water molecules are removed from the gas stream using a cold trap. As a result, the amount of hydrogen in the argon/hydrogen gas mixture decreases and the thermal conductivity of the mixture also decreases. The change is compared to the reference gas and yields a hydrogen uptake volume. Figure $6$ is a typical TPR profile for PdO.
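As a rough cross-check on the measured uptake, the expected hydrogen consumption for complete reduction can be estimated by assuming that all of the 1 wt% Pd is initially present as PdO and reacts 1:1 with H2 according to the reduction reaction above. This back-of-the-envelope sketch is not the instrument's calibration routine.

```python
# Expected H2 uptake for complete reduction of PdO (1 mol H2 per mol Pd).
sample_mass = 0.1289    # g (128.9 mg of 1 wt% Pd/Al2O3)
pd_fraction = 0.01      # weight fraction of Pd, assumed fully oxidized to PdO
M_pd = 106.42           # g/mol
V_molar_stp = 22414.0   # cm3/mol of ideal gas at STP

mol_pd = sample_mass * pd_fraction / M_pd
v_h2_stp = mol_pd * V_molar_stp   # cm3 of H2 at STP
print(f"Expected H2 consumption ≈ {v_h2_stp:.2f} cm3 at STP")   # ≈ 0.27 cm3
```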
Pulse Chemisorption
Once the catalyst (1 wt% Pd/Al2O3) has been completely reduced, the user will be able to determine how much palladium is active. A pulse chemisorption experiment will determine active surface area, percentage of metal dispersion and particle size. Pulses of hydrogen will be introduced to the sample tube in order to interact with the sample. In each pulse hydrogen will undergo a dissociative adsorption on to palladium active sites until all palladium atoms have reacted. After all active sites have reacted, the hydrogen pulses emerge unchanged from the sample tube. The amount of hydrogen chemisorbed is calculated as the total amount of hydrogen injected minus the total amount eluted from the system.
Data Collection for Hydrogen Pulse Chemisorption
The sample from the previous experiment (TPR) will be used for this experiment. Ultra high-purity argon will be used to purge the sample at a flow rate of 40 cm3/min. The sample will be heated to 200 °C in order to remove all chemisorbed hydrogen atoms from the Pd(0) surface. The sample is then cooled down to 40 °C. Argon will be used as the carrier gas at a flow of 40 cm3/min. The filament temperature will be 175 °C and the detector temperature will be 110 °C. The injection loop has a volume of 0.03610 cm3 @ STP. As shown in Figure $6$, hydrogen pulses will be injected into the flow stream and carried by argon into contact with the sample, where they react. It should be noted that the first pulse of hydrogen was almost completely adsorbed by the sample. The second and third pulses show how the sample becomes saturated. The positive value of the TCD detector is consistent with our assumptions: since hydrogen has a higher thermal conductivity than argon, as it flows through the detector it will tend to cool down the filaments, and the detector will then apply a positive voltage to the filaments in order to maintain a constant temperature.
Pulse Chemisorption Data Analysis
Table $1$ shows the integration of the peaks from Figure $7$. This integration is performed by automated software provided with the instrument. It should be noted that the first pulse was completely consumed by the sample; this pulse was injected between 0 and 5 minutes. From Figure $7$ we observe that during the first four pulses, hydrogen is consumed by the sample. After the fourth pulse, the sample appears to stop consuming hydrogen. The experiment continues for a total of eight pulses, at which point the software determines that no consumption is occurring and stops the experiment. Pulse eight is designated the "saturation peak", meaning the pulse at which no hydrogen was consumed.
Pulse n Area
1 0
2 0.000471772
3 0.00247767
4 0.009846683
5 0.010348201
6 0.010030243
7 0.009967717
8 0.010580979
Table $1$ Hydrogen pulse chemisorption data.
Using \ref{3}, the change in area (Δarean) is calculated for each pulse peak area (arean) and compared to that of the saturation pulse area (areasaturation = 0.010580979). Each of these changes in area is proportional to an amount of hydrogen consumed by the sample in that pulse. Table $2$ shows the calculated changes in area.
$\Delta Area_{n}\ =\ Area_{saturation}\ -\ Area_{n} \label{3}$
Pulse n Arean ΔArean
1 0 0.010580979
2 0.000471772 0.0105338018
3 0.00247767 0.008103309
4 0.009846683 0.000734296
5 0.010348201 0.000232778
6 0.010030243 0.000550736
7 0.009967717 0.000613262
8 0.010580979 0
Table $2$ Hydrogen pulse chemisorption data with ΔArea.
The Δarean values are then converted into hydrogen gas consumption using \ref{4}, where Fc is the area-to-volume conversion factor for hydrogen and SW is the weight of the sample. Fc is equal to 2.6465 cm3/peak area. Table $3$ shows the volume adsorbed in each pulse and the cumulative volume adsorbed. Using the data in Table $3$, a series of calculations can now be performed in order to better understand the properties of our catalyst.
$V_{adsorbed}\ =\ \frac{\Delta Area_{n} \times F_{c}}{SW} \label{4}$
Pulse n arean Δarean Vadsorbed (cm3/g STP) Cumulative quantity (cm3/g STP)
1 0 0.0105809790 0.2800256 0.2800256
2 0.000471772 0.0105338018 0.2787771 0.558027
3 0.00247767 0.0081033090 0.2144541 0.7732567
4 0.009846683 0.0007342960 0.0194331 0.7926899
5 0.010348201 0.0002327780 0.0061605 0.7988504
6 0.010030243 0.0005507360 0.0145752 0.8134256
7 0.009967717 0.000613262 0.0162300 0.8296556
8 0.010580979 0 0.0000000 0.8296556
Table $3$ Includes the volume adsorbed per pulse and the cumulative volume adsorbed
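The bookkeeping in \ref{3} and \ref{4} is simple enough to script. The Python sketch below (not part of the original procedure) takes the peak areas of Table $1$ together with the Fc and sample weight quoted in the text; because the instrument software applies its own rounding and normalization, the printed values should be expected to match Tables $2$ and $3$ only approximately.

```python
# Sketch of the pulse-chemisorption bookkeeping in equations (3) and (4).
# Peak areas from Table 1; Fc and SW are the values quoted in the text.
areas = [0, 0.000471772, 0.00247767, 0.009846683,
         0.010348201, 0.010030243, 0.009967717, 0.010580979]

area_saturation = areas[-1]   # pulse 8, the saturation peak
Fc = 2.6465                   # cm3 (STP) per unit peak area
SW = 0.1289                   # sample weight, g

cumulative = 0.0
for n, area in enumerate(areas, start=1):
    delta = area_saturation - area          # equation (3)
    v_ads = delta * Fc / SW                 # equation (4), cm3/g STP
    cumulative += v_ads
    print(f"pulse {n}: dArea = {delta:.9f}, V_ads = {v_ads:.6f}, cumulative = {cumulative:.6f}")
```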
Gram Molecular Weight
Gram molecular weight is the weighted average molecular weight of the active metals in the catalyst. Since this is a monometallic catalyst, the gram molecular weight is equal to the molecular weight of palladium (106.42 g/mol). The GMWCalc is calculated using \ref{5}, where FN is the fraction of the sample weight for metal N and Watomic N is the gram molecular weight of metal N (g/g-mole). \ref{6} shows the calculation for this experiment.
$GMW_{Calc}\ =\ \frac{1}{(\frac{F_{1}}{W_{atomic\ 1}})\ +\ (\frac{F_{2}}{W_{atomic\ 2}})\ +\ ...\ +\ (\frac{F_{N}}{W_{atomic\ N}})} \label{5}$
$GMW_{Calc}\ =\ \frac{1}{(\frac{F_{1}}{W_{atomic\ Pd}})}\ =\ \frac{W_{atomic\ Pd}}{F_{1}}\ =\ \frac{106.42 \frac{g}{g-mole}}{1}\ =\ 106.42 \frac{g}{g-mole} \label{6}$
Metal Dispersion
The metal dispersion is calculated using \ref{7}, where PD is the percent metal dispersion, Vs is the volume adsorbed (cm3 at STP), SFCalc is the calculated stoichiometry factor (equal to 2 for a palladium-hydrogen system), SW is the sample weight and GMWCalc is the calculated gram molecular weight of the sample [g/g-mole]. Therefore, in \ref{8} we obtain a metal dispersion of 6.03%.
$PD\ =\ 100\ \times \ (\frac{V_{s} \times SF_{Calc}}{SW \times 22414}) \times GMW_{Calc} \label{7}$
$PD\ =\ 100\ \times \ (\frac{0.8296556 [cm^{3}]\ \times \ 2}{0.1289 [g]\ \times 22414 [\frac{cm^{3}}{mol}]})\ \times \ 106.42 [\frac{g}{g-mol}]\ =\ 6.03\% \label{8}$
Metallic Surface Area per Gram of Metal
The metallic surface area per gram of metal is calculated using \ref{9}, where SAMetallic is the metallic surface area (m2/g of metal), SWMetal is the active metal weight, SFCalc is the calculated stoichiometric factor and SAPd is the cross sectional area of one palladium atom (nm2). Thus, in \ref{10} we obtain a metallic surface area of 2420.99 m2/g-metal.
$SA_{Metallic}\ =\ (\frac{V_{S}}{SW_{Metal}\ \times \ 22414})\ \times \ (SF_{Calc})\ \times \ (6.022\ \times \ 10^{23})\ \times \ SA_{Pd} \label{9}$
$SA_{Metallic}\ =\ (\frac{0.8296556\ [cm^{3}]}{0.001289\ [g_{metal}]\ \times \ 22414\ [\frac{cm^{3}}{mol}]})\ \times \ (2)\ \times \ (6.022\ \times \ 10^{23}\ [\frac{atoms}{mol}])\ \times \ 0.07\ [\frac{nm^{2}}{atom}]\ =\ 2420.99\ [\frac{m^{2}}{g-metal}] \label{10}$
Active Particle Size
The active particle size is estimated using \ref{11}, where DCalc is the palladium metal density (g/cm3), SWMetal is the active metal weight, GMWCalc is the calculated gram molecular weight (g/g-mole), and SAMetallic is the metallic surface area per gram of metal (m2/g). As seen in \ref{12}, we obtain an active particle size of 2.88 nm.
$APS\ =\ \frac{6}{D_{Calc}\ \times \ (\frac{SW_{Metal}}{GMW_{Calc}})\ \times \ (6.022\ \times \ 10^{23})\ \times \ SA_{Metallic}} \label{11}$
$APS\ =\ \frac{600}{(1.202\ \times \ 10^{-20} [\frac{g_{Pd}}{nm^{3}}])\ \times \ (\frac{0.001289\ [g]}{106.42\ [\frac{g_{Pd}}{mol}]})\ \times \ (6.022\ \times \ 10^{23}\ [\frac{atoms}{mol}])\ \times \ (2420.99\ [\frac{m^{2}}{g-Pd}])} \ =\ 2.88\ nm \label{12}$
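The chain of results in \ref{5} to \ref{12} can be reproduced with a few lines of code. The sketch below simply transcribes those equations using the inputs quoted in the text (cumulative uptake, sample weight, 1 wt% loading, stoichiometry factor of 2, the 0.07 nm2 cross-section used in \ref{10}, and the density used in \ref{12}); the printed values reproduce the worked examples closely, with small differences that come from rounding of the intermediate values quoted in the text.

```python
# Transcription of equations (5)-(12) with the inputs used in the text.
V_s      = 0.8296556     # cumulative H2 uptake, cm3 STP
SW       = 0.1289        # total sample weight, g
SW_metal = SW * 0.01     # active metal weight for a 1 wt% loading, g
GMW      = 106.42        # g/g-mole, equation (6) for a monometallic Pd catalyst
SF       = 2             # stoichiometry factor for the Pd-H system
SA_Pd    = 0.07          # cross-sectional area of one Pd atom, nm2 (value used in eq. 10)
N_A      = 6.022e23
D        = 1.202e-20     # Pd density expressed in g/nm3 (12.02 g/cm3)

PD = 100 * (V_s * SF / (SW * 22414)) * GMW                           # equation (7), percent
SA_metallic = (V_s / (SW_metal * 22414)) * SF * N_A * SA_Pd * 1e-18  # eq. (9), nm2/g -> m2/g
APS = 600 / (D * (SW_metal / GMW) * N_A * SA_metallic)               # equation (12) as printed, nm

print(f"metal dispersion      = {PD:.2f} %")              # ~6.1 %
print(f"metallic surface area = {SA_metallic:.0f} m2/g")  # ~2421 m2/g-metal
print(f"active particle size  = {APS:.2f} nm")            # ~2.8 nm
```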
In a commercial instrument, a summary report is provided that collects the properties of the catalytic material (Table $4$). All the equations used in this example were taken from the AutoChem 2920 User's Manual.
Properties Value
Palladium atomic weight 106.4 g/mol
Atomic cross sectional area 0.0787 nm2
Metal Density 12.02 g/cm3
Palladium loading 1 wt %
Metal dispersion 6.03 %
Metallic surface area 2420.99 m2/g-metal
Active particle diameter (hemisphere) 2.88 nm
Table $4$ Summary report provided by Micromeritics AutoChem 2920. | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.04%3A_Catalyst_Characterization_Using_Thermal_Conductivity_Detector.txt |
Overview
The working principle of a quartz crystal microbalance with dissipation (QCM-D) module is the utilization of the resonance properties of piezoelectric materials. A piezoelectric material is a material that develops an electrical field when a mechanical strain is applied; conversely, an applied electrical field produces a mechanical strain in the material. The material used is α-SiO2, which produces a very stable and constant frequency. The direction and magnitude of the mechanical strain are directly dependent on the direction of the applied electrical field and the inherent physical properties of the crystal.
A special crystal cut is used, called the AT-cut, which is obtained as wafers of the crystal about 0.1 to 0.3 mm thick and 1 cm in diameter. The AT-cut is obtained when the wafer is cut at 35.25° to the main crystallographic axis of SiO2. This special cut allows only one vibration mode, the shear mode, to be accessed and thus exploited for analytical purposes. When an electrical field is applied to the crystal wafer via metal electrodes that are vapor-deposited on the surface, a mechanical shear is produced and maintained as long as the electrical field is applied. Since this electric field can be controlled by opening and closing an electrical circuit, a resonance is established within the crystal (Figure $1$).
Since the frequency of the resonance depends on the characteristics of the crystal, an increase in mass (for example, when a sample is loaded onto the sensor) changes the resonance frequency. This relation, \ref{1}, was obtained by Sauerbrey in 1959, where Δm (ng.cm-2) is the areal mass, C (17.7 ng cm-2 Hz-1) is the vibrational constant (accounting for the shear, effective area, etc.), n is the resonant overtone order, and Δf is the change in frequency. The change in frequency can be related directly to the change in mass deposited on the sensor only when three conditions are met and assumed:
• The mass deposited is small compared to the mass of the sensor
• It is rigid enough so that it vibrates with the sensor and does not suffer deformation
• The mass is evenly distributed among the surface of the sensor
$\Delta m\ =\ -C\frac{1}{n}\Delta f \label{1}$
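As a quick illustration of \ref{1}, the sketch below converts a hypothetical frequency shift recorded at the third overtone into an areal mass using the constant C = 17.7 ng cm-2 Hz-1 quoted above. The frequency shift, overtone order, and sensor area are placeholders, not data from a real run.

```python
# Sauerbrey conversion (equation 1): areal mass from a measured frequency shift.
C = 17.7          # ng cm^-2 Hz^-1, mass-sensitivity constant quoted in the text
n = 3             # overtone order at which the shift was recorded (hypothetical)
delta_f = -45.0   # Hz, measured frequency shift (hypothetical placeholder)

delta_m = -C * delta_f / n            # ng/cm2; a negative shift means mass uptake
sensor_area = 1.0                     # cm2, nominal active area of a standard sensor
total_mass = delta_m * sensor_area    # ng deposited on the sensor

print(f"areal mass = {delta_m:.1f} ng/cm2")
print(f"total mass = {total_mass:.1f} ng over {sensor_area} cm2")
```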
An important addition in recent equipment is the use of the dissipation factor. The inclusion of the dissipation factor takes into account the damping of the oscillation as it travels through the newly deposited mass. In a rigid layer the oscillation travels through the newly formed mass essentially undamped, and thus the dissipation is not important. On the other hand, when the deposited material has a soft consistency, the dissipation of the oscillation increases. This effect can be monitored and related directly to the nature of the mass deposited.
The applications of QCM-D range from the deposition of nanoparticles onto a surface to the interaction of proteins with certain substrates. It can also monitor the amounts of products bacteria generate when fed different molecules, since the sensors are flexible in what can be deposited on them, including nanoparticles, special functionalization, or even cells and bacteria.
Experimental Planning
In order to use QCM-D to study the interaction of nanoparticles with a specific surface, several steps must be followed. For demonstration purposes, the following procedure describes the use of a Q-Sense E4 with autosampler from Biolin Scientific. A summary is shown below as a quick guide to follow; further details are explained afterwards:
• Surface selection and cleaning according to the manufacturer's recommendations
• Sample preparation, including making the correct dilutions and having enough sample for the running experiment
• Equipment cleaning and set-up of the correct parameters for the experiment
• Data acquisition
• Data interpretation
Surface Selection
The decision of which sensor surface to use is the most important decision to make for each study. Biolin has a large library of available coatings, ranging from different compositions of pure elements and oxides (Figure $2$) to specific binding proteins. It is important to take into account the different chemistries of the sensors and the results we are looking for. For example, studying a protein with high sulfur content on a gold sensor can lead to false deposition results, as gold and sulfur have a high affinity to form bonds. For the purpose of this example, a gold-coated sensor will be used in the remainder of the discussion.
Sensor Cleaning
Since QCM-D relies on the amount of mass that is deposited onto the surface of the sensor, a thorough cleaning is needed to ensure there are no contaminants on the surface that could lead to errors in the measurement. The procedure the manufacturer established to clean a gold sensor is as follows:
1. Put the sensor in the UV/ozone chamber for 10 minutes
2. Prepare 10 mL of a 5:1:1 solution of hydrogen peroxide:ammonia:water
3. Submerge the sensor in this solution at 75 °C for 5 minutes
4. Rinse with copious amount of milliQ water
5. Dry with inert gas
6. Put the sensor in the UV/ozone chamber for 10 minutes as shown in Figure $3$.
Once the sensors are clean, extreme caution should be taken to avoid contamination of the surface. The sensors can be loaded into the flow chamber of the equipment, making sure that the T-mark of the sensor matches the T-mark of the chamber so that the electrodes are in constant contact. The correct position is shown in Figure $4$.
Sample Preparation
As the upper limit of mass that can be detected is merely micrograms, solutions must be prepared accordingly. For a typical run, a buffer solution in which the deposition will be studied is needed, as well as the sample itself and a 2% solution of sodium dodecyl sulfate [CH3(CH2)10CH2OSO3Na, SDS]. For this example we will be using nanoparticles of magnetic iron oxide (nMag) coated with PAMS, and as a buffer, 8% NaCl in DI water.
• For the nanoparticles sample it is necessary to make sure the final concentration of the nanoparticles will not exceed 1 mM.
• For the buffer solution, it is enough to dissolve 8 g of NaCl in DI water.
• For the SDS solution, 2 g of SDS should be dissolved very slowly in approximately 200 mL of DI water; 100 mL aliquots of DI water are then added until the volume is 1 L. This is done in order to avoid the formation of bubbles and foam in the solution.
Instrument Preparation
Due to the sensitivity of the equipment, it is important to rinse and clean the tubing before loading any sample or performing any experiments. To rinse the tubing and the chambers, use a 2% solution of SDS. For this purpose, a cycle is programmed in the autosampler with the steps shown in Table $1$.
Step Duration (min) Speed (μL/min) Volume (mL)
DI water (2:2) 10 100 1
SDS (1:1) 20 300 6
DI water (1:2) 10 100 1
Table $1$ Summary of cleaning processes.
Once the equipment is cleaned, it is ready to perform an experiment. A second program is loaded in the autosampler with the parameters shown in Table $2$.
Step Duration (min) Speed (μL/min) Volume (mL)
Buffer (1:3) 7 100 0.7
Nanoparticles 30 100 3.0
Table $2$ Experimental set-up
The purpose of flowing the buffer in the beginning is to provide a background signal to take into account when running the samples. Usually a small quantity of the sample is loaded into the sensor at a very slow flow rate in order to let the deposition take place.
Data Acquisition
Example data obtained with the above parameters are shown in Figure $5$. The blue squares depict the change in frequency. As the experiment continues, the frequency decreases as more mass is deposited. On the other hand, the dissipation (shown as the red squares) increases, reflecting both the growing thickness of the layer and a certain loss of rigidity at the top of the sensor. To illustrate the different steps of the experiment, each section has been color coded. The blue part of the data corresponds to the flow of the buffer, while the yellow part corresponds to the deposition equilibrium of the nanoparticles onto the gold surface. After a certain length of time, equilibrium is reached and there is no further change. Once the signal shows no change for about five minutes, it is safe to say the deposition is complete.
Instrument Clean-up
As a preventive care measure for the equipment, the same cleaning procedure should be followed as was done before loading the sample. Use of a 2% solution of SDS helps to ensure the equipment remains as clean as possible.
Data Modeling
Once the data have been obtained, QTools (software available in the software suite of the equipment) can be used to convert the change in frequency to areal mass via the Sauerbrey equation, \ref{1}. The corresponding plot of areal mass shows how the mass increases as the nMag is deposited on the surface of the sensor. The blue section again illustrates the part of the experiment where only buffer was flowing through the chamber. The yellow part illustrates the deposition, while the green part shows no change in the mass after a period of time, which indicates the deposition is finished. The conversion from areal mass to mass is a simple process, as gold sensors come with a defined area of 1 cm2, but a more careful measurement of the area should be made when using functionalized sensors.
It is important to take into account the limitations of the Sauerbrey equation, because the equation assumes a uniform layer on top of the surface of the sensor. Deviations due to clusters of material deposited in one place or the formation of partial multilayers on the sensor cannot be captured by this model. Further characterization of the surface should be done to obtain a more accurate model of the phenomena. | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.05%3A_Nanoparticle_Deposition_Studies_Using_a_Quartz_Crystal_Microbalance.txt |
• 10.1: A Simple Test Apparatus to Verify the Photoresponse of Experimental Photovoltaic Materials and Prototype Solar Cells
One of the problems associated with testing a new unproven photovoltaic material or cell design is that significant processing is required in order to create a fully functioning solar cell. If it is desired to screen a wide range of materials or synthetic conditions, it can be time consuming (and costly of research funds) to prepare fully functioning devices. In addition, the success of each individual cell may be more dependent on fabrication steps not associated with the variations under study.
• 10.2: Measuring Key Transport Properties of FET Devices
As research interests begin to focus on progressively smaller dimensions, the need for nanoscale characterization techniques has seen a steep rise in demand. In addition, the wide scope of nanotechnology across all fields of science has perpetuated the application of characterization techniques to a multitude of disciplines. Dual polarization interferometry (DPI) is an example of a technique developed to solve a specific problem, but was expanded and utilized to characterize fields ranging surfa
10: Device Performance
Introduction
One of the problems associated with testing a new unproven photovoltaic material or cell design is that significant processing is required in order to create a fully functioning solar cell. If it is desired to screen a wide range of materials or synthetic conditions, it can be time consuming (and costly of research funds) to prepare fully functioning devices. In addition, the success of each individual cell may be more dependent on fabrication steps not associated with the variations under study. For example, lithography and metallization could cause more variability than the parameters of the materials synthesis. Thus, the result could be to give no useful information as to the viability of each material under study, or, even worse, a false indication of research direction.
So-called quick and dirty qualitative measurements can be employed to assess not only the relative photoresponse of new absorber layer materials, but also the relative power output of photovoltaic devices. The measurement procedure can provide a simple, inexpensive and rapid evaluation of cell materials and structures that can help guide the development of new materials for solar cell applications.
Equipment Needs
Everything needed for the measurements can be purchased at a local electronics store and a hardware or big box store. Needed items are:
• Two handheld digital voltmeters with at least ±0.01 mV sensitivity (0.001 mV is better, of course).
• A simple breadboard and associated wiring kit.
• A selection of standard size and wattage resistors (1/8 - 1 Watt, 1 - 1000 ohms).
• A selection of wire wound potentiometers (0 - 10 ohms; 0 - 100 ohms; 0 - 1000 ohms) if I-V tracing is desired.
• A light source. This can be anything from a simple flood light to an old slide projector.
• A small fan or other cooling device for “steady state” (i.e., for measurements that last more than a few seconds such as tracing an I-V curve).
• 9 volt battery and holder or simple ac/dc low voltage power supply.
Measurement of the Photo-response of an Experimental Solar Cell
A qualitative measurement of a solar cell’s current-voltage (I-V) characteristics can be obtained using the simple circuit diagram illustrated in Figure $1$. Figure $2$ shows an I-V test setup using a household flood lamp for the light source. A small fan sits to the right just out of the picture.
Driving the potentiometer to its maximum value will place the cell close to open circuit operation, depending on the potentiometer range, so that the open circuit voltage can be simply extrapolated from the I versus V curve. If desired, the circuit can simply be opened to make the actual measurement once the rest of the data have been recorded. Data in this case were simply recorded by hand and later entered into a spreadsheet so an I-V plot could be generated. A sample plot is shown in Figure $3$. Keep in mind that cell efficiency cannot be determined with this technique unless the light source has been calibrated and color corrected to match terrestrial sunlight. The fact that the experimental device actually generated net power was the result sought. The shape of the curve and the very low voltage are the result of very large resistive losses in the device along with a very “leaky” junction.
One improvement that can be made to the above system is to replace the floodlight with a simple slide projector. The floodlight will typically have a spectrum very heavily weighted in the red and infrared and will be deficient in the shorter wavelengths. Though still not a perfect match to the solar spectrum, the slide projector does at least have more output at the shorter wavelengths; at the same time it will have less IR output compared to the floodlight and the combination should give a somewhat more representative response. A typical set up is shown in Figure $4$.
The mirror in Figure $5$ serves two purposes. First, it turns the beam so the test object can be laid flat on a measurement bed, and second, it collimates and concentrates the beam by focusing it on a smaller area, giving a better approximation of terrestrial solar intensity over a range of intensities such as AM2 (air mass 2) through AM0 (Figure $5$). An estimate of the intensity can be made using a calibrated silicon solar cell of the sort that can be purchased online from any of several scientific hobby shops such as Edmunds Scientific. While still far from enabling a quantitative measurement of device output, the technique will at least provide indications within a ballpark range of actual cell efficiency.
Figure $6$ shows a measurement made with the test device placed at a distance from the mirror for which the intensity was previously determined to be equivalent to AM1 solar intensity, or 1000 watts per square meter. Since the beam passes through the projector lens and reflects from the second surface of the slightly concave mirror, there is essentially no UV light left in the beam that could be harmful to the naked eye. Still, if this technique is used, it is recommended that observations be made through a piece of ordinary glass such as eyeglasses or even a small glass shield inserted for that purpose. The blue area in the figure represents the largest rectangle that can be drawn under the curve and gives the maximum output power of the cell, which is simply the product of the current and voltage at maximum power.
Figure $6$ is a plot of current density, obtained by dividing the current from the device by its area. It is common to normalize the output in this manner.
If the power density of the incident light (P0) is known in W/cm2, the device efficiency can be obtained by dividing the maximum power (as determined from Im and Vm) by the incident power density times the area of the cell (Acell), \ref{1}.
$\eta \ =\ I_{m}V_{m}/P_{0}A_{cell} \label{1}$
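If the I-V pairs have been entered into a spreadsheet, the same maximum-power search and efficiency estimate can be done in a few lines. The sketch below uses made-up current-voltage readings and an assumed AM1 intensity and cell area; all numbers are placeholders for whatever was actually recorded.

```python
# Locate the maximum-power point of a hand-recorded I-V sweep and estimate efficiency.
V = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]                # volts (placeholder data)
I = [9.5e-3, 9.3e-3, 8.9e-3, 8.1e-3, 6.5e-3, 3.8e-3, 0.0]     # amperes (placeholder data)

P = [v * i for v, i in zip(V, I)]                  # power at each bias point
idx = max(range(len(P)), key=lambda k: P[k])       # index of the maximum-power point
Vm, Im, Pm = V[idx], I[idx], P[idx]

P0 = 0.1         # incident power density, W/cm2 (AM1, i.e., 1000 W/m2)
A_cell = 1.0     # illuminated cell area, cm2 (placeholder)

efficiency = Pm / (P0 * A_cell)                    # the efficiency expression above
print(f"Vm = {Vm:.2f} V, Im = {Im*1e3:.2f} mA, Pm = {Pm*1e3:.3f} mW")
print(f"efficiency ~ {efficiency*100:.2f} %")
```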
Measurement of the Photoconductivity of Experimental Photovoltaic Materials
In many cases it is beneficial to determine the photoconductivity of a new material prior to cell fabrication. This allows for the rapid screening of materials or synthesis variable of a single material even before issues of cell design and construction are considered.
Figure $7$ shows the circuit diagram of a simple photoconductivity test made with a slightly different set up compared to that shown above. In this case a voltage is placed across the sample after it has been connected to a resistor placed in series with the sample. A simple 9 V battery secured with a battery holder or a small ac to dc power converter can be used to supply the voltage. The sample and resistor sit inside a small box with an open top.
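The quantity extracted from this circuit, as described in the next paragraph, is simply the difference between the resistor voltage measured dark and illuminated. The sketch below shows that arithmetic; the supply voltage, resistor value, and readings are hypothetical placeholders.

```python
# Photocurrent and photoconductance change from the series-resistor voltages.
V_supply = 9.0      # V, battery or supply voltage
R_series = 10.0     # ohm, series resistor

V_dark  = 0.012     # V across the resistor with the shutter in place (placeholder)
V_light = 0.085     # V across the resistor under illumination (placeholder)

I_dark  = V_dark / R_series
I_light = V_light / R_series
photocurrent = I_light - I_dark

# Sample voltage is the supply minus the resistor drop; conductance G = I / V_sample.
G_dark  = I_dark  / (V_supply - V_dark)
G_light = I_light / (V_supply - V_light)

print(f"photocurrent = {photocurrent*1e3:.2f} mA")
print(f"change in photoconductance = {(G_light - G_dark)*1e6:.1f} uS")
```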
The voltage across (in this case) the 10 ohm resistor was measured with a shutter held over the sample (a simple piece of cardboard sitting on the top of the box) and with the shutter removed. The difference in voltage is a direct indication of the change in the photoconductance of the sample, and again this is a very quick and simple test to see whether the material being developed does indeed have a photoresponse of some sort without having to make a full device structure. Adjusting the position of the light source so that the incident light power density at the sample surface is 200, 500, or 1000 W/m2 enables an approximate numerical estimate of the photocurrent that was generated and again can help guide the development of new materials for solar cell applications. The results from such a measurement are shown in Figure $8$ for a sample of carbon nanotubes (CNT) coated with CdSe by liquid phase deposition (LPD). | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/10%3A_Device_Performance/10.01%3A_A_Simple_Test_Apparatus_to_Verify_the_Photoresponse_of_Experimental_Photovoltaic_Materials_and_Prototype_Solar_.txt |
Field Effect Transistors
Arguably the most important invention of modern times, the transistor was invented in 1947 at Bell Labs by John Bardeen, William Shockley, and Walter Brattain. It was the result of efforts to replace inefficient and bulky vacuum tubes in current regulation and switching functions. Further advances in transistor technology led to the field effect transistor (FET), the bedrock of modern electronics. FETs operate by utilizing an electric field to control the flow of charge carriers along a channel, analogous to a water valve controlling the flow of water in your kitchen sink. The FET consists of three terminals, a source (S), drain (D), and gate (G). The region between the source and drain is called the channel. The conduction in the channel depends on the availability of charge carriers, which is controlled by the gate voltage. A typical schematic is depicted alongside the associated cross-section of a FET (Figure $1$), with the source, drain, and gate terminals labeled. FETs come in a variety of flavors depending on their channel doping (leading to enhancement and depletion modes) and gate types, as seen in Figure $2$. The two FET types are junction field effect transistors (JFETs) and metal oxide semiconductor field effect transistors (MOSFETs).
JFET Fundamentals
Junction field effect transistors (JFETs), as their name implies, utilize a PN-junction to control the flow of charge carriers. The PN-junction is formed when opposing doping schemes are brought together on both sides of the channel. The doping schemes can be made either p-type (holes) or n-type (electrons) by doping with boron/gallium or phosphorus/arsenic, respectively. The n-channel JFET consists of pnp junctions where the source and drain are n-doped and the gate is p-doped. Figure $4$ shows the cross section of an n-channel JFET in the "ON" state obtained by applying a positive drain-source voltage in the absence of a gate-source voltage. Alternatively, the p-channel JFET consists of npn junctions where the source and drain are p-doped and the gate is n-doped. For the p-channel device, a negative drain-source voltage is applied in the absence of a gate voltage to turn "ON" the npn device, as seen in Figure $5$. Since JFETs are "ON" when no gate-source voltage is applied, they are called depletion mode devices, meaning that a depletion region is required to turn "OFF" the device. This is where the PN-junction comes into play. The PN-junction works by enabling a depletion region to form in which electrons and holes combine, leaving behind positive and negative ions which inhibit further charge transfer as well as depleting the availability of charge carriers at the interface. This depletion region is pushed further into the channel by applying a gate-source voltage. If the voltage is sufficient, the depletion regions on either side of the channel "pinch off" the flow through the channel and the device is "OFF". This voltage is called the pinch-off voltage, VP. The n-channel VP is obtained by increasing the gate-source voltage in the negative direction, while the p-channel VP is obtained by increasing the gate-source voltage in the positive direction.
MOSFET Fundamentals
The metal oxide semiconductor field effect transistor (MOSFET) utilizes an oxide layer (typically SiO2) to isolate the gate from the source and drain. The thin layer of oxide prevents the flow of current to the gate, but enables an electric field to be applied to the channel which regulates the flow of charge carriers through the channel. MOSFETs, unlike JFETs, can operate in depletion or enhancement mode, characterized by their ON or OFF state at zero gate-source voltage, VGS.
For depletion mode MOSFETs the device is "ON" when VGS is zero as a result of the device's structure and doping scheme. The n-channel depletion mode MOSFET consists of heavily n-doped source and drain terminals on top of a p-doped substrate. Underneath an insulating oxide layer there is a thin layer of n-type silicon which allows charge carriers to flow in the absence of a gate voltage. When a negative voltage is applied to the gate, a depletion region forms inside the channel. If the gate voltage is sufficient, the depletion region pinches off the flow of electrons.
For enhancement mode MOSFETs the ON state is attained by applying a gate voltage in the direction of the drain voltage; a positive voltage for n-channel enhancement MOSFETs, and a negative voltage for p-channel enhancement MOSFETs. The term “enhancement” is derived from the increase in conductivity seen by applying a gate voltage. This increase in conductivity is enabled by an inversion layer induced by the applied electric field at the gate as shown in Figure $7$ for n-channel enhancement mode MOSFETs and Figure $8$ for p-channel enhancement mode MOSFETs respectively.
The thickness of this inversion layer is controlled by the magnitude of the gate voltage. The minimum voltage required to form the inversion layer is called the gate-to-source threshold voltage, VT. In the case of n-channel enhancement mode MOSFETs, the “ON” state is reached when VGS > VT and a positive drain-source voltage, VDS, is applied. If the VGS is too low, then increasing the VDS further results only in increasing the depletion region around the drain. The p-channel enhancement mode MOSFETs operate similarly except that the voltages are reversed. Specifically, the “ON” state occurs when VGS < VT and a negative drain-source voltage is applied.
Measurement of key FET Parameters
In both an academic and industrial setting characterization of FETs is beneficial for determining device performance. Identifying the quality and type of FET can easily be addressed by measuring the transport characteristics under different experimental conditions utilizing a semiconductor characterization system (SCS). By analyzing the V-I characteristics through what are called voltage sweeps, the following key device parameters can be determined:
Pinch off Voltage Vp
The voltage needed to turn “OFF” a JFET. When designing circuits it is essential that the pinch-off voltage be determined to avoid current leakage which can dramatically reduce performance.
Threshold Voltage VT
The voltage needed to turn “ON” a MOSFET. This is a critical parameter in effective circuit design.
Channel Resistance RDS
The resistance between the drain and source in the channel. This influences the amount of current being transferred between the two terminals.
Power Dissipation PD
The power dissipation determines the amount of heat generated by the transistor. This becomes a real problem since the transport properties deteriorate as the channel is heated.
Effective Charge Carrier Mobility µn
The charge carrier mobility determines how quickly the charge carrier can move through the channel. In most cases higher mobility leads to better device performance. The mobility can also be used to gauge the impurity, defect, temperature, and charge carrier concentrations.
Transconductance gain gm (transfer admittance)
The gm is a measure of gain or amplification of a current for a given change in gate voltage. This is critical for amplification type electronics.
Equipment Needs
PC with Keithley Interactive Test Environment (KITE) software.
Semiconductor characterization system (Keithley 4200-SCS or equivalent).
Probe station.
Probe tips.
Protective gloves.
Measurement (V-I) Characteristics
The Semiconductor Characterization System is an automated system that provides both (V-I) and (V-C) characterization of semiconductor devices and test structures. The advanced digital sweep parameter analyzer provides sub-micron characterization with accuracy and speed. This system utilizes the Keithley Interactive Test Environment (KITE) software designed specifically for semiconductor characterization.
Procedure
1. Connect the probe tips to the probe station. Then attach the banana plugs from the probe station to the BNC connector, making sure not to connect to ground.
2. Select the appropriate connections for your test from Table $1$
3. Place your transistor sample on the probe station, but don’t let the probe tips touch the sample to prevent possible electric shock(during power up, the SMU may momentarily output high voltage).
4. Turn on power located on the lower right of the front panel. The power up sequence may take up to 2 minutes.
5. Start KITE software. Figure $9$ shows the interface window.
6. Select the appropriate setup from the Project Tree drop down (top left).
7. Match the Definition tab terminal connections to the physical connections of probe tips. If connection is not yet matched you can assign/reassign the terminal connections by using the arrow key next to the instrument selection box that displays a list of possible connections. Select the connection in the instrument selection box that matches the physical connection of the device terminal.
8. Set the Force Measure settings for each terminal. Fill in the necessary function parameters such as start, stop, step size, range, and compliance. For typical voltage sweeps you’ll want to force the voltage between the drain and source while measuring the current at the drain. Make sure to conduct several voltage sweeps at various forced gate voltages to aid in the analysis.
9. Check the current box/voltage box if you desire the current/voltage to be recorded in the Sheet tab Data worksheet and be available for plotting in the Graph tab.
10. Now make contact to your sample with the probe tips
11. Run the measurement setup by clicking the green Run arrow on the tool bar located above the Definition tab. Make sure the measuring indicator light at bottom right hand corner of the front panel is lit.
12. Save data by clicking on the Sheet tab then selecting the Save As tab. Select the file format and location.
Connection Description
SMU1 Medium power with low noise preamplifier
SMU2 Medium power source without preamplifier
SMU3 High Power
GNRD For large currents
Table $1$ Connection selection.
Measurement Analysis
Typical V-I Characteristics of JFETs
Voltage sweeps are a great way to learn about the device. Figure $10$ shows a typical plot of drain-source voltage sweeps at various gate-source voltages while measuring the drain current, ID, for an n-channel JFET. The V-I characteristics have four distinct regions. Analysis of these regions can provide critical information about the device characteristics such as the pinch-off voltage, VP, transconductance gain, gm, drain-source channel resistance, RDS, and power dissipation, PD.
Ohmic Region (Linear Region)
This region is bounded by VDS < VP. Here the JFET begins to conduct a drain current with a linear response to the voltage, behaving like a variable resistor. In this region the drain-source channel resistance, RDS, is modeled by \ref{1}, where ΔVDS is the change in drain-source voltage, ΔID is the change in drain current, and gm is the transconductance gain. Solving for gm results in \ref{2}.
$R_{DS}\ =\ \frac{\Delta V_{DS}}{\Delta I_{D}}\ =\ \frac{1}{g_{m}} \label{1}$
$g_m\ =\ \frac{\Delta I_{D}}{\Delta V_{DS}}\ =\ \frac{1}{R_{DS}} \label{2}$
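In practice, RDS and gm for the ohmic region come from the slope of the low-VDS portion of a sweep recorded at fixed VGS. The sketch below fits a straight line through hypothetical data points; the numbers stand in for a sweep exported from the KITE Sheet tab.

```python
# Estimate the ohmic-region channel resistance R_DS (and g_m = 1/R_DS, equations 1-2)
# from the slope of the low-V_DS points of an I_D vs V_DS sweep at fixed V_GS.
V_DS = [0.00, 0.05, 0.10, 0.15, 0.20]             # volts (placeholder data)
I_D  = [0.0, 0.48e-3, 0.95e-3, 1.44e-3, 1.90e-3]  # amperes (placeholder data)

n = len(V_DS)
mean_v = sum(V_DS) / n
mean_i = sum(I_D) / n
slope = sum((v - mean_v) * (i - mean_i) for v, i in zip(V_DS, I_D)) / \
        sum((v - mean_v) ** 2 for v in V_DS)      # least-squares dI_D/dV_DS

g_m  = slope          # equation (2)
R_DS = 1.0 / slope    # equation (1)

print(f"R_DS ~ {R_DS:.1f} ohm, g_m ~ {g_m*1e3:.2f} mS")
```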
Saturation Region
This is the region where the JFET is completely "ON". The maximum amount of current is flowing for the given gate-source voltage. In this region the drain current can be modeled by \ref{3}, where ID is the drain current, IDSS is the maximum current, VGS is the gate-source voltage, and VP is the pinch-off voltage. Solving for the pinch-off voltage results in \ref{4}.
$I_{D}\ =\ I_{DSS}(1\ -\ \frac{V_{GS}}{V_{P}})^{2} \label{3}$
$V_{P}\ =\ \frac{V_{GS}}{1\ -\ \sqrt{\frac{I_{D}}{I_{DSS}}}} \label{4}$
Breakdown Region
This region is characterized by a sudden increase in current. The drain-source voltage supplied exceeds the resistive limit of the semiconducting channel, causing the transistor to break down and conduct an uncontrolled current.
Pinch-off Region (Cutoff Region)
In this region the gate-source voltage is sufficient to restrict the flow through the channel, in effect cutting off the drain current. The power dissipation, PD, can be calculated for any region from Ohm's law (I = V/R) using \ref{5}.
$P_{D}\ =\ I_{D}\ \times \ V_{DS}\ =\ (I_{D})^{2}\ \times \ R_{DS}\ =\ (V_{DS})^{2}/R_{DS} \label{5}$
The p-channel JFET V-I characteristics behave similarly except that the voltages are reversed. Specifically, the pinch off point is reached when the gate-source voltage is increased in a positive direction, and the saturation region is met when the drain-source voltage is increased in the negative direction.
Typical V-I Characteristics of MOSFETs
Figure $11$ shows a typical plot of drain-source voltage sweeps at various gate-source voltages while measuring the drain current, ID for an ideal n-channel enhancement MOSFET. Like JFETs, the V-I characteristics of MOSFETS have distinct regions that provide valuable information about device transport properties.
Ohmic Region (Linear Region)
The n-channel enhancement MOSFET behaves linearly, acting like a variable resistor, when the gate-source voltage is greater than the threshold voltage and the drain-source voltage is smaller than the difference between the two (VDS < VGS - VT). In this region the drain current can be modeled by \ref{6}, where ID is the drain current, VGS is the gate-source voltage, VT is the threshold voltage, VDS is the drain-source voltage, and k is the geometric factor described by \ref{7}, where µn is the charge-carrier effective mobility, COX is the gate oxide capacitance, W is the channel width, and L is the channel length.
$I_{D}\ =\ 2k\left[(V_{GS}-V_{T})V_{DS}\ -\ \frac{(V_{DS})^{2}}{2}\right] \label{6}$
$k\ =\ \mu _{n} C_{OX} \frac{W}{L} \label{7}$
Saturation Region
In this region the MOSFET is considered fully “ON”. The drain current for the saturation region is modeled by \ref{8}. The drain current is mainly influenced by the gate-source voltage, while the drain-source voltage has no effect.
$I_{D}\ =\ k(V_{GS}\ -\ V_{T})^{2} \label{8}$
Solving for the threshold voltage VT results in \ref{9}.
$V_{T}\ =\ V_{GS}\ -\ \sqrt{\frac{I_{D}}{k}} \label{9}$
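A common way to use \ref{8} and \ref{9} in practice is to plot the square root of ID against VGS for saturation-region data: the slope gives the square root of k and the voltage-axis intercept gives VT. The sketch below performs that fit on hypothetical data points; it illustrates the extraction, not actual instrument output.

```python
import math

# Extract V_T and k from saturation-region data using sqrt(I_D) = sqrt(k)*(V_GS - V_T),
# a rearranged form of equation (8). The data points are hypothetical placeholders.
V_GS = [1.2, 1.4, 1.6, 1.8, 2.0]                      # volts
I_D  = [0.08e-3, 0.32e-3, 0.72e-3, 1.28e-3, 2.00e-3]  # amperes (saturation region)

y = [math.sqrt(i) for i in I_D]
n = len(V_GS)
mean_x = sum(V_GS) / n
mean_y = sum(y) / n
slope = sum((x - mean_x) * (yy - mean_y) for x, yy in zip(V_GS, y)) / \
        sum((x - mean_x) ** 2 for x in V_GS)
intercept = mean_y - slope * mean_x

k   = slope ** 2              # geometric factor of equation (7)
V_T = -intercept / slope      # voltage-axis intercept of the fitted line

print(f"V_T ~ {V_T:.2f} V, k ~ {k*1e3:.3f} mA/V^2")
```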
Pinch-off Region (Cutoff Region)
When the gate-source voltage, VGS, is below the threshold voltage, VT, the charge carriers in the channel are not available, "cutting off" the charge flow. Power dissipation for MOSFETs can also be calculated for any region using \ref{5}, as in the JFET case.
FET V-I Summary
The typical I-V characteristics for the whole family of FETs seen in Figure $11$ are plotted in Figure $12$.
From Figure $12$ we can see how the doping schemes that lead to enhancement and depletion are displaced along the VGS axis. In addition, from the plot the ON or OFF state can be determined for a given gate-source voltage, where (+) is positive, (0) is zero, and (-) is negative, as seen in Table $1$.
Table $1$: The ON/OFF state for the various FETs at given gate-source voltages, where (-) is a negative voltage and (+) is a positive voltage.
FET Type VGS = (-) VGS = 0 VGS = (+)
n-channel JFET OFF ON ON
p-channel JFET ON ON OFF
n-channel depletion MOSFET OFF ON ON
p-channel depletion MOSFET ON ON OFF
n-channel enhancement MOSFET OFF OFF ON
p-channel enhancement MOSFET ON OFF OFF | textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/10%3A_Device_Performance/10.02%3A_Measuring_Key_Transport_Properties_of_FET_Devices.txt |
The chemical principles involved in the isolation and identification of cations from mixtures are described. The solubility of ionic compounds in water, and the variation of solubility by the common ion effect, pH effect, coordination complex formation, and redox reactions, are described in relation to the selective precipitation or dissolution of salts of the cations.
• 1.1: Solubility
Solubility, i.e., the ability of a substance to form a solution, its related terminologies, and the solubility guidelines for the dissolution of ionic compounds in water are described.
• 1.2: Solubility equilibria
The solubility equilibrium constant (Ksp), i.e., the equilibrium constant of the dissociation reaction of ionic compounds in water and selective precipitation by adding a reagent that precipitates one of the dissolved cations or a particular group of dissolved cations but not the others are described.
• 1.3: Varying solubility of ionic compounds
Varying the solubility of ionic compounds based on Le Chatelier's principle is described, specifically by using the common ion effect, the effect of pH, complex ion formation, and redox reactions.
• 1.4: pH Buffers
A pH buffer is an aqueous solution consisting of a weak acid and its conjugate base or vice versa, which minimizes pH change when a small amount of a strong acid or a strong base is added to it.
• 1.5: Separation of cations in groups
Cations commonly found in water are separated into five groups by adding suitable reagents that selectively precipitate a set of cations. Group I is separated as insoluble chlorides, group II as sulfides in acidic medium, group III as hydroxides and sulfides in basic medium, group IV as carbonates, and group V remains soluble in this process
1: Chemical Principles
Solution
A solution is a homogeneous mixture of two or more substances.
Solution related terminologies
• Miscible substances make a solution upon mixing with each other in any proportion. For example, ethanol and water are miscible to each other.
• Immiscible substances do not make solutions upon mixing in any proportion.
• Partially miscible substances can make a solution upon mixing up to a certain extent but not in all proportions.
• A solvent is a substance in a larger amount in the solution.
• A solute is a substance in a smaller amount in the solution.
• An unsaturated solution is a solution in which the solvent is holding solute less than the maximum limit, i.e., in which more solute can be dissolved.
• A saturated solution is a solution in which the solvent is holding the maximum amount of solute it can dissolve.
Water -a universal solvent
Water is one of the most important solvents because it is present all around us: it covers more than 70% of the earth and makes up more than 60% of our body mass. Water is a polar molecule, having a partial negative end on the oxygen and partial positive ends on the hydrogen atoms, which allows it to dissolve most polar and ionic compounds. In ionic compounds, cations are held by anions through electrostatic interactions. When an ionic compound dissolves in water, it dissociates into cations and anions, each surrounded by a layer of water molecules held by ion-dipole interactions. The water molecules around the ions establish ion-dipole interactions by orienting their partial negative end towards cations and their partial positive end towards anions. The energy needed to break the ion-ion interactions in the ionic compound is partially compensated by the energy released by establishing the ion-dipole interactions. The energy gained from ion-dipole interactions and nature's tendency to disperse are the driving forces responsible for the dissolution of ionic compounds.
Solubility
Solubility is the ability of a substance to form a solution with another substance.
The solubility of a solute in a specific solvent is quantitatively expressed as the concentration of the solute in the saturated solution. Usually, the solubility is tabulated in the units of grams of solute per 100 mL of solvent (g/100 mL). The solubility of ionic compounds in water varies over a wide range. All ionic compounds dissolve to some extent.
For practical purposes, a substance is considered insoluble when its solubility is less than 0.1 g per 100 mL of solvent.
For example, lead(II)iodide ( \(\ce{PbI2}\) ) and silver chloride ( \(\ce{AgCl}\) ) are insoluble in water because the solubility of \(\ce{PbI2}\) is 0.0016 mol/L of the solution and the solubility of \(\ce{AgCl}\) is about 1.3 x 10-5 mol/L of solution. Potassium iodide (\(\ce{KI}\)) and \(\ce{Pb(NO3)2}\) are soluble in water. When aqueous solutions of \(\ce{KI}\) and \(\ce{Pb(NO3)2}\) are mixed, the insoluble combination of ions, i.e., \(\ce{PbI2}\) in this case, precipitates, as illustrated in Figure \(1\).
Solubility guidelines for dissolution of ionic compounds in water
There are no fail-proof guidelines for predicting the solubility of ionic compounds in water. However, the following guideline can predict the solubility of most ionic compounds.
Soluble ions
1. Salts of alkali metals (\(\ce{Li^+}\), \(\ce{Na^+}\), \(\ce{K^+}\), \(\ce{Rb^+}\), \(\ce{Cs^+}\)) and ammonium (\(\ce{NH4^+}\)) are soluble. For example, \(\ce{NaCl}\) and \(\ce{(NH4)3PO3}\) are soluble.
2. Salts of nitrate ( \(\ce{NO3^-}\)), acetate ( \(\ce{CH3COO^-}\)), and perchlorate ( \(\ce{ClO4^-}\)) are soluble. For example, \(\ce{Pb(NO3)2}\), and \(\ce{Ca(CH3COO)2}\) are soluble.
3. Salts of chloride (\(\ce{Cl^-}\)), bromide (\(\ce{Br^-}\)), and Iodide (\(\ce{I^-}\)) are soluble, except when the cation is Lead (\(\ce{Pb^{2+}}\)), Mercury (\(\ce{Hg2^{2+}}\)), or Silver (\(\ce{Ag^{+}}\)). Remember the acronym “LMS” based on the first letter of the element name, or phrase ‘Let Me See” to recall Lead, Mercury, and Silver.
4. Sulfates (\(\ce{SO4^{2-}}\)) are soluble except when the cation is, \(\ce{Pb^{2+}}\), \(\ce{Hg2^{2+}}\), or \(\ce{Ag^{+}}\) (recall “Let Me See” for Lead, Mercury, and Silver) or a heavy alkaline earth metal ion: calcium (\(\ce{Ca^{2+}}\)), barium (\(\ce{Ba^{2+}}\)), or strontium (\(\ce{Sr^{2+}}\)). (Remember the acronym “CBS” based on the first letter of the element name, or phrase “Come By Soon” to recall calcium, barium, and strontium.)
Insoluble ions
5. Hydroxides (\(\ce{OH^{-}}\)) and sulfides (\(\ce{S^{2-}}\)) are insoluble except when the cation is a heavy alkaline earth metal ion: \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), or \(\ce{Sr^{2+}}\) (recall “Come By Soon” for calcium, barium, and strontium), an alkali metal, or ammonium. For example, \(\ce{Mg(OH)2}\) and \(\ce{CuS}\) are insoluble.
6. Carbonates (\(\ce{CO3^{2-}}\)), phosphates (\(\ce{PO4^{3-}}\)), and oxides (\(\ce{O^{2-}}\)) are insoluble except when the cation is an alkali metal ion or ammonium. For example, \(\ce{CaCO3}\) and \(\ce{Fe2O3}\) are insoluble.
7. If there is a conflict between two guidelines, then the guideline listed first has priority. For example, \(\ce{CaCO3}\) is insoluble (rule #6), but \(\ce{Na2CO3}\) is soluble (rule #1 has priority over rule #6).
Precipitation reactions
Precipitation reactions are a class of chemical reactions in which two solutions are mixed and a solid product, called a precipitate, separates out. Whether a precipitation reaction will happen upon mixing solutions of ionic compounds in water can be predicted as illustrated in Figure \(2\). The first step is to list the soluble ionic compounds and then cross-combine the cation of one with the anion of the other to make the potential products. If any of the potential products is an insoluble ionic compound, it will precipitate out. For example, when a \(\ce{NaOH}\) solution is mixed with a \(\ce{MgCl2}\) solution, \(\ce{Mg(OH)2}\) is a cross-combination that forms an insoluble compound, so it will precipitate out.
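The cross-combination step lends itself to a simple lookup. The Python sketch below encodes only a small subset of the guidelines above (it is an illustration, not an exhaustive solubility table) and flags the insoluble cross-product for the \(\ce{NaOH}\) + \(\ce{MgCl2}\) example.

```python
# Minimal illustration of the cross-combination step: swap the ions of two soluble
# salts and check each new pairing against a (partial) solubility lookup.
always_soluble_cations = {"Na+", "K+", "Li+", "NH4+"}     # guideline 1
insoluble_with = {
    "OH-": {"Mg2+", "Cu2+", "Fe3+", "Al3+"},              # guideline 5 (partial list)
    "Cl-": {"Pb2+", "Hg2^2+", "Ag+"},                     # guideline 3
}

def is_insoluble(cation, anion):
    if cation in always_soluble_cations:
        return False
    return cation in insoluble_with.get(anion, set())

# Mixing NaOH(aq) with MgCl2(aq) gives the cross-combinations Mg(OH)2 and NaCl.
for cation, anion in [("Mg2+", "OH-"), ("Na+", "Cl-")]:
    state = "precipitates" if is_insoluble(cation, anion) else "stays dissolved"
    print(f"{cation} with {anion}: {state}")
```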
Figure \(3\) shows precipitates of some insoluble ionic compounds formed by mixing aqueous solutions of appropriate soluble ionic compounds. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.1%3A_Solubility.txt |
Solubility product constant ($K_{sp}$)
All ionic compounds dissolve in water to some extent. Ionic compounds are strong electrolytes, i.e., they dissociate completely into ions upon dissolution. When the amount of ionic compound added to the mixture is more than the solubility limit, the excess undissolved solute (solid) exists in equilibrium with its dissolved aqueous ions. For example, the following equation represents the equilibrium between solid $\ce{AgCl(s)}$ and its dissolved $\ce{Ag^{+}(aq)}$ and $\ce{Cl^{-}(aq)}$ ions, where the subscript (s) means solid, i.e., the undissolved fraction of the compound, and (aq) means aqueous, i.e., dissolved in water.
$\ce{AgCl(s) ⇌ Ag+(aq) + Cl^{-}(aq)}\nonumber$
Like any other chemical equilibrium, this equilibrium has an equilibrium constant (Keq) :
$K_{eq} = [\ce{Ag^{+}}][\ce{Cl^{-}}]\nonumber$
Note that solid or pure liquid species do not appear in the equilibrium constant expression, as the concentration in the solid or pure liquid remains constant. This equilibrium constant is given a separate name, the solubility product constant ($K_{sp}$), based on the fact that it is a product of the molar concentrations of the dissolved ions, each raised to a power equal to its coefficient in the chemical equation, e.g.,
$K_{sp} = \ce{[Ag^{+}][Cl^{-}]} = 1.8 \times 10^{-10}\nonumber$
The solubility product constant ($K_{sp}$), is the equilibrium constant for an ionic compound dissolving in an aqueous solution.
Similarly, the dissolution equilibrium for $\ce{PbCl2}$ can be shown as:
$\ce{PbCl2(s) <=> Pb2+(aq) + 2Cl-(aq)} \nonumber$
with
$K_{sp} = \ce{[Pb^{2+}][Cl^{-}]^2} = 1.6 \times 10^{-5} \nonumber$
And the dissolution equilibrium for $\ce{Hg2Cl2}$ is similar:
$\ce{Hg2Cl2(s) ⇌ Hg22+(aq) + 2Cl-(aq) } \nonumber$
with
$K_{sp} = \ce{[Hg2^{2+}][Cl^{-}]^2} = 1.3 \times 10^{-18} \nonumber$
Selective precipitation
Selective precipitation is a process involving adding a reagent that precipitates one of the dissolved cations or a particular group of dissolved cations but not the others.
According to solubility rule #5, both $\ce{Cu^{2+}}$ and $\ce{Ni^{2+}}$ form insoluble salts with $\ce{S^{2-}}$. However, the solubilities of $\ce{CuS}$ and $\ce{NiS}$ differ enough that if an appropriate concentration of $\ce{S^{2-}}$ is maintained, $\ce{CuS}$ can be precipitated leaving $\ce{Ni^{2+}}$ dissolved. The following calculations based on the $K_{sp}$ values prove it.
$\ce{CuS(s) <=> Cu^{2+}(aq) + S^{2-}(aq)},\quad K_{sp} = \ce{[Cu^{2+}][S^{2-}]} = 8.7\times 10^{-36}\nonumber$
$\ce{NiS(s) <=> Ni^{2+}(aq) + S^{2-}(aq)},\quad K_{sp} = \ce{[Ni^{2+}][S^{2-}]} = 1.8\times 10^{-21}\nonumber$
Molar concentration of sulfide ions [$\ce{S^{2-}}$], in moles/liter in a saturated solution of the ionic compound can be calculated by rearranging their respective $K_{sp}$ expression, e.g. for $\ce{CuS}$ solution, $K_{sp} = \ce{[Cu^{2+}][S^{2-}]}$ rearranges to:
$\ce{[S^{2-}]} = \frac{K_{sp}}{\ce{[Cu^{2+}]}}\nonumber$
Assume $\ce{Cu^{2+}}$ is 0.1 M, plugging in the values in the above equation allow calculating the molar concentration of $\ce{S^{2-}}$ in the saturated solution of $\ce{CuS}$:
$\ce{[S^{2-}]} = \frac{K_{sp}}{\ce{[Cu^{2+}]}} = \frac{8.7\times10^{-36}}{0.1} = 8.7\times10^{-35}\text {~M}\nonumber$
Similar calculations show that the molar concentration of $\ce{S^{2-}}$ in a solution containing 0.1 M $\ce{Ni^{2+}}$ and saturated with $\ce{NiS}$ is 1.8 x 10-20 M. If the $\ce{S^{2-}}$ concentration is kept above 8.7 x 10-35 M but below 1.8 x 10-20 M, $\ce{CuS}$ will selectively precipitate, leaving $\ce{Ni^{2+}}$ dissolved in the solution.
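The sulfide-ion window quoted above follows directly from the two $K_{sp}$ expressions. The sketch below repeats that arithmetic for 0.1 M of each cation, using the $K_{sp}$ values given in the text.

```python
# Sulfide-ion window for separating Cu2+ from Ni2+ by selective precipitation.
Ksp_CuS = 8.7e-36
Ksp_NiS = 1.8e-21
conc_Cu = 0.1    # mol/L of Cu2+ assumed in the text
conc_Ni = 0.1    # mol/L of Ni2+

S_min = Ksp_CuS / conc_Cu   # above this [S2-], CuS begins to precipitate
S_max = Ksp_NiS / conc_Ni   # below this [S2-], Ni2+ stays dissolved

print(f"CuS precipitates once [S2-] exceeds {S_min:.1e} M")
print(f"Ni2+ remains dissolved while [S2-] is below {S_max:.1e} M")
```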
Another example is the selective precipitation of Lead, silver, and mercury by adding $\ce{HCl}$ to the solution. According to rule# 3 of solubility of ionic compounds, chloride $\ce{Cl^-}$ forms soluble salt with the cations except with Lead ($\ce{Pb^{2+}}$), Mercury ($\ce{Hg_2^{2+}}$), or Silver ($\ce{Ag^{+}}$). Adding $\ce{HCl}$ as a source of $\ce{Cl^-}$ in the solution will selectively precipitate lead ($\ce{Pb^{2+}}$), mercury ($\ce{Hg_2^{2+}}$), and silver ($\ce{Ag^{+}}$), leaving other cations dissolved in the solution. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.2%3A_Solubility_equilibria.txt |
Le Chatelier's principle
Ionic compounds dissociate into ions when they dissolve in water. An equilibrium is established between ions in water and the undissolved compound. The solubility of the ionic compounds can be varied by stressing the equilibrium through changes in the concentration of the ions.
Le Chatelier's principle
Le Chatelier's principle can be stated as “when a system at equilibrium is subjected to a change in concentration, temperature, volume, or pressure, the system will change to a new equilibrium, such that the applied change is partially counteracted.”
If the ions in the solubility equilibrium are increased or decreased by another reaction going on in parallel, the equilibrium will counteract by decreasing or increasing the solubility of the compound. This use of Le Chatelier's principle to vary the solubility of sparingly soluble ionic compounds is explained with examples in the following.
Common ion effect
Consider dissolution of a sparingly soluble ionic compound $\ce{CaF2}$ in water:
$\ce{CaF2(s) <=> Ca^{2+}(aq) + 2F^{-}(aq)},\quad K_{sp} = \ce{[Ca^{2+}][F^{-}]^2} = 1.5\times 10^{-10}\nonumber$
The solubility (S) can be expressed in the units of mol/L or molarity (M). Similarly, the concentration of any species in square brackets, as [$\ce{Ca^{2+}}$] in the above-mentioned $\ce{K_{sp}}$ expression, is also in the units of mol/L or M.
$\ce{NaF}$ is a water-soluble ionic compound that has $\ce{F^-}$ in common with the above equilibrium. The addition of $\ce{NaF}$ into the mixture will increase the concentration of $\ce{F^-}$ causing a decrease in the solubility of $\ce{CaF2}$ because the solubility equilibrium will move in the reverse direction to counteract the rise in the concentration of the common ion. This is called the common ion effect.
The common ion effect refers to the decrease in the solubility of a sparingly soluble ionic compound by adding a soluble ionic compound that has an ion in common with the sparingly soluble ionic compound.
A quantitative estimate of this common ion effect is given with the help of the following calculations. If solubility of $\ce{CaF2}$ in pure water is S mol/L, then [ $\ce{Ca^{2+}}$] = S, and [ $\ce{F^-}$] = 2S. Plugging in these values in the $\ce{K_{sp}}$ expression and rearranging shows that the solubility of $\ce{CaF2}$ in pure water is 3.3 x 10-4 M:
$K_{sp} = \ce{[Ca^{2+}][F^{-}]^2}\nonumber$
$1.5 \times 10^{-10} = S(2S)^2\nonumber$
$S=\sqrt[3]{1.5 \times 10^{-10} / 4}=3.3 \times 10^{-4} \mathrm{~M}\nonumber$
If the $\ce{F^-}$ concentration is raised to 0.1 M by dissolving $\ce{NaF}$ in the solution, then the molar solubility of $\ce{CaF2}$ changes to a new value Si, [$\ce{Ca^{2+}}$] = Si, and [$\ce{F^-}$] = (0.1 + 2Si) ≈ 0.1 (the 2Si term is dropped because it is negligible compared to 0.1). Plugging these values into the $\ce{K_{sp}}$ expression and rearranging shows that the new solubility (Si) of $\ce{CaF2}$ is 1.5 x 10-8 M:
$K_{sp} = \ce{[Ca^{2+}][F^{-}]^2}\nonumber$
$1.5 \times 10^{-10} = S_{i}(0.1)^2\nonumber$
$S_{i} = \frac{1.5 \times 10^{-10}}{(0.1)^2} = 1.5\times10^{-8} \mathrm{~M}\nonumber$
It means the solubility of $\ce{CaF2}$ is decreased by more than twenty thousand times by the common ion effect described above.
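The two solubility values compared above follow directly from the $K_{sp}$ expression, and the arithmetic is easy to check. The sketch below repeats the calculation for $\ce{CaF2}$ in pure water and in 0.1 M $\ce{NaF}$.

```python
# Molar solubility of CaF2 with and without the common ion F-.
Ksp = 1.5e-10

# Pure water: Ksp = S * (2S)^2 = 4S^3
S_pure = (Ksp / 4) ** (1 / 3)

# With 0.1 M F- supplied by NaF: Ksp = S_i * (0.1)^2 (the 2S_i contribution is negligible)
F_common = 0.1
S_common = Ksp / F_common ** 2

print(f"solubility in pure water: {S_pure:.2e} M")    # ~3.3e-4 M
print(f"solubility in 0.1 M NaF : {S_common:.2e} M")  # ~1.5e-8 M
print(f"decrease factor         : {S_pure / S_common:.0f}x")  # more than 20,000
```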
Generally, the solubility of sparingly soluble ionic compounds decreases by adding a common ion to the equilibrium mixture.
An example of the common ion effect is in the separation of $\ce{PbCl2}$ from $\ce{AgCl}$ and $\ce{Hg2Cl2}$ precipitates. $\ce{PbCl2}$ is the most soluble in hot water among these three sparingly soluble compounds, so $\ce{PbCl2}$ is selectively dissolved in hot water and separated. The solution is then cooled to room temperature and $\ce{HCl}$ is added to it as a source of the common ion $\ce{Cl^-}$ to force re-precipitation of $\ce{PbCl2}$:
$\ce{Pb^{2+}(aq) + 2Cl^{-}(aq) <=> PbCl2(s)}\nonumber$
Effect of pH
The pH is related to the concentration of $\ce{H3O^+}$ and $\ce{OH^-}$ in the solution. Increasing pH increases $\ce{OH^-}$ and decreases $\ce{H3O^+}$ concentration in the solution and decreasing pH has the opposite effect. If one of the ions in the solubility equilibrium of a sparingly soluble ionic compound is an acid or a base, its concentration will change with changes in the pH. It is because acids will neutralize with $\ce{OH^-}$ at high pH and bases will neutralize with $\ce{H3O^+}$ at low pH. For example, consider the dissolution of $\ce{Mg(OH)2}$ in pure water.
$\ce{Mg(OH)2(s) <=> Mg^{2+}(aq) + 2OH^{-}(aq)},\quad K_{sp} = \ce{[Mg^{2+}][OH^{-}]^2} = 2.1\times 10^{-13}\nonumber$
Making the solution acidic, i.e., a decrease in pH adds more $\ce{H3O^+}$ ion that removes $\ce{OH^-}$ by the following neutralization reaction.$\ce{H3O^{+}(aq) + OH^{-}(aq) <=> 2H2O(l)}\nonumber$
According to Le Chatelier's principle, the system moves in the forward direction to make up for the loss of $\ce{OH^-}$. In other words, $\ce{Mg(OH)2}$ is insoluble in neutral or alkaline water and becomes soluble in acidic water.
Generally, the solubility of an ionic compound containing basic anion increases by decreasing pH, i.e., in an acidic medium.
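To make the trend concrete, the sketch below (an added illustration, not from the original text) treats the pH as externally fixed, e.g., by a buffer, and uses the $\ce{K_{sp}}$ of $\ce{Mg(OH)2}$ quoted above to find the largest [$\ce{Mg^{2+}}$] that can remain dissolved at that pH:

```python
# Maximum dissolved [Mg2+] allowed by Ksp when the pH (and hence [OH-]) is held fixed.
Ksp = 2.1e-13     # Ksp of Mg(OH)2 (from the text)
Kw = 1.0e-14      # ion-product of water at 25 oC

for pH in (12.0, 10.0, 9.0, 5.0):
    OH = Kw / 10**(-pH)          # [OH-] set by the pH
    Mg_max = Ksp / OH**2         # from Ksp = [Mg2+][OH-]^2
    print(f"pH {pH:>4}: [OH-] = {OH:.0e} M, max [Mg2+] = {Mg_max:.1e} M")
```

The allowed [$\ce{Mg^{2+}}$] rises by two orders of magnitude for every unit drop in pH, which is why $\ce{Mg(OH)2}$ dissolves readily in acidic solution.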
In a qualitative analysis of cations, dissociation of $\ce{H2S}$ is used as a source of $\ce{S^{2-}}$ ions:
$\ce{H2S(g) + 2H2O(l) <=> 2H3O^+(aq) + S^{2-}(aq)}\nonumber$
The reaction is pH-dependent, i.e., the extent of dissociation of $\ce{H2S}$ can be decreased by adding $\ce{HCl}$ as a source of common ion $\ce{H3O^+}$ or increased by adding a base as a source of $\ce{OH^-}$ that removes $\ce{H3O^+}$ from the products:
$\ce{OH^{-}(aq) + H3O^{+}(aq) <=> 2H2O(l)}\nonumber$
Generally, the solubility of weak acids can be increased by increasing the pH and decreased by decreasing the pH. The opposite is true for the weak bases.
Complex ion equilibria
Transition metal ions, like $\ce{Ag^+}$, $\ce{Cu^{2+}}$, $\ce{Ni^{2+}}$, etc. tend to be strong Lewis acids, i.e., they can accept a lone pair of electrons from Lewis bases. Neutral or anionic species with a lone pair of electrons, like $\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}$, $\ce{:\!{NH3}}$, $\ce{:\!\overset{-}{C}N\!:}$, $\ce{:\!\overset{\Large{\cdot\cdot}}{\underset{\Large{\cdot\cdot}}{Cl}}\!:^{-}}$, etc. can act as Lewis bases in these reactions. The bond formed by the donation of a lone pair of electrons of a Lewis base to a Lewis acid is called a coordinate covalent bond. The neutral compound or ion that results from the Lewis acid-base reaction is called a coordination complex or a complex ion. For example, silver ion dissolved in water is often written as Ag+(aq), but, in reality, it exists as complex ion $\ce{Ag(H2O)2^+}$ in which $\ce{Ag^+}$ accepts lone pair of electrons from oxygen atoms in water molecules. Transition metal ion in a coordination complex or complex ion, e.g., $\ce{Ag^+}$ in $\ce{Ag(H2O)2^+}$ is called central metal ion and the Lewis base like $\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}$, in $\ce{Ag(H2O)2^+}$, is called a ligand. The strength of a ligand is the ability of a ligand to donate its lone pair of electrons to a central metal ion. If a stronger ligand is added to the solution, it displaces a weaker ligand. For example, if $\ce{:\!{NH3}}$ is dissolved in the solution containing $\ce{Ag(H2O)2^+}$, the $\ce{:\!{NH3}}$ displaces $\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}$ from the complex ion:
$\ce{Ag(H2O)2^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + 2H2O(aq)}\nonumber$
The lone pair on the ligand is omitted from the equation above and from the following equations. Water is usually omitted from the equation for simplicity that reduces the above reaction to the following:
$\ce{Ag^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber$
Equilibrium constant for the formation of complex ion is called formation constant ($\ce{K_{f}}$), e.g, in the case of above reaction:
$K_f = \frac{\ce{[Ag(NH3)2^{+}]}}{\ce{[Ag^+]\times[NH3]^2}} = 1.7\times10^7\nonumber$
Large value of $\ce{K_{f}}$ in the above reaction shows that the reaction is highly favored in the forward direction. If ammonia is present in water, it increases the solubility of $\ce{AgCl}$ by removing the $\ce{Ag^+}$ ion from the products, just like acid ($\ce{H3O^+}$) increases the solubility of $\ce{Mg(OH)2}$ by removing $\ce{OH^-}$ from the products:
$\ce{AgCl(s) <<=> Ag^{+}(aq) + Cl^{-}(aq)}\quad K_{sp} = 1.8\times10^{-10}\nonumber$
$\ce{Ag^{+}(aq) + 2NH3(aq) <=>> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber$
$\text{Adding above reactions:}~\ce{AgCl(s) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + Cl^{-}(aq)}\quad K = 3.0\times10^{-3}\nonumber$
The equilibrium constant for the dissolution of $\ce{AgCl}(s)$ changes from $1.8 \times 10^{-10}$ in pure water to $3.0 \times 10^{-3}$ in water containing dissolved ammonia, i.e., a 17-million-fold increase. It makes insoluble $\ce{AgCl}(s)$ quite soluble. This reaction is used to separate silver ions from mercury ions in a mixture of $\ce{AgCl}$ and $\ce{Hg2Cl2}$ precipitates.
Generally, the solubility of metal compounds containing metals capable of coordinate complex formation increases by adding a strong ligand to the solution.
Manipulating chemical equations
The chemical equations can be manipulated like algebraic equations, i.e., they can be multiplied or divided by a constant, added, and subtracted, as demonstrated in the example of the silver ammonia complex formation reactions shown above. Note that a species on the right side of one equation cancels the same species on the left side of another equation, just as in algebra; e.g., $\ce{Ag^{+}}$ cancels out in the final equation.
When two equilibrium reactions are added, their equilibrium constants are multiplied to get the equilibrium constant of the overall reaction, i.e, $K = K_{sp}\times{K_f}$ in the above reactions.
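As a small numeric illustration (added here, not from the original text), multiplying the two constants quoted above reproduces the overall K used for the silver ammonia complex example:

```python
# Combining equilibria: adding reactions multiplies their equilibrium constants.
Ksp_AgCl = 1.8e-10    # AgCl(s) <=> Ag+ + Cl-
Kf_AgNH3 = 1.7e7      # Ag+ + 2 NH3 <=> Ag(NH3)2+

K_overall = Ksp_AgCl * Kf_AgNH3   # AgCl(s) + 2 NH3 <=> Ag(NH3)2+ + Cl-
print(f"K overall = {K_overall:.1e}")   # ~3e-3, as quoted above
```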
Redox reactions
There are three major types of chemical reactions: precipitation reactions, acid-base reactions, and redox reactions.
Precipitation reactions
Precipitation reactions of ionic compounds are double replacement reactions where the cation of one compound combines with the anion of another and vice versa, such that one of the new combinations is an insoluble salt.
For example, when silver nitrate ($\ce{AgNO3}$) solution is mixed with sodium chloride ($\ce{NaCl}$) solution, an insoluble compound silver chloride ($\ce{AgCl}$) precipitates out of the solution:
$\ce{AgNO3(aq) + NaCl(aq) -> AgCl(s)(v) + NaNO3(aq)}\nonumber$
Acid-base reactions
Acid-base reactions are the reactions involving the transfer of a proton.
For example, $\ce{H2S}$ dissociates in water by donating its proton to water molecules:
$\ce{H2S(g) + 2H2O(l) <=> 2H3O^{+}(aq) + S^{2-}(aq)}\nonumber$
Redox reactions
Redox reactions are the reactions involving the transfer of electrons.
For example, when sodium metal ($\ce{Na}$) reacts with chlorine gas ($\ce{Cl2}$), sodium loses electrons and becomes $\ce{Na^+}$ cation and chlorine gains electrons and becomes $\ce{Cl^-}$ anion that combine to form $\ce{NaCl}$ salt:
$\ce{2Na(s) + Cl2(g) -> 2NaCl(s)}\nonumber$
An example of a redox reaction in qualitative analysis of cations is the dissolution of $\ce{NiS}$ precipitate by adding an oxidizing acid $\ce{HNO3}$. The $\ce{S^{2-}}$ is a weak base that can be removed from the product by adding a strong acid like $\ce{HCl}$:
$\ce{S^{2-}(aq) + 2H3O^{+}(aq) <=>> H2S(aq) + 2H2O(l)}\nonumber$
Therefore, addition of $\ce{HCl}$ is sufficient to dissolve $\ce{FeS}$ precipitate by removal of $\ce{S^{2-}}$ from the products:
$\ce{FeS(s) + 2H3O^{+}(aq) <=>> Fe^{2+}(aq) + H2S(aq) + 2H2O(l)}\nonumber$
However, the addition of $\ce{HCl}$ does not remove $\ce{S^{2-}}$ to a sufficient extent to dissolve the relatively less soluble $\ce{NiS}$ precipitate. Nitric acid ($\ce{HNO3}$), which provides the oxidizing agent $\ce{NO3^{-}}$, is needed to remove $\ce{S^{2-}}$ to a greater extent for dissolving $\ce{NiS}$:
$\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) -> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber$
In this reaction, sulfur is oxidized from an oxidation state of -2 in $\ce{S^{2-}}$ to an oxidation state of zero in $\ce{S}$, and nitrogen is reduced from an oxidation state of +5 in $\ce{NO3^{-}}$ to an oxidation state of +2 in $\ce{NO}$.
Controlling pH is critically important in the qualitative analysis of cations. Often, pH needs to be maintained in a narrow range in the analysis of cations.
A pH buffer is an aqueous solution consisting of a weak acid and its conjugate base or vice versa, which minimizes pH change when a small amount of a strong acid or a strong base is added to it.
For example, the addition of 0.020 mol $\ce{HCl}$ into 1 L of water changes pH from 7 to 1.7, i.e., about 80% change in pH. Similarly, the addition of 0.020 mol $\ce{NaOH}$ to the same water changes pH from 7 to 12.3, i.e., again, about 80% change in pH. In contrast to pure water, 1 L of buffer solution containing 0.50 mol of a weak acid, acetic acid ($\ce{CH3COOH}$), and 0.50 mol of its conjugate base, $\ce{CH3COO^-}$, changes pH from 4.74 to 4.70 by the addition of the same 0.020 mol $\ce{HCl}$ and from 4.74 to 4.77 by the addition of 0.020 mol $\ce{NaOH}$, i.e., about 1% change in pH, as illustrated in Fig. 1.7.1.
The buffer contains a weak acid and its conjugate base in equilibrium. For example, acetic acid/sodium acetate buffer has the following equilibrium:
$\ce{CH3COOH + H2O <<=> H3O^{+} + CH3COO^{-}}\nonumber$
The molar concentration of hydronium ions [$\ce{H3O^+}$] defines the pH of the solution, i.e., $\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$. The conjugate base consumes any strong acid added to the mixture:
$\ce{HA + CH3COO^{-} -> CH3COOH + A^{-}}\nonumber$
, where $\ce{HA}$ is any strong acid and $\ce{A^-}$ is its conjugate base. The concentration of $\ce{CH3COOH}$ increases and that of $\ce{CH3COO^-}$ decreases, but the pH decreases only slightly because [$\ce{H3O^+}$] is almost unchanged. Similarly, the weak acid consumes any strong base added.
$\ce{MOH + CH3COOH -> CH3COO^{-} + M^{+} + H2O}\nonumber$
, where $\ce{MOH}$ is any strong base and $\ce{M^+}$ is its cation. The concentration of $\ce{CH3COOH}$ decreases and that of $\ce{CH3COO^-}$ increases, but the pH increases only slightly because [$\ce{H3O^+}$] is almost unchanged. Buffers are employed on several occasions during the qualitative analysis of cations.
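The buffer numbers quoted above can be reproduced with the Henderson-Hasselbalch equation. The following Python sketch (an added illustration, assuming a pKa of 4.74 for acetic acid and 1 L of solution) compares the buffer to pure water:

```python
import math

pKa = 4.74                      # acetic acid
acid, base = 0.50, 0.50         # mol CH3COOH and CH3COO- in 1 L

def buffer_pH(acid, base):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return pKa + math.log10(base / acid)

print(f"buffer alone    : pH = {buffer_pH(acid, base):.2f}")                 # 4.74
print(f"+0.020 mol HCl  : pH = {buffer_pH(acid + 0.02, base - 0.02):.2f}")   # ~4.70
print(f"+0.020 mol NaOH : pH = {buffer_pH(acid - 0.02, base + 0.02):.2f}")   # ~4.77

# Pure water (1 L) for comparison:
print(f"water + HCl     : pH = {-math.log10(0.020):.1f}")        # 1.7
print(f"water + NaOH    : pH = {14 + math.log10(0.020):.1f}")    # 12.3
```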
1.5: Separation of cations in groups
Steps in the qualitative analysis of cations in water
Qualitative analysis of cations commonly found in water solution is usually done in three stages:
1. ions are separated into broader groups by selective precipitation based on their solubility properties,
2. member ions in a group are separated usually by selective dissolution of the precipitates, and
3. individual ions are identified by a specific confirmation test.
For the 1st stage, i.e., separation of cations in groups, a suitable reagent is selected that selectively precipitates certain ions leaving the rest of the ions in the solution.
Criteria for selecting a suitable reagent for the selective precipitation
A suitable reagent is the one that:
1. almost completely removes the ions belonging to the group so that the residual ions may not interfere in the analysis of the other ions left in the solution,
2. should not precipitate out a fraction of ions that do not belong to the group being separated, and
3. should leave behind only counter ions that do not interfere in the analysis of the rest of the ions.
The reagents are added in an order such that the most selective reagent, the one that precipitates out the least number of ions, is added first.
The fourteen common cations found in water that are selected in these exercises are separated into five groups.
Group I comprises lead(II) ($\ce{Pb^{2+}}$), mercury(I) ($\ce{Hg2^{2+}}$), and silver(I) ($\ce{Ag^{+}}$), which are selectively precipitated as chlorides by adding 6M $\ce{HCl}$ to the mixture.
$\ce{HCl + H2O(l) -> H3O^{+}(aq) + Cl^{-}(aq)}\nonumber$
$\ce{HCl}$ solution is selected as a reagent for group I based on the facts: i) it is a source of chloride ($\ce{Cl^{-}}$) ion which is the most selective reagent that makes insoluble salts with only $\ce{Pb^{2+}}$, $\ce{Hg2^{2+}}$, and $\ce{Ag^{+}}$ (recall soluble ions rule#3 described in section 1.1), ii) it leaves behind $\ce{H3O^{+}}$ that makes the solution acidic which is beneficial for separation of cations of the next group.
Group II comprises tin(IV) ($\ce{Sn^{4+}}$), cadmium(II) ($\ce{Cd^{2+}}$), copper(II) ($\ce{Cu^{2+}}$), and bismuth(III) ($\ce{Bi^{3+}}$) that are selectively precipitated as sulfides by adding $\ce{H2S}$ reagent in an acidic medium. $\ce{H2S}$ is a source of sulfide ($\ce{S^{2-}}$) ion in water:
$\ce{H2S(aq) + 2H2O(l) -> 2H3O^{+}(aq) + S^{2-}(aq)}\nonumber$
The $\ce{S^{2-}}$ ion makes insoluble salts with many cations as stated by insoluble ions rule#1 in section 1.1, i.e., “Hydroxide ($\ce{OH^{-}}$) and sulfides ($\ce{S^{2-}}$) are insoluble except when the cation is a heavy alkaline earth metal ion: $\ce{Ca^{2+}}$, $\ce{Ba^{2+}}$, and $\ce{Sr^{2+}}$, or an alkali metal ion, or ammonia.”
$\ce{H2S}$ in acidic medium is selected as a source of $\ce{S^{2-}}$, which is the reagent for selective precipitation of group II, because the concentration of $\ce{S^{2-}}$ can be controlled by adjusting pH. An acidic medium has higher [$\ce{H3O^{+}}$] that decreases [$\ce{S^{2-}}$] due to the common ion effect of the $\ce{H3O^{+}}$ ion. Therefore, among the sulfide insoluble salts, only the group II cations having very low solubility are selectively precipitated.
Group III comprises chromium(III) ($\ce{Cr^{3+}}$), iron(II) ($\ce{Fe^{2+}}$), iron(III) ($\ce{Fe^{3+}}$), and nickel(II) ($\ce{Ni^{2+}}$), which are selectively precipitated as insoluble hydroxides and sulfides by adding $\ce{H2S}$ in an alkaline medium with pH maintained at ~9 by $\ce{NH3}$/$\ce{NH4^{+}}$ buffer.
$\ce{H2S}$ in an alkaline medium is the reagent for the selective precipitation of group III cations.
When pH is set at 9 by $\ce{NH3}$/$\ce{NH4^{+}}$ buffer, $\ce{OH^{-}}$ concentration is high enough to precipitate group III cations as insoluble hydroxide except for nickel that forms soluble coordination complex ion with ammonia. When $\ce{H2S}$ is added in an alkaline medium, it produces a higher concentration of $\ce{S^{2-}}$ due to the removal of $\ce{H3O^{+}}$ from its equilibrium by reacting with $\ce{OH^{-}}$:
$\ce{H3O^{+}(aq) + OH^{-}(aq)-> 2H2O(l)} \nonumber$
All of the group III cations are converted to insoluble sulfides except chromium.
Group IV comprises calcium ($\ce{Ca^{2+}}$) and barium ($\ce{Ba^{2+}}$), which are selectively precipitated as insoluble carbonates by adding ammonium carbonate ($\ce{(NH4)2CO3}$) as a source of the carbonate ($\ce{CO3^{2-}}$) ion:
$\ce{(NH4)2CO3(s) + 2H2O <=> 2NH4^{+}(aq) + CO3^{2-}(aq)}\nonumber$
The $\ce{CO3^{2-}}$ ion makes insoluble salts with many cations as stated by insoluble ions rule#2 in section 1.1, i.e., “Carbonates ($\ce{CO3^{2-}}$), phosphates ($\ce{PO4^{3-}}$), and oxide ($\ce{O^{2-}}$) are insoluble except when the cation is an alkali metal ion or ammonia.” All other ions have already been precipitated at this stage in groups I, II, and III except group IV cations and alkali metal ions.
The $\ce{CO3^{2-}}$ ion is a selective reagent for group IV cations because, at this stage, the group IV cations are the only ions left in the solution that form insoluble carbonates; the remaining alkali metal ions form soluble carbonates.
Group V comprises alkali metal ions, i.e., sodium ($\ce{Na^{+}}$) and potassium ($\ce{K^{+}}$) in the mixture of ions selected. According to soluble ions rule#1 in section 1.1, alkali metal and ammonium ions form soluble salts. So, group V cations remain in solution after groups I, II, III, and IV cations are removed as insoluble, chloride, sulfide in acid medium, sulfide in basic medium, and carbonates, respectively.
The separation of cations in groups, along with the separation of ions within a group and their confirmation tests are described in detail in later chapters. The flow chart shown below shows the summary of the separation of common cations in water into the five groups. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.4%3A_pH_Buffers.txt |
• 2.1: Precipitation
In a precipitation reaction, the solid product separates out from the clear solution making the solution opaque or turbid called a suspension. The solid product i.e., precipitate, may be filtered out, but usually, it is forced to settle at the bottom of the test tube as sediment or a solid pellet, by centrifugation process leaving a clear solution, i.e., supernatant, at the top.
• 2.2: Water bath
Heating a reaction mixture in a water bath is more uniform with less fire hazard than heating on a Bunsen burner or directly on a hot plate. A 200 mL beaker filled to the ~150 mL mark and gently boiling on a hot plate with surface temperature ~350-degree Celsius is recommended for use as a water bath for qualitative analysis of cations.
• 2.3: Centrifugation
A precipitation reaction usually results in the formation of a suspension. The precipitate can be separated by filtration, but a more effective, faster, and easier approach is to force the precipitate to form sediment or pellet at the bottom of the test tube under the action of a centrifugal force in a centrifuge machine.
• 2.4: Separation of the precipitate
Separation of the precipitate from the clear supernatant after centrifugation is achieved by decantation, i.e., pouring out, by aspiration, i.e., drawing out using a Pasteur pipette, or by gravity filtration; all three methods are described.
• 2.5: pH measurement
In a qualitative analysis of cations, litmus paper is used to determine whether the solution is acidic or basic, and a pH paper is used to measure the approximate pH value of the solution. The solution is applied to the end of the pH paper strip and the color change in the pH paper is observed.
• 2.6: Flame test
Metal salt solutions, particularly metal chloride solutions, impart a characteristic color to the flame when they are evaporated in a non-luminous flame. The flame color depends on the metal, and it is used to identify the metal in a test called the flame test. Heavy alkaline earth metals and alkali metals are often identified by a flame test.
• 2.7: Common qualitative analysis reagents, their effects, and hazards
Reagents commonly used in the qualitative analyses of cations in water are listed along with their effects and hazards.
2: Experimental techniques
The chemical reactions in these exercises are performed in a test tube. Test tubes come in different sizes. These experiments are designed for test tubes of 9 mL capacity. The reactant is in a test tube and the reagent (2nd reactant) is added drop by drop from a reagent bottle using a dropper, while the reaction mixture is being stirred. Use a clean glass rod to stir the reaction mixture. Stirring is necessary as the reactants must mix before they can react. Figure \(1\) illustrates the test tubes and reagent bottles commonly used.
• Dissolved compounds make a clear solution, i.e., the solution may be colored but it is transparent (not opaque) -it remains see-through.
• In a precipitation reaction, the solid product separates out from the clear solution making the solution opaque or turbid called a suspension.
• The solid product i.e., precipitate, may be filtered out, but usually, it is forced to settle at the bottom of the test tube as sediment or a solid pellet, by centrifugation process leaving a clear solution, i.e., supernatant, at the top.
Figure \(2\) illustrated a precipitation reaction and the difference between solution, suspension, supernatant, and precipitate.
A precipitation reaction must be tested for completeness, as, otherwise, the residual reactant will interfere with the other tests to be performed using the supernatant. One more drop of reagent is added to the clear supernatant, and if no more precipitate forms, the precipitation is complete. Otherwise, repeat the centrifugation and check again.
Often a reaction mixture needs to be heated for a certain time for the reaction to happen. Heating directly on a Bunsen burner or on a hot plate is not uniform and is associated with fire hazards. Heating the reaction mixture indirectly in a water bath achieves uniform heating with less fire hazard.
Lab water bath setup
A water bath for qualitative analysis of cations is usually a 200 mL capacity beaker filled with distilled or deionized water up to about 150 mL mark and placed on a hot plate for heating. Ramp up the temperature control knob of the hot plate to a maximum in the beginning until the water starts boiling. Then set the temperature to 350 oC to keep it gently boiling as illustrated in Figure \(1\). Add water when the water level drops to the range ½ to 1/4th, ramp up temperature again, and then re-set the temperature at 350 oC once it starts boiling again.
Caution
Hold the hot test tube with a test tube holder while stirring with a clean glass rod or while moving it to centrifuge. Never hold the hot test tube with a bare hand. Always point the mouth of the hot test tube away from you and away from any other person around. Hot test tubes and the hot liquid in the test tube can cause burns.
2.3: Centrifugation
The solid product, i.e., the precipitate, is forced to form sediment or a pellet at the bottom of the test tube under the action of centrifugal force as illustrated in Figure \(1\). A laboratory centrifuge machine contains a fast rotor with compartments to house the test tubes as shown in Figure \(2\). The test tube compartments are arranged in a circle.
• Always place two test tubes across the diagonal, one containing the solution of interest and the other similar test tube containing an equal volume of water to counterbalance the weight.
Three similar test tubes with the same volume of liquid can also be placed at the corners of a triangle around the axis of the rotor to balance the weight. Close the lid and start the machine. If the weight is not balanced, the centrifuge machine will vibrate, shake, and may start moving or fly off causing damage when switched on.
Caution
• Always keep an eye on the centrifuge when it starts -if there is any abnormal sound, shaking, or vibration, immediately switch off or unplug the centrifuge machine. When the centrifuge machine is unplugged or switched off, the rotor keeps running for a while before coming to a stop. Never open the lid until the rotor comes to a complete stop.
2.4: Separation of the precipitate
Decantation and aspiration
After centrifugation, a clear liquid, called supernatant, is floating over the sediment or precipitate. Figure \(1\) shows the separation of supernatant from the precipitate by decantation and by aspiration.
• The supernatant is removed by decantation, i.e., by pouring out the supernatant.
• A Pasteur pipette can also be used to draw out the supernatant -a process called aspiration.
Cotton-plug technique in aspiration
Sometimes the precipitate is not fully packed after centrifugation and tends to go into the supernatant during the decantation or aspiration process. In these situations, a cotton-plug technique is used, i.e., a small tuft of cotton is twisted between the fingers to make it pointy at one end. Then the pointy end is plugged into the tip of a Pasteur pipette to act as a filter during aspiration. The loose precipitate is filtered by the cotton plug during aspiration as illustrated in Figure \(2\). The cotton plug is removed and then the clear supernatant is transferred to a clean test tube for further analysis.
Washing the precipitate
The precipitate is usually washed by re-suspending by stirring with a clean glass rod in a solvent that does not re-dissolve the product but dissolves any impurity in it, as shown in Figure \(3\). The suspension is centrifuged or gravity filtered and the supernatant or filtrate of the washing step is discarded as it is just the washing liquid with some impurities in it.
Gravity filtration
Sometimes a precipitate in a suspension is separated by gravity filtration. A gravity filtration setup consists of a funnel placed in a test tube or an Erlenmeyer flask and a filter paper placed in the funnel as illustrated in Figure \(4\). Suspension is poured into the filter paper. The solution that passes through the filter paper and is collected in the test tube or Erlenmeyer flask is called the filtrate. The precipitate that is retained on the filter paper is called the residue.
The precipitate is washed by adding a washing solution drop by drop while gently stirring the residue with a clean glass rod.
Caution
• Stir the residue very gently as otherwise the wet filter paper may rupture. The room temperature gravity setup is converted to a heated gravity filtration setup by pouring hot water into the filter paper and discarding the filtrate, which is just the hot water.
The pH is usually measured in laboratories by a digital pH meter. The electrode of the pH meter is first calibrated with solutions of known pH values, and then the electrode is dipped in the test solution to read its pH value.
pH papers are a cheaper alternative often used for pH measurement in qualitative analyses of cations that gives quick results, as illustrated in Figure \(1\).
Using a pH paper
If the purpose is to monitor when the solution turns from acidic to alkaline or vice versa, a litmus paper is used. A red litmus paper stays red in an acidic solution and turns blue in a basic solution. A blue litmus paper turns red in acidic and stays blue in a basic solution.
If the purpose is to determine an approximate pH value, a universal pH indicator paper is used. The test solution is applied to the end of a pH paper strip with a glass rod and the pH is read by matching the color of the test paper soaked in the test solution with the color chart on the pH paper box.
Caution
A common mistake is dipping a pH paper in the test solution and withdrawing immediately to read the color change. It should be avoided as it may leave contaminants in the solution. Further, the test solution is at the bottom of the test tube requiring a long paper strip and making it difficult to avoid touching the sides of the test tube above the liquid. A better approach is to cut a piece of pH paper about 2 cm long and touch one end with a wet glass rod that was used to stir the test solution and then read the color change in the pH paper by matching it to the color on the chart.
2.6: Flame test
A flame test is a complex phenomenon that is not fully explained. In simple words, when a solution of metal salts, e.g., an aqueous solution of metal chlorides is injected into a flame, some of the metal ions may gain electrons and become neutral metal atoms. Electrons in the atom can be promoted from the ground state to a higher energy excited state by the strong heat of the flame. The excited electrons ultimately return to the ground state either in one go or in several steps by jumping to lower allowed energy states. When the excited electrons jump from higher to lower allowed energy states they emit electromagnetic radiation of a specific wavelength corresponding to the energy gap between the energy states. Some of these radiations may fall in the visible part of the electromagnetic radiation spectrum. The color we see is a combination of all the colors in the emission spectrum, as illustrated in Figure \(1\).
The exact gap between the energy levels allowed for electrons varies from one metal to another metal. Therefore, different metals have different patterns of spectral lines in their emission spectrum, and if some of these spectral lines fall in the visible spectrum range, they impart different colors to the flame. For example, the ground-state electron configuration of the sodium atom is 1s²2s²2p⁶3s¹. When the sodium atom is in the hot flame some of the electrons can jump to any of the higher energy allowed states, such as 3p, 4s, etc. The familiar intense yellow flame of sodium is a result of excited electrons jumping back from the 3p¹ to the ground state 3s¹ level.
Flame test procedure
Often metal chloride salts are used for the flame tests as they are water-soluble and easier to vaporize in a flame from the solution. Metal chloride salts are first dissolved in water. Other metal salts are first treated with 6M \(\ce{HCl}\) to dissolve them as metal chlorides and then used for the flame test. An inert platinum wire is dipped in the test solution. Usually, the wire has a small loop at the end to make a film of the solution that evaporates in the flame. Air and fuel supply to the flame are adjusted to produce a non-luminous flame. The wire carrying the salt solution is touched on the outer edge of a flame somewhere in the middle of the vertical axis of the flame and the color imparted to the flame is observed. Nichrome wire is a cheaper alternative to platinum wire, though nichrome may slightly alter the flame color. A wooden splint or wooden cotton-tipped applicator are other cheaper alternatives. The wooden splint or cotton swab applicator is first dipped in deionized or distilled water overnight so that the cotton or wood may not burn when placed in a flame for a short time. The salt solution is then applied to the wooden splint end or to the cotton swab and exposed to the flame.
Wooden splint and cotton-tipped applicators are disposable, i.e., they are discarded after one flame test. Platinum wire can be reused after washing. The wire is dipped in 6M \(\ce{HCl}\) and then heated in a flame to red-hot. The process is repeated till the wire does not alter the color of the flame. Then it can be re-used. Nichrome wire can be washed the same way. However, an easier alternative is to cut out the loop of the wire and make a new loop on the fresh end portion. Then use the wire for the next flame test.
Figure \(2\) shows that flame tests using calcium chloride work equally well with nichrome wire, a cotton-tipped applicator, and a wooden splint. Figure \(3\) shows flame colors of some metal chloride salt solutions exposed to the flame on a cotton swab applicator.
2.7: Common qualitative analysis reagents their effects and hazards
Common qualitative analysis reagents, their effects, and hazards
| Reagent | Effects | Hazards |
|---|---|---|
| 6M Ammonia (\(\ce{NH4OH}\) or \(\ce{NH3}\)) | increases [\(\ce{NH3}\)], increases [\(\ce{OH^-}\)], decreases [\(\ce{H3O^+}\)], precipitates insoluble hydroxides, forms \(\ce{NH3}\) complexes | Toxic, corrosive, and irritant |
| 6M Hydrochloric acid (\(\ce{HCl}\)) | increases [\(\ce{H3O^+}\)], increases [\(\ce{Cl^-}\)], decreases [\(\ce{OH^-}\)], dissolves insoluble carbonates, chromates, hydroxides, some sulfates, destroys hydroxo and \(\ce{NH3}\) complexes, and precipitates insoluble chlorides | Toxic and corrosive |
| 3% Hydrogen peroxide (\(\ce{H2O2}\)) | Oxidizing agent in acidic medium, reducing agent in basic medium | Corrosive |
| 6M Nitric acid (\(\ce{HNO3}\)) | Increases [\(\ce{H3O^+}\)], decreases [\(\ce{OH^-}\)], dissolves insoluble carbonates, chromates, and hydroxides, dissolves insoluble sulfides by oxidizing sulfide ion, destroys hydroxo and ammonia complexes, good oxidizing agent when hot | Toxic, corrosive, and strong oxidant |
| 3M Potassium hydroxide (\(\ce{KOH}\)) | Increases [\(\ce{OH^-}\)], decreases [\(\ce{H3O^+}\)], forms hydroxo complexes, precipitates insoluble hydroxides | Toxic and corrosive |
| 1M Thioacetamide (\(\ce{CH3C(S)NH2}\)) | Produces \(\ce{H2S}\), i.e., a source of sulfide ion (\(\ce{S^{2-}}\)), precipitates insoluble sulfides | Toxic and carcinogen |
• 3.1: Separation of group I cations
Group I cations are separated based on the solubility rule that states "Salts of chloride, bromide, and Iodide are soluble, except when the cation is Lead(II), Mercury(I), or Silver(I)." Hydrochloric acid is the reagent that provides chloride ions. Calculations of the chloride concentration needed are shown based on the solubility product constant.
• 3.2: Separation and confirmation of individual ions in group I precipitates
Lead(II) chloride is separated from the group I precipitates by selectively dissolving in hot water and re-precipitated by cooling the water. Ammonia dissolves silver(I) chloride from the remaining precipitates by making a complex ion and turns Mercury(I) chloride from white to gray by autooxidation process. The white silver(I) chloride is re-precipitated from the ammonia complex solution by changing the medium from alkaline to acidic.
• 3.3: Procedure, flowchart, and datasheets for separation and confirmation of group I cations
Procedure, flowchart, and datasheets for separation and confirmation of group I cations, i.e., lead(II), mercury(I), and silver(I) are given.
3: Group I cations
Selective precipitation of the set of group I cations, i.e., lead(II) ($\ce{Pb^{2+}}$), mercury(I) ($\ce{Hg2^{2+}}$), and silver(I) ($\ce{Ag^{+}}$), is based on soluble ions rule#3 in the solubility guidelines in section 1.1, which states "Salts of chloride ($\ce{Cl^{-}}$), bromide ( $\ce{Br^{-}}$), and Iodide ( $\ce{I^{-}}$) are soluble, except when the cation is Lead ( $\ce{Pb^{2+}}$), Mercury ( $\ce{Hg2^{2+}}$), or Silver ( $\ce{Ag^{+}}$)." The best source of $\ce{Cl^{-}}$ for precipitating group I cations from a test solution is $\ce{HCl}$, because it is a strong acid that completely dissociates in water producing $\ce{Cl^{-}}$ and $\ce{H3O^{+}}$ ions, neither of which gets involved in any undesired reactions under the conditions.
The $\ce{K_{sp}}$ expression is used to calculate $\ce{Cl^{-}}$ that will be sufficient to precipitate group 1 cations. The molar concentration of chloride ions i.e., [$\ce{Cl^{-}}$], in moles/liter in a saturated solution of the ionic compound can be calculated by rearranging their respective $\ce{K_{sp}}$ expression. For example, for $\ce{AgCl}$ solution, $\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ag}^{+}\right]\left[\mathrm{Cl}^{-}\right]$ rearranges to:
$\left[\mathrm{Cl}^{-}\right]=K_{s p} /\left[\mathrm{Ag}^{+}\right]\nonumber$
and for $\ce{PbCl2}$ solution, $K_{sp} = \ce{[Pb^{2+}][Cl^{-}]^2}$ rearranges to:
$\left[C l^{-}\right]=\sqrt{K_{s p} /\left[P b^{2+}\right]}\nonumber$
The concentration of ions in the unknown sample is ~0.1 M. Plugging in the 0.1M value for $\ce{Pb^{2+}}$ in the above equation shows that [$\ce{Cl^{-}}$] in a saturated solution having 0.1M $\ce{Pb^{2+}}$ is $1.3 \times 10^{-2}$ M:
$\left[C l^{-}\right]=\sqrt{K_{s p} /\left[P b^{2+}\right]}=\sqrt[2]{1.6 \times 10^{-5} / 0.1}=1.3 \times 10^{-2} \mathrm{M}\nonumber$
It means a $\ce{Cl^{-}}$ concentration up to $1.3 \times 10^{-2}$ M will not cause precipitation from 0.1M $\ce{Pb^{2+}}$ solution. Increasing $\ce{Cl^{-}}$ above 0.013M will remove $\ce{Pb^{2+}}$ from the solution as a $\ce{PbCl2}$ precipitate. If 99.9% removal is desired, then $1.0 \times 10^{-4}$ M $\ce{Pb^{2+}}$ will be left in the solution and the [$\ce{Cl^{-}}$] has to be raised to 0.40 M:
$\left[C l^{-}\right]=\sqrt[2]{K_{s p} /\left[P b^{2+}\right]}=\sqrt[2]{1.6 \times 10^{-5} / 1.0 \times 10^{-4}}=0.40 \mathrm{M}\nonumber$
The solubility of $\ce{Hg2Cl2}$ and $\ce{AgCl}$ is less than that of $\ce{PbCl2}$. So, a 0.40M $\ce{Cl^{-}}$ will remove more than 99.9% of $\ce{Hg2^{2+}}$ and $\ce{Ag^{+}}$ from the solution.
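As a quick check on these thresholds (an added illustration, not from the original text), the rearranged $\ce{K_{sp}}$ expression of $\ce{PbCl2}$ can be evaluated directly:

```python
import math

Ksp_PbCl2 = 1.6e-5    # Ksp of PbCl2 (from the text)

def Cl_needed(Pb):
    """[Cl-] in equilibrium with a given [Pb2+]: Ksp = [Pb2+][Cl-]^2."""
    return math.sqrt(Ksp_PbCl2 / Pb)

print(f"[Cl-] to start precipitating 0.1 M Pb2+ : {Cl_needed(0.1):.1e} M")      # ~1.3e-2 M
print(f"[Cl-] for 99.9% removal ([Pb2+]=1e-4 M) : {Cl_needed(1.0e-4):.2f} M")   # ~0.40 M
```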
A sample of 20 drops of the aqueous solution is about 1 mL. In these experiments, ~15 drops of the test solution are collected in a test tube and 3 to 4 drops of 6M $\ce{HCl}$ are mixed with the solution. This results in about 0.9 mL total solution containing 1 to 1.3 M $\ce{Cl^{-}}$, which is more than twice the concentration needed to precipitate out 99.9% of group 1 cations.
A concentrated reagent (6M $\ce{HCl}$) is used to minimize the dilution of the test sample because the solution is centrifuged and the supernatant that is separated by decantation is used to analyze the remaining cations. A 12M $\ce{HCl}$ is available, but it is not used because it is a more hazardous reagent, being a more concentrated strong acid, and also because if the $\ce{Cl^{-}}$ concentration is raised to 5M or higher in the test solution, it can re-dissolve $\ce{AgCl}$ by forming the water-soluble [$\ce{AgCl2}$]- complex ion.
The addition of $\ce{HCl}$ causes precipitation of group 1 cation as milky white suspension as shown in Figure $1$ and by chemical reaction equations below. The precipitates can be separated by gravity filtration, but more effective separation can be achieved by subjecting the suspension to centrifuge in a test tube. Centrifugal force forces the solid suspension to settle and pack at the bottom of the test tube from which the clear solution, called supernatant, can be poured out -a process called decantation. The precipitate is resuspended in pure water by stirring with a clean glass rod, centrifuged, and decanted again to wash out any residual impurities. The washed precipitate is used to separate and confirm the group 1 cations and the supernatant is saved for analysis of group 2, 3, 4, and 5 cations.
$\ce{ Pb^{2+}(aq) + 2Cl^{-}(aq) <=> PbCl2(s)(v)}\nonumber$
$\ce{Hg2^{2+}(aq) + 2Cl^{-}(aq) <=> Hg2Cl2(s)(v)}\nonumber$
$\ce{ Ag^{+}(aq) + Cl^{-}(aq) <=> AgCl(s)(v)}\nonumber$
Separation and confirmation of lead(II) ion
Solubility of $\ce{PbCl2}$ in water at 20 oC is about 1.1 g/100 mL, which is significantly higher than $1.9 \times 10^{-4}$ g/100 mL for $\ce{AgCl}$ and $3.2 \times 10^{-5}$ g/100 mL for $\ce{Hg2Cl2}$. Further, the solubility of $\ce{PbCl2}$ increases three-fold to about 3.2 g/100 mL in boiling water at 100 oC, while the solubility of $\ce{AgCl}$ and $\ce{Hg2Cl2}$ remains negligible. A 15-drop sample that is used to precipitate out group I cations corresponds to about 0.75 mL, which, based on the molar mass of $\ce{PbCl2}$ of 278.1 g/mol and the concentration of each ion of ~0.1M, contains about 0.02 g of $\ce{PbCl2}$ precipitate. This 0.02 g of $\ce{PbCl2}$ requires ~0.6 mL of heated water for dissolution. The precipitate is re-suspended in ~2 mL water and heated in a boiling water bath to selectively dissolve $\ce{PbCl2}$, leaving any $\ce{AgCl}$ and $\ce{Hg2Cl2}$ almost undissolved, as shown in Figure $1$.
$\ce{ PbCl2 (s) <=>[Hot~water] Pb^{2+}(aq) + 2Cl^{-}(aq)}\nonumber$
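The mass and hot-water estimates quoted above can be reproduced with a few lines of arithmetic (an added sketch, not from the original text):

```python
# Rough mass of PbCl2 in the 15-drop (~0.75 mL) sample and hot water needed to dissolve it.
M_PbCl2 = 278.1        # g/mol
sample_L = 0.75e-3     # ~15 drops, in liters
conc = 0.1             # mol/L Pb2+ in the test solution

mass_g = conc * sample_L * M_PbCl2
print(f"PbCl2 in sample  : ~{mass_g:.3f} g")                   # ~0.021 g

sol_boiling = 3.2 / 100                                        # g PbCl2 per mL of boiling water
print(f"hot water needed : ~{mass_g / sol_boiling:.2f} mL")    # ~0.65 mL, well under the 2 mL used
```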
The heated suspension is filtered using a heated gravity filtration set up to separate the residue comprising of $\ce{AgCl}$ and $\ce{Hg2Cl2}$ from filtrate containing dissolved $\ce{PbCl2}$.
The solubility of $\ce{PbCl2}$ is three times less at room temperature than in boiling water. Therefore, the 2 mL filtrate is cooled to room temperature to crystalize out $\ce{PbCl2}$ :
$\ce{Pb^{2+}(aq) + 2Cl^{-}(aq) <=>[Cold~water] PbCl2(s)}\nonumber$
If $\ce{PbCl2}$ crystals are observed in the filtrate upon cooling to room temperature, it is a confirmation of $\ce{PbCl2}$ in the test solution. If the $\ce{PbCl2}$ concentration is low in the filtrate, the crystals may not form upon cooling. A few drops of 6M $\ce{HCl}$ are mixed with the filtrate to force the crystal formation based on the common ion effect of $\ce{Cl^-}$ in the reactants. The formation of $\ce{PbCl2}$ crystals confirms $\ce{Pb^{2+}}$ as shown in Figure $2$, and no crystal formation at this stage confirms that $\ce{Pb^{2+}}$ was absent in the test solution.
Separating mercury(I) ion from silver(I) ion and confirming mercury(I) ion
The residue left after filtering out $\ce{Pb^{2+}}$ in hot water is washed further with 10 mL of hot water to wash out residual $\ce{PbCl2}$. Then 2 mL of 6M aqueous $\ce{NH3}$ solution is passed through the residue drop by drop. Aqueous $\ce{NH3}$ dissolves the $\ce{AgCl}$ precipitate by forming the water-soluble complex ion $\ce{[Ag(NH3)2(aq)]^+}$ through the following series of reactions:
$\ce{AgCl(s) <=> Ag^{+}(aq) + Cl^{-}(aq)}\quad K_{sp} = 1.8\times10^{-10}\nonumber$
$\ce{Ag^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber$
$\text{Overall reaction:}~\ce{AgCl(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + Cl^{-}(aq)}\quad K = 3.0\times10^{-3}\nonumber$
The 2 mL filtrate is collected in a separate test tube for confirmation of the $\ce{Ag^+}$ ion. Although the $\ce{Hg2Cl2}$ precipitate is insoluble in water, it does slightly dissociate like all ionic compounds. The $\ce{Hg2^{2+}}$ ions undergo an auto-oxidation or disproportionation reaction producing black $\ce{Hg}$ liquid and $\ce{Hg^{2+}}$ ions. The $\ce{Hg^{2+}}$ ions react with $\ce{NH3}$ and $\ce{Cl^-}$ forming white water-insoluble $\ce{HgNH2Cl}$ precipitate through the following series of reactions:
$\ce{Hg2Cl2(s) <=> Hg2^{2+}(aq) + 2Cl^{-}(aq)}\nonumber$
$\ce{Hg2^{2+}(aq) <=> Hg(l) + Hg^{2+}(aq)}\nonumber$
$\ce{Hg^{2+}(aq) + 2NH3(aq) + Cl^{-}(aq) <=> HgNH2Cl(s) + NH4^{+}(aq)}\nonumber$
$\text{Overall reaction:}\ce{~Hg2Cl2(s, white) + 2NH3(aq) <=> HgNH2Cl(s, white) + NH4^{+}(aq) + Cl^{-}(aq) + Hg(l, black)}\nonumber$
A mixture of white solid $\ce{HgNH2Cl}$ and black liquid $\ce{Hg}$ appears gray in color. Turning of the white $\ce{Hg2Cl2}$ precipitate to a grayish color upon addition of 6M $\ce{NH3}$ solution drops confirms $\ce{Hg2^{2+}}$ ions are present in the test solution as shown in Figure $3$. If the white precipitate redissolves leaving behind no grayish residue, it means the precipitate was $\ce{AgCl}$ and $\ce{Hg2^{2+}}$ was absent in the test solution.
Confirming silver(I) ion
Although the water-soluble complex ion $\ce{[Ag(NH3)2(aq)]^+}$ is quite stable, it does slightly decompose into $\ce{Ag^+}$ and $\ce{NH3(aq)}$. The excess $\ce{NH3}$ added to dissolve the $\ce{AgCl}$ precipitate and that produced by dissociation of $\ce{[Ag(NH3)2(aq)]^+}$ are removed by making the solution acidic by adding 6M $\ce{HNO3}$. The $\ce{Cl^-}$ formed from the dissolution of the $\ce{AgCl}$ precipitate in the earlier reactions is still present in the medium. Decomposition of $\ce{[Ag(NH3)2(aq)]^+}$ in the acidic medium produces enough $\ce{Ag^+}$ ions to re-form the white $\ce{AgCl}$ precipitate by the following series of equilibrium reactions.
$\ce{[Ag(NH3)2]^{+}(aq) <=> Ag^{+}(aq) + 2NH3(aq)}\nonumber$
$\ce{2NH3(aq) + 2H3O^{+}(aq) <=> 2NH4^{+}(aq) + 2H2O(l)}\nonumber$
$\ce{Ag^{+}(aq) + Cl^{-}(aq) <=> AgCl(s, white)}\nonumber$
$\text{Overall reaction:}\ce{~[Ag(NH3)2]^{+}(aq) + 2H3O^{+}(aq) + Cl^{-}(aq) <=> AgCl(s, white) + 2NH4^{+}(aq) + 2H2O(l)}\nonumber$
The formation of white $\ce{AgCl}$ precipitate at this stage in the acidified filtrate confirms $\ce{Ag^+}$ ion was present in the test solution, as shown in Figure $4$, and its absence confirms that $\ce{Ag^+}$ ion was not present in the test solution.
Table 1: List of chemicals and their hazards*
| Chemical | Hazard |
|---|---|
| 0.1M Lead(II) nitrate in 0.1M nitric acid | Toxic, irritant, and oxidant |
| 0.1M Mercury(I) nitrate in 0.1M nitric acid | Highly toxic and oxidant |
| 0.1M Silver nitrate in 0.1M nitric acid | Toxic, corrosive, and oxidant |
• *Hazards of 6M ammonia, 6M hydrochloric acid, and 6M nitric acid are listed in a common reagents table in chapter 2.
Caution
• Used heavy metal ion solutions or precipitates are disposed of in a labeled metal waste disposal container; do not pour these solutions down the drain or discard them in the regular trash.
Procedure for the analyses of group I cations
1. Take 15 drops of the unknown solution in a test tube and add 3 to 4 drops of 6M \(\ce{HCl}\) to it drop by drop while stirring. Centrifuge for 2 min and, without decanting, add 1 more drop of 6M \(\ce{HCl}\) to check that no more precipitate forms. If more precipitate forms, centrifuge and check again till no more precipitate forms upon addition of a drop of 6M \(\ce{HCl}\) to the supernatant. Carefully decant and keep the supernatant for analysis of group II cations, and use the precipitate in the next step for separation and confirmation of group I cations, i.e., \(\ce{AgCl(s, white)}\), \(\ce{Hg2Cl2(s, white)}\), \(\ce{PbCl2(s, white)}\). Record the observations in the datasheet.
2. Add 2 mL (40 drops) of distilled water to the precipitate from step 1 in a test tube, stir it with a clean glass rod to re-suspend the precipitate, and heat the test tube in a boiling water bath for 3 min while stirring. Add 15 mL of distilled water to a 2nd test tube and heat it also in the boiling water bath.
3. Prepare a gravity filtration setup and pass ~5 mL of hot water from the 2nd test tube of step 2 through it to make it a heated gravity filtration setup. Discard the filtrate, which is just hot water. Place an empty test tube labeled "Lead(II) confirmation test" under the heated filter funnel, filter the contents of the first test tube of step 2, and keep the filtrate for the \(\ce{Pb^{2+}}\) test in the test tube labeled "Lead(II) confirmation test". Keep the residue, if there is any, for the \(\ce{Ag^{+}}\) and \(\ce{Hg2^{2+}}\) tests. If there is no precipitate left, it means \(\ce{Ag^{+}}\) and \(\ce{Hg2^{2+}}\) ions were absent in the test sample.
4. Let the 2 mL filtrate, in the test tube labeled "Lead(II) confirmation test", cool down to room temperature by placing the test tube in a room temperature water bath. If white crystals/precipitate forms in the filtrate upon cooling, \(\ce{Pb^{2+}}\) was present in the test sample. If no crystals form upon cooling, add 2 to 3 drops of 6M \(\ce{HCl}\) to the filtrate while stirring with a glass rod. If white crystals/precipitate forms, \(\ce{Pb^{2+}}\) was present in the test sample. If no white crystals/precipitate is observed at this stage, \(\ce{Pb^{2+}}\) was absent in the test sample. Discard the mixture in the metal waste container. Record the observation in the datasheet.
5. Re-suspend the residue, if there is any, from step 3 in ~5 mL of hot water from the 2nd test tube of step 2 to dissolve any residual \(\ce{PbCl2}\) and then filter it out. Wash the residue with the remaining ~5 mL hot water from the 2nd test tube of step 2. Discard the filtrate, which is just the wash liquid with some impurities in it, and leave the precipitate on the filter paper. Put a clean empty test tube under the filtration funnel and add 40 drops (2 mL) of 6M \(\ce{NH3}\) onto the residue drop by drop with gentle stirring with a glass rod. Keep the filtrate for the \(\ce{Ag^{+}}\) test. If residue is still left on the filter paper and changes color from white to grayish-black, \(\ce{Hg2^{2+}}\) was present in the test sample; otherwise, \(\ce{Hg2^{2+}}\) was absent in the test sample. Discard the gray residue in the metal waste container. Record the observation on the datasheet.
6. Add 6M \(\ce{HNO3}\) drop by drop to the 2 mL filtrate from step 5, while stirring, and keep testing with blue litmus paper until the solution turns from alkaline to acidic, indicated by the change of the litmus paper color from blue to red. If a white suspension/precipitate is observed at this stage, it confirms that \(\ce{Ag^{+}}\) was present in the test sample; otherwise, \(\ce{Ag^{+}}\) was not present in the test sample. Discard the mixture in the metal waste container. Record the observations in the datasheet.
Datasheets filling instructions for group I cations
1. Step number refers to the corresponding step number in the procedure sub-section.
2. In the “expected chemical reaction and expected observations” column, write an overall net ionic equation of the reaction that will happen if the ion being processed in the step was present, write the expected color change of the solution, the expected precipitate formed and its expected color, etc.
3. In the “actual observations and conclusion” column, write the color change, the precipitate formed and its color, etc. that are actually observed as evidence, and state the specific ion as present or absent.
4. In the “overall conclusion” row, write one by one the symbols of the ions being tested with a statement “present” or “absent” followed by the evidence to support your conclusion.
• 4.1: Precipitation of group II cations
After removal of group I cations, bismuth(III), cadmium(II), copper(II), and tin(IV) form highly insoluble sulfides that are separated by selectively precipitating them as sulfides in an acidic medium having a pH in the range of 0.5 to 1. Calculations based on Ksp values are presented to prove that group III cations, which form relatively soluble sulfides, do not precipitate under these conditions.
• 4.2: Separation and confirmation of individual ions in group II precipitates
Tin(IV) sulfide is selectively dissolved in an alkaline medium and then re-precipitated as a yellow tin(IV) sulfide precipitate upon changing the solution to acidic. Then cadmium(II) sulfide is selectively dissolved in HCl and re-precipitated by neutralizing the acid with ammonia. Finally, copper(II) sulfide and bismuth(III) sulfide are dissolved by heating with nitric acid; then the addition of ammonia turns the copper ion into a blue color solution, and the bismuth ion into a white precipitate.
• 4.3: Procedure, flowchart, and datasheets for separation and confirmation of group II cations
The procedure, flow chart, datasheets, and filling instructions for the known sample that contains all of the group II cations and for the unknown sample that may contain some of the group II cations.
4: Group II cations
The bases of Group II cations separation
The solubility guideline#1 of insoluble ions states “Hydroxide ($\ce{OH^{-}}$) and sulfides ($\ce{S^{2-}}$) are insoluble except when the cation is alkali metal, ammonia, or a heavy alkaline earth metal ion, i.e., $\ce{Ca^{2+}}$, $\ce{Ba^{2+}}$, and $\ce{Sr^{2+}}$”. The sulfide of $\ce{Cr^{3+}}$ is also in the exceptions list as its sulfide is unstable in water. It is obvious that the number of insoluble sulfides and hydroxides is large. The solution is made acidic to decrease [$\ce{OH^{-}}$] to below the level that can cause precipitation of any ion. The [$\ce{S^{2-}}$] also remains low due to the common ion effect of $\ce{H3O^{+}}$ in the acidic medium as explained in the next section. Therefore, among the insoluble sulfides, only those that have very low solubility limits are selectively precipitated. These include $\ce{Bi^{3+}}$, $\ce{Cd^{2+}}$, $\ce{Cu^{2+}}$, and $\ce{Sn^{4+}}$ among the cations selected in this study that are left in the solution after group I cations have been separated. Group II comprises $\ce{Bi^{3+}}$, $\ce{Cd^{2+}}$, $\ce{Cu^{2+}}$, and $\ce{Sn^{4+}}$.
Precipitation of group II cations
Among the ions in the initial solution after removal of group I cations, the following ions form insoluble sulfides: $\ce{Bi^{3+}}$, $\ce{Cd^{2+}}$, $\ce{Cu^{2+}}$, $\ce{Fe^{2+}}$, $\ce{Fe^{3+}}$, $\ce{Ni^{2+}}$, and $\ce{Sn^{4+}}$. Among these, $\ce{Bi^{3+}}$, $\ce{Cd^{2+}}$, $\ce{Cu^{2+}}$, and $\ce{Sn^{4+}}$ are in group II and form very insoluble sulfides, while $\ce{Cr^{3+}}$, $\ce{Fe^{2+}}$, $\ce{Fe^{3+}}$, and $\ce{Ni^{2+}}$ are in group III and form insoluble hydroxides and sulfides in a basic medium, as reflected by their solubility product constants ($\ce{K_{sp}}$) listed in Table 1. The minimum concentration of $\ce{S^{2-}}$ needed to start precipitation of the cation can be calculated from the $\ce{K_{sp}}$ expressions as shown in Table 1. It can be observed from Table 1 that there is a huge difference between the minimum $\ce{S^{2-}}$ concentration ($1.8 \times 10^{-20}$ M) needed to precipitate $\ce{Ni^{2+}}$ (the least soluble sulfide of group III) and that ($7.8 \times 10^{-26}$ M) needed to precipitate $\ce{Cd^{2+}}$ (the most soluble sulfide of group II). If the $\ce{S^{2-}}$ is kept above $7.8 \times 10^{-26}$ M but below $1.8 \times 10^{-20}$ M, group II cations will selectively precipitate while group III cations and the rest of the cations will remain dissolved.
Table 1: Solubility product constants of insoluble sulfides of group II, and group III and minimum sulfide ion concentration needed to start precipitation from 0.1M cation solution*.
| Ion | Sulfide | Ksp at 25 oC | Minimum [\(\ce{S^{2-}}\)] needed to precipitate |
|---|---|---|---|
| \(\ce{Fe^{2+}}\) | \(\ce{FeS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{2+}\right]\left[\mathrm{S}^{2-}\right]=4.9 \times 10^{-18}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Fe}^{2+}\right]=4.9 \times 10^{-17}\) |
| \(\ce{Ni^{2+}}\) | \(\ce{NiS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ni}^{2+}\right]\left[\mathrm{S}^{2-}\right]=1.8 \times 10^{-21}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Ni}^{2+}\right]=1.8 \times 10^{-20}\) |
| \(\ce{Cd^{2+}}\) | \(\ce{CdS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{S}^{2-}\right]=7.8 \times 10^{-27}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Cd}^{2+}\right]=7.8 \times 10^{-26}\) |
| \(\ce{Bi^{3+}}\) | \(\ce{Bi2S3}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Bi}^{3+}\right]^{2}\left[\mathrm{~S}^{2-}\right]^{3}=6.8 \times 10^{-97}\) | \(\left[\mathrm{S}^{2-}\right]=\sqrt[3]{\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Bi}^{3+}\right]^{2}}=4.1 \times 10^{-32}\) |
| \(\ce{Sn^{4+}}\) | \(\ce{SnS2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Sn}^{4+}\right]\left[\mathrm{S}^{2-}\right]^{2}=1.0 \times 10^{-70}\) | \(\left[\mathrm{S}^{2-}\right]=\sqrt[2]{\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Sn}^{4+}\right]}=3.2 \times 10^{-35}\) |
| \(\ce{Cu^{2+}}\) | \(\ce{CuS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cu}^{2+}\right]\left[\mathrm{S}^{2-}\right]=8.7 \times 10^{-36}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Cu}^{2+}\right]=8.7 \times 10^{-35}\) |
• * The following cations that may be present in the initial solution are not listed in this table due to the following reasons: i) group I cations, i.e., $\ce{Pb^{2+}}$, $\ce{Hg2^{2+}}$, and $\ce{Ag^{+}}$, are already removed, ii) $\ce{Ca^{2+}}$ and $\ce{Ba^{2+}}$ form soluble sulfides, iii) the sulfide of $\ce{Cr^{3+}}$ is not stable in water, and iv) $\ce{Fe^{3+}}$ is reduced to $\ce{Fe^{2+}}$ by $\ce{H2S}$ in acidic medium: $\ce{2Fe^{3+}(aq) + S^{2-} <=> 2Fe^{2+}(aq) + S(s)}$. Source of $\ce{K_{sp}}$ values: Chem 202 Lab Manual, 2008, by Michael Stranz, Cengage Learning, ISBN 13: 978-0-534-66904-1.
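The last column of Table 1 follows directly from the $\ce{K_{sp}}$ expressions; the short sketch below (an added illustration, not from the original text) recomputes it for 0.1 M solutions of each cation:

```python
# Minimum [S2-] that starts precipitation from 0.1 M solutions, from Ksp = [M]^m [S2-]^n.
M = 0.1  # initial cation concentration, mol/L

# sulfide: (Ksp, cations per formula unit m, sulfides per formula unit n)
salts = {
    "FeS":   (4.9e-18, 1, 1),
    "NiS":   (1.8e-21, 1, 1),
    "CdS":   (7.8e-27, 1, 1),
    "Bi2S3": (6.8e-97, 2, 3),
    "SnS2":  (1.0e-70, 1, 2),
    "CuS":   (8.7e-36, 1, 1),
}

for salt, (Ksp, m, n) in salts.items():
    S_min = (Ksp / M**m) ** (1 / n)
    print(f"{salt:>5}: minimum [S2-] = {S_min:.1e} M")
```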
Source of $\ce{S^{2-}}$ is $\ce{H2S}$ gas -a week diprotic acid that dissociated in water by the following equilibrium reactions:
$\ce{H2S(g) + H2O(l) <=> H3O^{+}(aq) + HS^{-}(aq)}\quad \mathrm{K}_{\mathrm{a} 1}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{HS}^{-}\right] /\left[\mathrm{H}_{2} \mathrm{~S}\right]=1.0 \times 10^{-7}\nonumber$
$\ce{HS^{-}(aq) + H2O(l) <=> H3O^{+}(aq) + S^{2-}(aq)}\quad \mathrm{K}_{\mathrm{a} 2}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{S}^{2-}\right] /\left[\mathrm{HS}^{-}\right]=1.3 \times 10^{-13}\nonumber$
$\text{Overall reaction: }\ce{H2S(g) + 2H2O(l) <=> 2H3O^{+}(aq) + S^{2-}(aq)}\quad\mathrm{K}_{\mathrm{a}}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}\left[\mathrm{~S}^{2-}\right] /\left[\mathrm{H}_{2} \mathrm{~S}\right]=1.3 \times 10^{-20}\nonumber$
Extent of $\ce{H2S}$ dissociation, and, consequently, the concentration of $\ce{S^{2-}}$ produced is dependent on $\ce{H3O^{+}}$:
$K_a = \frac{\ce{[H3O^{+}]^{2}[S^{2-}]}}{\ce{[H2S]}}\quad\quad\text{ rearranges to: }\quad\quad\ce{[S^{2-}]} = \frac{K_{a}\ce{[H2S]}}{\ce{[H3O^{+}]^{2}}}\nonumber$
It is obvious from the above formula that $\ce{[S^{2-}]}$ is dependent on $\ce{[H3O^{+}]}$, which is related to pH ( $pH = Log\frac{1}{\ce{[H3O^{+}]}} = \text{-Log}\ce{[H3O^{+}]}$. Therefore, $\ce{[S^{2-}]}$ can be controlled by adjusting the pH.
$\ce{H2S}$ is a toxic gas. To minimize the exposure, $\ce{H2S}$ is produced in-situ by decomposition of thioacetamide ($\ce{CH3CSNH2}$) in water:
$\ce{CH3CSNH2(aq) + 2H2O <=> CH3COO^{-} + NH4^{+}(aq) + H2S(aq)}\nonumber$
The decomposition of thioacetamide is an endothermic reaction, which, according to Le Chatelier's principle, moves in the forward direction upon heating. An aqueous solution of thioacetamide is heated in a boiling water bath in a fume hood producing ~0.01M $\ce{H2S}$ solution.
Rearranging acid dissociation constant of $\ce{H2S}$ and plugging in 0.01M $\ce{H2S}$ in the rearranged formula allows calculating $\ce{S^{2-}}$ concentration at various concentrations of $\ce{H3O^{+}}$, i.e., at various pH values:
$\ce{[S^{2-}]} = \frac{K_{a}\ce{[H2S]}}{\ce{[H3O^{+}]^{2}}} = \frac{1.3\times10^{-20}\times0.01}{\ce{[H3O^{+}]^{2}}} = \frac{1.3\times10^{-22}}{\ce{[H3O^{+}]^{2}}}\nonumber$
It shows that the $\ce{S^{2-}}$ concentration can be varied through [$\ce{H3O^{+}}$], i.e., by varying the pH. At pH 1 and 0, [$\ce{H3O^{+}}$] is 0.10 M and 1.0 M, respectively, which produces [$\ce{S^{2-}}$] in the range of $1.3\times10^{-20}$ M to $1.3\times10^{-22}$ M:
$\ce{[S^{2-}]} = \frac{1.3\times10^{-22}}{(0.10)^{2}} = 1.3\times10^{-20} ~M\quad\quad\text{ and }\quad\quad\ce{[S^{2-}]} = \frac{1.3\times10^{-22}}{(1.0)^{2}} = 1.3\times10^{-22}~M\nonumber$
This range of [$\ce{S^{2-}}$] is below the sulfide concentration needed to precipitate $\ce{Ni^{2+}}$ -the group II I cation that forms the least soluble sulfide, but above that needed for $\ce{Cd^{2+}}$ -the group II cation that forms the most soluble sulfide. If the pH of the test solution is maintained between 0 and 1, group II cations will precipitate and group III and higher group cations will remain dissolved. At pH 0.5, [$\ce{H3O^{+}}$] is about 0.32 M, giving [$\ce{S^{2-}}$] of about $1.3\times10^{-21}$ M, which will precipitate more than 99.99% of the $\ce{Cd^{2+}}$:
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{S}^{2-}\right]=7.8 \times 10^{-27}\quad\quad\text{gives:}\quad\quad\ce{[Cd^{2+}]} = \frac{7.8\times 10^{-27}}{\ce{[S^{2-}]}} = \frac{7.8\times 10^{-27}}{1.3\times10^{-21}} = 6.0\times10^{-6}~M\nonumber$
, which is about 0.006% of an initial 0.1 M [$\ce{Cd^{2+}}$].
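The pH dependence worked out above can also be checked numerically. The following Python sketch is illustrative only; it assumes ~0.01 M $\ce{H2S}$ from thioacetamide and an initial 0.1 M $\ce{Cd^{2+}}$, the reagent concentration used in this text:

```python
# [S2-] generated by ~0.01 M H2S as a function of pH, and the Cd2+
# left unprecipitated at pH 0.5 (illustrative check of the text values).

Ka_H2S  = 1.3e-20    # overall dissociation constant of H2S
H2S     = 0.01       # M, assumed from thioacetamide decomposition
Ksp_CdS = 7.8e-27
Cd0     = 0.1        # M, assumed initial Cd2+ concentration

for pH in (0.0, 0.5, 1.0):
    H3O = 10 ** (-pH)
    print(f"pH {pH}: [S2-] = {Ka_H2S * H2S / H3O**2:.1e} M")

S_at_half = Ka_H2S * H2S / (10 ** -0.5) ** 2   # ~1.3e-21 M
Cd_left = Ksp_CdS / S_at_half                  # ~6e-6 M
print(f"Cd2+ remaining at pH 0.5: {Cd_left:.1e} M "
      f"({100 * Cd_left / Cd0:.3f}% of the initial 0.1 M)")
```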
The supernatant after removal of group I chlorides is usually within the pH range of 0.5 ±0.3, which is the appropriate pH for precipitation of group II cations under the conditions of this study. If the pH of the test sample is outside this range, the pH can be increased to ~0.5 by adding 0.5M $\ce{NH3(aq)}$ drop by drop under stirring. Determine pH by using a pH paper after each drop of 0.5M $\ce{NH3(aq)}$ is added and thoroughly mixed. Keep in mind that $\ce{NH3}$ solution in water is also labeled as $\ce{NH4OH}$. Similarly, the pH can be decreased to ~0.5 by adding 0.5M $\ce{HCl(aq)}$ drop by drop under stirring. Determine pH by using a pH paper after each drop of 0.5M $\ce{HCl(aq)}$ is added and thoroughly mixed.
Thioacetamide reagent is added to the test solution at pH ~0.5 and heated in a boiling water bath to precipitate out group II cations.
The precipitates include $\ce{SnS2}$ (yellow), $\ce{CdS}$ (yellow-orange), $\ce{CuS}$ (Black-brown), $\ce{Bi2S3}$ (black), formed by the following precipitation reactions:
$\ce{SnCl6^{2-}(aq) + 2S^{2-}(aq) <=> 6Cl^{-}(aq) + SnS2(s, yellow)}\nonumber$
$\ce{Cd^{2+}(aq) + S^{2-}(aq) <=> CdS(s, yellow-orange)}\nonumber$
$\ce{Cu^{2+}(aq) + S^{2-}(aq) <=> CuS(s, black-brown)}\nonumber$
$\ce{2Bi^{3+}(aq) + 3S^{2-}(aq) <=> Bi2S3(s, black)}\nonumber$
The overall color of the combined precipitate may vary depending on its composition. Black color dominates, i.e., if all precipitates are present, the color of the mixture will be black as shown in Figure $1$.
The solution is cooled to room temperature by using a room temperature water bath. Cooling helps precipitation of $\ce{CdS}$. A drop of 0.5 M $\ce{NH3(aq)}$ is added while stirring, which promotes precipitation of $\ce{CdS}$ and $\ce{SnS2}$, as both tend to stay dissolved in a supersaturated solution. The mixture is centrifuged and decanted to separate the supernatant that is used for the analysis of group III and higher group cations. The precipitate is washed with 0.1M $\ce{NH4Cl}$ solution and the washed precipitate is used to separate and confirm individual cations of group II. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.1%3A_Precipitation_of_group_II_cations.txt
Separation and confirmation of tin(IV) ion
Among the sulfides of group II, only $\ce{SnS2}$ is amphoteric and reacts with $\ce{OH^{-}}$ ions in an alkaline medium to produce $\ce{[Sn(OH)6]^{2-}}$ -a coordination complex anion, and the thiostannate ion $\ce{[SnS3]^{2-}}$, both of which are water-soluble. 3M $\ce{KOH}$ is mixed with the precipitates of group II ions and the mixture is heated to dissolve the $\ce{SnS2}$ through the following equilibrium reaction:
$\ce{3SnS2(s, yellow) + 6OH^{-}(aq) <=> [Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq)}\nonumber$
The hot solution is centrifuged and decanted to separate the supernatant, which contains the dissolved $\ce{[Sn(OH)6]^{2-}}$ and $\ce{[SnS3]^{2-}}$, from the precipitate, which contains the sulfides of the rest of the group II cations, as shown in Figure $1$. A better approach is to separate the supernatant by aspiration using the cotton-plug technique to avoid carrying precipitate into the supernatant.
The above reaction is reversible, which means removing $\ce{OH^{-}}$ from the supernatant by acid-base neutralization reaction moves the equilibrium in the reverse direction re-producing yellow $\ce{SnS2}$ precipitate as shown in Figure $2$.
$\ce{[Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq) <=> 3SnS2(s, yellow) + 6OH^{-}(aq)}\nonumber$
$\ce{6HCl(aq) + 6OH^{-}(aq) <=> 6H2O(l) + 6Cl^{-}(aq)}\nonumber$
$\text{Overall reaction: }\ce{~6HCl(aq) + [Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq) <=> 3SnS2(s, yellow) + 6H2O(l)}\nonumber$
Some of the sulfides may be lost due to air oxidation of $\ce{H2S}$ by the following reaction:
$\ce{2H2S(aq) + O2(g) <=> 2S(s, whitish-yellow) + 2H2O(l)}\nonumber$
To compensate for the loss of sulfide, 1M thioacetamide solution is also added along with 6M $\ce{HCl}$ to the supernatant and the mixture is heated to reform the yellow $\ce{SnS2}$ precipitate that confirms the presence of $\ce{Sn^{4+}}$ in the test solution. Note that both S and $\ce{SnS2}$ are yellow solids. If 3M $\ce{KOH}$ solution is added to make the mixture alkaline again, the $\ce{SnS2}$ precipitate will re-dissolve, further confirming that $\ce{Sn^{4+}}$ is present in the test solution; the $\ce{S}$ particles will not re-dissolve.
Separation and confirmation of cadmium(II) ion
$\ce{CdS}$ is the most soluble sulfide among the group II sulfide precipitates. According to Le Chatelier's principle, the removal of the products of the dissolution reaction, i.e., $\ce{Cd^{2+}}$ and $\ce{S^{2-}}$ in this case, drives the reaction forward. $\ce{CdS}$ can be redissolved by adding 1M $\ce{HCl}$ to the precipitates after the removal of $\ce{Sn^{4+}}$. Dissociation of $\ce{HCl}$ produces $\ce{H3O^{+}}$ in water, which removes $\ce{S^{2-}}$ by forming $\ce{H2S}$, a weak acid. At the same time, $\ce{Cl^{-}}$ removes $\ce{Cd^{2+}}$ by forming the soluble coordination complex anion $\ce{[CdCl4]^{2-}}$, which is quite stable with $K_f$ = $6.3\times10^{2}$:
$\ce{CdS(s, yellow-orange) <=> Cd^{2+}(aq) + S^{2-}(aq)}\nonumber$
$\ce{4HCl(aq) + 4H2O(l) <=> 4H3O^{+}(aq) + 4Cl^{-}(aq)}\nonumber$
$\ce{S^{2-}(aq) + 2H3O^{+}(aq) <=> H2S(aq) + 2H2O(l)}\nonumber$
$\ce{Cd^{2+}(aq) + 4Cl^{-}(aq) <=> [CdCl4]^{2-}(aq)}\nonumber$
$\text{Overall reaction:} \ce{~CdS(s, yellow-orange) + 4HCl(aq) + 2H2O(l) <=> [CdCl4]^{2-}(aq) + 2H3O^{+}(aq) +H2S(aq)}\nonumber$
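The constants quoted above can be combined to estimate how favorable this dissolution is. For the net reaction written in terms of $\ce{H3O^{+}}$ and $\ce{Cl^{-}}$, combining the $K_{sp}$ of $\ce{CdS}$, the $K_f$ of $\ce{[CdCl4]^{2-}}$, and the overall $K_a$ of $\ce{H2S}$ gives a rough estimate (it treats $\ce{HCl}$ as fully dissociated):

$K = \frac{K_{sp}\times K_{f}}{K_{a}} = \frac{(7.8\times10^{-27})\times(6.3\times10^{2})}{1.3\times10^{-20}} \approx 3.8\times10^{-4}\nonumber$

Although this constant is small, the high $\ce{H3O^{+}}$ and $\ce{Cl^{-}}$ concentrations supplied by the acid push the equilibrium far enough forward to dissolve the small amount of $\ce{CdS}$ present, while the far less soluble $\ce{CuS}$ and $\ce{Bi2S3}$ remain undissolved.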
The dissolution of $\ce{CdS}$ is facilitated by heating the reaction mixture. The sulfides of the rest of the group II cations, i.e., $\ce{CuS}$ and $\ce{Bi2S3}$, are very insoluble and do not dissolve under these conditions. The solution is centrifuged and decanted or aspirated to separate the supernatant that contains $\ce{[CdCl4]^{2-}}$ from the precipitate that contains $\ce{CuS}$ and/or $\ce{Bi2S3}$ if $\ce{Cu^{2+}}$ and/or $\ce{Bi^{3+}}$ are present. The precipitate tends to go into the supernatant, so the cotton plug technique is needed to prevent precipitates from going into the supernatant during the separation, as shown in Figure $3$.
All the reactions responsible for the dissolution of $\ce{CdS}$ are reversible. The addition of $\ce{HCl}$ dissolves $\ce{CdS}$ by moving the equilibrium forward, and the removal of $\ce{HCl}$ moves the equilibrium in the reverse direction to reform the yellow-orange $\ce{CdS}$ precipitate. Ammonia ($\ce{NH3}$) is a base that removes $\ce{HCl}$:
$\ce{HCl(aq) + NH3(aq) <=> NH4Cl(aq)}\nonumber$
6M $\ce{NH3}$ solution is added drop by drop under stirring and tested with red-litmus paper till the solution turns alkaline. If yellow precipitate forms, it is $\ce{CdS}$ confirming $\ce{Cd^{2+}}$ was present in the test solution:
$\ce{[CdCl4]^{2-} + 2H3O^{+} + H2S(aq) + 4NH3(aq) <=> CdS(s, yellow-orange)(v) + 4NH4^{+}(aq) + 4Cl^{-} + 2H2O(l)}\nonumber$
If no precipitate forms, add 1M thioacetamide and heat to make up for any loss of $\ce{S^{2-}}$ in the solution. If yellow precipitate forms, it is $\ce{CdS}$ confirming $\ce{Cd^{2+}}$ is present in the test solution as shown in Figure $4$.
Separation and confirmation of copper(II) ion and bismuth(III) ion
After removal of $\ce{Sn^{4+}}$ and $\ce{Cd^{2+}}$, if there is a precipitate left it could be $\ce{CuS}$ and/or $\ce{Bi2S3}$, which are the least soluble sulfides in group II. To dissolve $\ce{CuS}$ and $\ce{Bi2S3}$, the $\ce{S^{2-}}$ in the products needs to be removed to a greater extent than in the case of $\ce{CdS}$ re-dissolution.
Nitric acid provides $\ce{NO3^{-}}$, a strong oxidizing agent that can remove enough $\ce{S^{2-}}$ to drive the equilibrium forward and dissolve $\ce{CuS}$ and $\ce{Bi2S3}$.
$\ce{Bi2S3(s, black) <=> 2Bi^{3+}(aq) + 3S^{2-}(aq)}\nonumber$
$\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber$
$\text{Overall reaction:}\ce{~Bi2S3(s, black) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2Bi^{3+}(aq) + 2NO(g)(^) + 12H2O(l)}\nonumber$
$\ce{3CuS(s, black-brown) <=> 3Cu^{2+}(aq) + 3S^{2-}(aq)}\nonumber$
$\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber$
$\text{Overall reaction:}\ce{~3CuS(s, black-brown) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 3Cu^{2+}(aq) + 2NO(g)(^) + 12H2O(l)}\nonumber$
The mixture is heated to enhance the above reactions. The $\ce{S^{2-}}$ is oxidized to solid light-yellow colored sulfur particles. Brown colored fumes are observed over the solution as a result of air oxidation of nitric oxide ($\ce{NO}$) that evaporates out of the solution as shown in Figure $5$:
$\ce{2NO(g) + O2(g) <=> 2NO2(g, red-brown)}\nonumber$
Removal of $\ce{NO}$ and $\ce{S^{2-}}$ from the products drives the reaction in the forward direction based on Le Chatelier's principle.
The solid sulfur precipitate is removed by centrifugation followed by decantation.
The supernatant is acidic and appears light blue if copper ions are present, as shown in Figure $6$.
If the solution is made alkaline, $\ce{Cu^{2+}}$ and $\ce{Bi^{3+}}$ form solid hydroxides. However, aqueous ammonia ($\ce{NH3}$) selectively precipitates out $\ce{Bi(OH)3}$ while keeping copper dissolved as the coordination complex ion $\ce{[Cu(NH3)4]^{2+}}$:
$\ce{Bi^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Bi(OH)3(s, white)(v) + 3NH4^{+}(aq)}\quad K = 3.3\times10^{39}\nonumber$
$\ce{Cu^{2+}(aq) + 4NH3(aq) <=> [Cu(NH3)4]^{2+}(aq, blue)}\quad K = 3.8\times10^{12}\nonumber$
The solution is made alkaline by adding 6M $\ce{NH3}$ drop by drop and tested using red-litmus paper. Excess $\ce{NH3}$ solution is added to make sure that any residual $\ce{Cd^{2+}}$ is also kept dissolved as $\ce{[Cd(NH3)4]^{2+}}$. If the supernatant turns blue upon making it alkaline with ammonia, it confirms $\ce{Cu^{2+}}$ is present in the test sample, as shown in Figure $7$. The presence of residual $\ce{Cd^{2+}}$ does not interfere because it forms a colorless $\ce{[Cd(NH3)4]^{2+}}$ ion.
The mixture is centrifuged and decanted to separate the white precipitate of $\ce{Bi(OH)3}$, but, if ammonia addition was not sufficient, white $\ce{Cd(OH)2}$ may also form from any residual $\ce{Cd^{2+}}$ ions:
$\ce{Cd^{2+}(aq) + 2NH3(aq) + 2H2O(l)<=> Cd(OH)2(s, white)(v) + 2NH4^{+}}\nonumber$
The precipitate is resuspended in 6M $\ce{NH3}$ to redissolve $\ce{Cd(OH)2}$, if there is any present. $\ce{Bi(OH)3}$ precipitate does not dissolve in 6M $\ce{NH3}$. If the white precipitate persists after washing with 6M $\ce{NH3}$ it confirms $\ce{Bi^{3+}}$ is present in the test solution, as shown in Figure $8$. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.2%3A_Separation_and_confirmation_of_individual_ions_in_group_II_precipitates.txt |
Table 1: List of chemicals and their hazards*
Chemical: Hazard
0.1M ammonium chloride (\(\ce{NH4Cl}\)): Toxic and irritant
0.1M bismuth nitrate in 0.3M \(\ce{HNO3}\): Toxic, irritant, and oxidant
0.1M cadmium chloride in 0.3M \(\ce{HNO3}\): Toxic and suspected carcinogen
0.1M copper(II) nitrate in 0.3M \(\ce{HNO3}\): Toxic, irritant, and oxidant
0.1M tin(IV) chloride in 0.3M \(\ce{HNO3}\): Corrosive and irritant
• *Hazards of 6M ammonia, 6M hydrochloric acid, 6M nitric acid, 3M potassium hydroxide, and 1M thioacetamide are listed in the common reagents table in chapter 2.
Caution
• Used heavy metal ion solutions or precipitates are disposed of in a labeled metal waste disposal container, do not drain these solutions down the drain or in the regular trash.
Procedure for the analyses of group II cations
1. Take 15 drops of the test solution if the group I cations are not present in the sample or take the supernatant of step 1 of group I analysis. Find its pH using a short-range pH paper. If the pH is 0.5 ±0.3 there is no need to adjust the pH. If the pH is lower, increase it to 0.5 ±0.3 by adding drops of 0.5M ammonia solution, one drop at a time while stirring. If the pH is higher, decrease it to 0.5 ±0.3 by adding drops of 0.5M \(\ce{HCl}\), one drop at a time while stirring. Then add 10 drops of 1M thioacetamide, stir, and heat for 10 min in a water bath. Add 1 drop of 0.5M \(\ce{NH3}\), stir, centrifuge for 2 min, and add 5 drops of 1M thioacetamide, stir, and heat again for 2 min. Cool in a room temperature water bath and add 1 more drop of 0.5M ammonia while stirring and centrifuge for 2 min. Decant and keep the supernatant for group III cations and keep the precipitate for separation and analysis of group II cations. The precipitate may be one or more of the following: \(\ce{SnS2}\) (yellow), \(\ce{CdS}\) (yellow-orange), \(\ce{CuS}\) (black-brown), \(\ce{Bi2S3}\) (black). Record the observation in the datasheet.
2. Wash the precipitate from step 1 by re-suspending it in 1 mL (20 drops) of 0.1M \(\ce{NH4Cl}\), centrifuge for 2 min, decant, and discard the supernatant which is just the washing liquid. Re-suspend the precipitate in 1 mL (20 drops) of 3M \(\ce{KOH}\) + 1 drop of 1M thioacetamide, stir, loosely stopper the test tube, and heat in a water bath for 2 min. Centrifuge the hot mixture for 2 min and decant while it is hot. Keep the supernatant for analysis of \(\ce{Sn^{4+}}\), which exists as soluble \(\ce{[Sn(OH)6]^{2-}}\) and \(\ce{[SnS3]^{2-}}\) ions at this stage, and keep the precipitate, if there is any, for analysis of the rest of the group II cations. Record the observation in the datasheet.
3. Add 6M \(\ce{HCl}\) drop by drop to the supernatant from step 2 and keep testing with blue litmus paper until the mixture turns acidic. Then add 5 drops of 1M thioacetamide, stir, and heat in a water bath for 2 min. Yellow precipitate at this stage is \(\ce{SnS2}\) which confirms \(\ce{Sn^{4+}}\) is present in the test sample, no yellow precipitate means \(\ce{Sn^{4+}}\) was not present. Record the observation in the datasheet and discard the mixture in a waste container.
4. Wash the precipitate from step 2 by re-suspending it in 10 drops of distilled water and then centrifuge for 2 min. Decant and discard the supernatant and wash the precipitate again by re-suspension in 10 drops of distilled water followed by centrifuging for 2 min; decant and discard the supernatant. Re-suspend the precipitate in 10 drops of distilled water + 2 drops of 6M \(\ce{HCl}\) and heat for 2 min. Centrifuge and decant while the mixture is still hot. If the supernatant appears turbid due to some precipitate left in it, use the cotton plug technique to aspirate clean supernatant and filter out the residual precipitate. Keep the supernatant for analysis of \(\ce{Cd^{2+}}\), which may exist as dissolved \(\ce{[CdCl4]^{2-}}\) ion at this stage, and keep the precipitate, if there is any, for analysis of remaining group II cations. Record the observation in the datasheet.
5. Add 6M \(\ce{NH3}\) drop by drop to the clear supernatant from step 4 and keep testing with red litmus paper until the solution turns basic. Add 2 drops of 1M thioacetamide, stir, and heat for 2 min in a water bath. If a yellow precipitate forms at this stage it is \(\ce{CdS}\) that confirms \(\ce{Cd^{2+}}\) was present in the test sample, otherwise \(\ce{Cd^{2+}}\) was not present. Record the observation in the datasheet and discard the mixture in a metal waste container.
6. Wash the precipitate from step 4, if there is any, by re-suspending it in 10 drops of distilled water, centrifuge for 2 min, decant and discard the supernatant. Re-suspend the precipitate in 10 drops of 6M HNO3 and heat in a boiling water bath for 5 min. The precipitate, i.e., \(\ce{CuS}\) and/or \(\ce{Bi2S3}\) will dissolve in the liquid, and \(\ce{Cu^{2+}}\) and/or \(\ce{Bi^{3+}}\) hydrated ions and yellow sulfur particles may form. Remove the sulfur particles by centrifugation and decantation and discard them as there is no ion in them. Keep the supernatant for the analysis of \(\ce{Cu^{2+}}\) and \(\ce{Bi^{3+}}\) and record the observation in the datasheet.
7. Add 6M NH3 drop by drop to the supernatant from step 6 and keep testing with red litmus paper till the solution turns alkaline. Add 10 more drops of 6M \(\ce{NH3}\) solution after the solution turns alkaline to make it strongly alkaline. If the mixture turns blue at this stage, it is due to the blue \(\ce{[Cu(NH3)4]^{2+}}\) ion, which confirms \(\ce{Cu^{2+}}\) is present in the test solution. If there is a white suspension in the mixture, keep it for testing \(\ce{Bi^{3+}}\). Record the observation in the datasheet.
8. Centrifuge the mixture from step 7 for 2 min and decant and discard the supernatant. If there is any white precipitate left after decantation, it is most likely \(\ce{Bi(OH)3}\). Wash the precipitate by re-suspending it in 10 drops of 6M \(\ce{NH3}\), centrifuge for 2 min, and decant. If the white precipitate remains there after the washing, it is \(\ce{Bi(OH)3}\) that confirms \(\ce{Bi^{3+}}\) is present in the test solution, otherwise, \(\ce{Bi^{3+}}\) is absent. Discard the mixture in a metal waste container and record the observation in the datasheet.
Datasheets filling instructions for group II cations
1. Step number refers to the corresponding step number in the procedure sub-section.
2. In “the expected chemical reaction and expected observations column”, write an overall net ionic equation of the reaction that will happen if the ion being processed in the step was present, write the expected color change of the solution, the expected precipitate formed and its expected color, etc.
3. In the “the actual observations and conclusion” column write the color change, the precipitate formed and its color, etc. that is actually observed as evidence, and state the specific ion as present or absent.
4. In “the overall conclusion” row write one by one symbol of the ions being tested with a statement “present” or “absent” followed by evidence/s to support your conclusion. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.3%3A_Procedure_flowchart_and_datasheets_for_separation_and_confirmation_of_group_II_cations.txt |
• 5.1: Separation of group III cations
After insoluble chlorides and acid-insoluble sulfides of group I and group II has been removed, the medium is made alkaline at pH ~9 where group III cations precipitate out as hydroxides, i.e., iron(II) hydroxide, iron (III) hydroxide, chromium(III) hydroxide, and nickel(II) hydroxides. These ions also precipitate out as insoluble sulfides in the alkaline medium.
• 5.2: Separation and confirmation of individual ions in group III precipitates
Nickel sulfide is separated based on its insolubility in HCl, then dissolved in aqua regia and confirmed by making a red precipitate of its complex. Iron(II) ions are oxidized to iron(III), dissolved in HCl, and confirmed by making a red color complex. Chromate ion is reduced to chromium(III) ion by acidic hydrogen peroxide, then confirmed by precipitating it upon making the medium alkaline.
• 5.3: Procedure, flowchart, and datasheets for separation and confirmation of group III cations
The procedure, flowchart, datasheets and filling instructions for the known sample that contains all of the group III cations and for the unknown sample that may contain some of the group III cations.
5: Group III cations
Group II cations form sulfides that have very low solubility. After group II cations are removed under a low concentration of $\ce{S^{2-}}$ in an acidic medium, the solution is made alkaline. Remember that, like sulfides, hydroxides are also insoluble according to insoluble ions rule#1 of the solubility guidelines described in chapter 1, which states “Hydroxide ($\ce{OH^{-}}$) and sulfides ($\ce{S^{2-}}$) are insoluble except when the cation is a heavy alkaline earth metal ion: $\ce{Ca^{2+}}$, $\ce{Ba^{2+}}$, and $\ce{Sr^{2+}}$, alkali metal ions, and ammonium ion.”
Table 1 lists the solubility product constants of hydroxides of group III & IV cations at 25 oC and the minimum hydroxide ($\ce{OH^{-}}$) concentration and pH needed to begin precipitation from a 0.1M solution of each cation that may be present in the test solution at this stage. It can be observed that the ions listed in table 1 will not precipitate as hydroxides during the precipitation of group II cations under the acidic pH range of 0.5 to 1.
$\ce{Fe^{3+}}$ forms the most insoluble hydroxide, but it is reduced to $\ce{Fe^{2+}}$ by $\ce{H2S}$ during precipitation of group II cations:
$\ce{2Fe^{3+}(aq) + S^{2-}(aq) <=> 2Fe^{2+}(aq) + S(s)}\nonumber$
$\ce{Fe^{3+}}$ may be present only if precipitation of group III starts from a fresh sample that has not been subjected to group II separation.
It can be observed from Table 1 that if the pH of the sample solution is increased to a range of 7 to 10, $\ce{Fe^{3+}}$, $\ce{Cr^{3+}}$, $\ce{Ni^{2+}}$, and $\ce{Fe^{2+}}$ will precipitate as $\ce{Fe(OH)3(s, rusty)}$, $\ce{Cr(OH)3(s, gray-green)}$, $\ce{Ni(OH)2(s, green)}$, and $\ce{Fe(OH)2(s, green)}$, leaving the rest of the ions that may still be present at this stage behind in the solution. Group III comprises $\ce{Fe^{3+}}$, $\ce{Cr^{3+}}$, $\ce{Ni^{2+}}$, and $\ce{Fe^{2+}}$ ions.
Table 1: Solubility product constants of hydroxides of group III & IV cations at 25 oC, and the minimum hydroxide ($\ce{OH^{-}}$) concentration and pH needed to precipitate from a 0.1M cation solution.*
Ion
Salt
Ksp at 25 oC
Minimum [OH-] and pH needed to precipitate
$\ce{Fe^{3+}}$
$\ce{Fe(OH)3}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{3+}\right]\left[\mathrm{OH}^{-}\right]^{3}=2.8 \times 10^{-39}$
$\left[\mathrm{OH}^{-}\right]=3.0 \times 10^{-13}~M=\mathrm{pH} ~1.5$
$\ce{Cr^{3+}}$
$\ce{Cr(OH)3}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cr}^{3+}\right]\left[\mathrm{OH}^{-}\right]^{3}=1.0 \times 10^{-30}$
$\left[\mathrm{OH}^{-}\right]=2.2 \times 10^{-10}~M=\mathrm{pH} ~4.3$
$\ce{Ni^{2+}}$
$\ce{Ni(OH)2}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ni}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=5.5 \times 10^{-16}$
$\left[\mathrm{OH}^{-}\right]=7.4 \times 10^{-8}~M=\mathrm{pH} ~6.9$
$\ce{Fe^{2+}}$
$\ce{Fe(OH)2}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=4.9 \times 10^{-17}$
$\left[\mathrm{OH}^{-}\right]=2.2 \times 10^{-8}~M=\mathrm{pH} ~6.3$
$\ce{Ca^{2+}}$
$\ce{Ca(OH)2}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ca}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=5.0 \times 10^{-6}$
$\left[\mathrm{OH}^{-}\right]=7.1 \times 10^{-3}~M=\mathrm{pH} ~11.9$
$\ce{Ba^{2+}}$
$\ce{Ba(OH)2}$
$\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ba}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=2.6 \times 10^{-4}$
$\left[\mathrm{OH}^{-}\right]=5.1 \times 10^{-2}~M=\mathrm{pH} ~12.7$
• * The following cations that may be present in the initial solution are not listed in this table for these reasons: i) $\ce{Pb^{2+}}$, $\ce{Hg2^{2+}}$, and $\ce{Ag^{+}}$ have already been removed as chloride precipitates of group I cations, ii) $\ce{Sn^{4+}}$, $\ce{Cd^{2+}}$, $\ce{Cu^{2+}}$, and $\ce{Bi^{3+}}$ have been removed as group II sulfides under pH 0.5 to 1, iii) $\ce{Na^{+}}$ and $\ce{K^{+}}$ form soluble compounds with all anions according to rule#1 of solubility described in chapter 1. Source: Engineering ToolBox, (2017). Solubility product constants. [online] Available at: https://www.engineeringtoolbox.com/s...sp-d_1952.html [Accessed Feb. 5th, 2022]
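The minimum [$\ce{OH^{-}}$] and pH values in Table 1 follow directly from the $K_{sp}$ expressions. The Python sketch below is an illustrative check only; it assumes 0.1 M of each cation and also reproduces the residual $\ce{Fe^{2+}}$ at pH ~9 discussed later in this section:

```python
import math

# Minimum [OH-] and pH needed to begin precipitating each hydroxide
# from an assumed 0.1 M cation solution (illustrative check of Table 1).

hydroxides = {            # salt: (Ksp, number of OH- per formula unit)
    "Fe(OH)3": (2.8e-39, 3),
    "Cr(OH)3": (1.0e-30, 3),
    "Ni(OH)2": (5.5e-16, 2),
    "Fe(OH)2": (4.9e-17, 2),
    "Ca(OH)2": (5.0e-6,  2),
    "Ba(OH)2": (2.6e-4,  2),
}
M = 0.1  # assumed cation concentration, mol/L

for salt, (Ksp, n) in hydroxides.items():
    OH = (Ksp / M) ** (1 / n)
    pH = 14 + math.log10(OH)
    print(f"{salt}: [OH-] = {OH:.1e} M, pH = {pH:.1f}")

# Residual Fe2+ at pH ~9, where [OH-] ~ 1e-5 M (as computed in the text):
print(f"Fe2+ left at pH 9: {4.9e-17 / (1e-5) ** 2:.1e} M")   # ~4.9e-7 M
```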
Buffers, which resist changes in pH, are employed in such situations where the pH needs to be maintained in a narrow range. Buffers are a mixture of a weak acid and its conjugate base or a mixture of a weak base and its conjugate acid. Ammonia ($\ce{NH3}$) is a weak base and ammonium ion ($\ce{NH4^{+}}$) is its conjugate acid.
The $\ce{NH3}$/$\ce{NH4^{+}}$ pair is a suitable buffer that can maintain a pH of around 9. The buffer is prepared by adding 2 drops of 6M $\ce{HCl}$ into 15 drops of the sample and then adding 6M $\ce{NH3}$ drop by drop to neutralize the acid.
$\ce{HCl(aq) + H2O(l) -> H3O^{+}(aq) + Cl^{-}(aq)}\nonumber$
$\ce{NH3(aq) + H3O^{+}(aq) -> NH4^{+}(aq) + H2O(l)}\nonumber$
$\text{Overall reaction:} \ce{~HCl(aq) + NH3(aq) -> NH4^{+}(aq) + Cl^{-}(aq)}\nonumber$
Then 5 more drops of 6M $\ce{NH3}$ are added after the $\ce{HCl}$ has been neutralized to make a mixture of $\ce{NH3}$ and $\ce{NH4^{+}}$ that maintains pH ~9 and [$\ce{OH^{-}}$] at around $1\times10^{-5}$ M.
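The resulting pH can be estimated from the base dissociation constant of ammonia ($K_b \approx 1.8\times10^{-5}$, a handbook value not quoted in this text). When the neutralization leaves roughly comparable amounts of $\ce{NH3}$ and $\ce{NH4^{+}}$:

$pOH = pK_b + \log\frac{[\ce{NH4^{+}}]}{[\ce{NH3}]} \approx 4.7,\quad\quad pH \approx 9.3,\quad\quad [\ce{OH^{-}}] \approx 2\times10^{-5}~M\nonumber$

which is close to the pH ~9 and [$\ce{OH^{-}}$] ≈ $1\times10^{-5}$ M quoted above.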
The group III cations precipitate at this stage as hydroxides, as shown in Figure $1$, except $\ce{Ni^{2+}}$:
$\ce{Fe^{3+}(aq) + 3OH^{-}(aq) -> Fe(OH)3(s, reddish-brown ~or ~rusty)(v),}\nonumber$
$\ce{Cr^{3+}(aq) + 3OH^{-}(aq) -> Cr(OH)3(s, gray-green)(v),}\nonumber$
$\ce{Fe^{2+}(aq) + 2OH^{-}(aq) -> Fe(OH)2(s, green)(v).}\nonumber$
The concentration of $\ce{Fe^{2+}}$, the group III cation that forms the most soluble hydroxide, is reduced by more than 99.99%, i.e., from 0.1M to $4.9\times10^{-7}$ M, when the pH is increased to 9 and the $\ce{OH^{-}}$ concentration is increased to $1\times10^{-5}$ M:
$\mathrm{Fe}^{2+}=\frac{\mathrm{K}_{\mathrm{sp}}}{\left[\mathrm{OH}^{-}\right]^{2}}=\frac{4.9 \times 10^{-17}}{\left(1 \times 10^{-5}\right)^{2}}=4.9 \times 10^{-7} \mathrm{~M}\nonumber$
Caution
Nickel ion is not precipitated at this stage as it forms the soluble coordination cation $\ce{[Ni(NH3)6]^{2+}}$ with ammonia:
$\ce{Ni^{2+}(aq, green) + 6NH3(aq) <=> [Ni(NH3)6]^{2+}(aq, blue)}\nonumber$
Therefore, $\ce{S^{2-}}$ is introduced by adding thioacetamide and heating the mixture in a boiling water bath. Decomposition of thioacetamide produces ~0.01M $\ce{H2S}$:
$\ce{CH3CSNH2(aq) + 2H2O(l) <=> CH3COO^{-}(aq) + NH4^{+}(aq) + H2S(aq)}\nonumber$
At pH ~9, the dissociation of $\ce{H2S}$ is far more extensive than in the acidic medium used for group II; the same equilibrium expression gives [$\ce{S^{2-}}$] on the order of $10^{-4}$ M from ~0.01M $\ce{H2S}$:
$\ce{H2S(aq) + 2H2O(l) <=>2H3O^{+}(aq) + S^{2-}(aq)}\quad K_a = \frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}\left[\mathrm{~S}^{2-}\right]}{\left[\mathrm{H}_{2} \mathrm{~S}\right]}=1.3 \times 10^{-20}\nonumber$
The ammonia complex of nickel, i.e., $\ce{[Ni(NH3)6]^{2+}}$, precipitates out as $\ce{NiS}$, and, at the same time, $\ce{Fe(OH)3}$ and $\ce{Fe(OH)2}$ also convert to $\ce{Fe2S3}$ and $\ce{FeS}$:
$\ce{Ni(NH3)6^{2+}(aq, blue) + S^{2-}(aq) <=> NiS(s, black) + 6NH3(aq)}\nonumber$
$\ce{2Fe(OH)3(s, reddish-brown) + 3S^{2-}(aq) <=> Fe2S3(s, yellow-green) + 6OH^{-}(aq)}\nonumber$
$\ce{Fe(OH)2(s, geen) + S^{2-}(aq) <=> FeS(s, black) + 2OH^{-}(aq)}\nonumber$
Chromium remains as $\ce{Cr(OH)3}$ precipitate because chromium sulfide is unstable in water.
Group III precipitates, i.e., $\ce{Cr(OH)3(s, gray-green)}$, $\ce{NiS(s, black)}$, $\ce{Fe2S3(s, yellow-green)}$, and $\ce{FeS(s, black)}$ in the mixture are separated as precipitates, and the rest of the ions, i.e., $\ce{Ca^{2+}}$, $\ce{Ba^{2+}}$, $\ce{Na^{+}}$ and $\ce{K^{+}}$, etc. remain dissolved in the supernatant, as shown in Figure $2$. The color of the precipitate does not give a clear indication of what ions are present at this stage as several species of different colors may be mixed at this stage. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/5%3A_Group_III_cations/5.1%3A_Separation_of_group_III_cations.txt
Separating and confirming nickel(II) ion
Acids like $\ce{HCl}$ dissolve the precipitates of group III cations, i.e., $\ce{Cr(OH)3(s, gray-green)}$, $\ce{Fe2S3(s, yellow-green)}$, and $\ce{FeS(s, black)}$, by the following series of reactions:
$\ce{Cr(OH)3(s, gray-green) <=> Cr^{3+}(aq) + 3OH^{-}(aq)}\nonumber$
$\ce{FeS(s, black) <=> Fe^{2+}(aq) + S^{2-}(aq)}\nonumber$
$\ce{Fe2S3(s, yellow-green) <=> 2Fe^{3+}(aq) + 3S^{2-}(aq)}\nonumber$
$\ce{2Fe^{3+}(aq) + 2S^{2-}(aq) + 2H3O^{+} <=> 2Fe^{2+}(aq) + H2S(aq) + S(s) + 2H2O(l)}\nonumber$
$\ce{3OH^{-}(aq) + 3H3O^{+} <=> 6H2O(l)}\nonumber$
$\ce{2S^{2-}(aq) + 4H3O^{+} <=> 2H2S(aq) + 4H2O(l)}\nonumber$
$\text{Overall reaction:}\ce{~Cr(OH)3(s, gray-green) + FeS(s, black) + Fe2S3(s, yellow-green) + 9H3O^{+} <=> Cr^{3+}(aq) + 3Fe^{2+}(aq) + 3H2S(aq) + S(s) + 12H2O(l)}\nonumber$
Removal of basic $\ce{OH^{-}}$ and $\ce{S^{2-}}$ ions from products by acid-base neutralization drives these reactions in the forward direction. $\ce{Fe^{3+}}$ is reduced to $\ce{Fe^{2+}}$ by $\ce{S^{2-}}$ under the acidic condition.
The solubility of $\ce{NiS}$ is very low and it does not dissolve in non-oxidizing acid like $\ce{HCl}$.
Therefore, the supernatant separated at this stage contains $\ce{Cr^{3+}}$ and $\ce{Fe^{2+}}$, and the precipitate, if present, is $\ce{NiS}$, as shown in Figure $1$.
Aqua regia, i.e., a mixture of $\ce{HCl}$ and $\ce{HNO3}$, can dissolve $\ce{NiS}$ precipitate by removing $\ce{Ni^{2+}}$ as the soluble coordination anion $\ce{[NiCl4]^{2-}}$ and, at the same time, removing $\ce{S^{2-}}$ by oxidizing it, using $\ce{NO3^{-}}$ as the oxidizing agent in the acidic medium.
$\ce{NiS(s, black) <=> Ni^{2+}(aq) + 2S^{2-}(aq)}\nonumber$
$\ce{Ni^{2+}(aq) + 4Cl^{-}(aq) <=> [NiCl4]^{2-}(aq)}\nonumber$
$\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber$
Nitrogen oxide ($\ce{NO}$) evaporates from the liquid mixture, further driving the equilibrium in the forward direction. Most of the $\ce{NO}$ is oxidized in air to nitrogen dioxide ($\ce{NO2}$), which forms brown fumes over the liquid mixture as shown in Figure $2$:
$\ce{2NO(g) + O2(g) <=> 2NO2(g, red-brown)}\nonumber$
The S precipitates are removed by centrifugation and decantation. The $\ce{[NiCl4]^{2-}}$ coordination anion is converted to the $\ce{[Ni(NH3)6]^{2+}}$ coordination cation by making the solution alkaline through ammonia addition:
$\ce{[NiCl4]^{2-}(aq) + 6NH3(aq) <=> [Ni(NH3)6]^{2+}(aq, blue) + 4Cl^{-}(aq)}\nonumber$
Dimethyl glyoxime, $\ce{(CH3)2C2(NOH)2}$, is a ligand that is capable of forming two coordinate covalent bonds with transition metal ions. Ligands like $\ce{Cl^{-}}$, $\ce{NH3}$, $\ce{H2O}$, etc. that form one coordinate covalent bond with transition metals are called mono-dentate, and ligands like dimethyl glyoxime that form two coordinate covalent bonds are called bidentate. The ligands that can form two or more coordinate covalent bonds are also called chelates or chelating agents. Coordination complexes with chelates are usually more stable, i.e., have higher formation constants, than those with mono-dentate ligands.
The addition of dimethyl glyoxime $\ce{(CH3)2C2(NOH)2}$ to the liquid mixture containing $\ce{[Ni(NH3)6]^{2+}}$ in an alkaline medium forms an insoluble coordination compound, nickel dimethylglyoximate ($\ce{NiC8H14N4O4}$), that separates as a red color precipitate, as shown in Figure $3$:
The structure of the dimethyl glyoxime chelating agent and its coordination complex with nickel is illustrated in Figure $4$ below.
The formation of red color precipitate upon the addition of dimethyl glyoxime at this stage confirms the presence of nickel ion in the test sample.
Separating and confirming iron ions
The supernatant containing $\ce{Fe^{2+}}$ and $\ce{Cr^{3+}}$ ions is separated from $\ce{NiS}$ precipitate after the addition of $\ce{HCl}$ to the precipitates of group III cations. The supernatant is made alkaline to pH 9 to 10 by adding ammonia solution. A pH paper is used to determine the pH. Hydrogen peroxide ($\ce{H2O2}$) is added as an oxidizing agent to the alkaline solution. $\ce{Fe^{2+}}$ is oxidized to $\ce{Fe^{3+}}$ and precipitates out as rusty-brown solid $\ce{Fe(OH)3}$, and $\ce{Cr^{3+}}$ is oxidized to soluble chromate ion ($\ce{CrO4^{2-}}$) under this condition:
$\ce{2Fe^{2+}(aq) + H2O2(aq) <=> 2Fe^{3+}(aq) + 2OH^{-}(aq)}\nonumber$
$\ce{Fe^{3+}(aq) + 3OH^{-}(aq) <=> Fe(OH)3(s, rusty-brown)(v)}\nonumber$
$\ce{2Cr^{3+}(aq) + 3H2O2(aq) + 10OH^{-}(aq) <=> 2CrO4^{2-}(aq) + 8H2O(l)}\nonumber$
The mixture is centrifuged to separate supernatant containing $\ce{CrO4^{2-}}$ ions and precipitate containing rusty brown precipitate $\ce{Fe(OH)3}$, as shown in Figure $5$.
The $\ce{Fe(OH)3}$ precipitate is dissolved in $\ce{HCl}$ solution:
$\ce{Fe(OH)3(s, rusty-brown) <=> Fe^{3+}(aq) + 3OH^{-}(aq)}\nonumber$
$\ce{3OH^{-}(aq) + 3H3O^{+}(aq) <=> 6H2O(l)}\nonumber$
Thiocyanate ($\ce{SCN^{-}}$) is a ligand that forms deep-red coordination complex ion $\ce{[FeSCN]^{2+}}$ by reacting with $\ce{Fe^{3+}}$, as shown in Figure $6$.
$\ce{Fe^{3+}(aq) + SCN^{-}(aq) <=> [FeSCN]^{2+}(aq, deep-red)}\nonumber$
Turning the supernatant color to deep-red upon addition of thiocyanate confirms iron ions are present in the test sample.
Confirming chromium(III) ion
The supernatant obtained after removal of $\ce{Fe(OH)3}$ precipitate contains $\ce{CrO4^{2-}}$ ions in an alkaline medium. The solution is made acidic by the addition of nitric acid where $\ce{CrO4^{2-}}$ converts to dichromate ion ($\ce{Cr2O7^{2-}}$):
$\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)} \nonumber$
$\ce{H2O2}$ is a reducing agent in acidic medium. $\ce{H2O2}$ is added to the acidic mixture to reduce $\ce{Cr2O7^{2-}}$ to $\ce{Cr^{3+}}$ through the following reactions:
$\ce{2Cr2O7^{2-}(aq) + 8H2O2(aq) + 4H3O^{+}(aq) <=> 4CrO5(aq, dark-blue) + 14H2O(l)} \nonumber$
$\ce{4CrO5(aq) + 12H3O^{+}(aq) <=> 4Cr^{3+}(aq, light-blue) + 7O2(g)(^) + 18H2O(l)} \nonumber$
Oxygen evolves from the mixture and can be observed as gas bubbles in the solution. $\ce{CrO5}$ intermediate is a dark blue color in which one oxygen is in -2 oxidation state and the other four oxygen are in -1 oxidation state. $\ce{CrO5}$ is unstable in solution and decomposes to $\ce{Cr^{3+}}$ which is a light blue color. Residual $\ce{H2O2}$ is destroyed by heating the mixture in a boiling water bath, which can be observed through oxygen gas bubbling out. Keep in mind that the destruction of $\ce{H2O2}$ is significantly slower in an acidic medium than in an alkaline medium. It may take a longer time to destroy $\ce{H2O2}$ in the acidic medium. Then the solution is changed from acidic to alkaline by adding 6M NaOH to the mixture. $\ce{Cr^{3+}}$ precipitates out as gray-green $\ce{Cr(OH)3}$ solid:
$\ce{Cr^{3+}(aq) + 3OH^{-}(aq) <=> Cr(OH)3(s, gray-green)(v)} \nonumber$
The formation of gray-green precipitate at this stage confirms $\ce{Cr^{3+}}$ is present in the test sample, as shown in Figure $7$. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/5%3A_Group_III_cations/5.2%3A_Separation_and_confirmation_if_individual_ions_in_group_III_precipitates.txt |
Table 1. List of chemicals and their hazards*
Chemical: Hazard
0.1M ammonium chloride (\(\ce{NH4Cl}\)): Toxic and irritant
0.1M chromium(III) chloride: Toxic and irritant
0.1M iron(III) chloride: Toxic and corrosive
0.1M nickel(II) chloride: Toxic, irritant, and suspected carcinogen
• *Hazards of 6M ammonia, 6M hydrochloric acid, 6M nitric acid, 3% hydrogen peroxide, and 1M thioacetamide are listed in chapter 2 in the commonly used reagent section. Caution! Used heavy metal ion solutions are disposed of in a labeled metal waste disposal container, do not drain these solutions down the drain.
Caution
• Used heavy metal ion solutions or precipitates are disposed of in a labeled metal waste disposal container, do not drain these solutions down the drain or in the regular trash.
Procedure for the analyses of group III cations
1. Take 15 drops of the fresh test solution if group I and group II cations are not present in the sample or take the supernatant of step 1 of group II cations analysis. Add 2 drops of 6M \(\ce{HCl}\). Then add 6M \(\ce{NH3}\) drop by drop while stirring till the solution turns basic. Use red litmus paper to test -when it turns blue the solution is basic. Add 5 more drops of 6M \(\ce{NH3}\) after the solution becomes alkaline to make \(\ce{NH3}\)/\(\ce{NH4^{+}}\) buffer. Record the observations in the datasheet.
2. Add 10 drops of 1M thioacetamide to the solution from step 1, stir, and heat in a boiling water bath for 10 min. Then centrifuge for 2 min and decant. Keep the supernatant for group IV cations analysis and keep the precipitate for group III cations. Record the observations in the datasheet.
3. Wash the group III precipitates from step 2 by re-suspending in 15 drops of 0.1M \(\ce{NH4Cl}\). Then centrifuge, decant, and discard the supernatant that is just the washing liquid. Re-suspend the precipitates in 10 drops of 6M \(\ce{HCl}\), heat for 2 min in a boiling water bath, centrifuge for two minutes, and keep the supernatant that may contain \(\ce{Fe^{2+}}\) and/or \(\ce{Cr^{3+}}\) and keep the precipitate, if there is any. Record the observations in the datasheet.
4. Wash the precipitate of step 3 by re-suspending it in 15 drops of distilled water. Then centrifuge for 1 min, decant and discard the supernatant that is just the washing liquid. Re-suspend the precipitate after adding 4 drops of 6M \(\ce{HCl}\) and 6 drops of 6M \(\ce{HNO3}\) (i.e., aqua regia). Heat the suspension in a boiling water bath for 2 min, then centrifuge for 2 min, decant, and discard the precipitate which is solid sulfur that contains no ions in it, but keep the supernatant for nickel analysis.
5. Use the cotton-plug technique to aspirate clear supernatant if it is not already a clear solution. Add 6M \(\ce{NH3}\) to the clear supernatant drop by drop till the solution turns alkaline. Use red litmus paper to test -when it turns blue the solution is alkaline. If the solution turns turbid at this stage, centrifuge for 1 min, decant and discard the precipitate, but keep the supernatant.
6. Add 5 drops of dimethylglyoxime to the clear solution of step 5, stir, and leave for a minute. If a bright red precipitate is formed at this stage, it confirms \(\ce{Ni^{2+}}\) is present in the test sample. Discard the mixture in the metal waste container and record the observations in the datasheet.
7. Inspect the supernatant of step 3; if it is not clear, make it clear using the cotton plug technique. Add 6M \(\ce{NH3}\) drop-by-drop to the clear supernatant while stirring until the mixture turns alkaline and has a pH in the range of 9 to 10. Use a pH paper (not a litmus paper) to determine the pH. Add 5 drops of 3% \(\ce{H2O2}\) to the alkaline solution, stir, and leave for half a minute. Heat the mixture in a boiling water bath to destroy excess \(\ce{H2O2}\) till the oxygen gas bubbles stop evolving from the solution. It may take about 3 min or more. Centrifuge for 1 min and test again for pH with a pH paper -if pH is less than 9, repeat this step 7 from the beginning, otherwise decant and keep the supernatant for analysis of chromium ions and keep the precipitate, if there is any, for analysis of iron ions. Record the observations in the datasheet.
8. Dissolve the precipitate from step 7 in 5 drops of 6M \(\ce{HCl}\) under stirring. Add 5 drops of distilled water to the solution followed by 5 drops of 0.1M potassium thiocyanate (\(\ce{KSCN}\)) and stir to mix. If the solution color changes to deep red, it confirms iron ions are present in the test sample. Discard the mixture in a metal waste container, and record the observations in the datasheet.
9. To the supernatant of step 7 add 6M \(\ce{HNO3}\) drop by drop till the solution becomes acidic with pH ~3. Use pH paper(not litmus paper) to determine the pH. Then add 1 drop of 3% \(\ce{H2O2}\), mix and leave for half-min. Then heat in boiling water bath to destroy excess \(\ce{H2O2}\) till oxygen bubbles stop forming in the mixture. It may take about 3 min or more. Cool the mixture by placing it in a room temperature water bath.
10. Add 6M \(\ce{NaOH}\) drop by drop to the solution of step 9 at room temperature till the solution is basic, i.e., turns red litmus paper to blue. The formation of gray-green precipitate at this stage confirms \(\ce{Cr^{3+}}\) is present in the test sample. Caution: \(\ce{H2O2}\) decomposes slower in the acidic medium than in the basic medium. If the solution turns dark blue or yellow, it indicates chromium is present as \(\ce{CrO5}\) or \(\ce{CrO4^{2-}}\) and \(\ce{H2O2}\) was not destroyed completely. In this case, add 6M \(\ce{HNO3}\) drop by drop with stirring till the color fades away. Then repeat the addition of 6M NaOH till the solution turns basic and observe. The formation of gray-green precipitate at this stage confirms \(\ce{Cr^{3+}}\) is present in the test sample. Discard the mixture in a metal waste container and record the observations in the datasheet.
Datasheets filling instructions for group III cations
1. Step number refers to the corresponding step number in the procedure sub-section.
2. In “the expected chemical reaction and expected observations column”, write an overall net ionic equation of the reaction that will happen if the ion being processed in the step was present, write the expected color change of the solution, the expected precipitate formed and its expected color, etc.
3. In the “the actual observations and conclusion” column write the color change, the precipitate formed and its color, etc. that is actually observed as evidence, and state the specific ion as present or absent.
4. In “the overall conclusion” row write one by one symbol of the ions being tested with a statement “present” or “absent” followed by evidence/s to support your conclusion. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/5%3A_Group_III_cations/5.3%3A_Procedure_flowchart_and_datasheets_for_separation_and_confirmation_of_group_III_cations.txt |
• 6.1: Separating group IV cations
After removal of insoluble chlorides, sulfides, and hydroxides, group IV cations comprising the heavy alkaline earth metals, i.e., calcium, strontium, and barium, are separated as insoluble carbonates from soluble alkali metals and ammonium ions. The separation is based on a general rule of solubility that states “Carbonates, phosphates, and oxide are insoluble except alkali metals and ammonia.”
• 6.2: Separation and confirmation of individual ions in group IV precipitates and group V mixture
Insoluble carbonates of group IV dissolve in acetic acid and barium chromate is selectively precipitated from the solution. Then calcium is precipitated as insoluble oxalate. Barium imparts a yellow-green color to the flame and calcium imparts a brick-red color to the flame. Group V comprises alkali metals that form soluble ionic compounds: lithium imparts carmine red, sodium imparts intense yellow, and potassium imparts purple or lilac color to the flame.
• 6.3: Procedure, flowchart, and datasheets for separation and confirmation of group IV and group V cations
The procedure, flowchart, datasheets, and filling instructions for the known sample that contains all of the group IV and group V cations and for the unknown sample that may contain some of the group IV and group V cations.
6: Group IV and Group V cations
After removing chloride insoluble salts as the group I, and sulfide insoluble salts as group II and group III, the cations that may still be present in the solution from the initial mixture include $\ce{Ca^{2+}}$, $\ce{Ba^{2+}}$, $\ce{Na^{+}}$, and $\ce{K^{+}}$. Group IV comprises $\ce{Ca^{2+}}$ and $\ce{Ba^{2+}}$ that are separated from the other two ions based on the insoluble ions rule#2 described in chapter 1 which states “Carbonates ($\ce{CO3^{2-}}$), phosphates ($\ce{PO4^{3-}}$), and oxide ($\ce{O^{2-}}$) are insoluble with the exception of alkali metals and ammonia.” Carbonate ion is introduced as ammonium carbonate ($\ce{(NH4)2CO3}$):
$\ce{(NH4)2CO3(s) + 2H2O(l) -> 2NH4^{+}(aq) + CO3^{2-}(aq)}\nonumber$
Addition of $\ce{(NH4)2CO3}$ solution cause precipitation of $\ce{Ca^{2+}}$ and $\ce{Ba^{2+}}$ as white precipitates $\ce{CaCO3}$ and $\ce{BaCO3}$, as shown in Figure $1$:
$\ce{Ca^{2+}(aq) + CO3^{2-}(aq) <=> CaCO3(s, white)(v)}\quad K_{sp} = 6\times10^{-9}\nonumber$
$\ce{Ba^{2+}(aq) + CO3^{2-}(aq) <=> BaCO3(s, white)(v)}\quad K_{sp} = 3\times10^{-9}\nonumber$
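A rough calculation indicates how complete this carbonate precipitation is. The Python sketch below is illustrative only; the initial 0.1 M cation concentrations and the ~0.5 M excess carbonate remaining after mixing are assumed values, since the exact excess depends on the volumes used:

```python
# Residual Ca2+ and Ba2+ after precipitation with excess carbonate
# (illustrative estimate; assumptions are noted in the comments).

Ksp = {"CaCO3": 6e-9, "BaCO3": 3e-9}
CO3 = 0.5   # M, assumed excess carbonate left in solution after mixing
M0  = 0.1   # M, assumed initial concentration of each cation

for salt, k in Ksp.items():
    left = k / CO3
    print(f"{salt}: {left:.1e} M remains "
          f"({100 * left / M0:.1e}% of the initial amount)")
```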
The precipitates of group IV cations are separated by centrifugation and decantation. The precipitate is used to separate and confirm group IV cations and the supernatant is kept for the analysis of group V cations.
6.2: Separation and confirmation of individual ions in group IV precipitates and group V mixture
Separating and confirming barium ion
The precipitates of group IV cations, i.e., $\ce{CaCO3}$ and $\ce{BaCO3}$, are soluble in an acidic medium. In these experiments, acetic acid ($\ce{CH3COOH}$) is used to make the solution acidic, which results in the dissolution of $\ce{CaCO3}$ and $\ce{BaCO3}$:
$\ce{4CH3COOH(aq) + 4H2O(l) <=> 4CH3COO^{-}(aq) + 4H3O^{+}(aq)}\nonumber$
$\ce{CaCO3(s, white) <=> Ca^{2+}(aq) + CO3^{2-}(aq)}\nonumber$
$\ce{BaCO3(s, white) <=> Ba^{2+}(aq) + CO3^{2-}(aq)}\nonumber$
$\ce{2CO3^{2-}(aq) + 4H3O^{+}(aq) <=> 2H2CO3(aq) +4H2O(l)}\nonumber$
$\ce{2H2CO3(aq) <=> 2H2O(l) + 2CO2(g)(^)}\nonumber$
$\text{Overall reaction: }\ce{~4CH3COOH(aq) + CaCO3(s, white) + BaCO3(s, white) <=> 4CH3COO^{-}(aq) + Ca^{2+} + Ba^{2+} + 2H2O(l) + 2CO2(g)(^)}\nonumber$
$\ce{CO3^{2-}}$ ion is a weak base that reacts with $\ce{H3O^{+}}$ and forms carbonic acid ($\ce{H2CO3}$). Carbonic acid is unstable in water and decomposes into carbon dioxide and water. Carbon dioxide leaves the solution, which drives the reactions forward, as shown in Figure $1$.
The acetate ion ($\ce{CH3COO^{-}}$) produced in the above reactions is the conjugate base of the weak acid acetic acid ($\ce{CH3COOH}$). More acetic acid is added to the solution to make a $\ce{CH3COOH}$/$\ce{CH3COO^{-}}$ buffer that can maintain the $pH$ at ~5.
Potassium chromate ($\ce{K2CrO4}$) solution is added at this stage that introduces chromate ion $\ce{CrO4^{2-}}$:
$\ce{K2CrO4(s) <=> 2K^{+}(aq) + CrO4^{2-}(aq)}\nonumber$
Although both calcium and barium ions form insoluble salts with chromate ion ($\ce{CaCrO4}$, $K_{sp}$ = $7.1\times10^{-4}$, and $\ce{BaCrO4}$, $K_{sp}$ = $1.8\times10^{-10}$), $\ce{BaCrO4}$ is much less soluble and can be selectively precipitated by controlling the $\ce{CrO4^{2-}}$ concentration. The chromate ion is involved in the following $pH$-dependent equilibrium:
$\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)}\quad K = 4.0\times10^{14}\nonumber$
At $pH$ ~5 in a $\ce{CH3COOH}$/$\ce{CH3COO^{-}}$ buffer, the concentration of $\ce{CrO4^{2-}}$ is enough to selectively precipitate barium ions leaving calcium ions in the solution:
$\ce{Ba^{2+}(aq) + CrO4^{2-}(aq) <=> BaCrO4(s, light~yellow)(v)}\nonumber$
The mixture is centrifuged and decanted to separate $\ce{BaCrO4}$ precipitate from the supernatant containing $\ce{Ca^{2+}}$ ions as shown in Figure $2$. Although the formation of a light yellow precipitate ($\ce{BaCrO4}$ ) at this stage is a strong indication that $\ce{Ba^{2+}}$ is present in the test sample, $\ce{Ca^{2+}}$ may also form a light yellow precipitate ($\ce{CaCrO4}$ ), particularly if pH is higher than the recommended value of 5.
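The two $K_{sp}$ values quoted above define the chromate window that makes this separation selective. The Python sketch below is illustrative only and assumes both cations start at 0.1 M:

```python
# Selectivity window for precipitating BaCrO4 while leaving Ca2+ dissolved
# (illustrative calculation using the Ksp values quoted in the text).

Ksp_CaCrO4 = 7.1e-4
Ksp_BaCrO4 = 1.8e-10
M = 0.1  # assumed initial concentration of each cation, mol/L

CrO4_max = Ksp_CaCrO4 / M         # largest [CrO4 2-] before CaCrO4 forms
Ba_left  = Ksp_BaCrO4 / CrO4_max  # Ba2+ still dissolved at that limit

print(f"[CrO4 2-] must stay below {CrO4_max:.1e} M to keep Ca2+ dissolved")
print(f"Ba2+ remaining at that limit: {Ba_left:.1e} M")   # ~2.5e-8 M
```

A chromate concentration anywhere below that limit but well above the barium threshold precipitates barium essentially completely while leaving calcium in solution, which is why the acetate buffer (pH ~5) is used to hold [$\ce{CrO4^{2-}}$] in this window.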
Group IV and V cations are most often confirmed by the flame test. Figure $3$ shows the flame test results of group IV cations. The presence of barium is further confirmed by a flame test. For this purpose, the $\ce{BaCrO4}$ precipitate is treated with 12M $\ce{HCl}$. The concentrated $\ce{HCl}$ removes $\ce{CrO4^{2-}}$ by converting it to dichromate ($\ce{Cr2O7^{2-}}$), resulting in the dissolution of $\ce{BaCrO4}$:
$\ce{2BaCrO4(s, light~yellow) <=> 2Ba^{2+}(aq) + 2CrO4^{2-}(aq)}\nonumber$
$\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)}\nonumber$
A flame test is applied to the solution. $\ce{Ba^{2+}}$ imparts characteristic yellow-green color to the flame. If the yellow-green color is observed in the flame test, it confirms $\ce{Ba^{2+}}$ is present in the test sample.
Confirming calcium ion
The $\ce{Ca^{2+}}$ present in the supernatant is precipitated by adding oxalate ion ($\ce{C2O4^{2-}}$):
$\ce{Ca^{2+}(aq) + C2O4^{2-}(aq) <=> CaC2O4(s, white)(v)}\nonumber$
The formation of white precipitate, i.e., $\ce{CaC2O4}$ shown in Figure $4$, is a strong indication that $\ce{Ca^{2+}}$ is present in the test sample. However, if $\ce{Ba^{2+}}$ is not fully separated earlier, it also forms a white precipitate $\ce{BaC2O4}$. The presence of $\ce{Ca^{2+}}$ is further verified by flame test. For this purpose, the precipitate is dissolved in 6M $\ce{HCl}$:
$\ce{CaC2O4(s, white) <=> Ca^{2+}(aq) + C2O4^{2-}(aq)}\nonumber$
$\ce{C2O4^{2-}(aq) + 2H3O^{+}(aq) <=> H2C2O4(aq)}\nonumber$
A strong acid like $\ce{HCl}$ increases the $\ce{H3O^{+}}$ ion concentration, which drives the above reaction forward based on Le Chatelier's principle. The flame test is applied to the solution. If $\ce{Ca^{2+}}$ is present in the solution, it imparts a characteristic brick-red color to the flame, as shown in Figure $3$. Observation of the brick-red color in the flame test confirms the presence of $\ce{Ca^{2+}}$ in the test sample. The flame color changes to light green when seen through cobalt blue glass.
Confirming group V cations by the flame test
Group V cations, i.e., the alkali metal ions $\ce{Na^{+}}$, $\ce{K^{+}}$, etc., form soluble ionic compounds. Separation of alkali metal cations by selective precipitation is not possible using commonly available reagents. So, group V cations are not separated in these analyses. However, alkali metal cations impart characteristic colors to the flame that help in their confirmation, as shown in Figure $5$. The supernatant after separating the group IV precipitate is concentrated by heating the solution to evaporate the solvent. A flame test is applied to the concentrated solution.
Lithium imparts carmine red, sodium imparts intense yellow, and potassium imparts lilac color to the flame. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/6%3A_Group_IV_and_Group_V_cations/6.1%3A_Separating_group_IV_cations.txt |
Table 1. List of chemicals and their hazards*
Chemical: Hazard
6M acetic acid (\(\ce{CH3COOH}\)): Toxic and corrosive
0.2M ammonium oxalate: Irritant
0.1M barium chloride: Highly toxic
0.1M calcium chloride: Irritant
0.1M potassium chromate: Suspected carcinogen
• *Hazards of 6M ammonia, 6M hydrochloric acid, 6M nitric acid, 3% hydrogen peroxide, and 1M thioacetamide are listed in chapter 2 in the commonly used reagent section. Caution! Used heavy metal ion solutions are disposed of in a Labeled metal waste disposal container, do not drain these solutions down the drain.
Caution
• Used heavy metal ion solutions or precipitates are disposed of in a labeled metal waste disposal container, do not drain these solutions down the drain or in the regular trash.
Procedure for the analyses of group IV and group V cations
1. Take 15 drops of the fresh test solution if the group I to III cations are not present in the sample or take the supernatant of step 2 of group III cations analysis. Add 15 drops of 3M \(\ce{(NH4)2CO3}\), stir to thoroughly mix using a clean glass rod, centrifuge for 2 min, decant and keep the supernatant for group V tests, and keep the precipitate for group IV cations analysis. Record the observations in the datasheet.
2. Wash the precipitate of step 1 by re-suspending it in 15 drops of distilled water under stirring, centrifuge for 2 min, decant and keep the precipitate and discard the supernatant which is just the washing liquid. Add 5 drops of 6M acetic acid to the precipitate and heat for half-min while stirring to dissolve the precipitate. Add 2 more drops of 6M acetic acid while heating and stirring if needed to fully dissolve the precipitate. After the precipitate has dissolved, add 3 more drops of 6M acetic acid to make a \(\ce{CH3COOH}\)/\(\ce{CH3COO^{-}}\) buffer. Record the observations in the datasheet.
3. Add 10 drops of 0.1M \(\ce{K2CrO4}\), stir to mix, and heat for 1 min. Immediately centrifuge for 2 min and decant while hot. Keep the supernatant for analysis of \(\ce{Ca^{2+}}\). If a light-yellow precipitate is formed at this stage it is most likely \(\ce{BaCrO4}\) due to \(\ce{Ba^{2+}}\) present in the test sample. Keep the precipitate for the flame test. Record the observations in the datasheet.
4. Wash the precipitate of step 3 by re-suspending it in 15 drops of distilled water, centrifuge for 2 min, decant, and discard the supernatant water which is just the washing solvent. Add 5 drops of 12M \(\ce{HCl}\) to the precipitate, stir to mix, and heat in a boiling water bath for 2 min to dissolve the precipitate. Perform the flame test, i.e., dip a clean nichrome or platinum wire loop in the solution and then place the loop on the outer edge of a blue flame of a Bunsen burner, approximately halfway between the top and bottom of the flame, and observe the flame color. If the solution imparts a yellow-green color to the flame, it is due to barium ion, confirming \(\ce{Ba^{2+}}\) is present in the test sample. The nichrome wire can be re-used after dipping it in 6M \(\ce{HCl}\) followed by making it red-hot in a flame. Repeat this process until the wire does not impart color to the flame. Then the wire can be re-used. Another approach is to cut off the end part of the wire that was dipped in the salt, make a new loop at the fresh end, and use it for the next flame test. Discard the solution in a metal waste container and record the observations in the datasheet.
5. To the supernatant from step 3, add 10 drops of 0.2M ammonium oxalate (\(\ce{(NH4)2C2O4}\)), stir to mix, centrifuge for 2 min, decant, discard the supernatant and observe the precipitate. The formation of white precipitate at this stage is \(\ce{CaC2O4}\) which is a strong indication of \(\ce{Ca^{2+}}\) is present in the test solution. Keep the precipitate for the flame test. Record the observation in the datasheet.
6. Dissolve the precipitate of step 5 in 3 drops of 6M \(\ce{HCl}\). Perform the flame test, i.e., dip a clean nichrome or platinum wire loop in the solution and then place the loop in the outer edge of a blue flame of a Bunsen burner, approximately halfway between the top and bottom of the flame, and observe the flame color. If the solution imparts a brick-red color to the flame, it is due to calcium ions, confirming \(\ce{Ca^{2+}}\) is present in the test sample. Discard the solution in the metal waste container and record the observations in the datasheet.
7. Group V cations: Evaporate excess water from the supernatant of step 1 by heating. If any solid residue is left it is due to group V cations, i.e., sodium, potassium, etc. Add a drop or two of water to dissolve the residue. Perform the flame test, i.e., dip a clean nichrome or platinum wire loop in the solution and then place the loop in the outer edge of a blue flame of a Bunsen burner, approximately halfway between the top and bottom of the flame and observe the flame color. If the solution imparts some color to the flame, it is due to group V cations: an intense yellow color flame confirms \(\ce{Na^{+}}\) is present in the test solution, and a purple or lilac color to the flame confirms \(\ce{K^{+}}\) is present in the test solution. Discard the solution in the metal waste container and record your observations in the datasheet.
Datasheets filling instructions for group IV and group V cations
1. Step number refers to the corresponding step number in the procedure sub-section.
2. In “the expected chemical reaction and expected observations column”, write an overall net ionic equation of the reaction that will happen if the ion being processed in the step was present, write the expected color change of the solution, the expected precipitate formed and its expected color, etc.
3. In the “the actual observations and conclusion” column write the color change, the precipitate formed and its color, etc. that is actually observed as evidence, and state the specific ion as present or absent.
4. In “the overall conclusion” row write one by one symbol of the ions being tested with a statement “present” or “absent” followed by evidence/s to support your conclusion. | textbooks/chem/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/6%3A_Group_IV_and_Group_V_cations/6.3%3A_Procedure_flowchart_and_datasheets_for_separation_and_confirmation_of_group_IV_and_group_V_cations.txt |
• 1.1: Electronic transitions and luminescence
Luminescence is the emission of light due to transitions of electrons from molecular orbitals of higher energy to those of lower energy, usually the ground state or the lowest unoccupied molecular orbitals. Such transitions are referred to as relaxations.
• 1.2: Chemiluminescence Spectroscopy
The importance of chemiluminescence spectroscopy lies more in elucidating the mechanisms of chemiluminescence reactions rather than in analytical applications. In particular, spectroscopic investigations have been found useful for the identification of the emitter species in particular chemiluminescence reactions.
Thumbnail: Chemiluminescence after a reaction of hydrogen peroxide and luminol. This is an image from video youtu.be/8_82cNtZSQE. (CC BY-SA 4.0; Tavo Romann).
1: Introduction to Chemiluminescence
Luminescence is the emission of light due to transitions of electrons from molecular orbitals of higher energy to those of lower energy, usually the ground state or the lowest unoccupied molecular orbitals. Such transitions are referred to as relaxations. Figure A1.1 shows four electronic energy levels (S0, S1, S2 and T1) and the possible transitions between them. S0 represents the ground state, while S1, S2 and T1 represent higher-energy excited states; S0, S1 and S2 are singlet states in which all the electrons form pairs of opposed spins, whereas T1 is a triplet excited state, in which not all electrons are paired off in this way.
Figure A1.1 – Jablonski diagram showing four electronic energy levels S0, S1, S2 and T1, with their vibrational fine structure and the transitions between them that affect luminescence.
Each energy level is subdivided into a number of vibrational states, each characterised by an amount of vibrational energy that accompanies the potential energy of the electrons occupying the orbitals. Luminescence is classified according to the excited state that gives rise to it and to the source of the energy that caused the excited state to be populated with electrons. The promotion of electrons to an excited state is called excitation. In many cases, this is brought about by absorption of visible or ultraviolet radiation. In such a case, if the luminescence arises because electrons are relaxing from a singlet excited state to a singlet ground state, then it is called fluorescence, and generally occurs within 10−11 to 10−5 s. The transition is very fast because it involves no reversal of electron spin. If, however, it arises due to relaxation from a triplet excited state, then the luminescence is called phosphorescence, which generally occurs within 10−4 to 100 s. If the excitation is the result of energy released in a chemical reaction, the luminescence is called chemiluminescence. A subset of chemiluminescence occurring in the biosphere as a result of biological processes is called bioluminescence. Electrochemiluminescence is another distinct subset of chemiluminescence phenomena, made up of those reactions in which the excited species is produced at an electrode during electrolysis.
Before luminescence occurs, there is a non-radiative loss of energy (due to collisions between molecules) as the excited state relaxes to a lower vibrational state while remaining at the same electronic energy level. This type of transition is called vibrational deactivation. It has to occur even more rapidly than fluorescence and typically occurs within 10−12 s of excitation. Therefore the luminescence involves the emission of photons of lower energy (longer wavelength) than would otherwise be the case. Another possible transition is internal conversion, in which an electron transfers from a lower vibrational state of a higher electronic energy level to a higher vibrational state of a lower electronic energy level, without any significant gain or loss of energy; such a transition, S2 → S1, is shown schematically in Figure 1.1. In intersystem crossing, internal conversion also involves reversal of the spin of the electron, as in a transition from a singlet to a triplet state; the transition S2 → T1 in Figure 1.1 is of this type. Such transitions can give rise to phosphorescence. Finally, luminescence is not inevitable. The intensity of the emission compared with the number of molecules in the excited state is called the quantum yield (ΦF). This can be calculated for fluorescent emission by dividing the number of emitted photons by the number of absorbed photons. ΦF in a chemiluminescence phenomenon should be the same as in the fluorescence phenomenon involving the same excited state, but, because chemiluminescence does not depend on the absorption of photons, it can be calculated in the same way only by performing a separate fluorescence experiment. The intensity of chemiluminescence emission is more meaningfully compared with the number of reactant molecules; this measure is called the chemiluminescence quantum yield (ΦCL). It is related to ΦF by the equation:
$Φ_{CL} = Φ_CΦ_EΦ_F \nonumber$
where ΦC is the proportion of reactant molecules converted into product and ΦE is the proportion of product molecules formed in the excited state. ΦCL has values from 0 to 1 and reaches 0.88 for firefly luciferin in vitro[1].
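As a purely illustrative check of this relationship (the three factors below are assumed values chosen for the arithmetic, not measured ones), a yield close to the firefly figure requires every step to be efficient:
$Φ_{CL} = Φ_CΦ_EΦ_F = 1.00 \times 0.90 \times 0.98 \approx 0.88 \nonumber$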
Because ΦCL depends on ΦF, it would be reasonable to suppose that chemiluminescence is affected by substitution in product molecules in the same way as is fluorescence. In that case, ΦCL would be increased by electron donors and decreased by electron acceptors. There would also be an increase in ΦCL (and a bathochromic shift in emission wavelength) due to conjugated systems and in rigidly planar molecules having facilitated π-bond delocalisation. Such generalisations must be used with great care, for the only product species to which they can apply are the molecules that are actually emitting; it is by no means obvious what these are in any particular case.
This content originates from an Analytical Chemiluminescence Wikibook and is licensed via a Creative Commons Attribution-ShareAlike License.
1.2: Chemiluminescence Spectroscopy
The wavelengths of chemiluminescence emission that are analytically useful depend on the characteristics of the detector. Visible emission (though it is seldom visible to the naked eye) has a wavelength range of about 400-750 nm, corresponding to enthalpy changes of exothermic reactions of between 180 and 300 kJ mol-1, provided that there is a pathway to an excited state that relaxes with the loss of a photon (see Figure 1.1). Emission intensity is proportional to the concentration of the emitting species, which is either an intermediate or a product in an electronically excited state. This concentration depends on the rate of the reaction producing it. Analytical detection of chemiluminescence usually involves no wavelength selection, i.e., it is emission photometry rather than emission spectrophotometry. Selectivity is achieved by on-line treatments rather than by processing of the signal, which has little fine-structure[1].
Because of this, the importance of chemiluminescence spectroscopy lies more in elucidating the mechanisms of chemiluminescence reactions than in analytical applications. In particular, spectroscopic investigations have been found useful for the identification of the emitter species in particular chemiluminescence reactions. Thus, experimental evidence has shown that manganese(II) ion is a common emitter in chemiluminescence arising due to the reduction either of permanganate or of other higher oxidation states of manganese[2]. Using a variety of reductants, chemiluminescence spectra, corrected for wavelength-related differences in detector sensitivity, showed maximum emission at 689 nm (in hexametaphosphate) and 734 nm (in phosphate/orthophosphoric acid), which correspond to the emission maxima of manganese(II) phosphorescence and are clearly distinguishable from the intense emission at 634 nm and 703 nm from singlet oxygen, which had been earlier proposed as the emitter. Diagnosis of the emitter usually cannot be based on spectroscopic evidence alone, but must make use of chemical evidence also. Identifying the emitting species in luminol chemiluminescence in aqueous solutions is an example of such an investigation. The product of the oxidation of luminol is 3-aminophthalate. The maximum emission wavelength in these conditions is 424 nm, which corresponds to the maximum wavelength of fluorescence emission from the 3-aminophthalate dianion, and this species was originally accepted as the emitter. However, chemical evidence suggests that emission is from the monoanion, which has a fluorescence maximum of 451 nm. Closer examination of the chemiluminescence reaction[3] suggests that the emitter is one particular conformer of the 3-aminophthalate monoanion that has a maximum emission wavelength resembling that of the dianion.
This content originates from an Analytical Chemiluminescence Wikibook and is licensed via a Creative Commons Attribution-ShareAlike License.
• 2.1: Luminol
Luminol is the common name for 5-amino-2,3-dihydro-1,4-phthalazinedione (often called 3-aminophthalhydrazide). Oxidation of luminol produces excited 3-aminophthalate, which on relaxation emits light (λmax = 425 nm) with a quantum yield of about 1%. Alternatively, luminol chemiluminescence may be triggered electrochemically.
• 2.2: Lophine and pyrogallol
Lophine and pyrogallol are the earliest-known chemiluminescence reagents. Lophine exhibits lemon yellow chemiluminescence in solution and is one of the few long-lasting chemiluminescent molecules. It forms dimers that have piezochromic and photochromic properties. It has been proposed as an analytical reagent for trace metal ion detection.
• 2.3: Luciferins
Luciferases are enzymes that catalyse light-emitting reactions in living organisms - bioluminescence. They occur in several species of firefly and in many species of bacterium. Firefly luciferases are extracted by differential centrifugation and purified by gel filtration. Luciferins are substrates of luciferases. Firefly luciferin emits at 562 nm on reaction with oxygen, catalysed by luciferase in the presence of adenosine triphosphate (ATP) and magnesium ions.
• 2.4: Lucigenin and coelenterazine
Lucigenin is used in a wide variety of assays, especially those involving enzymatic production of hydrogen peroxide, and as a label in immunoassays. It reacts with various reductants, including those present in normal human blood[2], such as glutathione, uric acid, glucuronic acid, creatinine, ascorbic acid and creatine. The chemiluminescence intensity for a mixture of these analytes is equal to the sum of the intensities, measured separately for each analyte present.
• 2.5: Dioxetanes and oxalates
Peroxy-oxalate chemiluminescence (PO-CL) was first reported in 1963 as a very weak bluish-white emission from oxalyl chloride, Cl-CO.CO-Cl, on oxidation by hydrogen peroxide; a similar blue emission occurs from related oxalyl peroxides. Much more intense emission is obtained in the reaction between aryl oxalates and hydrogen peroxide in the presence of a fluorophore; it is this version of the reaction that is analytically useful.
• 2.6: Organic peroxides and lipid peroxidation
• 2.7: Manganese
Manganese(VII) in the form of potassium permanganate has been used as a chemiluminescence reagent for several decades. A broad band of red light is emitted on reaction with over 270 compounds in acidic solution. Among the organic analytes are morphine and a wide range of other pharmaceuticals, phenolic substances, amines and hydrazines, in addition to well-known reductants such as ascorbic acid and uric acid.
• 2.8: Cerium
Cerium(IV)-based chemiluminescence systems involve the reduction of cerium(IV), which suggests that the emitter is a cerium(III) species. The chemiluminescence reaction is carried out in an acidic medium (generally sulfuric acid) and has been applied for the determination of substances of biological interest.
• 2.9: Ruthenium
The chemiluminescence involving tris(2,2'-bipyridyl)ruthenium(II), [Ru(bpy)3]2+, is most interesting. It involves the oxidation of [Ru(bpy)3]2+ to [Ru(bpy)3]3+, which is followed by reduction with an analyte species to produce an emission of light.
• 2.10: Oxygen radicals
• 2.11: Sulfites and persulfates
• 2.12: Hypohalites and halates
Thumbnail: Chemiluminescence after a reaction of hydrogen peroxide and luminol. This is an image from video youtu.be/8_82cNtZSQE. (CC BY-SA 4.0; Tavo Romann).
2: Chemiluminescence Reagents
Luminol is the common name for 5-amino-2,3-dihydro-1,4-phthalazinedione (often called 3-aminophthalhydrazide). Oxidation of luminol produces excited 3-aminophthalate, which on relaxation emits light (λmax = 425 nm) with a quantum yield of ~0.01[1]. Information on the hazards of using luminol is available at the website of the United States National Toxicology Program. The reaction is triggered by a catalytic process, usually enzymatic, provided, for example, by heme-containing proteins, especially horseradish peroxidase (HRP, EC 1.11.1.7). In the presence of hydrogen peroxide this enzyme is converted into intermediary complexes before being regenerated. It has the distinct advantage in biological work of permitting the luminol reaction at a pH as low as 8.0 to 8.5. HRP can be used as a label to detect analytes of interest, and luminol chemiluminescence can be used to detect substrates of oxidase enzymes that generate hydrogen peroxide. Enzymatic catalysis is discussed fully in section B1f (ADD LINK). The catalyst may be chemical rather than enzymatic (e.g., transition metal cations or complex ions, such as ferricyanide, at high pH). Catalysis by metal ions is discussed fully in "The role of metal ions and metallo-complexes in Luminol chemiluminescence" by HL Nekimken.
Alternatively, luminol chemiluminescence may be triggered electrochemically. Sakura[2] proposed that luminol is oxidized at the electrode surface, after which it reacts with hydrogen peroxide, producing one photon per hydrogen peroxide molecule (compared with 0.5 in the HRP-catalysed reaction), giving more sensitive detection and avoiding the fragility of enzyme methods[3]. Luminol electrochemiluminescence is discussed fully in section B1d.
Very many assays have been devised that determine compounds by their inhibition, enhancement or catalysis of luminol chemiluminescence. Detectivity reaches the sub-femtomole level, but the very versatility of the chemistry limits its selectivity. This is a serious shortcoming because samples such as body fluids or natural waters are very complex; in some cases, one analyte might enhance the luminol reaction while another inhibits it, and the resulting signal is a combination of effects that is difficult to interpret. The situation is rather less difficult in process analytical chemistry, where there may be one and only one expected analyte. Coupling the chemiluminescence reaction post-column with a separation step (liquid chromatography or capillary electrophoresis) (ADD LINKS) can overcome interferences and give fmol-pmol detectivity. Labelling of sample components with luminol before separation can achieve the same end. Selectivity can also be provided by coupling the luminol reaction with enzymatic reactions or with antibody detection or with recognition by molecularly imprinted polymers[3].
Many analogues of luminol have been synthesized[1]; some of them give more intense chemiluminescence than luminol itself but only if the modifications are restricted to the benzenoid ring of the luminol molecule. Changes to the heterocyclic ring abolish chemiluminescence. Phthalic hydrazide (luminol without the amine substituent) is not chemiluminescent except in aprotic solvents.
Figure B1.1 – One- and two-electron routes of primary oxidation of luminol leading to secondary oxidation and chemiluminescence.
A mechanism for the oxidative chemiluminescence of luminol has been proposed by Roswell and White[1]; some of the individual steps have been studied by Lind, Merenyi and Eriksen[4]. Figure B1.1 is a flow chart of the mechanism; the structures of the main chemical species involved in the oxidation of luminol and the abbreviations for them used in the text are shown. The model proposes two-step formation of luminol diazaquinone hydroperoxide anions (LOOH), which spontaneously decompose (via a tricyclic endoperoxide transition state) to form dinitrogen and excited 3-aminophthalate anions that luminesce. The quantum yield of luminol oxidation by this route is high, giving good analytical sensitivity.
b(i) Primary oxidation of luminol
The hydroperoxide intermediate (LOOH) is formed in aqueous solution by the primary oxidation of the luminol monoanion (LH–) to a radical (L•–) followed by the addition of superoxide (O2•–), or by primary oxidation to diazaquinone (L) followed by addition of hydrogen peroxide anions (HO2–)[5].
(a) Luminol (LH2) exists in aqueous solutions at pH 10.0 as monoanions (LH–), which undergo one-electron oxidation, e.g., by hydroxyl radicals (HO•, E0 = +2.8 V), to form rapidly (k = 9.7 x 10^9 dm3 mol−1 s−1) diazasemiquinone radicals (L•–):
1) LH– – e– → LH• (e.g., LH– + HO• → L•– + H2O)
(b) Two-electron oxidation of luminol monoanion, e.g., with hydrogen peroxide, gives diazaquinone (L),
2) LH– – 2e– → L + H+ (e.g., LH– + H2O2 → L + H2O + HO–)
Two-electron oxidation occurs at the start of the luminol-hydrogen peroxide reaction. There is no superoxide present until hydrogen peroxide, competing with luminol for the hydroxyl radical, is converted to HO2•, which rapidly deprotonates to O2•– at high pH (pKa = 4.8):
3) H2O2 + HO• → O2•– + H3O+
Hydroxyl radicals reacting with luminol convert the monoanions (LH–) to L•– or LH•, depending on the pH; this is a one-electron oxidation process. Transfer of a second electron to form diazaquinone occurs only in the absence of superoxide, which otherwise would react with L•– or LH• to form luminol diazaquinone hydroperoxide anions (LOOH).
The primary oxidation step usually determines the rate of light emission, so luminol chemiluminescence effectively measures the power of the oxidant to bring about this reaction, but other factors also affect the rate of primary oxidation. Light emission from the reaction between luminol and hydrogen peroxide can be induced by the presence of cobalt(II) at concentrations low enough to be regarded as catalytic, and it has been proposed that cobalt(II)-peroxide complex ions bring about the primary oxidation of luminol[6].
b(ii) Secondary oxidation of luminol
In analytical luminol chemiluminescence, the initial oxidation of luminol is the rate-determining step. But chemiluminescence also depends on the availability of superoxide or hydroperoxide ions for secondary oxidation. So experiments have been performed using pulse radiolysis to bring about primary oxidation, allowing the rate of secondary oxidation to be studied in the pauses between the pulses. Protonated diazasemiquinone radicals (LH•) formed by one-electron primary oxidation add to superoxide radicals (O2•–) to form the diazaquinone hydroperoxide (LOOH):
1) LH• + O2•– → LOOH
This reaction consumes superoxide radical anions and, in the presence of a large excess of hydrogen peroxide, the major part of LH•. LH•, however, can also recombine with itself. In the absence of superoxide, all luminol radicals are consumed by recombination, at least 80% of which is accounted for by dimerization. Diazaquinone (L), the product of two-electron primary oxidation of luminol, is converted to the peroxide by the addition of hydroperoxide anions:
2) L + HO2– → LOOH
b(iii). Decomposition of hydroperoxide intermediate
Secondary oxidation is followed by the decomposition of the cyclic hydroperoxide intermediate to 3-aminophthalate, which emits light on relaxation to the ground state.
LOOH → 3-aminophthalate + N2 + hν
The basic peroxide adduct (LOOH) decomposes to form the excited state of the aminophthalate emitter, while the protonated adduct undergoes a non-chemiluminescent side reaction which forms a distinct yellow product, the so-called “dark reaction”[7]. The absorbance spectrum of LOOH decays at the same rate as does the light emission. Emission intensity increases with pH up to a maximum at about pH = 11, reflecting increasing dissociation of H2O2 into its anion and the diminishing importance of the dark reaction. Decreased light output above pH 11 reflects diminished fluorescent quantum yield (ΦFL) of the emitter.
b(iv) Determination of chemiluminescence quantum yield
Lind and Merényi[8] have measured the light yield of several chemiluminescent reactions of luminol undergoing one-electron oxidation by hydroxyl radicals of radiolytic origin. Because radiolytic yields are expressed per 100 eV of energy transferred from ionising particles to the aqueous medium, ΦCL can be calculated as the ratio of yields, ΦCL = G(hν)/GOH, where G(hν) is the yield of emitted photons and GOH that of hydroxyl radicals. They propose as a standard for luminol chemiluminescence initiated by pulse radiolysis a system consisting of 10−3 mol dm−3 aqueous luminol at pH = 10.0 saturated with 10% O2 and 90% N2O. Having defined a standard with a well-defined light output, it then becomes possible to determine the chemiluminescence quantum yield of any other luminol reaction relative to the standard. This has been done by Merényi and Lind[9] by plotting integrated light intensity as a function of radiolytic dose (which has a linear relationship with hydroxyl radical concentration). The light yields and hence the relative quantum yields are obtained by comparing the slopes of the plots.
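The slope-comparison step lends itself to a very small calculation. The sketch below (with invented numbers and an assumed reference yield, not data from the cited work) shows how a relative quantum yield would be extracted from two such plots:

```python
# Hypothetical illustration of the slope-comparison approach; all values are invented.
import numpy as np

dose = np.array([10.0, 20.0, 30.0, 40.0, 50.0])             # radiolytic dose (arbitrary units)
light_standard = np.array([1.02, 2.05, 2.96, 4.10, 5.01])   # integrated emission, standard system
light_test = np.array([0.31, 0.59, 0.92, 1.18, 1.52])       # integrated emission, test reaction

# Least-squares slopes through the origin (light yield per unit dose)
slope_std = np.sum(dose * light_standard) / np.sum(dose ** 2)
slope_test = np.sum(dose * light_test) / np.sum(dose ** 2)

phi_cl_standard = 0.012   # assumed quantum yield of the reference system (illustrative only)
phi_cl_test = phi_cl_standard * slope_test / slope_std

print(f"slope ratio (test/standard): {slope_test / slope_std:.3f}")
print(f"estimated Phi_CL of the test reaction: {phi_cl_test:.4f}")
```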
B1c. Oxidants used in Luminol Chemiluminescence
c(i). Hydrogen peroxide
Hydrogen peroxide is analytically the most useful oxidant of luminol, but requires the catalytic effect of an electrode, a metal ion or an enzyme. For example, it reacts readily with luminol in an aqueous medium in the presence of a cobalt(II) catalyst. This reaction is a very effective bench demonstration of chemiluminescence, using equal volumes of 0.1 mol dm−3 hydrogen peroxide and 1.0 x 10−3 mol dm−3 luminol in 0.1 mol dm−3 carbonate buffer (pH between 10 and 11). Some metal ions used to catalyse the oxidation of luminol, e.g. iron(II), react with hydrogen peroxide to generate hydroxyl radicals (the Fenton reaction), which have very powerful oxidizing properties and can therefore bring about the primary oxidation of luminol:
Fe2+ + H2O2 → Fe3+ + HO• + HO–
But hydroxyl radicals also react with hydrogen peroxide (equation 1) and hydroperoxide ions (equation 2):
1) H2O2 + HO• → O2•– + H3O+
2) HO2– + HO• → O2•– + H2O
The consumption of hydroxyl radicals in these reactions diminishes the rate of primary oxidation, but the generation of superoxide increases the rate of secondary oxidation.
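The arithmetic for preparing the demonstration solutions quoted above is easily scripted; the helper below is a hypothetical illustration (the 250 mL batch volume is arbitrary, and the luminol molar mass of 177.16 g/mol is standard data), not a procedure from the original text:

```python
def mass_required(molarity_mol_per_L, volume_mL, molar_mass_g_per_mol):
    """Mass (g) of solid reagent needed to make the requested solution."""
    return molarity_mol_per_L * (volume_mL / 1000.0) * molar_mass_g_per_mol

LUMINOL_MOLAR_MASS = 177.16  # g/mol for C8H7N3O2

# 250 mL of 1.0 x 10^-3 mol dm^-3 luminol for the demonstration mixture
mg_luminol = mass_required(1.0e-3, 250.0, LUMINOL_MOLAR_MASS) * 1000
print(f"Luminol needed: {mg_luminol:.1f} mg")
```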
c(ii). Oxygen
The standard reduction potential (E0) of luminol radicals to monoanions (LH• + e– → LH–) has been determined to be +0.87 V[10]. Oxidation by molecular dioxygen (E0 = 1.229 V) is thermodynamically possible but, in aqueous solutions, the reactivity is undetectably low at any pH (k = 10−8 dm3 mol−1 s−1) and so the reaction is not useful for primary oxidation. It is widely believed that air-saturated luminol solutions are indefinitely stable in the dark, even at pH = 14. In spite of this, the oxidation of luminol by dissolved oxygen in aqueous solutions is frequently reported. It is likely that what is referred to in these cases is oxidation by oxygen radicals, which may be formed from molecular dioxygen by suitable reductants such as metal ions. This phenomenon is discussed in chapter D10 along with other cases in which oxygen radicals act as chemiluminescence reagents (LINK).
In dimethylsulfoxide (DMSO) solutions, luminol exists as dianions and reacts with dissolved oxygen in the presence of a strong base with intense chemiluminescence[1]. The rate constant for the reaction is about 50 dm3 mol−1 s−1[10]; the rate constant for the corresponding reaction between oxygen and luminol dianions in aqueous alkali is 10−2 dm3 mol−1 s−1. Because the conditions of the reaction in DMSO solution are relatively simple, the phenomenon has found great favour as a demonstration[11], for a spatula measure of luminol in a bottle of alkaline DMSO will react on shaking at room temperature. However, while the oxidation of luminol in aqueous solution is very widely used analytically, there are no established analytical procedures making use of the reaction in dimethylsulfoxide, dimethylformamide or other organic solvents, although the effect of a range of metal complexes on the reaction in DMSO has been investigated[12][13]. The emitter (3-aminophthalate ion) tautomerizes in aprotic solvents such as DMSO to a quinonoid form that gives maximum emission at 510 nm; this tautomer is favoured by the pairing of luminol anions with metal cations (e.g., Na+ or K+). If luminol is oxidized in mixed solvents, there is less emission at 425 nm (reduced ion pairing) and more at 510 nm than in aqueous media. Also in mixed solvents there is less 425 nm emission in chemiluminescence than in fluorescence because in chemiluminescence the fraction of ion-pairs is determined by the transition-state rather than by the ground-state equilibrium as in fluorescence[1].
c(iii). Higher oxidation states of manganese
Permanganate ions are thermodynamically easily capable (E0 = 1.70 V) of oxidizing luminol. A flow injection analysis of paracetamol in pharmaceutical preparations based on inhibition of luminol-permanganate chemiluminescence has been reported[14]. A little earlier, an imaginative biosensor for urea had been fabricated, in which ammonium carbonate generated by urease-catalysed hydrolysis was used to release luminol from an anion-exchange column to react with permanganate eluted from a second column, producing chemiluminescence[15]. A steady stream of novel applications of the luminol-permanganate system followed.
Oxidation of luminol by alkaline potassium permanganate produces manganate(VI) ions, which further react with luminol causing chemiluminescence. This phenomenon, termed by the authors “second chemiluminescence” (SCL), has been applied in an assay for nickel(II) ions[16]. In a suitable flow injection manifold, dilute solutions of alkaline luminol and of aqueous potassium permanganate are mixed and allowed to react for long enough for the resulting chemiluminescence to drop to a stable minimum close to zero. The sample is then injected into the mixture and, if nickel ions are present, light emission recommences and rapidly rises to a well-defined peak before returning to baseline intensity. Optimum intensity of the second chemiluminescence was obtained by using a tenfold excess of luminol (over potassium permanganate) in 0.1 mol dm-3 aqueous sodium hydroxide and injecting sample at pH 5.10; a linear relationship with nickel(II) concentration was established and the detection limit was 0.33 μg dm-3. Numerous divalent and trivalent metal ions and nitrate ions were found to interfere with the determination. The mechanism of the luminol-manganate(VI) chemiluminescence appears to be the same as that for other luminol oxidations, with the production of excited 3-aminophthalate ion emitting at 425 nm. But oxidations both by permanganate and by manganate(VI) can lead to the formation of excited manganese(II), which would be an additional source of chemiluminescence. Unfortunately, in the work described, the chemiluminescence spectrum was observed only up to 490 nm, overlooking such possible contributions to the signal.
c(iv) Silver(III)
A fairly stable silver(III) complex anion, diperiodatoargentate(III) (DPA), [Ag(H2IO6)(OH)2]2−, can be readily synthesized[17]. A new chemiluminescence reaction between luminol and diperiodatoargentate has been observed in alkaline aqueous solution[18][19]. The emission of light from this reaction is strongly enhanced by iron nanoparticles and the intensity is further increased by the addition of aminophylline[20]. This forms the basis for an assay in which the chemiluminescence signal has a linear relationship with aminophylline concentration in human serum over the range 1.0 x 10−8 to 2.0 x 10−6 mol dm−3. The detection limit is 9.8 x 10−9 mol dm−3. The relative standard deviation at 8.0 x 10−7 mol dm−3 is 4.8% (n = 10).
Penicillin antibiotics have also been found to enhance luminol-silver(III) complex chemiluminescence, which has formed the basis for a sensitive flow injection assay for these drugs in dosage forms and in urine samples. In optimized conditions, the detection limit for benzylpenicillin sodium is reported to be 70 ng cm−3, for amoxicillin 67 ng cm−3, for ampicillin 169 ng cm−3 and for cloxacillin sodium 64 ng cm−3[21].
The maximum wavelength of the light emitted is about 425 nm[18], which is the usual chemiluminescence from excited 3-aminophthalate, produced by luminol oxidation. This implies that the silver(III) complex is capable of bringing about both the primary and secondary oxidation of luminol, as proposed by Shi et al., who postulate one-electron primary oxidation of two luminol molecules by each diperiodatoargentate. Perhaps two-electron oxidation of one luminol molecule is more likely. The reduction potential of diperiodatoargentate(III) ion is +1.74 V[22], high enough for two-electron oxidation to convert water into hydrogen peroxide (E0 = -1.763 V for the oxidation of water to hydrogen peroxide; the Nernst equation indicates a millimolar H2O2 equilibrium concentration). This provides a possible mechanism for secondary as well as primary oxidation.
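As a rough illustration of that thermodynamic argument (a sketch that treats the system simply as a two-electron cell with the quoted potentials, ignoring the detailed silver speciation and the strongly alkaline medium), the quoted values give:
$E^{0}_{cell} = 1.74 - 1.763 = -0.023\ \text{V} \nonumber$
$K = \exp\left(\frac{nFE^{0}_{cell}}{RT}\right) = \exp\left(\frac{2 \times 96485 \times (-0.023)}{8.314 \times 298}\right) \approx 0.17 \nonumber$
Although K is slightly below unity, consumption of the protons formed in the alkaline reaction medium pulls this equilibrium towards hydrogen peroxide, which is consistent with the millimolar equilibrium concentration quoted above.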
2.1: B1d. Electrochemiluminescence
The instrumentation of electrochemiluminescence (ECL) is dealt with in chapter D7. The resulting reaction pathways lend themselves to control of emission by varying the applied voltage or the electrode selected and are applicable to near-neutral (pH 8.0 to 8.5) aqueous solutions such as biological fluids, whereas luminol chemiluminescence usually occurs in strongly alkaline or non-aqueous solutions. It has been proposed that luminol is first oxidized at the electrode surface and, on subsequent reaction with hydrogen peroxide, the chemiluminescence quantum yield (see chapter A1 ADD LINK) is enhanced[23][24].
Typical of the early applications is an assay of lipid hydroperoxides using ECL at a vitreous carbon electrode[25]. With applied voltages of 0.5-1.0 V, luminol monoanion loses one electron giving diazasemiquinone, which disproportionates to produce diazaquinone, which reacts quantitatively with the analyte. At applied voltages above 1.0 V, the –NH2 of diazaquinone and the analyte itself are oxidized, giving respectively –NH• and superoxide, which causes an interfering signal. The detection limit in optimized conditions was 0.3 nmol at S/N = 1.5. Using a voltage of 0.5-1.0 V applied to a platinum electrode, both methyl linoleate hydroperoxide (MLHP) and luminol are oxidized; the detection limit for MLHP is 0.1 nmol at S/N = 2.5. There was no emission from the closely related methyl hydroxyoctadecadienoate (a reduction product of linoleic acid hydroperoxide).
The inhibition of ECL signals from luminol oxidation can be used as a method of determination of inhibitors. A recent example is the determination of melamine in dairy products and in tableware[26]. Using low voltage scan rates in phosphate buffer at high pH, ECL is observed at 1.47 V and there is a linear (r2 = 0.9911) decrease of ECL proportional to the logarithm of the melamine concentration over the range 1 to 100 ng cm−3. The limit of detection is 0.1 ng cm−3 with high recovery. The signal arises from the reaction with luminol of reactive oxygen species (from the electrooxidation of hydroxyl ions) that are eliminated by melamine.
Modification of electrodes is now a well-established way of controlling ECL and in recent years the use of nanomaterials for this purpose has grown in importance. An example involving luminol is the modification of a gold electrode by applying a composite of multi-wall carbon nanotubes and the perfluorosulfonate polymer Nafion[27]. In the course of cyclic voltammetry in carbonate buffer, three ECL peaks were obtained, up to 20 times as intense as with the unmodified electrode; in each case the emitter was identified as the 3-aminophthalate anion, indicating that the improvement was due to electrode efficiency rather than to any change in the chemistry of the system.
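The log-linear calibration described for the melamine assay is straightforward to fit and invert; the sketch below uses invented numbers (not data from the cited study) purely to illustrate the calculation:

```python
# Hypothetical log-linear ECL-inhibition calibration; all values are invented.
import numpy as np

conc_ng_per_mL = np.array([1.0, 3.0, 10.0, 30.0, 100.0])    # calibration standards
ecl_signal = np.array([950.0, 880.0, 800.0, 720.0, 640.0])  # measured ECL intensity (a.u.)

# Fit: signal = slope * log10(concentration) + intercept
slope, intercept = np.polyfit(np.log10(conc_ng_per_mL), ecl_signal, 1)

def estimate_conc(signal):
    """Invert the calibration to estimate melamine concentration (ng/mL)."""
    return 10 ** ((signal - intercept) / slope)

print(f"slope = {slope:.1f} a.u. per decade of concentration")
print(f"a signal of 750 a.u. corresponds to about {estimate_conc(750.0):.1f} ng/mL")
```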
ECL immunosensors have been fabricated that have been successfully applied to the determination of human immunoglobulin G (hIgG) in serum. The primary antibody, biotin-conjugated goat anti-hIgG, is first immobilized on an electrode modified with streptavidin-coated gold nanoparticles (AuNPs). The sensors are sandwich-type immunocomplexes formed by the conjugation of hIgG to a second antibody labelled with luminol-functionalized AuNPs. ECL is generated by a double potential step in carbonate buffer containing 1.0 mmol dm−3 hydrogen peroxide. Many luminol molecules are attached to the surface of each AuNP and act as multiple sources of light emission from each antibody molecule. The amplification of ECL in this way, linked to the analyte by the biotin-streptavidin system, leads to greatly enhanced signals. The limit of detection is 1 pg cm−3 (at S/N = 3), which surpasses the performance of all previous hIgG assays[28].
The surface of a glassy carbon electrode was modified by producing on it L-cysteine-reduced graphene oxide composites, onto which AuNPs were self-assembled. Cholesterol oxidase (ChOx) was subsequently adsorbed on the AuNP surface to form a cholesterol biosensor with satisfactory reproducibility, stability and selectivity. The AuNPs increase the surface area of the electrode, hence permitting a higher ChOx loading, and provide a nanostructure more favourable to ECL, improving analytical performance. The linear response to cholesterol of the sensor extends over the range 3.3 x 10−6 to 1.0 x 10−3 mol dm−3 and the limit of detection is 1.1 x 10−6 mol dm−3 (at S/N = 3)[29].
A poly(luminol-3,3',5,5'-tetramethylbenzidine) copolymer manufactured by electropolymerization on screen-printed gold electrodes greatly improves the ECL of hydrogen peroxide. A cholesterol biosensor suitable for the analysis of serum samples was fabricated by immobilization of cholesterol oxidase onto the polymer. Under optimized conditions, the biosensor has a linear dynamic range of 2.4 x 10−5 to 1.0 x 10−3 mol dm−3 with a limit of detection of 7.3 x 10−6 mol dm−3. Precision (measured as relative standard deviation) was 10.3% at 5.0 x 10−4 mol dm−3 and the method has the additional advantages of low cost and high speed[30].
These are the earliest-known chemiluminescence reagents. Lophine (2,4,5-triphenyl-1H-imidazole) exhibits lemon yellow chemiluminescence in solution and is one of the few long-lasting chemiluminescent molecules. It forms dimers that have piezochromic and photochromic properties. It has been proposed as an analytical reagent for trace metal ion detection[1]. Lophine chemiluminescence was discovered by B. Radziszewski in 1877.
Pyrogallol is determined by means of the chemiluminescence at 500 nm which accompanies the oxidation by hydrogen peroxide of autoxidized pyrogallol in the presence of chromium(III) and formaldehyde. Using air-segmented continuous flow analysis, the LOD (3s) was 6.0 × 10–9 mol dm–3 and the calibration was linear up to 10–4 mol dm–3. The method has the potential to be extended to other phenols[2]. The chemiluminescent oxidation of pyrogallol has been known for more than one hundred years.
2.03: Luciferins
Luciferases are enzymes that catalyse light-emitting reactions in living organisms - bioluminescence. They occur in several species of firefly and in many species of bacterium. Firefly luciferases are extracted by differential centrifugation and purified by gel filtration. Lyophilised luciferase with added stabilizer keeps for several months at -4°C.
Luciferins are substrates of luciferases. Firefly luciferin emits at 562 nm on reaction with oxygen, catalysed by luciferase in the presence of adenosine triphosphate (ATP) and magnesium ions, emission being directly proportional to luciferin concentration over the range 0.01-1000 nmol dm-3. The ATP dependence of firefly luciferin bioluminescence is exploited in many ATP determinations and assays for the products of enzymatic reactions that utilize or produce ATP, e.g., kinases and substances involved in reactions catalysed by them.
The crustacean Cypridina hilgendorfii has a luciferin of very different chemical structure, but the mechanism of its bioluminescence is the same as that of the firefly except that no co-factor is required. Analogues of Cypridina luciferin have also been synthesised and used to detect superoxide of pathological origin. Scavengers of superoxide radicals, e.g., tea leaf catechins, quench Cypridina chemiluminescence, enabling their antioxidant activities to be conveniently measured.
Figure B3.1: Principle of bacterial bioluminescence, in which light is emitted by the oxidation of a long-chain fatty aldehyde by flavine mononucleotide, which is regenerated in a coupled reaction. NAD(P)H, nicotinamide adenine dinucleotide (phosphate); FMN, flavine mononucleotide.
Luminous bacteria are found widely in marine environments. Bacterial luciferase, which acts in accordance with the outline mechanism shown in Figure B3.1, does not have a luciferin substrate as such. Instead the light emission comes from a complex of luciferase, flavine mononucleotide and a long-chain fatty aldehyde[1]. Thus bacterial bioluminescence is associated with a pyridine nucleotide rather than with the adenine nucleotide involved in firefly bioluminescence.
2.04: Lucigenin and coelenterazine
Lucigenin and related 9,9′-diacridinium salts give an intense blue-green emission when oxidized by alkaline hydrogen peroxide. The major chemiluminescence emitter is postulated[1] to be N-methyl acridone (blue light), produced via a peroxide, with other excited molecules involved. The reaction is catalysed by pyridine, piperidine, ammonia or osmium tetroxide. A proposed mechanism explains the chemiluminescence of oxidized acridinium salts by the formation of excited peroxide intermediates.
Lucigenin is used in a wide variety of assays, especially those involving enzymatic production of hydrogen peroxide, and as a label in immunoassays. It reacts with various reductants, including those present in normal human blood[2], such as glutathione, uric acid, glucuronic acid, creatinine, ascorbic acid and creatine. The chemiluminescence intensity for a mixture of these analytes is equal to the sum of the intensities, measured separately for each analyte present. Metal ions – iron(III), manganese(II) and copper(II) – also contribute to the chemiluminescence and so must be regarded as interferents. Lucigenin is also affected by a very wide range of other metal ions[3], both enhancers and inhibitors. The most effective enhancers are osmium (VIII), cobalt(II), ruthenium(III), iron(II) and iron(III) and the most effective inhibitors are europium(III), thorium(IV), ytterbium(III), terbium(III) and manganese(II). Among the enhancers, effective enhancement seems to be associated with low detection limit but this association is much less pronounced among the inhibitors.
Lucigenin chemiluminescence has been important for the determination of superoxide[4]. The mechanism for the lucigenin-superoxide reaction is believed to be:
(B4.1) Reduction to cation radical: Luc2+ + e- → Luc•+
(B4.2) Coupling to yield dioxetane: Luc•+ + O2•– → LucO2
(B4.3) Decomposition of dioxetane to N-methylacridone: LucO2 → NMA* + NMA
(B4.4) Chemiluminescence: NMA* → NMA + light
The credibility of lucigenin detection of superoxide has been questioned because of the evidence (disputed) for a process called redox cycling, in which lucigenin reacts with oxygen to form more superoxide, leading to the amount of superoxide being overestimated. As a result, coelenterazine (a luminophore from the coelenterate Aequorea) became a more favoured probe for superoxide; although this also offered improved selectivity for superoxide, it was not entirely specific. Attention has therefore shifted to assays using Cypridina luciferin analogues (see chapter B3) to detect superoxide.
Peroxy-oxalate chemiluminescence (PO-CL) was first reported in 1963 as a very weak bluish-white emission from oxalyl chloride, Cl-CO.CO-Cl, on oxidation by hydrogen peroxide; a similar blue emission occurs from related oxalyl peroxides. Much more intense emission is obtained in the reaction between aryl oxalates and hydrogen peroxide in the presence of a fluorophore; it is this version of the reaction that is analytically useful[1][2]. Liquid chromatography is a major area of application[3].
Because in PO-CL analysis the analyte is an added fluorophore to which energy is transferred, the various applications have much in common. The rate of PO-CL depends especially on pH and on the presence of a nucleophilic base catalyst for ester hydrolysis. Aryl oxalates differ in the effect of pH on the intensity and decay of the chemiluminescence. They also differ in their solubilities, which affects their usefulness as detection reagents for HPLC. There are wide variations in their stabilities in the presence of hydrogen peroxide, so some are more suitable than others for premixing with the oxidant. Taking all these things into account, Honda et al. proposed that the preferred oxalate varied with pH as follows (a simple lookup sketch is given after the list):
• <2: bis(pentafluorophenyl)
• 2 to 4: bis(2-nitrophenyl)
• 4 to 6: bis(2,4-dinitrophenyl)
• 6 to 8: bis(2,4,6-trichlorophenyl)
• >8: bis(2,4,5-trichloro-6-pentyloxycarbonylphenyl)
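A hypothetical convenience function (not part of the original text) can encode the recommendations listed above; in practice, solubility and stability toward hydrogen peroxide would also be weighed when choosing a reagent:

```python
def preferred_aryl_oxalate(pH):
    """Suggest an aryl oxalate for peroxy-oxalate CL at a given working pH,
    following the pH ranges attributed to Honda et al. in the list above."""
    if pH < 2:
        return "bis(pentafluorophenyl) oxalate"
    elif pH < 4:
        return "bis(2-nitrophenyl) oxalate"
    elif pH < 6:
        return "bis(2,4-dinitrophenyl) oxalate"
    elif pH < 8:
        return "bis(2,4,6-trichlorophenyl) oxalate"
    return "bis(2,4,5-trichloro-6-pentyloxycarbonylphenyl) oxalate"

for test_pH in (1.5, 3.0, 7.0, 9.5):
    print(test_pH, "->", preferred_aryl_oxalate(test_pH))
```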
PO-CL is thought to follow a chemically initiated electron exchange luminescence (CIEEL) mechanism as proposed by Koo and Schuster[4]. An electron is transferred from the fluorophore to an intermediate, which, as it decomposes, transfers it back again; as a result the fluorophore is raised to an excited state and subsequently radiates. In support of this it has been demonstrated that the relative excitation yields of different fluorescers have a significant negative correlation with their oxidation potentials – in other words, the more difficult it is to oxidize the fluorescer, the lower its probability of excitation. High chemiluminescence intensity can be predicted if a fluorescer has a low singlet excitation energy; a low oxidation potential is at least as important. The formation of a linear peroxide intermediate, ArO-CO.CO-OOH, which decomposes to radical ion-pairs comprising the fluorophore and a carbon dioxide molecule, has also been proposed as the mechanism of energy transfer. Background emission in the absence of a fluorophore occurs at 450 nm (which could be carbon dioxide) and at about 550 nm (which varies with the aryl group and could be due to an excited carbonyl intermediate containing the aryl group). Dioxetanes luminesce on warming, producing excited carbonyl compounds, and they may have a role in PO-CL. However, decomposition of 1,2-dioxetanedione into carbon dioxide, though possible, is unlikely to be the sole source of the emission as the chemiluminescence depends on the electronegativity of the aryl group, so is unlikely to arise from an intermediate that would be the same whatever the aryl group.
2.06: Organic peroxides and lipid peroxidation
Metal ions such as iron decompose organic peroxides and hydroperoxides into free radicals[1]; the rate of formation varies very much with different metal complexes and peroxides. The chemiluminescence intensity is directly proportional to the concentration of hydroperoxide. Cyclic organic peroxides include dioxetanes, which have been discussed in connection with the peroxy-oxalate reaction. The mechanisms involved in the decomposition of 1,2-dioxetanes and analogous peroxides are: (i) unimolecular decomposition into excited state carbonyl compounds; (ii) intramolecular or intermolecular CIEEL (Chemically Initiated Electron Exchange Luminescence).
Lipid peroxidation is a process of great interest, especially in biochemical research, as it is associated with damage to biological cell membranes and has a putative role in pathological phenomena such as aging, cancer and other degenerative conditions. The process is a radical chain reaction that produces an ultraweak chemiluminescence signal. It has been proposed that in cells, the major excited species responsible for light emission are triplet carbonyls and singlet oxygen, which arise through the decomposition of hydroperoxides. Initiators such as hydroxyl radicals (HO•) remove hydrogen from unsaturated fatty acids (LH) to produce lipid radicals (L•):
LH + HO• → L• + H2O
which react with atmospheric oxygen to form lipid peroxyl radicals (LO2•):
L• + O2 → LO2•
that recombine to generate the excited products (P):
LO2• + LO2• → P* → P + Φhν
(h = Planck’s constant and ν = frequency of emitted light).
The emission intensity is determined by the quantum yield (Φ), which is low for lipid peroxidation, depending on the rate of processes competing with light emission for the deactivation of the lowest excited singlet state. Because the associated chemiluminescence is weak, it is useful to enhance the emission intensity using fluorescent dyes, as discussed in chapter 16.
Manganese(VII) in the form of potassium permanganate has been used as a chemiluminescence reagent for several decades. A broad band of red light is emitted on reaction with over 270 compounds in acidic solution[1]. Among the organic analytes are morphine and a wide range of other pharmaceuticals, phenolic substances, amines and hydrazines, in addition to well-known reductants such as ascorbic acid and uric acid. Proteins and amino-acids are also known to reduce permanganate with chemiluminescence. Inorganic analytes include sulfur dioxide and sulfites, hydrogen sulfide, hydrogen peroxide, hydrazine and iron(II). Chemiluminescence intensity is a linear function of analyte concentration over a very wide range, but varies considerably for different analytes. It is also affected by the anions present, so that acidification with sulfuric acid gives a better signal than hydrochloric, nitric or perchloric acids. Considerable signal enhancement occurs in the presence of polyphosphates; these are unstable at low pH but hexametaphosphate is more stable than the others. In a number of cases, chemiluminescence is enhanced by the presence of an ancillary reductant such as formic acid or, especially, formaldehyde. Manganese(II) is sometimes a useful signal enhancer. Fluorophores such as quinine, riboflavin or rhodamine B have also been used but sometimes give a high blank signal and a reduced signal to noise ratio.
The emitting species is an electronically excited manganese(II) species, as has been confirmed by a direct comparison of the laser-induced photoluminescence of manganese(II) chloride with the chemiluminescence from reaction of sodium borohydride with acidic potassium permanganate[2]. In many cases where permanganate is used in the presence of fluorescent compounds, e.g. enhancers or reaction products, energy transfer to the efficient fluorophore has been proposed on the basis of spectral distributions that match those obtained using other oxidants; in most cases, however, the red emission characteristic of manganese(II) is also produced and can make a significant contribution to the total light output[3], especially in the presence of polyphosphate.
More recently, manganese(III) and manganese(IV) have been explored as chemiluminescence reagents[4]. As with the +VII oxidation state, these produce on reaction with a wide range of molecules an excited manganese(II) species that emits light, but differ markedly in terms of selectivity. They also possess characteristics that provide new avenues for detection, such as the immobilisation of solid manganese dioxide, the production of colloidal manganese(IV) nanoparticles and the electrochemical generation of manganese(III).
A brown, transparent, stable solution of manganese(IV) can be prepared by dissolving freshly precipitated manganese dioxide in 3M orthophosphoric acid. Using this reagent at about 1 x 10-4 M, analytically useful chemiluminescence has been reported for a growing list of compounds, often with nanomolar detection limits. Light emission is enhanced by up to 2 orders of magnitude in the presence of 0.2 – 3.0 M formaldehyde. Numerous pharmaceuticals have been determined in commercial formulations by this reaction in flow-injection assays. Detection of drugs and biomolecules in more complex matrices such as urine or serum requires coupling with an initial separation step such as HPLC.
Manganese(III) can be obtained by oxidation of manganese(II) or reduction of manganese(IV); it readily disproportionates into the +II and +IV states but can be stabilised by acidification, by complexation with anions or by adding manganese(II). The reduction of manganese(III) produces excited manganese(II) leading to emission of light of the same spectral characteristics as that emitted in permanganate or manganese(IV) chemiluminescence. On-line electrochemical generation of manganese(III) from manganese(II) has been applied to the chemiluminescence determination of a wide range of analytes, especially pharmaceuticals, with satisfactory selectivity and typically sub-micromolar limits of detection.
Cerium(IV)-based chemiluminescence systems involve the reduction of cerium(IV), which suggests that the emitter is a cerium(III) species. The chemiluminescence reaction is carried out in an acidic medium (generally sulfuric acid) and has been applied for the determination of substances of biological interest[1]. A few pharmaceuticals in dosage forms can reduce the cerium(IV) and produce luminescence directly. As a result, many flow-injection-chemiluminescence methods have been established for such species as naproxen, acetaminophen and fluphenazine hydrochloride. The sensitivity of the assays can be improved by increasing the cerium(IV) concentration. Almost all cerium(IV) chemiluminescence systems need sensitization procedures to transfer the excited-state energy to a sensitizer, which then emits light of greater intensity. Thus most determinations involving cerium(IV) as the oxidant are indirect, based on the enhancement of chemiluminescence of the cerium(IV)-sulfite system by some analytes. This type of process is used to determine reducing compounds, such as cortisone, ofloxacin, norfloxacin, ciprofloxacin, lomefloxacin, flufenamic acid, mefenamic acid and salicylic acid.
Cerium(IV) chemiluminescence systems are very popular to determine sulfur-containing substances such as sodium-2-mercaptoethane, tiopronin, captopril, menadione sodium bisulfite and some sulfur-substituted benzamides but also other substances such as paraben, phenolic compounds (by LC), phentolamine, barbituric acid and erythromycin. In addition, light emission resulting from the chemical reaction of cerium(IV) with some mercapto-containing compounds in pharmaceutical preparations can be enhanced by certain fluorometric reagents such as quinine, rhodamine B and rhodamine 6G or by lanthanide ions such as terbium(III) and europium(III). Thus, a range of flow-injection chemiluminescence methods have been developed for determination of compounds of this kind.
2.09: Ruthenium
The chemiluminescence involving tris(2,2'-bipyridyl)ruthenium(II), [Ru(bpy)3]2+, is most interesting. It involves the oxidation of [Ru(bpy)3]2+ to [Ru(bpy)3]3+, which is followed by reduction with an analyte species to produce an emission of light, thus:
$[Ru(bpy)_3]^{2+} \rightarrow [Ru(bpy)_3]^{3+} + e^- \tag{Oxidation}$
$[Ru(bpy)_3]^{3+} + e^- \rightarrow [Ru(bpy)_3]^{2+*} \tag{Reduction by analyte}$
$[Ru(bpy)_3]^{2+*} \rightarrow [Ru(bpy)_3]^{2+} + \underbrace{h\nu}_{\text{620 nm}} \tag{Chemiluminescence}$
Structure of [Ru(bpy)3]2+: The arrangement of the three 2,2'-bipyridine ligands about the central ruthenium atom in the complex ion tris(2,2'-bipyridyl)ruthenium(II); the nitrogen atoms occupy the corners of an octahedron. (Public Domain; Benjah-bmm27)
Analytical usefulness depends on the emission of light of a measurable intensity that is clearly indicative of the analyte concentration. Chemiluminescence intensity depends on the efficiency and mechanism of the reduction step (eqn. B9.2). Common to all analytical applications of ruthenium chemiluminescence is the production of the oxidant [Ru(bipy)3]3+ (eqn. B9.1), which has been obtained by a variety of methods - chemical, photochemical and electrochemical oxidation including in situ electrogenerated chemiluminescence. Each of these generation methods has been discussed in a comprehensive review by Barnett and co-workers[1]. Chemical generation of [Ru(bpy)3]3+ has been achieved by a range of reagents such as cerium(IV) sulphate, lead dioxide and potassium permanganate.
The chemiluminescence reactions between primary, secondary or tertiary amines and [Ru(bpy)3]2+ are very sensitive and have been widely applied to the determination of various analytes containing an amine functionality. The chemistry of the electrogenerated chemiluminescence of tertiary amines with [Ru(bpy)3]2+ and their chemiluminescence reaction mechanism have been reviewed by Knight and Greenway[2], and that of the chemiluminescence reaction between the secondary and tertiary amines arising from hydrolyzed and unhydrolyzed β-lactam antibiotics, respectively, has been reported by Liang et al. More recently, there have been several reports dealing with the detection and determination of drugs by using the [Ru(bpy)3]2+/potassium permanganate system. These included tetracyclines, cinnamic acid, enalapril maleate and metoclopramide hydrochloride.
Modest chemiluminescence occurs when solutions of iron(II) ions or titanium(III) ions are added to carbonate buffer at alkaline pH[1], the intensity increasing with the metal ion concentration. This occurs even in solutions that have been deaerated with nitrogen. Surprisingly, the chemiluminescence of deaerated solutions sometimes exceeds that observed in oxygenated solutions. If luminol is also present the intensity of the chemiluminescence is increased (by a factor of about 100 for 1 x 10-5 mol dm-3 luminol), even though the only oxidant present is dissolved oxygen. The presence of the fluorophore rhodamine B also increases the chemiluminescence intensity, but the enhanced chemiluminescence is always more intense in oxygenated solutions. It is possible that other metal ions of low oxidation number, having reducing properties, will also induce this effect. Cobalt(II) ions or copper(II) ions have been shown to give rise to chemiluminescence when added to alkaline solutions of luminol with no added oxidant.
The phenomenon can be rationalized in terms of the well-established chemistry of single electron oxidation of iron(II) in solution[2].
(B10.1) Fe2+ + O2 → Fe3+ + O2•–
(B10.2) Fe2+ + O2•– + H+ → Fe3+ + HO2–; followed by HO2– + H+ → H2O2
(B10.3) Fe2+ + H2O2 → Fe3+ + HO• + HO–
(B10.4) Fe2+ + HO• → Fe3+ + HO–
The oxygen radicals so produced are the effective chemiluminescence reagent. Radicals can recombine to generate products in excited states, which emit light. The surprising result that chemiluminescence is more intense when the solutions are de-aerated may be due to the more rapid oxidation of iron(II) in oxygenated solutions, leading to initially high concentrations of radicals which fall rapidly as they are converted to hydroxyl ions, so that transient high chemiluminescence would occur too soon to be detected in the flow system used. Luminol chemiluminescence initiated by iron(II) is no doubt due to primary oxidation by hydroxyl radicals (alone or in association with Fe2+), followed by secondary oxidation by superoxide. The light emission occurring when reductants are added to an alkaline solution of luminol and potassium ferricyanide is a special case of this reaction.
The iron(II)-luminol reaction has been applied to the determination of iron(II) in water under natural conditions at nanomolar and micromolar concentrations[3]. It is claimed to be a better assay than ultraviolet/visible spectrophotometry, titrimetry or polarography, having the advantages of high sensitivity, extreme rapidity and simplicity of operation, low cost and avoiding pre-treatment of the sample. It distinguishes iron(II) from iron(III) and can be adapted to measure total iron. Titanium(III)-luminol chemiluminescence has been applied to the determination of titanium(IV) which was converted to titanium(III) by on-line reduction. Fenton’s reagent, a mixture of aqueous iron(II) ions and hydrogen peroxide, has been used to promote chemiluminescence by oxidation. An example is the determination of amines and amino-acids after derivatization to Schiff bases[4]. A selective determination of adrenaline has also been reported.
2.11: Sulfites and persulfates
Sulfite is a well-known reductant. Oxidation of aqueous sulfur dioxide by acidified permanganate, cerium(IV) or hydrogen peroxide is feebly chemiluminescent[1]; exploitation of the weak chemiluminescence improved the detectivity of atmospheric sulfur dioxide by a factor of 50. A proposed mechanism comprised an initial oxidation of HSO3― to S2O62―, which then disproportionates to SO42― and excited SO2, which emits visible light. Sulfites undergo an addition reaction with carbonyl compounds, and the addition of cyclohexanone to protect sulfite solutions against atmospheric oxidation led to the observation that cyclohexanone, at appropriate concentrations, enhances the oxidative chemiluminescence. Light emission is also sensitized by other cyclohexyl compounds. Paulls and Townshend have suggested that the enhancement depends on β-sultine formation and have shown that the phenomenon occurs generally with higher cycloalkyl compounds, the optimum ring size being nine.
Fused cycloalkane rings also enhance the oxidative chemiluminescence of sulfites and this has given rise to a number of assays for steroids. Thus, a range of corticosteroid drugs have been determined by enhancing the chemiluminescence of sulfite oxidized by cerium(IV). Steroid hormones enhance the chemiluminescence of sulfite oxidized by bromate or by cerium(IV) and an assay based on this effect has been reported. In addition, bile acids sensitize the light emission accompanying the oxidation of sulfites by a variety of oxidants (Ce4+, MnO4―, BrO3― or Cr2O72―) and these reactions have been applied analytically.
There is evidence that the chemiluminescence of the permanganate-sulfite reaction has the same emitter as any other permanganate oxidation and the red emission from this persists in the presence of fluorophores as a major contributor to total light output[2]. The cerium(IV)-sulfite reaction does not have any effect on the chemiluminescence spectrum in the presence of fluorophores. The spectra emitted by bromate and dichromate oxidations have not been studied. It is therefore still possible that the chemiluminescence reactions with sulfite might have the mechanism described above, leading to emission from excited sulfur dioxide. There have been persistent reports of emission from the permanganate-sulfite reaction at lower wavelength than can satisfactorily be ascribed to manganese(II) phosphorescence – the usual mechanism – but these can be explained at least partly by the use of spectroscopic data that has not been corrected for the variation in sensitivity of the detector at different wavelengths.
Whereas sulfites promote chemiluminescence due to their reducing properties, persulfates act as oxidizing agents in chemiluminescent reactions. These do not have sulfur in a higher oxidation state than normal sulfates; rather, they contain peroxide units, where two catenated oxygen atoms take the places of two separate oxygen atoms, one in each of the two linked sulfate groups; these oxygen atoms are in oxidation state −I. Chemiluminescence has been reported from persulfates, both by electrochemical reduction at magnesium, silver or platinum electrodes and by thermal decomposition at the surface of magnesium[3]. The light-emitting species in each case are reported to be oxygen radical ions, O•―, and excited peroxide ions, O22―, arising respectively by deprotonation of hydroxyl radicals, HO•, or of hydrogen peroxide or hydroperoxide radicals, HO2•. Persulfates are also used as oxidants in luminol chemiluminescence and as ancillary oxidants in ruthenium chemiluminescence, where they generate the oxidant [Ru(bipy)3]3+ (see eqn. B9.1).
2.12: Hypohalites and halates
Chemiluminescence reactions involving hypohalites and related oxidants have been exploited for a wide variety[1] of analytical applications, primarily for the determination of free chlorine, halides and a variety of compounds in pharmaceutical preparations and natural waters. Proposed mechanisms of the light-producing pathways are insufficiently supported by spectroscopic evidence but, where emission spectra are known, large differences show that numerous different emitters are involved. A deeper understanding of the light-producing pathways and hence the relationship between analyte structure and chemiluminescence intensity is required.
Two examples of the use of halates in chemiluminescence will now be mentioned. A novel flow-injection system for the determination of formaldehyde has been described[2]. It is based on a strong enhancement by formaldehyde of the weak emission from the reaction between potassium bromate and rhodamine 6G in sulfuric acid. The method has been applied to determine formaldehyde in air samples, and a possible mechanism has been proposed.
The oxidation reaction between periodate and polyhydroxyl compounds has also been studied[3]. A strong emission, especially in the presence of carbonate, is observed when the reaction takes place in a strongly alkaline solution (but not in acidic or neutral solution) without any other chemiluminescence reagent. Background and chemiluminescence signals of the sample are enhanced by oxygen and decreased by nitrogen. The chemiluminescence spectrum shows two main bands (at 436-446 nm and 471-478 nm). Based on these, a possible chemiluminescence mechanism has been proposed. Two emitters contribute to the chemiluminescence background: singlet oxygen and carbonate radicals.
The addition of polyhydroxyl compounds or hydrogen peroxide causes enhancement of the chemiluminescence signal. This reaction system has been developed as a flow injection assay for hydrogen peroxide, pyrogallol, and α-thioglycerol. The ions involved in the reaction - periodate, carbonate and hydroxyl - can be immobilized on a strongly basic anion-exchange resin and highly sensitive chemiluminescence flow sensors for each analyte have been assembled.
• 3.1: Micellar Enhancement of Chemiluminescence
Well-defined mechanistic principles have emerged to rationalize micellar enhancement of chemiluminescence. The enhancing effects occur in the microenvironment (i.e. polarity, viscosity and/or acidity), in the chemical and photophysical pathway and in the solubilization, concentration and organization of the solute/reactant. We shall use these principles as a framework for discussing this work and it will become clear that they are highly inter-related rather than mutually exclusive.
• 3.2: Dye Enhancement of Chemiluminescence
Chemiluminescence is often very weak and to use it, or even to investigate it, it is necessary to enhance it. One way to do this is to use fluorescent dyes. So it is necessary to find a link between the properties of the dye and the degree of enhancement achieved. One key property is the fluorescence quantum yield of the dye; this must be greater than the chemiluminescence quantum yield of the original emitter.
• 3.3: Enhancement of Chemiluminescence by Ultrasound
A novel ultrasonic flow injection chemiluminescence (FI-CL) manifold for determining hydrogen peroxide (H2O2) has been designed[1]. Chemiluminescence obtained from the luminol-H2O2-cobalt(II) reaction was enhanced by applying 120 W of ultrasound for a period of 4 s to the reaction coil in the FI-CL system and this enhancement was verified by comparison with an identical manifold without ultrasound.
3: Enhancement of Chemiluminescence
Well-defined mechanistic principles have emerged to rationalize micellar enhancement of chemiluminescence. The review of Lin and Yamada[1] focuses on how micelles may be used to improve chemiluminescence signals by changes that affect the reaction rate. These occur in the microenvironment (i.e. polarity, viscosity and/or acidity, etc.), in the chemical and photophysical pathway and in the solubilization, concentration and organization of the solute/reactant. We shall now use these principles as a framework for discussing this work and it will become clear that they are highly inter-related rather than mutually exclusive[2].
There follow examples of micellar enhancement which have been explained by changes in the microenvironment. In the interaction of sulfite groups in drugs with dissolved oxygen in the presence of acidic rhodamine 6G, the surfactant Tween 60 can enhance chemiluminescence by 200%, attributable to a microenvironment that leads to an increase in the fluorescence quantum yield of rhodamine 6G and prevents quenching by oxygen. Sensitization of IO3―/H2O2 chemiluminescence in the presence of various surfactants at various concentrations has been explained by changes in the microenvironment rather than by solubilization, electrostatic effects or changes in pH. In the chemiluminescence reaction of luminol with hypochlorite in cetyltrimethylammonium chloride (CTAC) micelles, the light reaction in micellar media results in chemiexcitation yields which are higher than those in the corresponding homogeneous aqueous media due to the less polar microenvironment of the micellar Stern region, but the actual chemiluminescence quantum yields are lower due to quenching, both chemical and photophysical.
In some cases there is evidence of changes in chemical or photophysical pathways or rates of particular reactions. In the system of lucigenin reduced by fructose, glucose, ascorbic acid or uric acid, the cationic surfactant cetyltrimethylammonium hydroxide (CTAOH) increases the chemiluminescence intensity more than cetyltrimethylammonium bromide (CTAB) due to the superiority of CTAOH in micellar catalysis of the rate-limiting step of the lucigenin-reductant reaction. In permanganate chemiluminescence for the analysis of uric acid in the presence of octylphenyl polyglycol ether, there is an alteration in the local microenvironment allowing the solute to associate with the micellar system and this affects various photophysical rate processes. A small amount of surfactant added to the luminol-gold(III)-hydroxyquinoline system can stabilize gold(III) in aqueous solution, accelerate the reaction rate and hence increase chemiluminescence intensity. The surfactant Triton X-100 can accelerate the chemiluminescence reaction between colloidal manganese dioxide (MnO2) and formic acid in perchloric acid but CTAB or sodium dodecyl sulfate (SDS) cannot.
Sometimes the micelles have their enhancing effect by changing the local concentrations and organization of the reactants. The determination of iron(II) and total iron by the effect on the luminol/hydrogen peroxide system is enhanced by tetradecyltrimethylammonium bromide (TTAB) in the presence of citric acid. An iron(II)-citric acid anion complex is formed and concentrated at the surface of the cationic micelle. This then reacts with hydrogen peroxide at that surface, increasing the rate of the chemiluminescence reaction. The effect of cationic surfactant on the copper-catalysed chemiluminescence of 1,10-phenanthroline with hydrogen peroxide is that 1,10-phenanthroline concentrates in the centre of the micelles, but superoxide anion radicals are attracted to the surface where the reaction occurs more easily.
Some cases of micellar enhancement are explained by facilitation of energy transfer. Greenway et al.[3] found that a non-ionic surfactant helps to overcome the pH imbalance between codeine (in acetate buffer) and [Ru(bipy)3]3+ (in sulfuric acid and Triton X-100). The reacting species are enclosed within a micelle, which enables easier energy transfer. CTAB micellar complexes enhance the signal in the presence of fluorescein in the luminol-hydrogen peroxide system. The effect on energy transfer arises because the aminophthalate anion energy donors and the fluorescein anion acceptors will be located at distances approximately corresponding to the diameter of a micelle (1-3 nm). Since the transfer of electron excitation energy in solutions can be realized up to a distance of 7-10 nm (Förster mechanism, see chapter C2), the concentration of both species in the micelle is very effective for energy transfer. The same explanation applies to the chemiluminescence reactions of luminol and its related compounds in the presence of CTAC, which are also enhanced by intramicellar transfer of electronic excitation energy. Intramicellar processes of energy transfer can easily be modified by altering surfactant concentration and optimized in order to reach maximum conversion of chemical energy to emitted light. The procedure is generally applicable, the effectiveness varying a little with different chemiluminescence reactions, acceptors of electron excitation energy, catalysts and surfactant enhancers.
Chemiluminescence is often very weak and to use it, or even to investigate it, it is necessary to enhance it. One way to do this is to use fluorescent dyes. So it is necessary to find a link between the properties of the dye and the degree of enhancement achieved. One key property is the fluorescence quantum yield of the dye; this must be greater than the chemiluminescence quantum yield of the original emitter.
There are two processes by which a luminescent signal can be enhanced, depending on the distance separating the emitting molecule (the energy donor) from the dye molecule (the energy acceptor). The Dexter mechanism applies at very short separation distances, for example when molecules collide. This very close approach allows the excited state donor to exchange a high energy electron for one of lower energy, thus returning to the ground state. The ground state acceptor molecule loses the low energy electron and gains one of higher energy, thus entering an excited state. The rate of energy transfer depends on the concentration of acceptor molecules.
For molecules that are further apart (up to 7-10 nm), the Förster mechanism applies. This involves direct transfer of energy from donor to acceptor, rather as a radio antenna transmits energy to a receiver. The relationship between the rate constant of energy transfer ($k_{ET}$) and the separation distance ($R$) is given by:
$k_{ET} = \left( \dfrac{1}{\tau_D}\right) \left(\dfrac{R_o}{R}\right)^6 \nonumber$
where $\tau_D$ is lifetime of the excited state of the donor molecule and $R_o$ is the critical separation constant. The actual rate of energy transfer depends on the rate constant and on the concentrations of donor and acceptor molecules. Also important is the extent of overlap between the emission band of the donor and the absorption bands of the acceptor. This is greatest when the maximum emission wavelength of the donor is close to the maximum absorption wavelength of the acceptor, but it also depends on the shapes of the bands and on the bandwidths. The molecular structure of the donor and acceptor molecules determine the probability of energy transfer.
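As a rough numerical illustration of this rate law, the short sketch below evaluates $k_{ET}$ for a few donor-acceptor separations; the donor lifetime, critical distance and separations are assumed values chosen only for illustration, not data for any particular dye pair.

```python
# Minimal sketch of the Foerster rate law k_ET = (1/tau_D) * (R0/R)**6.
# All numerical values are illustrative assumptions, not data from the text.

def forster_rate(tau_d_s: float, r0_nm: float, r_nm: float) -> float:
    """Energy-transfer rate constant (s^-1) at donor-acceptor separation r_nm."""
    return (1.0 / tau_d_s) * (r0_nm / r_nm) ** 6

tau_d = 5e-9   # assumed donor excited-state lifetime, 5 ns
r0 = 5.0       # assumed critical (Foerster) distance, nm

for r in (2.5, 5.0, 7.5, 10.0):
    print(f"R = {r:4.1f} nm  ->  k_ET = {forster_rate(tau_d, r0, r):.2e} s^-1")
# At R = R0 the transfer rate equals 1/tau_D; by 7-10 nm it has fallen off steeply,
# consistent with the distance limit quoted in the text.
```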
The chemically initiated electron exchange luminescence model (CIEEL), proposed to explain peroxy-oxalate chemiluminescence[1] (see chapter B5), may sometimes apply to dye enhancement. It has been observed that higher and slimmer chemiluminescence signals, implying a more rapid rate of the light emitting reaction, are obtained when cerium(IV) and rhodamine 6G are premixed before the injection of the sample. Oxidation of rhodamine 6G by cerium(IV) would certainly form excited state cerium(III), but this would add to the baseline and blank signals as well as to the sample peaks; it would therefore not explain the observed premixing effect. It appears that instead an oxidation product of rhodamine 6G is responsible, for this has an opportunity to react with the sample, leading to specifically enhanced analyte signals. There was no advantage in increasing the time available for the pre-oxidation of rhodamine 6G, so it seems likely that the active product is formed on first contact and could be an intermediate formed by single electron transfer. An electron is transferred from the analyte to this initial oxidation product, reducing it back to rhodamine 6G in an excited state, giving the following scheme for oxidation of the analyte (A):
$Rh6G \rightarrow Rh6G^{•+} +e^- \nonumber$
$Rh6G^{•+} + A \rightarrow Rh6G^* + A^{•+} \nonumber$
Emission from excited rhodamine 6G would occur as before. If (as is plausible) this single stage formation of excited rhodamine 6G goes further and faster than the two stages of analyte oxidation by cerium(IV) followed by energy transfer from excited cerium(III) to rhodamine 6G, it would explain the higher and slimmer analyte peaks that were observed.
A novel ultrasonic flow injection chemiluminescence (FI-CL) manifold for determining hydrogen peroxide (H2O2) has been designed[1]. Chemiluminescence obtained from the luminol-H2O2-cobalt(II) reaction was enhanced by applying 120 W of ultrasound for a period of 4 s to the reaction coil in the FI-CL system and this enhancement was verified by comparison with an identical manifold without ultrasound. The method was applied to the determination of trace amounts of H2O2 in purified water and natural water samples without any special pre-treatments.
It is well-known that alkaline solutions of luminol emit light when subjected to ultrasound of sufficient intensity to produce acoustic cavitation. Light emission is believed to occur through a process of oxidative chemiluminescence involving sonochemically generated HO·. The cyclic pressure variations associated with the propagation of ultrasound waves in aqueous solution are known to result in the growth and periodic collapse of microscopic cavitation bubbles filled with gas and/or vapour[2]. Furthermore, it has been shown that extremely high local temperatures and pressures may be generated during the collapse or implosion of such bubbles. Consequently, it is generally accepted that it is within the cavitation bubble, or the layer of solution immediately contacting the cavitation bubble, that the sonochemical effects take place.
Luminol chemiluminescence has been described in section B1. Light emission from the reaction between luminol and hydrogen peroxide can be induced by the presence of cobalt(II) at concentrations low enough to be regarded as catalytic. The effect of ultrasound on hydrogen peroxide is to produce hydroxyl radicals by homolytic fission of the O―O bond:
$H_2O_2 \rightarrow 2HO^• \nonumber$
Hydroxyl radicals in aqueous solution are short-lived. The consumption of these radicals by recombination is very rapid and attenuates the ultrasound enhancement:
$2HO^• \rightarrow H_2O_2 \nonumber$
Because of this, the concentration of hydrogen peroxide soon greatly exceeds that of hydroxyl radical, even if the radicals are initially produced in high yield. There is then a greater probability that radicals will instead react with hydrogen peroxide molecules, forming superoxide:
$HO^• + H_2O_2 \rightarrow O_2^{•-} + H_3O^+ \nonumber$
As a result, the effect of sonication is the production in the sample of superoxide rather than hydroxyl radicals. The hydroxyl radicals initially formed would have reacted with luminol to initiate the light-emitting pathway but the primary oxidation of luminol by superoxide is negligible. Instead, when the sample merges with luminol/buffer/cobalt, the effect of this enhanced superoxide concentration is to increase the concentration of the hydroperoxide intermediate, enhancing the light emitting pathway where it has already been initiated by cobalt/hydrogen peroxide; this leads to a fivefold improvement in the detection limit.
The practical implementation of this ultrasound enhancement proved to be exacting. Small changes in the FIA manifold were found to have a considerable effect on the chemiluminescence intensity. In spite of this it was found possible to optimize a range of relevant variables. Some variables were concerned with the arrangements for administering a dose of ultrasound energy to the sample as it flowed through a coil immersed in the sonication bath. To achieve this, the coil had to be long enough to contain the sample all the time that sonication was occurring, but not so long that the enhancement of the chemiluminescence signal would be abolished either by dispersion of the sample into the carrier or by decay of the short-lived radicals generated by sonication. The optimum distances between the water surface and the probe tip and between the probe tip and the upper edge of the sonication coil correspond closely to the conditions for the establishment of standing waves in the sonic bath.
Cavitation when present is the predominant mechanism of acoustic energy absorption as well as providing the collapsing bubbles that are the sites of the sonochemical reactions. Absorption by bubbles is so effective that they provide a shielding effect and so could explain the difficulty in predicting the effect of small changes in the position of the coil within the sonication bath. It was necessary to vary the sonication arrangements in order to optimise them, but operational analytical applications of ultrasound enhancement would be more easily carried out using fixed sonication arrangements in a permanent and purpose-designed apparatus.
4: Instrumentation
The detector of choice for chemiluminescence is the photomultiplier tube, a development of the vacuum phototube that permits considerable amplification of the signal. Figure D1.1 shows how the photomultiplier works. The surface of the cathode supports a photoemissive layer that ejects electrons in direct proportion to the intensity of the incident light; the ejected photoelectrons are attracted towards a positively-charged dynode. When the electron beam meets the dynode several electrons (E in figure D1.1) are ejected for each incident electron and these are attracted to a second dynode at a higher positive potential. This process is repeated along a series of dynodes, the intensity of the electron beam increasing continually until when it reaches the anode (at the greatest positive potential) there are over a million electrons for each photoelectron ejected from the cathode. The resulting current can be amplified electronically. In the absence of light, the photomultiplier generates a dark current, chiefly due to thermal emission. Thermal dark currents can be greatly reduced by cooling to −30°C.
Figure D1.1: Principle of the photomultiplier (see text).
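The million-fold amplification described above follows from repeated secondary emission: the overall gain is roughly the per-dynode multiplication factor raised to the power of the number of dynodes. The following minimal sketch uses assumed values (a gain of 4 per dynode and 10 dynodes) only to show the arithmetic.

```python
# Illustrative estimate of photomultiplier gain:
# overall gain = (secondary electrons per dynode) ** (number of dynodes).
# Both figures are assumptions chosen only to show the order of magnitude.

electrons_per_dynode = 4    # assumed secondary-emission yield per incident electron
number_of_dynodes = 10      # assumed length of the dynode chain

overall_gain = electrons_per_dynode ** number_of_dynodes
print(f"Electrons at the anode per initial photoelectron: {overall_gain:,}")
# 4**10 = 1,048,576, i.e. of the order of a million electrons per photoelectron,
# in line with the amplification described in the text.
```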
Diodes are also used for chemiluminescence detection[1], especially in low-cost applications. Photographic detection was also used in very early work[2].
4.02: Flow Injection Analysis (FIA)
Batch techniques for measuring the intensity of chemiluminescence are sometimes used, some of which incorporate automation to improve sample throughput [1], but flow methods are applied much more often. A suitable flow injection manifold[2] is shown in figure D2.1.
Figure D2.1: A flow injection manifold for measuring chemiluminescence (PMT = photomultiplier tube; REC = recorder).
Flow injection manifolds are constructed from polytetrafluoroethylene (PTFE) tubing to contain the sample while it is chemically or physically modified prior to detection. Liquid is usually transported from reservoirs by means of a peristaltic pump with suitable tubing. An accurately measured volume of sample is reproducibly introduced into a carrier stream by means of a rotary injection valve. The detector is connected to some means of data-storage. The signal depends on the rate of the reaction producing it and on flow-rate, tubing dimensions, reagent addition order and flow-cell volume, which should be large enough to ensure that a high proportion of the total emission enters the detector; optimisation will favor conditions that lead to emission occurring during the passage of the sample through the flow-cell. The flow-cell should be so positioned as to make this possible, e.g., directly in front of the window of a photomultiplier tube and in a box that excludes ambient light. FIA has important advantages over batch methods. It makes use of simple and relatively inexpensive apparatus, which is readily miniaturised and has great potential for adaptation and modification. Easy operation and high sampling rates are possible.
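One way to make the flow-cell considerations concrete is to estimate the residence time of the reacting zone in the cell, which should be comparable with the period of maximum light emission. The sketch below is illustrative only; the cell volume and flow-rate are assumed figures, not values from the text.

```python
# Minimal sketch: residence time of the reacting zone in a chemiluminescence flow-cell.
# The cell volume and total flow-rate are assumed, illustrative figures.

cell_volume_ul = 120.0       # assumed flow-cell volume in microlitres
flow_rate_ml_min = 2.0       # assumed combined carrier + reagent flow-rate, mL per minute

flow_rate_ul_s = flow_rate_ml_min * 1000.0 / 60.0
residence_time_s = cell_volume_ul / flow_rate_ul_s
print(f"Residence time in the flow-cell: {residence_time_s:.1f} s")
# Tubing lengths and reagent addition points would then be arranged so that peak
# light output falls within this window.
```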
4.03: Sequential Injection Analysis (SIA): lab on a valve
SIA, like FIA, is based on reproducible sample handling and controlled dispersion of sample and reagents into a carrier stream. Unlike FIA, it makes use of a computer-controlled multiposition valve and pump, usually peristaltic and operated synchronously with the valve.
Figure D3.1: A sequential injection manifold suitable for the determination of morphine by acidified potassium permanganate chemiluminescence.
Morphine is solvent-extracted from opium poppies Papaver somniferum on an industrial scale. Barnett et al. have used SIA to determine the drug in aqueous and non-aqueous process streams, with chemiluminescence detection involving oxidation with acidified potassium permanganate in the presence of sodium hexametaphosphate. Figure D3.1 shows a suitable SIA manifold for carrying out this determination. The process streams contain several related alkaloids and a range of other organic compounds as well as both dissolved and suspended solids. It is a good indication of the effectiveness of SIA-chemiluminescence that under these conditions the results correlated well with high performance liquid chromatography, a standard methodology that suffers the defect of a much lower sample throughput.
In SIA, sample and reagents are aspirated into the holding coil by operating the pump in reverse so that carrier is returned to the reservoir. Restoration of forward pumping is synchronised with the opening of the valve port leading to the detector. The flow reversal leads to a mixing of the stack of sample and reagent zones to form a product zone which is transported to the detector. The pump tubing comes into contact only with the carrier, the samples and reagents being aspirated (instead of pumped) into the holding coil. This is a very useful characteristic of SIA when using samples/reagents that would attack PVC pump tubing, such as those containing non-aqueous solvents.
4.04: Lab on a Chip
Micro total analytical systems (also called “chips”) are miniaturized microfluidic devices, fabricated from a variety of materials within which channels are constructed for the transport of samples and reagents. The small size minimizes the consumption of reagents, reduces manufacturing costs and increases the possibilities for automation. Miniaturization of detectors, however, leads to problems due to the reduced volume of liquid in the detector and to difficulties inherent in scaling down the size of the particular detector. One solution is to interface the chip with a macro-scale detector such as a photomultiplier tube; this is called the “off-chip” approach. This can be achieved, for example, by using optical fibres to carry light from the chip to the detector. An alternative solution – the “on-chip” approach - is to assemble a compact version of the detector and integrate this on the chip with the rest of the analytical system[1].
Chemiluminescence detection offers high sensitivity, low detection limits and instrumental simplicity but requires a relatively complex manifold on the microchip, the details depending on the chemiluminescence reaction system being used; for example, a Y-shaped channel junction works best when using peroxide-luminol chemiluminescence. Reagent is delivered by a micropump. The chip design must ensure that a high proportion of the emitted light enters the off-chip photomultiplier; this frequently involves coupling with an optical fibre. Such an arrangement typically achieves micromolar detection limits and has been used for a range of analytes including catechol, dopamine, amino-acids, cytochrome c and myoglobin as well as the determination of chip-separated chromium(III), cobalt(II) and copper(II). Horseradish peroxidase can be determined at sub-nanomolar levels. Micromolar concentrations of ATP (adenosine triphosphate) can be measured by means of luciferin-luciferase bioluminescence. The effect of antioxidants has been measured using a microfluidic system incorporating peroxy-oxalate chemiluminescence, by injecting the antioxidants into the hydrogen peroxide stream. The method is simple and rapid and excellent analytical performance is obtained in terms of sensitivity, dynamic range and precision. Electrochemiluminescence detection has been applied for microchip separations using electrodes installed during fabrication.
Photodiodes have been fabricated into chips at the bottoms of the microfluidic channels and have been used for on-chip chemiluminescence detection of DNA produced by the polymerase chain reaction and separated on the same chip by capillary electrophoresis. These devices have been used also to detect luminol chemiluminescence for the micromolar determination of hydrogen peroxide generated by the oxidation of glucose with glucose oxidase. Thin-film organic photodiodes can be fabricated by vacuum deposition and integrated into chips. Copper-phthalocyanine-fullerene small molecule diodes have high quantum efficiency and have been used to determine hydrogen peroxide by peroxy-oxalate chemiluminescence. Another example has been used for hydrogen peroxide determination by luminol chemiluminescence.
4.05: Chemiluminescence Sensors
Chemiluminescence has the advantage of lower background emission than fluorescence, avoiding noise caused by light scattering. However, because chemiluminescence reagents are irreversibly consumed, chemiluminescence sensors have shorter lifetimes than fluorescence sensors and their signals have a tendency to drift downwards due to consumption, migration and breakdown of reagents. Reagent immobilization onto suitable substrates plays an important role in the development of chemiluminescence sensors. Selectivity and sensitivity as well as lifetime of chemiluminescence sensors depends on the choice of reagent and substrate and on the method of immobilization[1].
Chemiluminescence reagents are typically aqueous solutions of ions and so can be immobilized by convenient procedures onto ion exchange resins, giving high surface coverage, and released quantitatively by appropriate eluents. Analytes can also react directly with immobilized reagents. These properties have been widely used to prepare chemiluminescence sensors containing immobilized luminol or other reagents, which typically would be packed into a flow cell positioned in front of the window of a photomultiplier. “Bleeding” columns of anion/cation exchange resins with co-immobilized luminol and metal ions such as Co2+, Cu2+ or [Fe(CN)6]3– can detect and measure analytes such as hydrogen peroxide, though this arrangement causes unnecessary dilution of samples and reagents, which impairs detectivity. Immobilized tris(2,2′-bipyridyl)ruthenium(II) can be regenerated from tris(2,2′-bipyridyl)ruthenium(III) and can be used for at least six months.
Immobilization of enzymes can be used to produce highly active and selective chemiluminescence sensors from which enzyme is not consumed, though their operational stability is limited. Encapsulation of reagents in sol-gel silica involves little or no structural alteration and is very suitable for chemiluminescence sensors because of its optical transparency and chemical stability. For example, encapsulated horseradish peroxidase displays high activity and long life, as does sol-gel immobilized haemoglobin. Chemiluminescence sensors constructed from plant and animal tissues have advantages of cost, activity, stability and lifetime; examples are soyabean tissue in sensors for urea and spinach tissue in sensors for glycolic acid.
Molecularly imprinted polymers have been found to be very useful materials for fabrication of chemiluminescence sensors, both as molecular recognition agents and as chemiluminescence reaction media. Analytes that can be successfully detected in this way include 1,10-phenanthroline and dansylated amino-acids. Metal oxide particles can sometimes be entrapped onto membranes or in columns, including chemiluminescence flow cells. This affords a simple fabrication method producing long-lived sensors. Manganese dioxide has been immobilized in this way on sponge rubber for the assay of the drug, analgin, using manganese(IV) chemiluminescence.
Chemiluminescence has been detected from surface reactions on nanoparticles, opening up the possibility of chemiluminescence nanosensors of good stability and durability. Coumarin C343, a fluorescent dye, has been conjugated to silica nanoparticles entrapped in sol-gel silica to produce nanosensors capable of enhancing the weak chemiluminescence associated with lipid peroxidation[2].
Chemiluminescence imaging combines the sensitive detection of chemiluminescence with the ability to locate and quantify the light emission, but above all it provides massively parallel determinations of the analyte. A digital image is made up of thousands of pixels, each generated by an independent sensor, detecting and measuring the light that falls on it. This enables simultaneous measurement of multiple samples or analytes for high throughput screening.
Chemiluminescence imaging microscopy detects labelled probes more simply and more accurately than does fluorescence. It could become an important tool for rapid, early diagnosis of a wide range of diseases. Whole animal in vivo chemiluminescence imaging makes possible real-time monitoring of pathological and biological phenomena and we may anticipate important advances of great impact in drug discovery, biotechnology and medicine[1].
D6a. Imaging sensors
The last twenty years have witnessed a steady improvement in our ability to form images from analytical signals. Imaging makes use of the high sensitivity and specificity, low background and wide dynamic range of chemiluminescence to quantitate and localize analytes down to the level at which this can be achieved by emission of single photons. Early systems consisted of a low-light vacuum tube device[2] connected to an optical microscope. CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor) image sensors make use of different technologies, both invented in the late 1960s and 1970s, for capturing images digitally. Both types of imager convert light into electrical charge and process it into electronic signals, as in digital cameras. Each has characteristic strengths and weaknesses. CCD technology is in most respects the equal of CMOS. Costs are similar.
a(i). Charge coupled devices
It is central to chemiluminescence imaging that when the spatial distribution of the analyte is critical, a luminograph must be produced to adequately express the data. This can be done by a CCD, which must have a high light-collection efficiency. A CCD converts optical brightness into electrical amplitude signals. CCDs are arrays of semiconductor gates formed on an integrated circuit (IC) or chip. The gates individually collect, temporarily store and transfer charge, which represents a picture element or pixel of an image. When light falls on a CCD sensor, a small electrical charge is generated photoelectrically for each pixel; each charge is converted to voltage and output as an analogue signal, which can be converted to digital information by additional circuitry. All of the pixel can be devoted to light capture. A CCD camera, such as a digital camera, includes a CCD imager IC located in the focal plane of the optical system and control circuits mounted on a printed assembly. Captured image data are stored in a storage medium such as a compact flash memory or an IC memory card and can be displayed on a monitor such as a liquid crystal display (LCD). CCDs have traditionally provided the highest image quality (as measured by quantum efficiency and noise). An intensified CCD (ICCD) camera is optically connected to an image intensifier. In an image intensifier, photons coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards a micro-channel plate (MCP) by a voltage applied between photocathode and MCP. The electrons are multiplied by the MCP and accelerated towards a phosphor screen, which converts them back to photons that are guided to the CCD by an optical fibre or lens. ICCD cameras permit high frame rates and real-time visualization, the limitation being the increased noise produced by amplification. Non-intensified slow scan CCD cameras, cooled to reduce thermal noise, permit integration of the signal over a relatively long time and are suitable for steady-state signals.
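A toy model of the read-out chain described above (photons to photoelectrons to digitized counts) makes the roles of quantum efficiency, dark current and gain explicit. All numerical values in this sketch are assumptions for illustration.

```python
# Toy model of one CCD pixel read-out: photons -> photoelectrons -> digitized counts.
# Quantum efficiency, dark current, exposure time and gain are assumed illustrative values.

quantum_efficiency = 0.6      # assumed fraction of incident photons yielding photoelectrons
dark_current_e_per_s = 0.5    # assumed thermal electrons per second (reduced by cooling)
exposure_s = 10.0             # assumed exposure time
gain_e_per_adu = 2.0          # assumed electrons per analogue-to-digital unit

def pixel_counts(incident_photons: int) -> int:
    photoelectrons = incident_photons * quantum_efficiency
    dark_electrons = dark_current_e_per_s * exposure_s
    return int((photoelectrons + dark_electrons) / gain_e_per_adu)

for photons in (0, 50, 500):
    print(f"{photons:4d} photons -> {pixel_counts(photons):4d} counts")
# Cooling lowers dark_current_e_per_s and hence the counts recorded with no light at all.
```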
a(ii). Complementary metal oxide semiconductor (CMOS) chips
CMOS image sensors have emerged as an alternative to CCD sensors. They consist of an integrated circuit (IC) containing an array of pixel sensors, each pixel containing a photodetector, an amplifier and additional circuitry to convert the charge (which represents light intensity) to a voltage. Amplification, noise-correction, and digitization often also occur on-chip. These other functions increase the design complexity and reduce the area available for light capture. Unlike CCD sensors, each pixel is doing its own conversion, so uniformity is lower, which affects image quality. But the chip requires less off-chip circuitry. The phrase "metal oxide semiconductor" implies transistor structure having a metal gate electrode on top of an oxide insulator, which in turn is on top of a semiconductor. CMOS sensors complement every n-type transistor with a p-type, connecting the pairs of gates and drains. Because ideally no current flows except when the inputs to the gates are being switched, this greatly reduces power consumption and avoids overheating, which is a major concern in all ICs. CMOS can potentially be implemented with fewer components, uses less power and provides faster readout than CCDs. CMOS imagers also offer greater integration (more functions on the chip) and smaller size.
It is possible to use a CMOS sensor chip as a microscale contact imager and quantitative photometer for chemiluminescence assays. The applicability has been investigated for chemiluminescence detection of ATP by its reaction with a proprietary reagent in 1 mm diameter wells fabricated on a glass cover-slip placed directly onto the imaging sensor[3]. Ambient light was excluded. For each well, chemiluminescence intensity was averaged over a 1 x 100 pixel region of interest and integrated over a 200 ms exposure. It correlated well with the ATP concentrations over a range of 0.1-1 mmol dm-3. The detectivity (<1 nmol amounts of ATP) is not as fine as can be obtained with a much more expensive CCD camera, but is susceptible to improvement. CMOS chips are suitable for droplet microfluidics or lab-on-a chip devices when the cost of the assay system is a factor that must be optimized, such as in “point-of-need” assays or diagnostics.
a(iii). Photon counting
Cameras suitable for luminescence imaging should be able to form an image at a brightness of 10-6 lux (1 lux = 1/621 W m-2 of 550 nm light). The most sensitive detect single photons with an efficiency of about 20% and have an average noise level of 2 x 10-11 lux = 8 photons s-1 cm-2. Available ultra-low light imaging systems include the imaging photon detector (IPD), used in conjunction with a microscope with a high numerical aperture (NA) objective lens. A high NA lens collects more light than a low NA lens. The collected light is focussed onto the IPD. Photons are recorded and stored as a list of time and space coordinates created by the IPD processor. Images can be reconstituted as an array of dots over any desired time interval. This allows for continuous recording over any interval; 1 h at 100 photons per second can be stored in 1 Mb of memory (storage as images requires 0.3 Mb per frame). Chemiluminescence images are based on small numbers of photons, especially when exposure time is brief; whereas a ten minute exposure at 100 photons per second can build an image of 60000 photons, a one second exposure provides only 100 photons, so that the image comprises only 100 dots spread across the area of the image[4]. A high resolution (up to 1392 x 1040 pixels) single photon counting camera system is suitable for extremely low photon emission applications such as some chemiluminescence applications. The system includes a control unit with data acquisition and image processing software. Frame rates up to 100 Hz can be obtained.
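The storage arithmetic quoted above is easy to verify; in the sketch below the bytes-per-event figure is an assumption chosen to be consistent with the roughly 1 Mb per hour stated in the text.

```python
# Rough check of photon-list storage: events are stored as (x, y, t) coordinates,
# so memory grows with the count rate rather than with the image size.
# The bytes-per-event figure is an assumption chosen to match the ~1 Mb/hour quoted above.

count_rate_per_s = 100       # photons per second (figure from the text)
duration_s = 3600            # one hour
bytes_per_event = 3          # assumed compact packing of each (x, y, t) triple

events = count_rate_per_s * duration_s
list_storage_mb = events * bytes_per_event / 1e6
print(f"{events} events -> about {list_storage_mb:.1f} MB as a photon list")

# Storing full frames instead costs ~0.3 MB each, however few photons they contain.
frames, frame_mb = 10, 0.3
print(f"{frames} stored frames -> about {frames * frame_mb:.1f} MB")
```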
The conceptualization involved in the design of photon counting cameras is well illustrated by the DELTA camera; initially designed for astronomy, it has advantages for a wide range of high-resolution problems. It is a high sensitivity array detector, which yields the space and time coordinates of photon events at sustained count rates exceeding one million per second[5]. It has a flat field, very high resolution (for the prototype: 512 x 591 pixels in space and 2.6 μs in time) and high throughput. Each photon produces an intensified phosphor image which has the same position in a two-dimensional field as the photon. This image is focussed onto three one-dimensional CCDs, which record its position as three coordinates on axes mutually oriented on a plane at angles of 120°. Software converts these to (x, y) coordinates on orthogonal axes and a clock signal adds the time (t) of the event to produce (x, y, t) coordinates, which are listed and stored; artefacts due to excessive tolerance or to simultaneous photon events are also removed from the data.
a(iv). Chemiluminescence imaging systems
A high resolution CCD camera (up to about 6.0 Mpixel), cooled to about −70°C, gives the best quality images, better accuracy, longer exposure times (up to 24 hours), minimal dark noise and enhanced stability. There should be a light-tight dark chamber and a height adjustable sample platform with a numeric counter for exact positioning in specific, repeatable positions. The alternative is an advanced motorised robotic camera which is driven up and down allowing placement very close to the sample, outperforming a standard zoom lens, having a wider field of view, easier and faster operation, and better sensitivity. These benefits are especially useful for faint samples. The image acquisition and analysis software provides comprehensive tools for simple image capture and analysis of gels, plates and membranes as well as colony counting. Images can be enhanced, user preferences defined, reports generated and data exported. Some systems include an overhead white light so that chemiluminescence can be overlaid onto a reflected light image (the so-called “live image”) and combine facilities for fluorescence, chemiluminescence and colorimetric applications.
Not all chemiluminescent reactions are suitable for imaging; the main requirement, especially for imaging microscopy, is micrometre scale localization[1]. Excited species of short half-life are suitable and conditions (especially reactant concentrations) can be optimized to minimize the diffusion of excited products. Glow-type kinetics, arising from the attainment of a steady state, facilitates measurement procedures. Enzyme labels are widely used.
D6b. High throughput screening (HTS)
Imaging is very suitable for high-throughput screening. Examples of assays with very high sample throughput include the determination of bioavailable mercury in urine using E. coli expression of luciferase under the control of a mercury-inducible promoter. Throughput is more than 5000 samples per hour and the limit of detection is 10-13 mol dm-3. Acetylcholinesterase inhibitors can be assayed using acetylcholinesterase, choline oxidase and horseradish peroxidase (HRP). Kinetic analysis of luminol chemiluminescence is carried out with a throughput of 180-360 samples per hour.
b(i). Miniarrays
Imaging with flat field correction lenses can be used to read microtitre plates (up to 4 of 384 wells) faster than by a luminometer; the latter is, however, more sensitive and has the ability to measure fast, flash-type reactions. Miniarrays of antibody or gene probes can be spotted onto 96-well microtitre plates and assayed with an enzyme-labelled detection reagent and a chemiluminescence substrate. The whole plate is imaged with a CCD camera to measure the light emission from each well. This principle has been applied to a sandwich-type enzyme-linked immunosorbent assay (ELISA) for cytokines and a hybridization-based mRNA assay with up to 16 spots in each well; a 16 x 96 array contains 1536 dots, making possible high-throughput multianalyte assays, using standard plates and their associated sample handling devices.
Oligonucleotide probes specific for human papilloma virus (HPV) genotypes have been used for a multianalyte chemiluminescence imaging assay for the simultaneous determination of up to 7 HPV DNAs. Amplification by the polymerase chain reaction (PCR) in the presence of a digoxigenin-labelled nucleotide (dUTP) is followed by an ELISA using a novel polystyrene microtitre plate having an array of 24 main wells (containing digoxigenin-labelled PCR product) each divided into 7 subwells (containing the immobilized probes). The digoxigenin label was subsequently detected by peroxidase-labelled antibody and a chemiluminescent substrate. Imaging was performed using an ultrasensitive CCD camera[6]. Results were comparable with conventional colorimetric PCR-ELISA.
b(ii). Microarrays
Thousands of simultaneous determinations can be made by high-resolution imaging of chemiluminescence at array densities of up to hundreds of spots per square centimetre. Array-based gene expression analysis is a good example. Protein analysis based on antigen-antibody or ligand-receptor interactions is increasingly used in clinical and research work and in drug discovery. As well as their use for protein expression profiling, there are high-throughput protein microarrays that detect up to 35 cytokines. Specific antibodies are spotted onto membranes, which are incubated with the samples, and captured analytes are detected by enhanced chemiluminescence with HRP-labelled antibodies and an HRP-substrate. A protein chip for parallel ELISAs of tumour markers allows the discovery of patterns that can increase the sensitivity and specificity of the diagnosis[7]. Immobilized on the chip were 12 monoclonal antibodies against different tumour markers; the markers were captured by incubating the chip with serum samples. An HRP-conjugated second antibody was used for detection by chemiluminescence imaging. The chip has been successfully applied both for cancer diagnosis and for screening asymptomatic populations at high risk.
b(iii). Small scale analytical devices
Small scale analytical devices use extremely small sample volumes and so need very sensitive detection techniques; chemiluminescence imaging has the high resolution and high sensitivity necessary. Assays include the ELISA determinations of the herbicide 2,4-D in multiple samples using gold-coated surfaces or glass capillaries and of up to ten antibiotics in milk in five minutes: the sample is incubated with mixed monoclonal antibodies followed by detection with an HRP-labelled second antibody and a suitable chemiluminescent substrate. Multiple hybridizations can be performed in a three-dimensional chip incorporating an array of vertical glass channels. Specific gene probes are immobilized on the inner walls of the channels. This strengthens the signal by providing a larger area for probe-immobilization than is available in a two-dimensional microarray. The sample flows through the channels and the analyte is detected by an enzyme-labelled antibody followed by a chemiluminescent substrate. Lateral diffusion of the emitting species is prevented by the walls of the microchannel; this improves resolution of the image. Chemiluminescence imaging of miniaturized analytical devices is also useful for multiplexing (simultaneous quantitation of different analytes or on different samples) by integrating the chemiluminescence over a different target area for each analyte or sample.
b(iv). Documentation of gels and membranes
Chemiluminescence imaging detection with CCD cameras can be used for reactions that take place on gels and membranes. This allows intensity measurement over a wide dynamic range and software exists to compute the total emission from particular zones of the image for analytical purposes. Images can be stored on discs or printed out.
Electrophoresis is the movement, and hence the separation, of charged molecules in an electrical field; electrophoresis on polyacrylamide gel (PAGE) is particularly good for separation of molecules at low concentrations. Separated molecules can be transferred onto a nitrocellulose membrane by electroblotting - electrophoresis in a direction at right angles to the gel surface; this is also called western blotting and it is used to detect specific proteins in a sample of tissue homogenate or extract. By similar procedures, Southern blotting identifies particular sequences of DNA within a complex mixture and northern blotting locates RNA. In western blotting, electroblotting is followed by immunostaining, in which particular proteins are identified by labelled antibodies. DNA and RNA can be similarly identified by hybridization with labelled probes.
Dot blot is an immunological technique that uses antibodies to detect specific proteins in mixtures or in samples such as tissue lysates. It is based on western blotting but there is no separation of the protein on SDS-PAGE. One such assay is the detection of B19 parvovirus. After spotting samples onto a membrane, hybridization with digoxigenin-conjugated DNA probes and treatment with HRP- or AP-labelled anti-digoxigenin antibodies, chemiluminescence imaging gives a limit of detection ten times better than using colorimetry.
Cytochromes can be separated by PAGE with sodium dodecyl sulfate (SDS-PAGE) and transferred to a nitrocellulose membrane; cytochrome c (which contains a catalytic haeme group) is detected by peroxidase-luminol chemiluminescence[8]. CCD imaging results in detection 50 times more sensitive than the 3,3',5,5'-tetramethylbenzidine staining method. A sample of less than 1 ml of a bacterial culture is needed. A similar assay, based on luminol/hydrogen peroxide chemiluminescence with ammonium persulfate enhancement, allows haptoglobin phenotyping after PAGE of a 15 μL sample[9]. Other iron-containing proteins, such as catalase and ferritin, can also be detected. The proposed detection is very fast compared to traditional staining methods (minutes versus hours).
D6c. Molecularly Imprinted Polymers (MIPs)
MIPs have artificial recognition sites with shapes, sizes and functionalities complementary to the analyte, which is thus selected in preference to other closely related structures. They are cheaper and more robust than antibodies, enzymes and biological receptors and can serve when these biomolecules are not available. The recognition sites are fabricated around a suitable template, preferably the analyte itself, which is extracted after polymerization. Generally, when a template molecule and a functional monomer are mixed in an organic solvent a complex is formed between the template and the monomer through polar interactions. Polymerization with a cross-linker fixes the positions of the polar groups. Removal of the template with a suitable solvent leaves specific recognition sites. The functional monomers are chosen to promote hydrogen bonding with the template to obtain good selectivity and reversibility. Optimum binding occurs when the MIP is exposed to the same conditions as those used in polymerization, because it depends on the shape of the imprinted cavity and on the spatial positioning of the coordinated functional groups. Both of these depend on the conditions and are affected by swelling of the polymer, which can be exploited to achieve fast and controllable release of adsorbed molecules prior to detection.
c(i). Chiral recognition of dansyl-phenylalanine
Molecular imprinting of polymers has been linked with chemiluminescence imaging detection to achieve chiral recognition of dansyl derivatives of phenylalanine (Phe)[10]. The MIP microspheres were synthesized using precipitation polymerization (which produces uniform microspheres) with dansyl-L-Phe as template and the microspheres were immobilized on microtitre plates (96 wells) using poly(vinyl alcohol) (PVA) as glue. The analyte was selectively adsorbed onto the MIP microspheres. After washing, the bound fraction was quantified using peroxyoxalate chemiluminescence (POCL) analysis, a general method for all fluorescent and fluorescence-labelled analytes, which has a greater quantum yield than most other chemiluminescence systems. In the presence of dansyl-Phe, bis(2,4,6-trichlorophenyl)oxalate reacted with hydrogen peroxide (H2O2) with chemiluminescence emission. The signal was detected and quantified with a highly sensitive cooled CCD. The intensity of the image of each well of the plate was determined using software to sum the intensities of all the pixels making up the spot. Chemiluminescence intensity increases with the proportion of the L-enantiomer in the sample. Chiral composition can thus be determined by comparison of the intensity for the mixture and for pure D- and L- enantiomers at the same concentration. The results show that MIP-based chemiluminescence imaging is useful for quick chiral recognition and, because the method can perform many independent measurements simultaneously in 30 min, high-throughput screening is possible.
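The quantitation step, summing pixel intensities over each spot and interpolating between the pure-enantiomer responses, could be carried out along the lines of the following sketch; the synthetic image, well position, spot radius and pure-enantiomer intensities are all assumed values used only to illustrate the procedure.

```python
# Sketch of per-well quantitation for a chemiluminescence image (synthetic data).
# The image, well centre, spot radius and pure-enantiomer intensities are all assumed values.
import numpy as np

def well_intensity(image: np.ndarray, centre: tuple, radius: int) -> float:
    """Sum the pixel intensities inside one circular spot (one well of the plate)."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - centre[0]) ** 2 + (xx - centre[1]) ** 2 <= radius ** 2
    return float(image[mask].sum())

rng = np.random.default_rng(0)
image = rng.poisson(5, size=(64, 64)).astype(float)   # background noise
image[20:30, 20:30] += 40.0                            # bright region standing in for a well

i_sample = well_intensity(image, centre=(25, 25), radius=8)

# Interpolate the L-enantiomer fraction between pure-D and pure-L responses (assumed values).
i_pure_d, i_pure_l = 2000.0, 12000.0
fraction_l = min(max((i_sample - i_pure_d) / (i_pure_l - i_pure_d), 0.0), 1.0)
print(f"Integrated well intensity: {i_sample:.0f}; estimated L-fraction: {fraction_l:.2f}")
```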
c(ii). High throughput detection of dipyridamole
A simple, sensitive and specific method has been developed for high throughput detection of the vasodilator dipyridamole[11]. The proposed method is based on a chemiluminescence imaging assay with MIP recognition providing selectivity.
Molecularly imprinted microspheres were prepared using precipitation polymerization with methacrylic acid (MAA) as functional monomer, trimethylolpropane trimethacrylate (TRIM) as the crosslinker and dipyridamole as the template. Non-imprinted polymer (NIP) was prepared without template to use as a control. The microspheres were coated in 96-well microtitre plates using 0.1% PVA as glue. After incubation with the sample, the amount of polymer-bound dipyridamole was determined by POCL. The emitted light was measured with a cooled high-resolution CCD camera. The intensity of the image of each well was determined as in subsection c(i).
Under the optimum conditions, there is a linear relationship between relative chemiluminescence intensity and concentration of dipyridamole ranging from 0.02 to 10 μg ml-1. The detection limit is 0.006 μg ml-1. The method was validated by measuring dipyridamole concentrations in spiked urine samples. Tolerance for a number of normal constituents of urine was demonstrated to be much greater with MIP than with NIP. MIP-based chemiluminescence imaging exhibits high selectivity and sensitivity to dipyridamole, combined with high sample throughput and economy (50 μl/well)[12].
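Working up such a calibration amounts to a straightforward linear least-squares fit of relative intensity against concentration, from which a detection limit can be estimated; the sketch below uses invented data and a simple 3-sigma criterion purely for illustration and does not reproduce the published figures.

```python
# Sketch of a linear chemiluminescence calibration with a 3-sigma detection-limit estimate.
# The concentrations, intensities and blank noise below are invented illustrative numbers.
import numpy as np

conc = np.array([0.02, 0.1, 0.5, 1.0, 5.0, 10.0])           # standards, micrograms per mL (assumed)
intensity = np.array([1.1, 5.3, 25.8, 51.0, 256.0, 508.0])  # relative CL intensity (assumed)

slope, intercept = np.polyfit(conc, intensity, 1)
blank_sd = 0.1                                               # assumed standard deviation of the blank

lod = 3 * blank_sd / slope
print(f"slope = {slope:.1f}, intercept = {intercept:.2f}, LOD ~ {lod:.4f} ug/mL")
```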
D6d. Spatial distribution of targets
Target molecules of chemiluminescence imaging include antigens, DNA sequences, enzymes and metabolites. Chemical processes in cells, tissues or whole animals may also be targeted. Methods used include imaging microscopy, immunohistochemistry (IHC), in situ hybridization (ISH); other chemical or enzymatic reactions may also be used. The chemiluminescence image is overlaid onto the visible light image and processed by background subtraction, contrast enhancement, pseudocolour and quantitation over defined areas; absolute quantitation needs reproducible conditions, a calibration system and appropriate sample properties.
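As a concrete illustration of the processing steps just listed, the sketch below (Python with NumPy and Matplotlib) subtracts a background estimated from an empty region, overlays the net chemiluminescence as a pseudocolour layer on the visible-light image, and integrates the signal over a defined area. All arrays, coordinates and thresholds are invented for illustration only.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
bright = rng.uniform(0.3, 0.7, (200, 200))            # visible-light (transmitted) image
chem = rng.poisson(3, (200, 200)).astype(float)       # chemiluminescence frame with background
chem[80:120, 90:150] += 40.0                          # region of genuine emission

background = np.median(chem[:20, :20])                # background from an empty corner
net = np.clip(chem - background, 0.0, None)           # background subtraction

plt.imshow(bright, cmap="gray")                       # visible-light image
plt.imshow(net, cmap="inferno", alpha=0.5)            # pseudocoloured chemiluminescence overlay
plt.colorbar(label="net counts")
plt.savefig("overlay.png")

roi = net[80:120, 90:150]                             # quantitation over a defined area
print("integrated ROI signal:", roi.sum())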
d(i). Imaging microscopy
Chemiluminescence imaging microscopy uses ordinary microscopes with optimized light collection. Light loss is minimized by having a simple lens coupling system; coverslips are dispensed with. The microscope, or at least the sample, is contained in a dark box to exclude ambient light and has a motorized micrometric stage to permit automatic adjustment. The sample is incubated with the chemiluminescence reagent until a steady-state emission is obtained. The objective lens has the highest numerical aperture (NA) compatible with acceptable focal aberration and depth of field. Dry, rather than oil-immersion, objectives are used and give adequate magnification and spatial resolution for the localization of analytes in single cells or tissue sections[13]; the detection limit for HRP is about 500 molecules/μm2.[14]
Chemiluminescence microscopy has become a standard tool in biomedical research. Photon detectors have been attached to microscopes and allow imaging of chemiluminescent probes and reporter genes in cells and tissues. Photon counting techniques allow days of continuous imaging without creating oversized files. Fluorescence imaging, however, gives better spatial resolution than chemiluminescence imaging and makes multiple determinations easier.
d(ii). Calcium imaging with chemiluminescence microscopy
Calcium can be determined in cytosol and in organelles by using the photoprotein aequorin, an intracellular calcium indicator extracted from the jellyfish Aequorea victoria. Natural aequorin consists of a polypeptide, apo-aequorin, covalently bound to a hydrophobic prosthetic group, coelenterazine. The principle of imaging free cytosolic calcium with aequorins[3] is the conformational change of aequorin molecules on calcium binding, causing coelenterazine to be oxidized to coelenteramide with production of carbon dioxide and emission of blue light (466 nm). Aequorin cannot penetrate the plasma membrane of the cell. Microinjection is the method of choice for determining cytosolic calcium in large cells. For small cells, cloning and transfection of the cDNA of apo-aequorin makes microinjection unnecessary, greatly simplifying calcium recording. Genetically expressed apo-aequorin contains no coelenterazine and so does not emit light. It is reconstituted as aequorin by soaking the specimens with coelenterazine. Apo-aequorin can be targeted to specific organelles by incorporating signal translocation sequences in the polypeptide chain.
Aequorin is sensitive and specific, though single cells, containing a low concentration, give feeble chemiluminescence. Intensity is proportional to cell volume and therefore to the cube of the diameter. Small cells present problems because the amount of aequorin is low. In a cell of 10 μm diameter, the resting calcium concentration leads to emission of less than one photon per hour – so fluorescence must be used instead. But elevated calcium concentrations or large numbers of cells can be imaged by chemiluminescence using photon-counting cameras. Natural aequorin accurately measures Ca2+ concentrations in the range 0.5 to 10 μmol dm-3, which is suitable for transient changes but for higher concentrations, a mutant form has been constructed which, by raising its dissociation constant and thus lowering its affinity for calcium, extends the working range up to 100 to 1000 μmol dm-3. It has the advantage over calcium-specific fluorescent probes of permitting real-time measurements over a long period; this is possible as there is no disturbance of the intracellular environment (including Ca2+ buffering capacity) because of the low aequorin concentration (about 5 nmol dm-3) but it has poorer resolution and it is used up rapidly by high calcium concentrations. Aequorin chemiluminescence, however, has an excellent signal to noise ratio and extremely low background noise.
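The dependence on cell size can be made explicit: if the aequorin concentration is uniform, the emitted intensity scales with cell volume and hence with the cube of the diameter. A rough illustration (the numbers are not from the source):

$I \propto V = \frac{\pi d^{3}}{6} \Rightarrow \frac{I(d = 20\ \mu m)}{I(d = 10\ \mu m)} = \left( \frac{20}{10} \right)^{3} = 8$

so halving the cell diameter cuts the available light roughly eight-fold, which is why small cells are difficult to image with natural aequorin.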
Chemiluminescence calcium imaging using aequorin is the method of choice for exploratory studies, since it is extremely sensitive and can detect a broad range of calcium concentrations. The kinetic order with respect to calcium concentration of the chemiluminescence reaction is 2.1 or higher, which gives inherent contrast enhancement. Unlike fluorescence, it does not require the analyst to make preliminary predictions or assumptions which exclude calcium signals outside the expected range. But it cannot match the high spatial resolution of fluorescence methods. In addition, chemiluminescence microscopy uses a large depth of field and optical sections are not yet possible.
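The contrast enhancement follows directly from this supra-linear kinetic order. Taking the exponent of 2.1 quoted above as a worked example:

$L \propto [Ca^{2+}]^{2.1} \Rightarrow \frac{L(2c)}{L(c)} = 2^{2.1} \approx 4.3$

so a two-fold rise in local calcium concentration produces roughly a four-fold rise in light output, sharpening the apparent boundaries of calcium-rich regions.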
d(iii). Aequorin associated with green fluorescent protein (GFP)
In Aequorea victoria, the chemiluminescent calcium-binding protein, aequorin, is associated with GFP. Calcium-sensitive bioluminescent reporter genes have been constructed that fuse GFP and aequorin to increase the quantum yield of calcium-induced bioluminescence[15]. Co-expression of GFP with free aequorin does not have the same effect. The constructs were varied by including different lengths of peptide spacer between the GFP and the aequorin; much more light was emitted in all cases and the constructs were much more stable in cytosol and more sensitive to calcium than recombinant apo-aequorin alone.
Resonance (non-radiative) energy transfer to the GFP chromophore from the excited oxidation product of coelenterazine depends on their relative positions. The peptide spacer is therefore flexible and of variable length. The green:blue ratio (500 nm:450 nm) of the light emitted by different constructs was measured 48 h after the introduction of the reporter genes into the cells (transfection). The green:blue ratio was increased by the covalent attachment of GFP to aequorin and further increased when the linker was added; as the linker was made longer, the wavelength of maximum emission increased and the bandwidth of the spectrum decreased. The efficiency of intramolecular energy transfer is enhanced to a level comparable to that achieved by resonance energy transfer in vivo due to the more favourable configuration made possible by the linker.
Using GFP-aequorin fusions it is possible to detect physiological calcium signals in single cells. Transfection of the cells is followed by aequorin reconstitution with coelenterazine. The result is calcium-induced photon emission detectable with a cooled, ICCD camera, using an integration time of only one second. Cytoplasmic aequorin had previously detected Ca2+ activities only by the use of a photomultiplier, which is more sensitive but lacks any spatial resolution, or by using targeted fluorescence probes, which give a quicker response. The use of the transgenes in which aequorin reports Ca2+ activity while GFP enhances bioluminescence could lead to real time imaging of calcium oscillations in integrated neural circuits in whole animals as well as in specific subcellular compartments. Aequorin and GFP-enhancement probes along with synthetic fluorescent dyes can be targeted to the endoplasmic reticulum (ER)[16], a membrane network within the cytoplasm of cells involved in the synthesis, modification, and transport of cellular materials; this has enabled the role of ER to be clarified.
D6e. Enzyme and metabolite mapping
If the sample’s enzyme activity has been preserved and if the sample is in such a condition that the substrate has access to the active site, enzyme activity can be localized by chemiluminescence imaging. The best spatial resolution is obtained by applying a chemiluminescent substrate directly to an enzyme, e.g., alkaline phosphatase can be detected by dioxetane phosphate. Enzyme reactions coupled with chemiluminescence can be used for metabolite mapping but give lower resolution. Metabolites can be determined in shock frozen tissue biopsies at femtomole levels and with micrometre resolution. The tissue is frozen as soon as possible to stop enzyme activity and fix the metabolites. The specimen is then placed on a temperature-controlled microscope stage and the chemiluminescence reagent is added. Emission intensity, recorded as soon as the temperature rises sufficiently, is converted into metabolite concentrations.
e(i). Luciferase-based detection of energy metabolites
Measurement[17] of the spatial distribution of metabolites, such as ATP, glucose, and lactate, in rapidly frozen tissue is based on enzymatic reactions that link the metabolites to luciferase with subsequent light emission. Using an array, cryosections are brought into contact with the enzymes in a reproducible way inducing emission of light from the section in proportion to the metabolite concentration, with high spatial resolution. There is a close correlation between the distribution of ATP and cell viability; there are also distribution differences between tumours and normal tissue. ATP, glucose, glycogen and lactate have been determined at microscopic levels and at high spatial resolution in arterial wall cryosections using luciferase-based chemiluminescence imaging[18], which is a powerful tool to measure energy metabolites. It has been used to quantify local metabolite concentrations in artery rings. Distributions of energy metabolites are heterogeneous under hypoxic in vitro conditions. Diffusion distances for oxygen and nutrients can be long and might make vessels prone to develop local deficiencies in energy metabolism that could contribute to atherogenesis.
e(ii). Expression of luciferases in living cells and organisms
Luciferases are enzymes that emit light in the presence of oxygen and a luciferin. They have been used for real-time, low-light imaging of gene expression; coding sequences have been detected by luciferase-labelled gene probes[19]. These labels include bacterial lux and eukaryotic luciferase luc and ruc genes. Different luciferases differ in the stability/variability of the emitted signal. Luciferases have served as reporters in a number of promoter search and targeted gene expression experiments. Photon-counting CCD imaging of luciferase has been used, for example, to show promoter activity in single pancreatic islet β-cells and regulation of human immunodeficiency virus (HIV) and cytomegalovirus. Luciferase imaging has also been used to trace bacterial and viral infection in vivo and to visualize the proliferation of tumour cells in animal models. Infected cells are readily detectable at an incidence of one in a million cells. Single bacterial cells, whether transformed or naturally luminescent, have also been imaged and variation in expression over time, due to fluctuations in metabolic activity, has been demonstrated. Low-light CCD imaging is in itself a non-invasive technique that is useful for observing (see subsection b(iv), Documentation of gels and membranes) intracellular gene expression and small-scale assays such as in situ hybridization (ISH) as well as for immunoassays, gels and blots, DNA probes and in vivo imaging (see section D6h). Slow-scan liquid nitrogen cooled CCD cameras are preferable for high resolution imaging with long exposures, but photon-counting CCD cameras are better for shorter exposure times. Flashing at frequencies greater than 1 Hz can be detected by ICCD cameras.
e(iii). Other applications of bioluminescence imaging
Bioluminescence imaging has been applied in experimental biomedical research, e.g., development of necrosis, and in other areas of biology[20]. It has also been used particularly on tumour biopsies in clinical oncology. In combination with immunohistochemistry, autoradiography or in situ hybridization it can be particularly powerful. It has been shown for squamous cell carcinomas that accumulation of lactate in the primary lesions is associated with a high risk of metastasis. In this way, metabolic mapping indicates the degree of malignancy and the prognosis of tumours; it has stimulated a number of fundamental investigations.
e(iv). Other methods of determination of metabolites
There are numerous other examples of methods to determine metabolites in living cells and tissues, including real-time imaging of metabolite production. Endogenous acetylcholinesterase (ACE) activity has been detected in rat coronal brain slices using coupled reactions with choline oxidase and horseradish peroxidase[21]. The reagent is optimized to minimize diffusion of emitting species, giving sharp localization and a very low background. This imaging assay is more predictive than in vitro systems and can be used to determine pathophysiological changes in ACE distribution or the effect of in vivo ACE inhibitors, which could be useful for screening candidate drugs.
Nitric oxide (NO) released from cell cultures and living tissue has been visualized by a reaction with luminol and hydrogen peroxide to yield photons which were counted using a microscope coupled to a photon counting camera, giving new insight into release time course and diffusion profile[22]. The method allowed integration times in the order of minutes to improve signal-to-noise ratio. However, the high sensitivity of this method also makes it possible to generate an image in seconds, allowing the production of real time moving pictures. This method has demonstrated potential for real time imaging of NO formation, with high temporal and spatial resolution. There was little earlier knowledge of this phenomenon due to the short half-life of NO.
D6f. In situ hybridization (ISH) and immunohistochemistry (IHC)
ISH and IHC are techniques that localize analytes in a wide range of suitable specimens such as cellular smears, or frozen or paraffin-embedded sections. Chemiluminescence detection does not require any special specimen preparation, but an accurately controlled section thickness of from 3 to 5 μm is necessary for reproducibility. Incorporating chemiluminescence detection (CL-IHC and CL-ISH) increases sensitivity compared with colorimetry or fluorometry. This adds reliable and accurate quantitative evaluation of spatial distribution to the specificity of the probe. The “theoretical” limit of detection of the enzyme label by chemiluminescence is $10^{-21}$ to $10^{-18}$ mol; as a detector for ISH, chemiluminescence is almost as sensitive as $^{35}$S autoradiography, giving a nontoxic alternative to the use of radioactivity[23].
Localization within cells of nucleic acid sequences, e.g., the sites of genes in chromosomes, can be achieved by hybridization to complementary nucleic acid probes. The two general types of in situ hybridization involve nuclear DNA and cellular RNA respectively; they are conceptually similar but differ in practical detail. The technique is usually performed on specimens prepared for light microscopy. It is claimed that little or no microscopic training is necessary to evaluate the chemiluminescence images.
Fig. D6.1 – Flow chart of operations for the performance of in situ hybridization. (All incubations are at room temperature).
f(i). CL-ISH assay of human papilloma virus (HPV)
ISH involves nucleic acid probe hybridization with DNA or RNA, endogenous to the specimen or exogenous (viral/bacterial). The procedure is summarized in Fig. D6.1. Sensitivity is increased by indirect labelling, in which the label binds with a biospecific chemiluminescence reagent, e.g., biotin binds to streptavidin; fluorescein or digoxigenin bind to their respective antibodies, the chemiluminescence reagent having a covalently-bound signalling group (usually AP or HRP). ISH of HPV can be imaged using a digoxigenin-labelled gene probe followed by an HRP-labelled anti-digoxigenin antibody and a chemiluminescence reagent[24]. To localize the virus, the chemiluminescence image (with intensities represented by pseudocolours) is overlaid onto a transmitted light image. ISH has also been performed on three human carcinoma cell lines and 40 biopsy specimens of human cervical neoplastic and preneoplastic lesions by using biotin-labelled complementary DNA probes of HPV, detected by HRP-labelled secondary antibodies; the chemiluminescence was detected by an ICCD camera[15]. After only 10 min of photon accumulation, on cell line smears as well as on serial tissue sections, chemiluminescence gave comparable results to those obtained by a 3-week exposure for $^{35}$S-autoradiography.
CL-ISH is quantitative because chemiluminescence is proportional to the enzyme activity of the label, and to the number of gene copies per cell. In a separate study of cytomegalovirus, chemiluminescence was proportional to the number of cells infected (following virus replication).
f(ii). CL-ISH of cytomegalovirus
An early ISH assay of cytomegalovirus DNA in infected human fibroblasts[13] used digoxigenin-labelled probes and AP-labelled anti-digoxigenin antibody. Employing a low-light imaging luminograph, 400 amol of AP were detected using 1,2-dioxetanes. Chemiluminescence was intense and stable, making possible quantitation within single cells, with a spatial resolution of 1 μm and very low background. Multiplexed CL-ISH assays have been developed in which probes with different enzyme labels detect different targets. One example of such techniques localizes the DNA of herpes simplex and cytomegalovirus in the same specimen using the following protocol. The faster HRP/luminol system is added to the specimen first and chemiluminescence is imaged; the specimen is given a short wash, then AP/dioxetane is added and a second chemiluminescence image is recorded. A longer wash is needed if AP is added first.
f(iii). CL-ISH of parvovirus B19 nucleic acids in single infected cells
Human parvovirus B19 is responsible for a wide range of diseases. CL-ISH gives high resolution, providing precise localization and quantitative detection of the viral nucleic acids in single cells in cultures at different times after infection, giving an objective evaluation of the infection process with higher sensitivity than colorimetric ISH detection assessed by panels of observers. The improved sensitivity of CL-ISH detects more positive cells per sample, making possible earlier diagnosis.
A peptide nucleic acid (PNA) has been developed which has improved specificity and faster, stronger binding than other DNA probes. The assay is based on the use of a biotin-labelled PNA probe which is detected by a streptavidin-linked alkaline phosphatase (AP), using the well-known biotin-streptavidin affinity:
PNA–biotin + streptavidin–AP → PNA–biotin–streptavidin–AP
adamantyl 1,2-dioxetane phosphate + AP → excited fragmentation products → light
The chemiluminescence signal which arises was quantified and imaged with an ultrasensitive nitrogen-cooled CCD camera connected to an epifluorescence microscope with high-transmission optics and modified for acquisition of chemiluminescence. A threshold signal (representing non-specific binding of the probe and endogenous alkaline phosphatase activity) was established using mock-infected cells as negative controls. Following a B19 virus infectious cycle the percentage of infected cells, which reached its maximum at 24 h after infection, could be accurately monitored. The advantages of chemiluminescence detection (high detectability and wide linear range) allow the quantitative analysis of viral nucleic acids in infected single cells, showing a continuous increase with time after infection. Such investigations could be powerful tools for the assessment and diagnosis of viral infections and for measuring the virus load of infected cells[25].
f(iv). IHC with chemiluminescence detection (CL-IHC)
IHC involves the use of antibodies that bind to endogenous, viral or bacterial antigens (usually a protein) with subsequent detection by enzyme-conjugated antibodies. CL-IHC detects epithelium in thyroid tissue by HRP-labelled antibodies and luminol/H2O2, with adequate resolution and greater sensitivity than colorimetry or fluorescence. CL-IHC can also be applied with advantage to Interleukin 8 (IL-8) localization in gastric biopsy specimens infected by Helicobacter pylori, an organism associated with gastric ulcers. It shows with greater sensitivity than other detection systems the variability of the IL-8 concentration in the mucosa and the foci of high concentration in the epithelial cells.
Fig. D6.2 – Flow chart of operations for the performance of immunohistochemistry. (All incubations are at room temperature).
f(v). HPV and p16(INK4A) marker in cervical cancer
Cervical cancers (cervical intraepithelial neoplasms, CIN) are classified into low- (CIN1) or high-grade (CIN2 or CIN3) in order to predict the risk of progression of early lesions and enable decisions to be made concerning surgical intervention. Judgments based on histology are imprecise in that different observers assign different grades to the same biopsy specimen. One way of overcoming this difficulty is to redefine the diagnostic criteria in terms of analytical chemistry.
Fig. D6.3 – Diagrammatic representation of the localization of p16INK4A by CL-IHC in a biopsy section of a cervical cancer. Paler shades show increased chemiluminescence emission. (An actual chemiluminescence image is shown in Fig. 1 in reference 27, on which this diagram is based.)
An immunohistochemical assay (see Fig. D6.2) with chemiluminescence detection (CL-IHC) has been used to quantitatively evaluate the overexpression of the protein p16INK4A and its localization in the epithelium of samples from cervical cancers and from non-cancerous cervical lesions. Fig. D6.3 shows that chemiluminescence (and hence p16INK4A protein content) generally increases from left to right. High-grade lesions give generally more intense chemiluminescence signals in the epithelium than low-grade and show a different distribution of p16INK4A protein. From the intensity of the chemiluminescence signal and the percentage of the epithelium involved in the overexpression of p16INK4A an expression score was obtained which discriminated well among different lesions. A cut-off value was determined to distinguish between low and high grades. The differences between the average scores of different CIN grades were statistically significant[26].
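The precise scoring formula is not reproduced here, but a score of this kind can be thought of as combining the two measured quantities; one simple hypothetical form would be

$\mathrm{score} = \bar{I} \times f$

where $\bar{I}$ is the mean background-corrected chemiluminescence intensity in the epithelium and $f$ is the fraction (between 0 and 1) of the epithelium showing overexpression. The actual weighting used in the study may differ.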
Diagrammatic representation of the co-localization of p16INK4A and HPV DNA in a tissue section from a cervical biopsy. (A) transmitted light microphotographic image, (B) CL-IHC image, the lighter tones showing p16INK4A, (C) CL-ISH image, the lighter tones showing HPV DNA, (D) CL-ISH image pseudocoloured in blue, yellow and red to indicate increasing chemiluminescence intensity. (B), (C) and (D) show images overlaid onto a transmitted light image to show the localization of the signal. (Actual chemiluminescence images are shown in Figs. 3 and 4 in reference 27, on which this diagram is based.)
The determination of p16INK4A overexpression by CL-IHC used AP as the enzyme label and then, after washing in buffer, HPV DNA was determined by CL-ISH[27] (see Fig. D6.1 and section f(i)) using HRP as label to avoid interference between the two assays[28]. To circumvent the non-equivalence of consecutive tissue sections, the two assays were carried out on the same sample. The assays cannot be carried out in the reverse order as the high-temperature step in ISH denatures the p16 protein to be determined by IHC. The high detectability of chemiluminescence gives improved discrimination between lesions as non-cancerous, CIN1 or high-grade CIN. This could become an objective and accurate diagnostic test.
f(vi). Mucosal human papilloma virus in malignant melanomas
High-risk (HR) mucosal human papilloma virus (HPV) is strongly associated with cancer. It has been found in primary melanoma and in pigmented skin blemishes (birthmarks, moles) but has rarely been reported in normal skin, which is instead commonly infected with other relatively harmless strains of HPV. HPV DNA in skin cancer has been detected by polymerase chain reaction (PCR). In order to understand the relationship between HPVs and primary melanoma, it is necessary to know whether the presence of HPV is localized in cancer cells rather than in normal skin cells present in the tumour biopsy, what proportion of the cells harbours the virus or whether it can be due to contamination of the tumour surface by viruses from healthy skin. Because PCR methods measure only total DNA they are not suitable to ascertain this.
Fig. D6.5 – Schematic representation of a tissue section that has undergone the combined procedures of FL-ISH for HPV DNA and CL-IHC for tumour marker HMB-45. The large coloured dots represent cells. Red pseudocolour was assigned to chemiluminescence signals and yellow to fluorescence signals, different signal intensities represented by different shades. Colocalization of HPV and HMB-45 is represented by the combined pseudocolour, orange.
To localize HR-HPV a rapid, specific and very sensitive method has been developed that combines an enzyme-amplified fluorescence in situ hybridization (FL-ISH, see Fig. D6.1) for the detection of HPV nucleic acids (types 16 and 18, which are the types most likely to cause cancer) with a chemiluminescence immunohistochemistry (CL-IHC, see Fig. D6.2) method for the sequential detection, in the same section, of the tumoural melanocytic marker HMB-45. It is necessary to use the same section because the melanoma cells are distributed heterogeneously in the specimens. HMB-45 determination is an indicator of melanoma cell differentiation and is widely used in diagnostic pathology. Digital images of FL-ISH and CL-IHC were separately recorded, assigned different pseudocolours (see Fig. D6.5) and merged using specific software for image analysis. The results demonstrated a sharp colocalization (to an extent of about 70% of the total luminescent area of the specimen) of HPV nucleic acids and the melanoma marker in the same biopsy sections. In smaller areas, HPV was detected without HMB-45 (9.5% of total) or HMB-45 without HPV (20.5%). This demonstrates that viral nucleic acids were specifically present in melanoma cells and supports a possible active role of HPV in malignant melanoma[29].
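The percentages quoted above are essentially areal overlaps between two binary masks, one derived from the fluorescence (HPV) image and one from the chemiluminescence (HMB-45) image. A minimal NumPy sketch of the bookkeeping, with randomly generated masks standing in for the real segmented images:

import numpy as np

rng = np.random.default_rng(3)
hpv_mask = rng.random((300, 300)) > 0.7        # pixels positive for HPV (FL-ISH)
hmb_mask = rng.random((300, 300)) > 0.6        # pixels positive for HMB-45 (CL-IHC)

luminescent = hpv_mask | hmb_mask              # total luminescent area
both = hpv_mask & hmb_mask                     # colocalized pixels (rendered orange)
hpv_only = hpv_mask & ~hmb_mask
hmb_only = hmb_mask & ~hpv_mask

total = luminescent.sum()
print("colocalized :", round(100 * both.sum() / total, 1), "% of luminescent area")
print("HPV only    :", round(100 * hpv_only.sum() / total, 1), "%")
print("HMB-45 only :", round(100 * hmb_only.sum() / total, 1), "%")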
D6g. Chemiluminescence imaging of fluorescence reporters
Fluorescence detection remains of great value for in situ hybridization and in immunohistochemistry, particularly because of its greater precision of spatial localization compared with chemiluminescence. Chemiluminescence can, however, be harnessed as a means of exciting fluorescent probes and labels alternative to photo-excitation. Two examples of the principle are considered in this section.
g(i). Peroxy-oxalate chemiluminescence
Using imaging chip-based devices, detection of aqueous peroxyoxalate chemiluminescence (POCL) from oxamide I in aqueous environments has been reported[30] for fluorescence-labelled analytes and proved to be at least as sensitive as that using direct fluorescence detection requiring a light source for excitation. Using a CCD camera to record the chemiluminescence intensity from a 1000-fold range of analyte concentrations, POCL detection sensitivity of fluorescence-labelled immunoglobulins on a nitrocellulose membrane was investigated. Aqueous POCL of Staphylococcus aureus enterotoxin B (SEB) and its antibody were also used to demonstrate immuno- and affinity-detection using a CCD camera. SEB was detected by an immune sandwich assay in which SEB was captured by sheep polyclonal antibody spotted onto a nitrocellulose membrane and then itself captured a mouse monoclonal antibody, which was detected by fluorescence-labelled anti-mouse antibody. Affinity detection of biotin-labelled anti-SEB antibody used fluorescence-labelled streptavidin.
Simultaneous detection by POCL of bovine serum albumin labelled with two different fluorescent labels has been demonstrated, using contact imaging with a CMOS colour imaging chip. The proteins were spotted onto a membrane disc fixed to a cover slip which was placed on the sensing surface of the chip. They were visible on 8 s exposure as red and green spots respectively; a mixture of the labelled samples emitted yellow light. This procedure might be applicable to reading microarrays.
g(ii). Bioluminescence resonance energy transfer (BRET)
A self-illuminating fluorescence reporter, comprising a dye conjugated to AP, has been demonstrated “in principle” for imaging detection using a CCD camera or a CMOS colour chip, making possible the imaging of fluorescent signals without the need for an external light source or sophisticated optics[31]. It is based on bioluminescence resonance energy transfer (BRET), an example of which, already cited, is the use of GFP to enhance the light emission from the photoprotein aequorin (see section d(iii)). The efficiency of BRET is increased by minimizing the distance between the bioluminescent energy donor and the fluorescent acceptor and is also found to depend on the ratio of AP to fluorophore in the conjugate, on the fluorescent dye used and on the chemiluminescent substrate. Chemiluminescence detection is low-cost, suitable for low concentrations and portable but diffusion of the luminescent products leads to poor spatial resolution. BRET is a potential solution to this problem, but it has not yet been applied to a real analytical problem.
In the demonstration, antibody, immobilized on the CMOS surface, captured a biotin-labelled target molecule that was then bound to the streptavidin-labelled AP-dye conjugate. The AP was used to generate light and the captured array images were viewed on a computer monitor. Images were also obtained by using a CCD camera. The chemiluminescent substrate for AP emitted at 450 nm; the energy from this emission was transferred to the fluorescent dye. This resulted in a second light emission with a longer wavelength (580 nm), which was localized at the position of target molecules, avoiding the problem of diffusion of the chemiluminescent product. In this way, image spatial resolution was greatly improved compared with conventional chemiluminescence detection. The shorter wavelength first emission that escaped absorption by the dye was removed by a high pass filter.
D6h. Whole-organ and whole-organism imaging
The use of cameras remote from the site of light emission makes it possible to image events occurring in the interior of organs and organisms. This technique can be applied to the study of a wide range of phenomena such as tumour growth, metastasis and drug efficacy, assessed by injecting and imaging recombinant light-emitting tumour cells[32], which can be used as probes for tumour location. Another application of molecular imaging techniques is the non-invasive monitoring of transplanted embryonic cardiomyoblasts expressing the firefly luciferase (Fluc) reporter gene[33]. The movement in the rat gut of bioluminescent E. coli (expressing luciferase and the enzyme necessary for substrate synthesis) has also been imaged[34].
Fig. D6.6 – Diagrammatic representation of Renilla luciferase expression in a transgenic tobacco leaf, imaged against a dark background. Light emission is indicated by white or grey areas. (The diagram is based on the image reproduced in figure 1 of reference 34)
Luciferase enzymes used to label cells, pathogens, and genes are internal indicators that can be detected externally. Transgenic organisms have been produced in which the gene for the luciferase of Renilla reniformis functions stably in tobacco (Nicotiana tabacum), tomato (Lycopersicon esculentum) and potato (Solanum tuberosum). Strong light emission was imaged with a low-light video camera after only a few seconds immersion of leaves, slices and seedlings (see Fig. D6.6) in 3 μmol dm-3 2-benzyl luciferin solution; at this concentration, the substrate was nontoxic and no other abnormalities were apparent[35].
Luciferase imaging enables complex gene activation effects to be modelled and observed in live animals[19]. Bioluminescent reporters for given biological processes have been used widely in cell biology; in whole animal models, including light-producing transgenic animals as models of disease, they are useful in drug discovery and development. In vivo imaging of intact organs also furthers the understanding of biological processes. The application of this technology to living animal models of infectious disease has provided insights into disease processes, therapeutic efficacy and new mechanisms by which pathogens may avoid host defences[36]. Progress of infections and efficacy of treatment are assessed by bioluminescent markers, e.g., light-emitting pathogens. Rapid, accessible high throughput screening is effective in vivo for pharmacokinetics, toxicology and target validation. Study of spatio-temporal patterns helps to characterize the site and time of action of drugs. There is also a possible clinical use in gene therapy and assessing gene vaccine delivery and efficacy.
There are several advantages in the use of in vivo imaging. In continuous monitoring each animal serves as its own control – introducing less variability than comparing groups of animals each analyzed at a different time; in addition, fewer animals are used in the experiments. Multiplex in vivo assays are also possible, using two or more reporters in the same animal. Chemiluminescence imaging is validated by the correlation of bioluminescence with culture cell counts (e.g., of E. coli). Investigations of this kind increase the number of data and can guide tissue sampling for subsequent biochemistry or histology.
There are also several drawbacks. Red and infrared light (590-800 nm) penetrates tissue well, but blue/green light (400-590 nm), usually the bulk of the bioluminescence emission, is strongly attenuated. Luciferase, however, is the most widely-used reporter and has a broad emission including red light. The spatial resolution (3-5 mm) is worse than in magnetic resonance imaging or computed tomography.
Some pathological processes result spontaneously in light production. This is due to a weak spontaneous photon emission associated with oxidative phenomena, such as oxygen free radical (OFR) formation (ADD LINK), in whole organs removed from living animals. OFR have been imaged in rat livers that have been subjected to ischaemia (occlusion of blood supply) and reperfusion (restoration of blood supply), showing distribution in space and time of superoxide radicals on the liver surface and the effects on them of antioxidants, which remove OFR and can be screened by this model. The roles of aging, ethanol consumption and fat deposition on OFR formation in the liver have also been assessed. This system can be used to monitor the storage of organs for transplantation and to test agents and procedures for preserving them. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation/4.06%3A_Chemiluminescence_Imaging.txt |
Electrochemiluminescence is chemiluminescence arising as a result of electrochemical reactions. It includes electrochemical initiation of ordinary chemiluminescent reactions, electrochemical modification of an analyte enabling it to take part in a chemiluminescent reaction, or electron transfer reactions between radicals or ions generated at electrodes. Prominent in the work done on electrochemiluminescence are reactions involving polyaromatic hydrocarbons or transition metal complexes, especially those of ruthenium, palladium, osmium and platinum.
Applications have made use of the sensitivity, selectivity and wide working range of analytical chemiluminescence, but electrochemiluminescence offers additional advantages without adding much to the inexpensive instrumentation[1]. Electrodes can be designed to achieve maximum detection of the light emitted and electrochemical measurements can be made simultaneously with the light output. Generation of chemiluminescence reagents at electrodes gives control over the course of light producing reactions, which can effectively be switched on and off by alteration of the applied potential; this is particularly useful when using unstable reagents or intermediates. Other possible benefits include generation of reagents from inactive precursors and regeneration of reagents, which permits the use of lower concentrations or immobilization of the reagents on the electrode. Analytes can also be regenerated, so that each analyte molecule can produce many photons, increasing sensitivity, or they can be modified to make them detectable by the chemiluminescence reaction in use. Electrochemiluminescence can be coupled with high performance liquid chromatography or with capillary electrophoresis.
The usefulness of tris(2,2′-bipyridyl)ruthenium(II) (discussed in chapter B9) in electrochemiluminescence rests on its activity with very high efficiency at easily accessible potentials and ambient temperature in aqueous buffer solutions in the presence of dissolved oxygen and other impurities. The reaction sequence that leads to electrochemiluminescence is shown in equations D7.1 to D7.4:
(D7.1) Oxidation: $[Ru(bipy)_3]^{2+} - e^- \rightarrow [Ru(bipy)_3]^{3+}$
(D7.2) Reduction by analyte: $[Ru(bipy)_3]^{2+} + e^- \rightarrow [Ru(bipy)_3]^{+}$
(D7.3) Electron transfer: $[Ru(bipy)_3]^{3+} + [Ru(bipy)_3]^{+} \rightarrow [Ru(bipy)_3]^{2+} + [Ru(bipy)_3]^{2+*}$
(D7.4) Chemiluminescence: $[Ru(bipy)_3]^{2+*} \rightarrow [Ru(bipy)_3]^{2+} + \text{light}$
Figure D7.1 – A flow injection manifold for measuring electrochemiluminescence.
The oxidation occurs electrochemically at the anode, whereas the reduction is brought about chemically by the analyte in the free solution. Electron transfer and subsequent chemiluminescence also occur in the free solution close to the anode, where the [Ru(bipy)3]3+ is concentrated. Other analytes, e.g. alkylamines, are oxidized at the anode to form a highly reducing radical intermediate that reacts with [Ru(bipy)3]3+ to form [Ru(bipy)3]2+*, which emits light. Oxalates, on the other hand, are oxidized by [Ru(bipy)3]3+ to radicals that then reduce more [Ru(bipy)3]3+ to give [Ru(bipy)3]2+* and chemiluminescence.
Instrumentation for electrochemiluminescence differs from that for other chemiluminescence only in having a flow cell provided with working, counter and reference electrodes, regulated by a potentiostat, which is in turn controlled by the computer that receives input from the photomultiplier or other transducer that receives the light signals. Figure D7.1 shows the usual flow injection manifold used for measuring electrochemiluminescence. The flow cell is in a light-tight box to exclude ambient light. A more portable alternative is a probe containing a set of electrodes and a fibre optic bundle to carry emitted light to a photomultiplier. Ambient light is excluded by means of baffles in the channels that admit the test solution. Because it can be electrochemically regenerated, it is useful to immobilize [Ru(bipy)3]3+ in a cation exchange resin deposited on the electrode to form a sensor that does not need a continual reagent supply. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation/4.07%3A_Electrochemiluminescence.txt |
Photo-induced chemiluminescence (PICL) involves irradiating an analyte with ultra-violet light in order to convert it to a photoproduct of different chemiluminescence behaviour, usually substantially increased emission. Such reactions form the basis of highly sensitive and selective analytical techniques. Irradiating a molecule can break it into fragments of smaller molecular weight (photolysis) or can induce reactions such as oxidation, reduction, cyclization or isomerization. Direct photolysis involves absorption of photons by the target molecule; in indirect photolysis, the target molecule absorbs energy from another molecule that has previously absorbed photons[1].
Photochemistry is concerned with excited electronic states induced by the absorption of photons. Photoexcitation is more selective than thermal excitation and leads to a different energy distribution within the molecule. The excited molecule can undergo photochemical processes, the products of which are sometimes involved in side processes. In analytically useful photochemical reactions the light is strongly absorbed by the analyte but not by the photoproducts; the photochemical yield is high; the photoproducts are stable for as long as is needed to complete the analysis and are structurally rigid enough for the emission to have an adequate quantum yield. Successful analytical application also depends on appropriately designed photoreactors. When those conditions are fulfilled, using light has several advantages over the use of chemical derivatization. Lamps are inexpensive and their stable light output allows reproducible results. They differ in their spectral characteristics, which gives scope for increasing selectivity. The use of light has minimal environmental impact and can be effected in ambient conditions. Analysis times are shorter because photochemical reactions are fast and can be shortened further by optimizing reactor configuration or increasing lamp power. PICL has a linear relationship with analyte concentration over a wide concentration range and extends the range of analytes that can be detected by chemiluminescence. It is not necessary to identify or separate the photoproducts.
In PICL-based methods, the sample is irradiated on-line and subsequently merged with the chemiluminescence reagents prior to reaching the flow cell in front of the detector. Flow methods allow the irradiation time to be easily controlled and provide better reproducibility than stationary methods, coping better with the very fast rate of chemiluminescence reactions. Sample throughput, ease of automation and reagent consumption are also improved using flow methods.
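In a flow system the irradiation time is simply the residence time of the sample in the irradiated coil, fixed by the coil volume and the flow rate:

$t_{irr} = \frac{V_{coil}}{Q} = \frac{\pi r^{2} L}{Q}$

For example (purely illustrative values), 2 m of PTFE tubing of 0.8 mm internal diameter holds about 1 cm3 (1 ml) of solution, so at a flow rate of 1.0 ml min-1 the analyte is irradiated for roughly one minute; halving the coil length or doubling the flow rate halves the irradiation time.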
PICL has the same instrumentation as other chemiluminescence but, in addition, a photoreactor is required and this has two essential elements – a light source and a container for the sample. Lamps are selected on the basis of power and spectrum (continuous or discrete). Continuous spectra span a wide zone, whereas discrete spectra are series of individual lines in a narrow wavelength range. The mercury-xenon lamp provides a continuous spectrum and is used when its high power is necessary, though it needs cooling. The low-pressure mercury lamp generates little heat and is a typical discrete-spectrum lamp, emitting over the range 200-320 nm, maximally at 254 nm; most substances absorb in this zone. The absorption zone of the selected lamp must be the most useful for excitation and bond-breaking. In flow systems, the sample is usually contained in PTFE tubing, which admits little light but maximises its effect by repeated reflections from the inner tube surfaces; the tubing can be coiled around a low-power lamp. Batch methods instead use quartz cells, which are transparent to ultra-violet. Quartz is more inert than PTFE, but also more fragile. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation/4.08%3A_Photo-induced_chemiluminescence.txt |
Gas chromatography is a major separation method. It consists of the injection of a gaseous or liquid sample into a gaseous mobile phase which is passed through a column of solid support particles carrying a liquid stationary phase, maintained in an oven at a suitable temperature (which is usually above ambient but need not be above the boiling points of the analytes). Separation is the result of partition between the stationary and mobile phases and the separated constituents of the sample are usually detected by a flame ionization detector. A flow chart of the typical instrumentation for gas chromatography is illustrated in figure D9.1.
Figure D9.1: Flow chart of a gas chromatograph.
Samples of increasing complexity are being analysed by gas chromatography. Universal detectors, such as the flame ionization detector, are not adequate for such a task but selective detectors can provide the additional discrimination that is needed. Nitrogen- and sulfur-containing compounds commonly occur as trace-level analytes in complex samples and highly selective detectors have been developed. Among these, the nitrogen chemiluminescence detector and the sulfur chemiluminescence detector have emerged as powerful tools in gas chromatography, supercritical fluid chromatography and high performance liquid chromatography; stand-alone nitrogen/sulfur analysers can be based on the same chemiluminescence reactions. Detectors of either element are based on the same ozone-induced gas phase chemiluminescence[1]. The chemiluminescence is preceded by high temperature pyrolysis which oxidizes the nitrogen in the sample (RN) to nitric oxide (NO):
Oxidation:
$RN + O_2 → NO + CO_2 + H_2O \label{D9.1}$
and it is believed that the sulfur in the sample ($RS$) is converted first into sulfur dioxide ($SO_2$), which is then reduced in the presence of hydrogen to sulfur monoxide ($SO$):
Oxidation:
$RS + O_2 → SO_2 + CO_2 + H_2O\label{D9.2}$
Reduction:
$SO_2 + H_2 → SO + H_2O\label{D9.3}$
Overall:
$RS + O_2 + H_2 → SO + CO_2 + H_2O\label{D9.4}$
These reactions produce the species that react with ozone, producing excited nitrogen dioxide and excited sulfur dioxide respectively (Equations $\ref{D9.5}$ and $\ref{D9.7}$):
Reaction with ozone:
$NO + O_3 → NO_2^* + O_2\label{D9.5}$
Chemiluminescence:
$NO_2^* → NO_2 + light (~ 1200 nm)\label{D9.6}$
Reaction with ozone:
$SO + O_3 → SO_2^* + O_2\label{D9.7}$
Chemiluminescence:
$SO_2^* → SO_2 + light (~ 360 nm)\label{D9.8}$
The nitrogen chemiluminescence reaction emits in the near infra-red (Equation $\ref{D9.6}$), whereas the sulfur reaction emits in the ultra-violet (Equation $\ref{D9.8}$). This wide spectral separation of the emission bands enables nitrogen and sulfur to be determined selectively. A few small gaseous molecules containing sulfur also enter into the chemiluminescent reaction with ozone without undergoing preliminary pyrolysis.
Figure D9.2: Flow diagram of a nitrogen-sulfur detector.
The instrumentation for nitrogen-sulfur chemiluminescence detection is depicted in Figure D9.2. The pyrolyser converts the analytes in the gas chromatograph column effluent into the corresponding chemiluminescent species, which pass to the reaction chamber where they react with ozone supplied by a generator. The light emitted is detected by a photomultiplier.
4.10: Chemiluminescence detection in high performance liquid chromatography
High performance liquid chromatography consists of the injection of a liquid sample into a liquid mobile phase which is passed through a column of solid or supported liquid stationary phase. Separation is the result of partition, adsorption, size exclusion or ion exchange between the stationary and mobile phases and the separated constituents of the sample are usually detected by an ultra-violet absorption detector. A flow chart of the typical instrumentation used is illustrated in figure D10.1.
Figure D10.1 – Flow chart of high performance liquid chromatography with chemiluminescence detection.
Coupling with chemiluminescence detection adds the sensitivity of this technique to selectivity of a powerful separation method. It requires measurement of the emitted light due to a post-column reaction between the analytes in the column eluents and the chemiluminescence reagents, which are delivered by additional pumps with the incorporation of devices for rapid mixing. Measurement of the chemiluminescence intensity at its maximum requires optimization of the transit time (dependent on length of tubing and flow rate) between the mixing point and the detector. The most important problem in designing the coupling instrumentation is ensuring compatibility between the conditions necessary for efficient chromatographic separation and those needed for intense chemiluminescence. Separation depends heavily on mobile phase composition, whereas chemiluminescence emission is known to be affected by solvent, pH, reaction temperature and the presence of enhancers and/or catalysts[1].
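A quick way to see how tubing length and flow rate set the transit time is to compute the residence time of the merged stream in the connecting tubing; the Python sketch below uses illustrative dimensions and flow rates, not values from any particular method.

import math

def transit_time_s(length_cm, id_cm, flow_ml_min):
    # Residence time (s) of the merged stream in tubing of the given dimensions.
    volume_ml = math.pi * (id_cm / 2.0) ** 2 * length_cm   # 1 cm^3 = 1 ml
    return 60.0 * volume_ml / flow_ml_min

# 30 cm of 0.5 mm i.d. tubing, 1.0 ml/min eluent plus 0.5 ml/min reagent
print(round(transit_time_s(30.0, 0.05, 1.5), 2), "s")

The tubing length (or flow rate) is then adjusted so that this transit time coincides with the time at which the chemiluminescence emission of the chosen reaction reaches its maximum.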
Figure D10.2 – Post-column instrumentation for measuring chemiluminescence after derivatization of analytes (PMT = photomultiplier tube; REC = recorder).
Interfaces between chromatography columns and chemiluminescence can become very complex. For example, peroxy-oxalate chemiluminescence is frequently coupled with HPLC. As it detects only fluorescent analytes (see chapter B5), successful detection depends on derivatization of the analytes eluted from the column before the addition of the chemiluminescence reagents. Figure D10.2 shows part of the post-column arrangements used for the determination of catecholamines by peroxy-oxalate chemiluminescence after reaction with ethylene diamine, which produces fluorescent derivatives[2]. For simplicity, the arrangements for different temperatures in different parts of the system have not been shown.
4.11: Chemiluminescence Detection in Capillary Electrophoresis
Capillary electrophoresis has outstanding resolving power for extremely small samples, but this poses a challenge for detectors. Compared with other candidates, chemiluminescence detection has the advantages of being highly sensitive and requiring inexpensive equipment of simple design. In addition it is not affected by the high voltage used in the separation system, a particular problem for electrochemical detection, which is also highly sensitive. Ultra-violet absorbance detectors also have low cost and are widely used, but narrow capillaries make it difficult to arrange for long enough optical path lengths. Laser induced fluorescence has high sensitivity, but the equipment is costly and pre- or post-column derivatization of nonfluorescent analytes is necessary[1].
Figure D11.1 – On-column coaxial flow interface between capillary electrophoresis and chemiluminescence detection (adapted from reference D9.1).
As with HPLC, there is an inherent problem of compatibility between the conditions needed for separation and those needed for chemiluminescence. Additionally, there is a potential problem with the stability of chemiluminescence reagents. Both of these are addressed by using the post-column mode rather than pre-column. Post-column interfaces are devices for mixing the eluent with the chemiluminescence reagents and for this purpose designs may make use of merging flow, coaxial flow or reservoir mixing. Interfaces may also be classified as off-, on- or end-column, depending on the site of detection and on whether this is isolated from the high voltage supply used for capillary electrophoresis.
The simplest interface, off-column and merging flow, did not find widespread application. Buffer flowed from a reservoir through the separation capillary and merged with the reagent at a four-way connector at the end of the column. The outlet arm carried the mixture to the chemiluminescence reaction coil and flow cell adjacent to a photomultiplier, while the fourth arm connected through a semi-permeable membrane to a second buffer reservoir (containing the ground electrode) immediately downstream of the merging point. This arrangement isolates the high voltage from the detection zone.
In contrast, an on-column coaxial flow interface has proved to be effective for a large number of applications[2]. Figure D11.1 shows the detector is located at the capillary outlet tip, so detection is “on-column”. The ground electrode is located in the effluent reservoir so that detection takes place within the high voltage zone. The separation capillary outlet is inserted coaxially into the reaction tube, giving rise to minimum turbulence and reproducible mixing. | textbooks/chem/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation/4.09%3A_Chemiluminescence_detection_in_gas_chromatography.txt |
This module introduces students to the process of developing an analytical method using, as a case study, the quantitative analysis of eight analytes in the medicinal plant Danshen using a combination of a microwave extraction to isolate the analytes and HPLC with UV detection to determine their concentrations. The module is divided into nine parts:
• Part I. Context of Analytical Problem
• Part II. Separating and Analyzing Mixtures Using HPLC
• Part III. Extracting Analytes From Samples
• Part IV. Selecting the Solvent, Temperature, and Microwave Power
• Part V. Optimizing the Solvent-to-Solid Ratio and the Extraction Time
• Part VI. Finding the Global Optimum Across All Analytes
• Part VII. Verifying the Analytical Method’s Accuracy
• Part VIII. Applying the Analytical Method
• Part IX. Closing Thoughts
Interspersed within the module’s narrative are a series of investigations, each of which asks students to stop and consider one or more important issues. As students progress through the module they are introduced to chromatographic separations, solvent extractions, response surfaces, one-factor-at-a-time optimizations, central-composite designs, desirability functions, and spike recoveries.
This exercise is based loosely on work described in the paper
“Simultaneous extraction of hydrosoluble phenolic acids and liposoluble tanshinones from Salvia miltiorrhiza radix by an optimized microwave-assisted extraction method”
the full reference for which is Fang, X.; Wang, J; Zhang, S. Zhao, Q.; Zheng, Z.; and Song, Z. Sep. Purif. Technol. 2012, 86, 149-156 (DOI:10.1016/j.seppur.2011.10.039). Although most of the data in this exercise are drawn directly from or extrapolated from data in the original paper, additional data are drawn from other papers or generated artificially; specific details of differences between the data in the original paper and the data in this case study are included in the instructor’s manual.
• Student Handout (Word, PDF)
• Assessment Questions: For Assessment questions that accompany this module, please contact David Harvey ([email protected])
Developing an Analytical Method for the Analysis of a Medicinal Plant
In the presence of hydrogen peroxide, H2O2, and sulfuric acid, H2SO4, a solution that contains vanadium ions forms a reddish-brown color. Although the exact chemistry of the reaction is uncertain, it serves as a simple qualitative "spot test" for vanadium: the formation of a reddish-brown color upon adding several drops of H2O2 and H2SO4 to a sample is a positive test for vanadium.
A spot test provides nothing more than a simple binary response: yes, the sample contains vanadium, or no, the sample does not contain vanadium (at least at a concentration we can detect). Suppose we wish to adapt this qualitative test into a more quantitative method of analysis, one that allows us to report the concentration of vanadium in a sample. How might we accomplish this?
Given the reddish-brown color of a positive test, we might choose the solution's absorbance at a wavelength of 450 nm as the analytical signal. In addition to the concentration of vanadium, the intensity of the solution's color—and thus its absorbance—also depends on the amounts of H2O2 and H2SO4 added; in particular, a large excess of hydrogen peroxide decreases absorbance as the solution’s color changes from red-brown to yellow. As well, we must ensure that the development of color is reproducible and that the method yields accurate and precise results. We also need to determine if the method is susceptible to interferences and determine the smallest concentration of vanadium we can report with confidence. Finally, we want a rugged method, so that different analysts obtain similar results when analyzing the same sample. We call this process of optimizing and verifying a procedure method development.
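Once the colour development is reproducible, the absorbance at 450 nm can be tied to concentration through an external-standard calibration. The numbers in the short Python sketch below are invented purely to show the calculation; they are not measured values for this chemistry.

import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                  # vanadium standards (ppm), hypothetical
absorbance = np.array([0.002, 0.151, 0.298, 0.452, 0.601])  # A at 450 nm, hypothetical

slope, intercept = np.polyfit(conc, absorbance, 1)          # least-squares calibration line
A_sample = 0.365
print("estimated [V] =", round((A_sample - intercept) / slope, 2), "ppm")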
This case study introduces method development within the context of an analysis for several pharmacologically important constituents in a medicinal plant using a combination of a microwave extraction to isolate the analytes from the plant's roots and HPLC with UV detection to separate the analytes and to determine their concentrations.
Interspersed within the case study's narrative are a series of investigations, each of which asks you to stop and consider one or more important issues. Some of these investigations include data for you to analyze in the form of interactive figures, created using plot.ly, that allow you to manipulate the data and that provide access to the original data in the form of a spreadsheet. The image below shows the tools for interacting with the data, which are available when the cursor enters the figure; from left-to-right, the tools are:
• zoom by clicking and dragging within the figure
• pan from side-to-side by clicking and dragging
• zoom in
• zoom out
• autoscale (returns figure to original magnification; double-clicking within a figure also autoscales the data)
• show closest on hover (provides x-axis and y-axis values for one data set)
• compare data on hover (provides x-axis and y-axis values for all data sets)
• link to plot.ly
Some figures include data for multiple analytes or data sets and, as a consequence, include a legend; clicking on an analyte's or a data set's name in the figure's legend toggles on and off the display of the corresponding data. Clicking on the text "Play with this data!" provides access, via plot.ly, to the figure and its underlying data; if you have a (free) account with plot.ly, you can fork and edit the figure and data.
Part I. The Analytical Problem
Investigation 1: Properties of Danshen's Constituent Compounds
The dried root of Salvia miltiorrhiza—also known as red sage, Chinese sage, or Danshen, where "dan" and "shen" are Chinese for "red-colored" and "tonic herb," respectively—is a traditional Chinese herbal medicine used to treat a variety of cardiovascular and cerebrovascular diseases, presumably because of its ability to prevent the formation of blood clots and its ability to dilate blood vessels [1]. Danshen is widely available throughout China, and is available, although to a lesser extent, in Europe and in the United States. The drug Dantonic®, a formulation that includes Danshen, is approved in 26 countries for the treatment of and prevention of angina; it currently is in phase III testing for use in the United States [2].
As with any medicinal plant, the chemical composition of Danshen is complex with more than 70 constituent compounds identified in the literature. Early studies of Danshen's chemical composition focused on lipophilic molecules, the four most important examples of which are:
Danshen also contains hydrophilic constituents, the four main examples of which are:
Investigation 1
What does it mean to characterize a molecule as hydrophilic or as lipophilic? How do they differ in terms of their chemical or physical properties [3]? Are there structural differences between these two groups of molecules that you can use to classify them as hydrophilic or as lipophilic? Consider the molecules below, both minor constituents of Danshen, and classify each molecule as lipophilic or hydrophilic.
[1] For a review of Danshen’s medicinal properties and uses, see “Danshen: An Overview of Its Chemistry, Pharmacology, Pharmacokinetics, and Clinical Uses,” the full reference for which is Zhou, L.; Zuo, Z.; Chow, M. S. S. J. Clin. Pharmacol. 2005, 45, 1345-1359 (DOI).
[2] You can view details regarding the phase III trial at clinicaltrials.gov; the estimated completion date for the study is December 2016.
[3] A useful resource for exploring the chemical and physical properties of molecules is the Royal Society of Chemistry’s ChemSpider, a free database that provides access to the properties of over 30 million compounds.
Investigations 7–9: Solvent Extraction of Danshen
The chromatogram in Figure 1 was obtained using samples of the eight analytes purchased from commercial sources. Because the analytes are available in pure form, there was no need to complete an extraction prior to injecting the standard sample into the HPLC; however, to analyze samples of Danshen, we first must extract the analytes from its roots using a suitable solvent.
Investigation 7
Brewing coffee is nothing more than a simple solvent extraction, which makes it a useful and a familiar model for considering how a solvent extraction works. There are a variety of methods for brewing coffee that differ in how the solvent and the coffee are brought together. Investigate at least five of the following methods for preparing coffee: Turkish, French Press, Aeropress, Chemex, Pour Over, Stovetop, Vacuum Pot, Espresso, and Cold Brew. In what ways are these methods similar to each other and in what ways are they different from each other? What variables in the extraction process are most important in terms of their ability to extract caffeine, essential oils, and fragrances from coffee?
The most common method for extracting an analyte from a natural material—such as the roots, stems, and leaves of a medicinal plant—is to place a powdered sample in a suitable solvent and allow it to steep for 60 min at or near the solvent's boiling point. After filtering, the solid residue is extracted a second time and the two extracts combined to give a final sample [9].
Investigation 8
Why might a combination of high temperature, a lengthy extraction time, and the need for two extractions be undesirable when working with a medicinal plant such as Danshen?
Microwave-assisted solvent extractions are a promising method for addressing the limitations of a traditional solvent extraction because they use shorter extraction times and use smaller volumes of solvent [10]. In this case study we will develop a microwave-assisted solvent extraction for the quantitative analysis in Danshen of the four lipophilic and the four hydrophilic compounds identified earlier.
Investigation 9
What variables might we choose to control if we want to maximize the microwave extraction of Danshen's constituent compounds? For each variable you identify, predict how a change in the variable's value will affect the ability to extract from Danshen a hydrophilic compound, such as rosmarinic acid, and a lipophilic compound, such as tanshinone I.
[9] For a review of methods used for the quantitative analysis of Danshen, including different methods for extracting its active constituents, see "Advancement in Analysis of Salviae miltiorrhiza Radix et Rhizoma (Danshen)," the full reference for which is Li, Y-G.; Song, L.; Liu, M.; Hu, Z-B.; Wang, Z-T. J. Chromatogr. A. 2009, 1216, 1941-1953 (DOI).
[10] For additional information on microwave extractions, see "Analytical-scale microwave-assisted extraction," the full reference for which is Eskilsson, C. P.; Björklund, E. J. Chromatogr. A 2000, 902, 227–250 (DOI), and "Standardizing the World with Microwaves," the full reference for which is Erickson, B. Anal. Chem. 1998, 70, 467A–471A (DOI). The microwave ovens used for solvent extractions operate in essentially the same manner as microwave ovens found in the home, although they are designed to provide more control over the microwave's settings and to better handle the harsher chemical environment found in a laboratory.
Part VIII. Applying the Analytical Method
Investigation 34: Analysis of Wild and Cultivated Samples of Danshen
With our analytical method optimized and its accuracy verified, we turn, at last, to applying our method to the analysis of samples of Danshen roots. Table 6 provides absorbance values (in mAU) for danshensu and for tanshinone I in wild plants harvested from five different cities in the province of Shandong, China, and in five plants harvested from a single cultivated field in which good agricultural practices that emphasize agricultural sustainability are used.
Table 6. Results for Analysis of Danshen Samples
| Danshen Source | absorbance (mAU) for danshensu | absorbance (mAU) for tanshinone I |
|---|---|---|
| Wild Samples (Cities in Shandong Province) | | |
| Sanshangou | 21.6 | 123.8 |
| Yuezhuang | 10.3 | 55.3 |
| Dazhangzhuang | 11.8 | 67.6 |
| Pingse | 37.2 | 42.1 |
| Mengyin | 10.0 | 132.0 |
| Cultivated Samples (Lot Number) | | |
| 020208 | 23.4 | 136.6 |
| 020209 | 23.7 | 137.1 |
| 020210 | 23.3 | 137.5 |
| 020211 | 22.8 | 148.0 |
| 020212 | 23.5 | 150.8 |
Investigation 34
Calculate the concentration of danshensu and the concentration of tanshinone I in each sample (as mg analyte/g sample). For each set of samples—wild samples and cultivated samples—calculate the mean, the standard deviation, and the relative standard deviation for each analyte and comment on your results.
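Although you should set up these calculations yourself, the arithmetic is straightforward to script. The sketch below (written in Python, which is not part of the original module) shows one way to organize the work; it assumes the single-point calibration constants determined in Investigation 6 (k of approximately 1.605 mAU•mL/μg for danshensu and 1.599 mAU•mL/μg for tanshinone I) and the optimized extraction conditions of 35.0 mL of solvent per 1.000 g of sample.

```python
# Sketch: convert peak absorbances (mAU) to extraction yields (mg analyte/g sample)
# and summarize each group of samples. Assumes the k values from Investigation 6 and
# the optimized extraction conditions (35.0 mL of solvent per 1.000 g of sample).
from statistics import mean, stdev

V_mL, m_g = 35.0, 1.000                             # extraction volume and sample mass
k = {"danshensu": 1.605, "tanshinone I": 1.599}     # mAU·mL/µg, from Investigation 6

# absorbance values (mAU) from Table 6, listed as (danshensu, tanshinone I)
wild = {"Sanshangou": (21.6, 123.8), "Yuezhuang": (10.3, 55.3),
        "Dazhangzhuang": (11.8, 67.6), "Pingse": (37.2, 42.1), "Mengyin": (10.0, 132.0)}
cultivated = {"020208": (23.4, 136.6), "020209": (23.7, 137.1), "020210": (23.3, 137.5),
              "020211": (22.8, 148.0), "020212": (23.5, 150.8)}

def extraction_yield(A_mAU, k_analyte):
    """mg analyte per g sample: (A/k) µg/mL × V mL ÷ m g ÷ 1000 µg/mg."""
    return (A_mAU / k_analyte) * V_mL / (m_g * 1000)

for label, group in (("wild", wild), ("cultivated", cultivated)):
    for i, analyte in enumerate(("danshensu", "tanshinone I")):
        yields = [extraction_yield(A[i], k[analyte]) for A in group.values()]
        xbar, s = mean(yields), stdev(yields)
        print(f"{label:10s} {analyte:12s} mean = {xbar:.3f} mg/g, "
              f"s = {s:.3f} mg/g, RSD = {100 * s / xbar:.1f}%")
```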
Part IX. Summary of Case Study
Closing Thoughts
The results for Investigation 34 are reported as the concentration, in mg/g, of danshensu and tanshinone I in samples of Danshen roots. Despite reporting the results this way, we cannot assume these values are the actual concentrations of danshensu and tanshinone I in these samples; they are, instead, the concentrations of danshensu and tanshinone I extracted using 35.0 mL of a solvent that is 80% methanol and 20% water (by volume) per 1.000 g of sample, and using a microwave oven at 800 W to heat the solvent and sample for 7.50 min at 70°C. Different methods of extracting samples of Danshen give different extraction yields, some of which recover smaller amounts of the analytes (see Table 3 and Investigation 31) and some of which recover larger amounts of the analytes (see Figures 3–5 and Investigation 12).
Although our analytical method reports the concentrations in Danshen of extractible hydrophilic and lipophilic compounds instead of their total concentrations, the analysis still has value because ultimately we are interested in the concentrations of these compounds that are recovered easily from harvested plants. In addition, and as suggested by Investigation 34, our analytical method provides us with a standard method for comparing the relative potency of different sources of Danshen and for evaluating how changes in cultivation practices affect the relative potency of commercially grown Danshen. These are important and useful applications.
Developing an Analytical Method: Instructor’s Guide
Suggested Responses to Investigations & Supplementary Materials
This module introduces students to the process of developing an analytical method using, as a case study, the quantitative analysis of eight analytes in the medicinal plant Danshen using a combination of a microwave extraction to isolate the analytes and HPLC with UV detection to determine their concentrations. The module is divided into nine parts:
• Part I. Context of Analytical Problem
• Part II. Separating and Analyzing Mixtures Using HPLC
• Part III. Extracting Analytes From Samples
• Part IV. Selecting the Solvent, Temperature, and Microwave Power
• Part V. Optimizing the Solvent-to-Solid Ratio and the Extraction Time
• Part VI. Finding the Global Optimum Across All Analytes
• Part VII. Verifying the Analytical Method’s Accuracy
• Part VIII. Applying the Analytical Method
• Part IX. Closing Thoughts
Interspersed within the module’s narrative are a series of investigations, each of which asks students to stop and consider one or more important issues. Some of these investigations include data sets for students to analyze; for the data in the module’s figures, you may wish to have students use the interactive on-line versions that provide access to a cursor and the ability to pan and zoom. The on-line figures, created using Plotly (https://plot.ly/), also provide access to the underlying data in the form of a spreadsheet.
This exercise is based loosely on work described in the paper
“Simultaneous extraction of hydrosoluble phenolic acids and liposoluble tanshinones from Salvia miltiorrhiza radix by an optimized microwave-assisted extraction method”
the full reference for which is Fang, X.; Wang, J.; Zhang, S.; Zhao, Q.; Zheng, Z.; Song, Z. Sep. Purif. Technol. 2012, 86, 149-156 (DOI:10.1016/j.seppur.2011.10.039). Although most of the data in this exercise are drawn directly from or extrapolated from data in the original paper, additional data are drawn from other papers or generated artificially; specific details of differences between the data in the original paper and the data in this case study are discussed below as part of the suggested responses to the case study's investigations.
Suggested responses are presented in normal font.
Supplementary materials are in italic font.
Investigation 1
What does it mean to characterize a molecule as hydrophilic or as lipophilic? How do they differ in terms of their chemical or physical properties? Are there structural differences between these two groups of molecules that you can use to classify them as hydrophilic or as lipophilic? Consider the molecules below, both minor constituents of Danshen, and classify each molecule as lipophilic or hydrophilic.
Hydrophilic molecules form hydrogen bonds with water and are soluble in water and other polar solvents; not surprisingly, hydrophilic is derived from Ancient Greek for water loving. Lipophilic molecules, where lipos is Ancient Greek for fat, are soluble in fats, oils, lipids, and non-polar solvents. Hydrophilic molecules are more polar than lipophilic molecules, have more ionizable functional groups, and have more sites for hydrogen bonding.
For the eight constituents of Danshen included in this exercise, those that are hydrophilic are soluble, to varying extents, in water. Each hydrophilic compound has one or more ionizable carboxylic acid groups (–COOH) and, as the pKa values for these carboxylic acid functional groups are in the range 2.9–3.6, they are ionized and carry a negative charge at a neutral pH. The lipophilic constituents of Danshen do not have ionizable groups and they are not soluble in water, although they are soluble, to some extent, in polar organic solvents, such as methanol and ethanol.
Each lipophilic molecule in this exercise has three hydrogen bond acceptors and no hydrogen bond donors; the hydrophilic molecules, on the other hand, have between five (danshensu) and 12 (lithospermic acid) hydrogen bond acceptors, and between four (danshensu) or seven (lithospermic acid and salvianolic acid I) hydrogen bond donors.
Based on the structures of the two additional compounds, the one on the left is hydrophilic and the one on the right is lipophilic; the presence or absence of a carboxylic acid functional group provides for a definitive classification. The two compounds are salvianolic acid F (left) and dihydroisotanshinone I (right).
Note: The structures for lithospermic acid and salvianolic acid A in the original paper are incorrect in their stereochemistry around the alkene double bonds, which, as shown in this exercise, are all trans; the original paper shows the alkene double bond in lithospermic acid as cis, and shows one of the two alkene double bonds in salvianolic acid A as cis. The structure for lithospermic acid in the original paper incorrectly shows an –OH group on the five-membered ring; as shown in this exercise, it is a –COOH group.
Investigation 2
For this study we will use a reverse-phase HPLC equipped with a UV detector to monitor absorbance. What is a reverse-phase separation and how is it different from a normal-phase separation? How does the choice between a reverse-phase separation and a normal-phase separation affect the order in which analytes elute from an HPLC?
In a reverse-phase HPLC separation, the stationary phase is non-polar and the mobile phase is polar. For a normal-phase separation, the stationary phase is polar and the mobile phase is non-polar. Separations in HPLC depend on a difference in the solubility of the analytes in the mobile phase and in the stationary phase. In a reverse-phase separation, for example, analytes of lower polarity are more soluble in the non-polar stationary phase, spending more time in the stationary phase and eluting at a later time than more polar analytes. In a normal-phase separation, the order of elution is reversed, with less polar analytes spending more time in the mobile phase and eluting before more polar analytes.
Investigation 3
Using the data in Figure 1 determine each analyte’s retention time. Based on your answers to Investigation 1 and Investigation 2, does the relative order of elution make sense? Why or why not?
The retention times for the analytes are:
| hydrophilic compounds | tr (min) | lipophilic compounds | tr (min) |
|---|---|---|---|
| danshensu | 4.81 | dihydrotanshinone | 50.50 |
| rosmarinic acid | 27.93 | cryptotanshinone | 55.21 |
| lithospermic acid | 29.44 | tanshinone I | 56.47 |
| salvianolic acid A | 35.80 | tanshinone IIA | 62.60 |
As seen in Investigation 2, for a reverse-phase HPLC separation, we expect more polar compounds to elute earlier than less polar compounds, a trend we see here as all four hydrophilic compounds elute before the four lipophilic compounds. The trend in retention times within each group is harder to discern, particularly given the changing composition of the mobile phase; however, danshensu is significantly more soluble in water than the other hydrophilic compounds and elutes much earlier.
Note: The data used to create Figure 1 are not drawn directly from the original paper. Instead, the retention times and the relationships between peak height and analyte concentrations, in μg/mL, were determined using the HPLC data in Figure 8b and the corresponding extraction yields, in mg/g, from the first row of Table 3, obtained using a 1.00-g sample of Danshen and 35.0 mL of solvent. The resulting values for k in the equation A = kC were used to generate the data for this chromatogram and for all subsequent chromatograms. Details on the standard used to generate Figure 1 are included in Investigation 7. Although the original paper reports peak areas instead of peak heights, the latter is used in this exercise as it is easier for students to measure.
Investigation 4
Based on Figure 2, are there features in these UV spectra that distinguish Danshen’s hydrophilic compounds from its lipophilic compounds? What wavelength should we choose if our interest is the hydrophilic compounds only? What wavelength should we choose if our interest is the lipophilic compounds only? What is the best wavelength for detecting all of Danshen’s constituents?
The UV spectra for the lipophilic compounds cryptotanshinone and tanshinone I show a single strong absorption band between 240 nm and 270 nm. The hydrophilic compounds danshensu and salvianolic acid A, on the other hand, have strong absorption bands at wavelengths below 240 nm and at wavelengths above 270 nm. Clearly choosing a single wavelength for this analysis requires a compromise. Any wavelength in the immediate vicinity of 280 nm is an appropriate choice as the absorbance value for salvianolic acid A is strong, and the absorbance values for tanshinone I, cryptotanshinone, and danshensu are similar in magnitude. At wavelengths greater than 285 nm the absorbances of tanshinone I, cryptotanshinone, and danshensu decrease in value, and at shorter wavelengths the absorbance of danshensu decreases toward zero as the wavelength approaches 250 nm. All four compounds absorb strongly at wavelengths below 230 nm, but interference from the many other constituents of Danshen extracts may present problems. The data in the figures that follow were obtained using a wavelength of 280 nm.
Note: The data for Figure 2 are not drawn from the original paper. The UV spectra for cryptotanshinone and for tanshinone I are adapted from “Analysis of Protocatechuic Acid, Protocatechuic Aldehyde and Tanshinones in Dan Shen Pills by HPLC,” the full reference for which is Huber, U. Agilent Publication Number 5968-2882E (released 12/98 and available at https://www.chem.agilent.com/Library...s/59682882.pdf), and the UV spectra for danshensu and for salvianolic acid A are adapted from “Simultaneous detection of seven phenolic acids in Danshen injection using HPLC with ultraviolet detector,” the full reference for which is Xu, J.; Shen, J.; Cheng, Y.; Qu, H. J. Zhejiang Univ. Sci. B. 2008, 9, 728-733 (DOI:10.1631/jzus.B0820095). These sources also provide UV spectra for tanshinone IIA and for rosmarinic acid, but not for dihydrotanshinone nor for lithospermic acid.
Investigation 5
For a UV detector, what is the expected relationship between peak height and the analyte’s concentration in μg/mL? For the results in Figure 1, can you assume the analyte with the smallest peak height is present at the lowest concentration? Why or why not?
For a UV detector, we expect the absorbance to follow Beer’s law, A = kC, where A is the analyte’s absorbance, C is the analyte’s concentration, and k is a proportionality constant that accounts for the analyte’s wavelength-dependent absorptivity and the detector’s pathlength. Because each analyte has a different value for k, we cannot assume that the analyte with the smallest peak height is also the analyte present at the lowest concentration.
Investigation 6
Calculate the concentration, in μg/mL, for each analyte in the standard sample whose chromatogram is shown in Figure 1. Using this standard sample as a single-point external standard, calculate the proportionality constant for each analyte that relates its absorbance to its concentration in μg/mL. Do your results support your answer to Investigation 5? Why or why not?
The table below shows the absorbance values (in mAU) for each analyte from Figure 1, the analyte’s concentration in the standard sample, and its value for k.
| analyte | absorbance (mAU) | C (μg/mL) | k (mAU•mL/μg) |
|---|---|---|---|
| danshensu | 96.3 | 60.0 | 1.605 |
| rosmarinic acid | 125.6 | 143.1 | 0.878 |
| lithospermic acid | 71.4 | 133.1 | 0.536 |
| salvianolic acid A | 66.1 | 41.7 | 1.585 |
| dihydrotanshinone | 42.9 | 15.1 | 2.841 |
| cryptotanshinone | 54.4 | 28.9 | 1.882 |
| tanshinone I | 59.5 | 37.2 | 1.599 |
| tanshinone IIA | 105.2 | 71.7 | 1.467 |
Using danshensu as an example, concentrations are derived from the data for the stock standard, accounting for its dilution and converting from mg to μg
$C= \mathrm{\dfrac{6.00\: mg}{10.00\: mL}\times\dfrac{1.00\: mL}{10.00\: mL}\times\dfrac{1000\: μg}{mg}=60.0\: μg/mL}\nonumber$
and k is calculated as
$k=\dfrac{A}{C}=\mathrm{\dfrac{96.3\: mAU}{60.0\: μg/mL}=1.605\: mAU•mL/μg}\nonumber$
Although dihydrotanshinone is present at the lowest concentration and has the smallest peak height, it has the largest value for k and is the strongest absorbing analyte. If, for example, dihydrotanshinone is present at a concentration of 25.0 μg/mL (a concentration smaller than the other seven compounds), its absorbance of 71.0 mAU will be greater than that for salvianolic acid A, cryptotanshinone, and tanshinone I, and essentially equal to that for lithospermic acid. This is consistent with our expectations from Investigation 5.
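The dilution and calibration arithmetic is easy to check; the short Python sketch below (added here for illustration and not part of the original guide) reproduces the danshensu calculation and the dihydrotanshinone comparison.

```python
# Sketch: single-point external standard for danshensu, plus a check on the
# dihydrotanshinone comparison discussed above.
stock_mg, stock_mL = 6.00, 10.00     # stock standard: 6.00 mg of danshensu in 10.00 mL
aliquot_mL, final_mL = 1.00, 10.00   # 1.00 mL of the stock diluted to 10.00 mL

C = (stock_mg / stock_mL) * (aliquot_mL / final_mL) * 1000   # concentration in µg/mL
k = 96.3 / C                                                 # A = kC, with A in mAU
print(f"C = {C:.1f} µg/mL, k = {k:.3f} mAU·mL/µg")           # 60.0 µg/mL and 1.605

# expected absorbance for dihydrotanshinone at 25.0 µg/mL, using k = 2.841 mAU·mL/µg
print(f"A(dihydrotanshinone) = {2.841 * 25.0:.1f} mAU")      # approximately 71.0 mAU
```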
Note: See the comments for Investigation 3 for details on the data used in this investigation.
Investigation 7
Brewing coffee is nothing more than a simple solvent extraction, which makes it a useful and a familiar model for considering how a solvent extraction works. There are a variety of methods for brewing coffee that differ in how the solvent and the coffee are brought together. Investigate at least five of the following methods for preparing coffee: Turkish, French Press, Aeropress, Chemex, Pour Over, Stovetop, Vacuum Pot, Espresso, and Cold Brew. In what ways are these methods similar to each other and in what ways are they different from each other? What variables in the extraction process are most important in terms of their ability to extract caffeine, essential oils, and fragrances from coffee?
The intention of this investigation is to place solvent extraction in a context more familiar to students. The various methods for brewing coffee generally fall into four groups based on how the coffee grounds and water are brought together: boiling (or decoction), steeping (or infusion), gravity filtration, and pressure.
Whatever the method, there is general agreement that the ideal extraction yield (the percentage, by weight, of the coffee grounds solubilized during brewing) is approximately 20% and that the ideal strength (the amount of dissolved coffee solids per unit volume) varies by geographic region, but is approximately 1.25 g per 100 mL in the United States. Extraction yields and strength depend on the ratio of coffee and water, the coarseness of the coffee’s grind, the brew temperature, and the brew time. Methods relying on coarser grounds, such as French Press, require longer brew times; drip filtration methods use a finer grind and require shorter brew times. Extraction yields that are too high result in bitter-tasting coffee and extraction yields that are too small result in a more acidic-tasting coffee. The greater the strength, the darker, thicker, and oilier the brew.
Investigation 8
Why might a combination of high temperature, a lengthy extraction time, and the need for two extractions be undesirable when working with a medicinal plant such as Danshen?
An extraction at a high temperature runs the risk of destroying some of Danshen’s analytes through thermal degradation; this is a more significant problem at higher temperatures, particularly when using a longer extraction time. The concentration of analytes in the final sample is smaller if we must combine two (or more) extracts of equal volume; if an analyte already is present at a low concentration in Danshen, then its concentration as analyzed may be too small to detect without first concentrating the extract.
Investigation 9
What variables might we choose to control if we want to maximize the microwave extraction of Danshen’s constituent compounds? For each variable you identify, predict how a change in the variable’s value will affect the ability to extract from Danshen a hydrophilic compound, such as rosmarinic acid, and a lipophilic compound, such as tanshinone I.
The intention of this investigation is to have students begin considering how experimental conditions will affect the extraction of hydrophilic and lipophilic analytes from Danshen. As the investigations that follow demonstrate, the variables explored here are not independent of each other, which makes accurate predictions impossible; of course, this is why method development is necessary! The comments below outline important considerations for five possible variables: the solvent; the solvent-to-solid ratio; the extraction temperature; the extraction time; and the microwave’s power.
The choice of solvent must meet two conditions: the analytes of interest must be soluble in the solvent, and the solvent must be able to absorb microwave radiation and convert it to heat. All three options for the solvent included in this study—methanol, ethanol, and water—are effective at absorbing microwave radiation and converting it to heat, although water is better than methanol and ethanol at absorbing microwave radiation and methanol is better than ethanol and water at converting absorbed microwave radiation into heat. In terms of solubility, we cannot predict easily the relative trends in solubility for either the hydrophilic or the lipophilic analytes when using methanol or ethanol as a solvent; however, we expect that the lipophilic analytes will not extract into water. Although the lipophilic analytes may be more soluble in a non-polar solvent, such as hexane, a non-polar solvent cannot absorb microwave radiation.
In general, we expect that increasing the solvent-to-solid ratio will increase extraction efficiency for all analytes; this certainly is the case with conventional extractions. For some microwave extractions, and for reasons that are not always clear, increasing the solvent-to-solid ratio beyond an optimum value decreases extraction efficiency.
For all analytes, extraction efficiency generally increases at higher temperatures for a variety of reasons, including the easier penetration of a solvent into the sample’s matrix as a result of a decrease in the solvent’s viscosity and surface tension. This increase in extraction efficiency with increasing temperature is offset if the analytes are not thermally stable. It is important to note, as well, that for an open-vessel atmospheric pressure microwave extraction, the method used here, the highest possible temperature is the solvent’s boiling point.
In general, we expect that extraction efficiency for all analytes will increase with longer extraction times. As is the case with temperature, however, the increase in extraction efficiency at longer times is offset if the analytes are not thermally stable.
For all analytes, the relationship between microwave power and extraction efficiency is not intuitive. An increase in microwave power results in greater localized heating. In some extractions, the increased localized heating helps break down the sample matrix, increasing extraction efficiency; in other cases, extraction efficiency decreases because the increase in localized heating results in more thermal degradation of the analytes. For other extractions, a change in microwave power has little effect on extraction efficiency.
Investigation 10
A one-factor-at-a-time optimization is an effective and an efficient algorithm when the factors behave independently, and an effective, although not necessarily an efficient, algorithm when the factors are dependent. What does it mean to say that two factors are independent or dependent? What does it mean to say that an optimization is efficient or effective? Why do dependent factors generally require that we optimize each factor more than once? Although the choice of solvent, temperature, and microwave power are dependent factors, for this case study you will optimize each factor once only. Explain why. For the analysis in this case study, is the order in which these three factors are optimized important? Why or why not?
Two factors are independent if the effect on the response of a change in the level of one factor does not depend on the second factor’s level. In the table below, for example, factors A and B are independent because a change in factor A’s level from 10 to 20 increases the response by 40 both when factor B’s level is 10 (increasing from 40 to 80) and when its level is 40 (increasing from 50 to 90).
| level of factor A | level of factor B | response |
|---|---|---|
| 10 | 10 | 40 |
| 20 | 10 | 80 |
| 10 | 40 | 50 |
| 20 | 40 | 90 |
For dependent factors, the effect on the response of a change in the level of one factor is not independent of the other factor’s level. For example, in the table below factors A and B are dependent because a change in factor A’s level from 10 to 20 increases the response by 40 (from 40 to 80) when factor B’s level is 10, but it increases the response by 20 (from 50 to 70) when factor B’s level is 40.
| level of factor A | level of factor B | response |
|---|---|---|
| 10 | 10 | 40 |
| 20 | 10 | 80 |
| 10 | 40 | 50 |
| 20 | 40 | 70 |
An effective optimization is one that correctly finds the system’s global optimum. An optimization is not effective if it finds a local (or regional) optimum instead of the global optimum. An efficient optimization is one that finds the global optimum using as few experiments as possible. The most efficient optimization considers all factors at the same time, or optimizes each factor one time only; a less efficient optimization considers each factor separately and requires that we cycle through each factor multiple times.
When two factors are independent, the optimization of one factor does not depend on the level of the other factor; we can, therefore, find the global optimum by optimizing each factor once. Having optimized factor A, we can optimize factor B without changing the effect on the response of factor A. The optimization is efficient because we need only optimize each factor one time.
For dependent factors, however, the optimization of one factor depends on the other factor’s level. If we optimize factor A and then optimize factor B, the level for factor A is no longer at its optimum value (unless we are extraordinarily lucky!). As a result, to find the global optimum, we must repeat the process of optimizing each factor through additional cycles.
To optimize a factor we change its level along a continuous range of possible values with, perhaps, lower and upper limits. For example, we can set the microwave power to any value between a lower limit of 0 W (no power) to an upper limit equal to the microwave’s maximum power. The initial choice of solvent, however, is not continuous as it is limited to individual pure solvents, in this case pure water, methanol, or ethanol. Because we cannot vary the initial choice of solvent through a continuous range of values, we cannot reasonably cycle through the factors.
The order in which the factors are optimized is solvent, extraction temperature, and microwave power. This order is necessary because the maximum possible temperature depends on the solvent’s boiling point, and the choice of microwave power depends on the solvent’s temperature.
Investigation 11
For the choice of solvent, consider ethanol, methanol, and water, as well as mixtures of water with ethanol or methanol, and predict how effective each is at extracting hydrophilic or lipophilic compounds. Why is a non-polar solvent, such as hexane, not a useful option for a microwave extraction? What limits, if any, might the choice of solvent place on the choice of temperature or microwave power?
Given the structures of the analytes it is reasonable to assume that each is soluble, to some extent, in methanol and ethanol. Although the hydrophilic compounds likely are soluble in water, the lipophilic compounds are insoluble in water. Because water has a greater solvent strength than methanol or ethanol, binary mixtures of methanol/water or of ethanol/water may be more effective solvents for the hydrophilic analytes; it is less clear if this is the case for the lipophilic analytes.
A non-polar solvent is not a useful option because it cannot absorb microwave energy and, therefore, cannot dissipate that energy to the sample in the form of heat.
The choice of solvent places an upper limit on temperature as it cannot exceed the solvent’s boiling point; the choice of solvent, on the other hand, places no limits on the microwave power.
Investigation 12
Consider the data in Figures 3–5 and explain any trends you see in the relative extraction efficiencies of these three solvents. Are your results consistent with your predictions from Investigation 11? Why or why not? Which solvent is the best choice if you are interested in analyzing hydrophilic analytes only? Which solvent is the best choice if you are interested in analyzing lipophilic analytes only? Which solvent is the best choice if you are interested in analyzing both hydrophilic and lipophilic analytes?
The absorbance values for the analytes are summarized here:
| analyte | absorbance (mAU), 100% methanol | absorbance (mAU), 100% ethanol | absorbance (mAU), 100% water |
|---|---|---|---|
| danshensu | 43.0 | 32.1 | 72.6 |
| rosmarinic acid | 66.7 | 49.9 | 54.6 |
| lithospermic acid | 47.2 | 25.2 | 56.3 |
| salvianolic acid A | 37.3 | 23.2 | 23.3 |
| dihydrotanshinone | 33.9 | 38.2 | 0.0 |
| cryptotanshinone | 67.7 | 71.1 | 0.0 |
| tanshinone I | 82.4 | 80.4 | 0.0 |
| tanshinone IIA | 151.7 | 167.4 | 0.0 |
If we compare methanol to ethanol we see that extraction yields using methanol are greater than those using ethanol for danshensu, rosmarinic acid, lithospermic acid, and salvianolic acid; that the extraction yields using methanol and ethanol are similar for dihydrotanshinone, cryptotanshinone, and tanshinone I; and that the extraction yield using methanol is smaller than when using ethanol for tanshinone IIA. Water is a useful solvent for the hydrophilic compounds—indeed, it is the best solvent for danshensu and lithospermic acid—but, as expected, it does not extract the lipophilic compounds.
If we are interested in extracting hydrophilic compounds only, then methanol or water are appropriate options (or, perhaps, a mixture of the two); ethanol is not an unreasonable option, but it does not extract these compounds as efficiently as methanol or water. If we are interested in extracting lipophilic compounds only, then methanol or ethanol are suitable choices, although ethanol has a slight advantage over methanol for tanshinone IIA. Methanol is the best choice for extracting both hydrophilic and lipophilic compounds.
Note: The chromatograms in Figure 3 and Figure 4 are derived from data in the original paper. The chromatogram in Figure 5 uses data from the paper “Simultaneous quantification of six major phenolic acids in the roots of Salvia miltiorrhiza and four related traditional Chinese medicinal preparations by HPLC-DAD method,” the full reference for which is Liu, A; Li, L; Xu, M.; Lin, Y.; Guo, H.; Guo, D. J. Pharm. Biomed. Anal. 2006, 41, 48–56 (DOI:10.1016/j.jpba.2005.10.021). For reasons of simplicity, the chromatograms in this exercise are cleaned up by excluding peaks from other compounds in Danshen extracts and eliminating baseline noise.
Investigation 13
Propose a set of experiments that will effectively and efficiently allow you to determine the optimum mixture of methanol and water to use for this extraction. What range of methanol/water mixtures will you explore? How many samples will you run? Explain the reasons for the range of mixtures and the number of samples you selected. In describing the solvent mixtures, report values as percent methanol by volume (e.g. 55% methanol by volume).
Because the lipophilic analytes are not soluble in water, there is little point in considering mixtures in which water is the predominant solvent; for this reason, it makes sense to limit the mixtures to a lower limit of 50% methanol by volume and an upper limit of 100% methanol by volume. Increasing the percent methanol in steps of 10%, a total of six treatments, provides sufficient information to determine the trend in each analyte’s solubility.
Investigation 14
Consider the data in Figure 6 and explain any trends you see in the relative extraction efficiencies using different mixtures of methanol and water. What is the optimum mixture of methanol and water for extracting samples of Danshen? Are your results consistent with your predictions from Investigation 11 and the data from Investigation 12? Why or why not?
The optimum solvent is 80% methanol and 20% water (by volume). The effect of adding water is not surprising for the hydrophilic compounds, given our observations in Investigations 11 and 12; however, the increased extraction efficiency for lipophilic compounds in the presence of added water is unexpected.
Note: The data in Figure 6 are derived, in part, using data from the original paper and, in part, data from the paper “Simultaneous quantification of six major phenolic acids in the roots of Salvia miltiorrhiza and four related traditional Chinese medicinal preparations by HPLC-DAD method,” the full reference for which is Liu, A; Li, L; Xu, M.; Lin, Y.; Guo, H.; Guo, D. J. Pharm. Biomed. Anal. 2006, 41, 48–56 (DOI:10.1016/j.jpba.2005.10.021). Additional data was synthesized, based on trends in the original data, to extend the data set to a greater range of methanol–water mixtures.
Investigation 15
Propose a set of experiments that will effectively and efficiently allow you to optimize the extraction temperature using the solvent selected in Investigation 14. What range of temperatures will you explore? How many samples will you run? Explain the reasons for the range of temperatures and the number of samples you selected.
The boiling point for a solvent that is 80% methanol and 20% water (by volume) is slightly greater than 70°C; thus, selecting 70°C for an upper limit is a reasonable choice. A lower limit of 50°C and intervals of 5°C will provide sufficient information to determine the trend in each analyte’s solubility.
Investigation 16
Consider the data in Figure 7 and explain any trends you see in the relative extraction efficiencies as a function of temperature. What is the optimum temperature for extracting samples of Danshen? Are your results consistent with your expectations? Why or why not?
The optimum temperature is 70°C, which is consistent with the general expectation that higher temperatures increase extraction yields, assuming no thermal degradation. Interestingly, the effect is somewhat more pronounced for the lipophilic compounds than for the hydrophilic compounds.
Note: The data in Figure 7 are derived, in part, using data from the original paper for temperatures of 50°C, 60°C and 70°C. To extend the data set, additional data were synthesized for temperatures of 55°C and for 65°C based on trends in the original data.
Investigation 17
Propose a set of experiments that will effectively and efficiently allow you to optimize the microwave power using the solvent and temperature selected in Investigation 16. What range of powers will you explore given that the microwave’s power is adjustable between the limits of 0 W and 1000 W? How many samples will you run? Explain the reasons for the range of microwave powers and the number of samples you selected.
Although the effect of microwave power on the extraction yield is not likely significant, it also is unpredictable. For this reason, we might opt for a large range, but with relatively few samples. If the resulting data suggest that extraction yields are particularly sensitive to microwave power, then we can run additional samples as needed. Setting a lower limit of 400 W and an upper limit of 1000 W, with steps of 200 W, is a reasonable choice and provides sufficient information to determine the trend in each analyte’s extraction yield.
Investigation 18
Consider the data in Figure 8 and explain any trends you see in the relative extraction efficiencies as a function of the microwave’s power. What is the optimum power for extracting samples of Danshen using a solvent that is 80% methanol and 20% water by volume and an extraction temperature of 70°C?
Although, as expected, microwave power does not affect significantly the extraction efficiency for most compounds, the extraction efficiency for some lipophilic compounds decreases at 1000 W; for this reason, the optimum microwave power is 800 W.
Note: The data in Figure 8 are derived using data from the original paper.
Investigation 19
When optimizing the choice of solvent, temperature, and microwave power, we used absorbance values taken directly from the HPLC analysis (see Figures 3–8) without first converting them into extraction yields reported in mg analyte/g sample. Why is it possible to use absorbance values for the optimizations in Part IV? Can you use absorbance values when optimizing the solvent-to-solid ratio or the extraction time? Why or why not? Using the optimum conditions from Figure 8 and your results from Investigation 7, report the extraction yield for each analyte as mg analyte/g sample.
It helps to begin by considering how to convert an analyte’s peak height in mAU to the analyte’s extraction yield in mg/g. We know from Investigations 6 and 7 that Beer’s law is A = kC, where A is the analyte’s absorbance, C is the analyte’s concentration in the extracting solvent (in μg/mL), and k is an analyte-specific calibration constant (with units of mAU•mL/μg); thus
$C= \dfrac{A}{k}\nonumber$
To convert C to the analyte’s extraction yield, EY, we account for the volume of solvent, V, and the mass of sample, m
$EY\left(\mathrm{\dfrac{mg}{g}}\right)=\dfrac{C\left(\mathrm{\dfrac{μg}{mL}}\right)×V\mathrm{(mL)}}{m\: \mathrm{(g)}}×\mathrm{\dfrac{1\: mg}{1000\: μg}}=\dfrac{A \mathrm{(mAU)}×V\mathrm{(mL)}}{k\mathrm{\left(\dfrac{mAU•mL}{μg}\right)}× m\: \mathrm{(g)}}×\mathrm{\dfrac{1\: mg}{1000\: μg}}\nonumber$
When optimizing the choice of solvent, extraction temperature, and microwave power, we maintained a constant solvent-to-solid ratio, using 60.0 mL of solvent and a 3.00-g sample for each experiment. Because V, k, and m, are constants, the analyte’s absorbance and its extraction yield are directly proportional: if the absorbance doubles, we know the extraction yield also doubles. This is why we can use absorbance values when optimizing the choice of solvent, the extraction temperature, and the microwave power.
To optimize the solvent-to-solid ratio we must change the solvent’s volume and/or the sample’s mass, which means we no longer can assume that an increase in the analyte’s absorbance evinces a proportionate increase in the analyte’s extraction yield; instead, we must calculate the analyte’s extraction yield from its absorbance. The following table summarizes the extraction yields for the optimum extraction conditions in Figure 8.
| analyte | absorbance (mAU) | k (mAU•mL/μg) | EY (mg/g) |
|---|---|---|---|
| danshensu | 64.4 | 1.605 | 0.802 |
| rosmarinic acid | 98.9 | 0.878 | 2.253 |
| lithospermic acid | 62.2 | 0.536 | 2.320 |
| salvianolic acid A | 42.3 | 1.585 | 0.534 |
| dihydrotanshinone | 65.8 | 2.841 | 0.463 |
| cryptotanshinone | 84.4 | 1.882 | 0.897 |
| tanshinone I | 104.4 | 1.599 | 1.306 |
| tanshinone IIA | 201.9 | 1.467 | 2.752 |
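The conversion from absorbance to extraction yield in the table above is easy to verify; the Python sketch below (not part of the original guide) applies the expression for EY with V = 60.0 mL and m = 3.00 g.

```python
# Sketch: extraction yield (mg/g) from absorbance (mAU) for the optimum conditions
# in Figure 8, using EY = A*V/(k*m*1000) with V = 60.0 mL and m = 3.00 g.
V_mL, m_g = 60.0, 3.00

# analyte: (absorbance in mAU, k in mAU·mL/µg)
data = {"danshensu": (64.4, 1.605), "rosmarinic acid": (98.9, 0.878),
        "lithospermic acid": (62.2, 0.536), "salvianolic acid A": (42.3, 1.585),
        "dihydrotanshinone": (65.8, 2.841), "cryptotanshinone": (84.4, 1.882),
        "tanshinone I": (104.4, 1.599), "tanshinone IIA": (201.9, 1.467)}

for analyte, (A, k) in data.items():
    EY = A * V_mL / (k * m_g * 1000)
    print(f"{analyte:20s} EY = {EY:.3f} mg/g")   # e.g. danshensu -> 0.802 mg/g
```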
Investigation 20
We can divide the points in a central-composite design into three groups: a set of points that allow us to explore the effect on the extraction yield of extraction time only; a set of points that allow us to explore the effect on the extraction yield of the solvent-to-solid ratio only; and a set of points that allow us to explore the effect on the extraction yield of the interaction between extraction time and the solvent-to-solid ratio. Explain how each of these is accomplished in this experimental design.
For the points (2.18, 25.0), (5.00, 25.0), and (7.82, 25.0) we are changing the extraction time while maintaining a constant solvent-to-solid ratio; these points allow us to explore the effect of extraction time only. For the points (5.00, 10.9), (5.00, 25.0), and (5.00, 39.1) we are changing the solvent-to-solid ratio while maintaining a constant extraction time; these points allow us to explore the effect of the solvent-to-solid ratio only. Finally, for the points (3.00, 15.0), (7.00, 15.0), (3.00, 35.0), and (7.00, 35.0) we vary both the extraction time and the solvent-to-solid ratio; these points allow us to explore possible interactions between these factors.
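The design's geometry follows from a simple coding of each factor's levels. The Python sketch below (added for illustration and not part of the original guide) regenerates the actual factor levels from the coded levels, assuming a center point of 5.00 min and 25.0 mL/g, step sizes of 2.00 min and 10.0 mL/g, and axial points at coded levels of ±1.41.

```python
# Sketch: regenerate the central-composite design's factor levels from coded levels
# of 0 (center), ±1 (factorial points), and ±1.41 (axial points).
center = {"time": 5.00, "ratio": 25.0}   # extraction time (min), solvent-to-solid ratio (mL/g)
step   = {"time": 2.00, "ratio": 10.0}   # change in each factor for a coded level of ±1

factorial = [(a, b) for a in (-1, 1) for b in (-1, 1)]
axial     = [(-1.41, 0), (1.41, 0), (0, -1.41), (0, 1.41)]
design    = factorial + axial + [(0, 0)]   # the center point is replicated five times

for ca, cb in design:
    t = center["time"] + ca * step["time"]
    r = center["ratio"] + cb * step["ratio"]
    print(f"coded ({ca:+5.2f}, {cb:+5.2f}) -> {t:.2f} min, {r:.1f} mL/g")
```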
Note: In Table 1 of the original paper, the extraction time’s lower limit and upper limit are reported as 2.00 min and 8.00 min, respectively, instead of 2.18 min and 7.82 min, as used in this case study. The original paper also reports the lower limit and the upper limit for the solvent-to-solid ratio as 10.0 mL/g and 40.0 mL/g, respectively, instead of 10.9 mL/g and 39.1 mL/g, as used in this case study. This is the result of an inconsistency in the original paper between the reported actual factor levels and the reported coded factor levels used for building a regression model. If, as the paper indicates, the experimental design’s axial points are ±1.41, then the reported factor levels of 2.00 min, 8.00 min, 10.0 mL/g, and 40.0 mL/g are in error and should be listed as 2.18 min, 7.82 min, 10.9 mL/g, and 39.1 mL/g, respectively. On the other hand, if the reported factor levels of 2.00 min, 8.00 min, 10.0 mL/g, and 40.0 mL/g are correct, then the reported coded factor levels of ±1.41 are in error and should be listed as ±1.50. For the purpose of this case study, we assume the axial point’s coded factor levels are ±1.41 and that 2.18 min, 7.82 min, 10.9 mL/g, and 39.1 mL/g are the actual factor levels for these points. Fortunately, the effect on the regression results of this inconsistency is not important within the context of this case study. See the comments accompanying Investigation 23 for additional details.
Investigation 21
Identify the five trials at the center of the central-composite design and, for these trials, calculate the extraction yield’s mean, standard deviation, relative standard deviation, variance, and 95% confidence interval about the mean. What is the statistical meaning for each of these values? Transfer to Figure 9 the extraction yield for each experiment, using the mean extraction yield for the design’s center point. What conclusions can you reach regarding the effect on danshensu’s extraction yield of extraction time and solvent-to-solid ratio? Estimate the optimum conditions for maximizing danshensu’s extraction yield and explain your reasoning.
The center of the central-composite design is an extraction time of 5.00 min and a solvent-to-solid ratio of 25.0 mL/g. The extraction yields for these five trials are 0.790, 0.813, 0.785, 0.801, and 0.773. The mean, which is 0.792 mg/g, is the average value for the five trials and is our best estimate of danshensu’s true extraction yield, μ, in the absence of systematic errors in the analysis. The standard deviation of 0.0153 is one measure of the dispersion about the mean for these five trials. Two other measures of dispersion are the relative standard deviation, the ratio of the standard deviation to the mean, which is 1.93% in this case, and the variance, which is the square of the standard deviation, or 2.34×10–4 in this case. The standard deviation, relative standard deviation, and variance each provide a measure of the uncertainty in our results resulting from random error in the extraction and analysis. The 95% confidence interval combines the mean and the standard deviation to estimate danshensu’s true extraction yield when using an extraction time of 5.00 min and a solvent-to-solid ratio of 25 mL/g. Its value is given by
$μ=\bar{X} ±\dfrac{ts}{\sqrt{n}}\nonumber$
where $\bar{X}$ is the mean, s is the standard deviation, n is the number of trials, and t is a value that depends on the confidence level and the degrees of freedom, which is n – 1. The value of t for a 95% confidence interval and n = 5 (four degrees of freedom) is 2.776; the 95% confidence interval is
$0.792±\dfrac{(2.776×0.0153)}{\sqrt{5}}=0.792±0.019\nonumber$
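These summary statistics are easy to reproduce; the Python sketch below (added for convenience and not part of the original guide) computes them directly from the five replicate yields, using t = 2.776 for four degrees of freedom.

```python
# Sketch: mean, standard deviation, RSD, variance, and 95% confidence interval
# for the five replicate extractions at the design's center point.
from math import sqrt
from statistics import mean, stdev

yields = [0.790, 0.813, 0.785, 0.801, 0.773]   # mg/g, center-point replicates
t_95 = 2.776                                   # t for 95% confidence and 4 degrees of freedom

xbar, s = mean(yields), stdev(yields)
ci = t_95 * s / sqrt(len(yields))
print(f"mean = {xbar:.3f} mg/g, s = {s:.4f}, RSD = {100 * s / xbar:.1f}%, "
      f"variance = {s**2:.2e}, 95% CI = ±{ci:.3f} mg/g")
```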
This confidence interval is important because it helps us evaluate whether a change in a factor’s level affects the extraction yield. Consider, for example, the extraction yields of 0.742, 0.792, and 0.820 for the three points that include a change in extraction time only: (2.18, 25.0), (5.00, 25.0), and (7.82, 25.0). If extraction time does not affect the extraction yield, then we expect the extraction yields at (2.18, 25.0) and at (7.82, 25.0) to fall within the 95% confidence interval around the mean value at (5.00, 25.0). The actual yields do not fall within this range, suggesting that extraction time does affect danshensu’s extraction yield, with longer extraction times favoring greater extraction yields.
A similar analysis of the data for the points (5.00, 10.9), (5.00, 25.0), and (5.00, 39.1) suggests that the solvent-to-solid ratio significantly affects the extraction yields, with solvent-to-solid ratios less than 25.0 mL/g resulting in a decrease in the extraction yield. Finally, the data for the points (3.00, 15.0), (7.00, 15.0), (3.00, 35.0), and (7.00, 35.0) suggests that the interaction between the extraction time and the solvent-to-solid ratio is not significant. For example the effect on the extraction yield of a change in extraction time when the solvent-to-solid ratio is 35.0 is
$0.805 - 0.754 = 0.051\nonumber$
and the effect on the extraction yield of a change in extraction time when the solvent-to-solid ratio is 15.0 is
$0.785 - 0.743 = 0.042\nonumber$
The difference between these values
$0.051 - 0.042 = 0.009\nonumber$
is smaller than the 95% confidence interval, suggesting that the difference is not significant and that these are independent, not dependent factors (see Investigation 10).
Based on Figure 9, the optimum condition for extracting danshensu is an extraction time of 7.82 min and a solvent-to-solid ratio of 25.0 mL/g as this gives the greatest extraction yield.
Investigation 22
What does it mean to describe a model as empirical instead of theoretical? What are the advantages and the disadvantages of using an empirical model? What is the significance for this empirical model of the coefficients β0, βa, βb, βaa, βbb, and βab? How does an empirical model that includes the coefficients βaa and βbb differ from a model that does not include these coefficients?
For an empirical model there is no established mathematical relationship between the response and the factors affecting the response. To fit an empirical model to data, we search for a mathematical expression that reasonably fits the data, which means an empirical model is not independent of the data used to build the model. A theoretical model, as its name suggests, is derived from a theoretical understanding of the relationship between the response and the factors affecting the response; as such, a theoretical model is independent of the data we may wish to model. Although a theoretical model can emerge from the understanding engendered by an empirical model, it still must be explained in terms of existing theory. Boyle’s law (PV = constant) is an example of an empirical model that emerged from the careful study of the relationship between a gas’s pressure and its volume. The derivation of Boyle’s law from the kinetic theory of gases transformed Boyle’s law from an empirical model to a theoretical model.
The advantage of an empirical model is that it allows us to model a response, such as an extraction yield, when there is no existing theoretical model that explains the relationship between the response and its factors. The disadvantage of an empirical model is that its utility is limited to the range of factor levels studied. For example, we might use a straight-line
$y=β_0+β_x x\nonumber$
to successfully model a response, y, over a limited range of levels for a factor, x, even though the relationship between y and x over a wider range of levels is much more complex. If we use the model to predict values of y for values of x within the range modeled, a process we call interpolation, then we are confident in our results; attempting to predict values of y for values of x outside of the range modeled, a process we call extrapolation, is likely to introduce substantial errors into our analysis.
For the empirical model in this exercise the coefficient β0 is the intercept, the coefficients βa and βaa provide the first-order and second-order effects on the response of extraction time, the coefficients βb and βbb provide the first-order and second-order effects on the response of the solvent-to-solid ratio, and βab provides the interaction between the extraction time and the solvent-to-solid ratio. A model that includes the coefficients βaa and βbb allows for curvature in the response surface; a response surface without these coefficients is a flat plane.
Investigation 23
What does it mean to say that the regression analysis is significant at p = 0.0057? Do the results of this regression analysis, as expressed in the model’s coefficients, agree with your results from Investigation 21? Why or why not? What is the meaning of the intercept in this model and how does it affect your understanding of the empirical model’s validity? Use the full regression model to calculate danshensu’s predicted extraction yields for the central-composite design in Table 2. Organize your results in a table with columns for the factor levels, the experimental extraction yields, and the predicted extraction yields. Add a column showing the difference between the experimental extraction yields and predicted extraction yields. Calculate the mean, standard deviation, and the 95% confidence interval for these difference values and comment on your results.
In a linear regression analysis, we want to determine if a factor’s levels affect the response or if the response is independent of the factor’s levels. For each experiment used to build the model, we consider three possible responses: the measured response, $y_i$, the response predicted by the model, $\hat{y}_i$, and the average response over all experiments, $\bar{y}$, which is our best estimate of the response if it is independent of the factors. If the total difference between the experimental responses and the average response, $∑(y_i-\bar{y})^2$, is significantly greater than the total difference between the experimental responses and the predicted responses, $∑(y_i-\hat{y}_i )^2$, then we have evidence that random errors in our measurements cannot explain the differences in experimental responses; that is, we have evidence that the response is dependent on the factors. A p value of 0.0057 means there is but a 0.57% probability that random error can account for the differences in the extraction yields reported in Table 2. Note that a regression analysis cannot prove that a model is correct, but it does provide confidence that the model does a better job of explaining the experimental data than does random error.
In Investigation 21 we concluded that an increase in extraction time increases the extraction yield and that a decrease in the solvent-to-solid ratio decreases the extraction yield; both of these conclusions are consistent with the positive values for βa and βb, and consistent with their p values. We also concluded that there was no evidence for a significant interaction between the extraction time and the solvent-to-solid ratio, which is consistent with βab not having a p value less than 0.05 (it actually is >0.7). Interestingly, the model suggests that the solvent-to-solid ratio has a significant second-order effect on the response as the p value for βbb is less than 0.05, a conclusion we did not draw in Investigation 21.
The intercept for this model gives the extraction yield for an extraction time of 0 min and a solvent-to-solid ratio of 0 mL/g. That the intercept is not 0 mg/g and that it is highly significant seems troubling; after all, how can we extract the analyte if we do not use any solvent and do not carry out the extraction! Here is where we need to recall that we are using an empirical model to explain the relationship between the factors and the response. As noted in Investigation 22, we cannot extrapolate an empirical model outside the range of the factor levels used to build the model. In this case, we cannot safely predict extraction yields for extraction times less than 2.18 min or for solvent-to-solid ratios of less than 10.9 mL/g.
The following table compares the experimental extraction yields from Table 2 with the extraction yields predicted using our model.
| extraction time (min) | solvent-to-solid ratio (mL/g) | experimental extraction yield (mg/g) | predicted extraction yield (mg/g) | difference (mg/g) |
|---|---|---|---|---|
| **5.00** | **10.9** | **0.721** | **0.741** | **–0.020** |
| 5.00 | 25.0 | 0.790 | 0.792 | –0.002 |
| 3.00 | 15.0 | 0.743 | 0.734 | 0.009 |
| 2.18 | 25.0 | 0.742 | 0.747 | –0.005 |
| 3.00 | 35.0 | 0.754 | 0.756 | –0.002 |
| **5.00** | **25.0** | **0.813** | **0.792** | **0.021** |
| 7.00 | 15.0 | 0.785 | 0.780 | 0.005 |
| 5.00 | 25.0 | 0.785 | 0.792 | –0.007 |
| 5.00 | 39.1 | 0.784 | 0.777 | 0.007 |
| 7.00 | 35.0 | 0.805 | 0.810 | –0.005 |
| 5.00 | 25.0 | 0.801 | 0.792 | 0.009 |
| **5.00** | **25.0** | **0.773** | **0.792** | **–0.019** |
| 7.82 | 25.0 | 0.820 | 0.817 | 0.003 |

where each difference is the experimental extraction yield minus the predicted extraction yield.
The mean difference between the experimental extraction yields and the predicted extraction yields is $-4.6 \times 10^{-4}$ with a standard deviation of 0.011 and a 95% confidence interval (t = 2.179 for 12 degrees of freedom) of ±0.0069. The mean difference is small, as we expect if the model explains our data, and there is no evidence that it deviates significantly from 0. For three trials (highlighted above in bold) the differences between the experimental extraction yields and the predicted extraction yields are more than twice the 95% confidence interval; nevertheless, the agreement between the experimental and the predicted extraction yields is encouraging.
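These summary statistics can be reproduced from the table above with a short script; `scipy.stats.t.ppf` supplies the t value for 12 degrees of freedom.

```python
# A minimal sketch reproducing the statistics quoted above from the table's values.
import numpy as np
from scipy import stats

experimental = np.array([0.721, 0.790, 0.743, 0.742, 0.754, 0.813, 0.785,
                         0.785, 0.784, 0.805, 0.801, 0.773, 0.820])
predicted    = np.array([0.741, 0.792, 0.734, 0.747, 0.756, 0.792, 0.780,
                         0.792, 0.777, 0.810, 0.792, 0.792, 0.817])

diff = experimental - predicted
n = diff.size
mean = diff.mean()
s = diff.std(ddof=1)                              # sample standard deviation
t = stats.t.ppf(0.975, df=n - 1)                  # 2.179 for 12 degrees of freedom
ci = t * s / np.sqrt(n)                           # 95% confidence interval on the mean
print(f"mean = {mean:.2e}, s = {s:.3f}, 95% CI = ±{ci:.4f}")
```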
Note: The regression models in the original paper are reported using coded factor levels, which normalize each factor’s levels so that all factors share the same scale. The regression model provided in this exercise translates the coded model reported in the original paper back into actual factor levels. A regression analysis of the data in Table 1 of the original paper yields results that are slightly different from those reported in Table 2 of the original paper. This is not a result of uncertainty in the assignment of coded levels for the central-composite design’s axial points, as described in the notes accompanying Investigation 20. As shown here
| coefficient | reported in paper (using ±1.41) | calculated using ±1.41 | calculated using ±1.5 |
|---|---|---|---|
| β0 | 0.7920 | 0.7920 | 0.7930 |
| βa | 0.0250 | 0.0254 | 0.0254 |
| βb | 0.0130 | 0.0150 | 0.0148 |
| βaa | –0.0050 | –0.0044 | –0.0046 |
| βbb | –0.0165 | –0.0187 | –0.0173 |
| βab | 0.0020 | 0.0022 | 0.0022 |
a regression analysis of the extraction yields using coded factor levels of ±1.41 for the axial points and one using coded factor levels of ±1.5 give coefficients (in coded form) that look similar to each other, but that are not the same as those reported in Table 2 of the original paper. A more likely explanation is that the reported extraction yields in Table 1 of the original paper are the average of three trials; although not stated, the regression results reported in the original paper presumably use the full set of individual extraction yields instead of the average extraction yields, as is the case here.
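Converting between coded and actual factor levels is a simple linear scaling. The sketch below assumes the usual coding, coded level = (actual − center)/step, with the centers and step sizes implied by the factor levels in Table 2 of this exercise (5.00 ± 2.00 min and 25.0 ± 10.0 mL/g); the function names are just for illustration.

```python
# A minimal sketch of converting between coded and actual factor levels
# for a central-composite design.
def to_coded(actual, center, step):
    return (actual - center) / step

def to_actual(coded, center, step):
    return center + coded * step

print(to_coded(7.82, 5.00, 2.00))    # ≈ +1.41 (axial point for extraction time)
print(to_coded(2.18, 5.00, 2.00))    # ≈ -1.41
print(to_coded(39.1, 25.0, 10.0))    # ≈ +1.41 (axial point for solvent-to-solid ratio)
print(to_actual(1.41, 25.0, 10.0))   # 39.1 mL/g
```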
Investigation 24
Does Figure 10 agree with your results from Investigations 21 and 23? Why or why not? Estimate the optimum conditions for maximizing danshensu’s extraction yield and explain your reasoning. How sensitive is the optimum extraction yield to a small change in extraction time? How sensitive is the optimum extraction yield to a small change in the solvent-to-solid ratio?
The shape of the response surface is consistent with our observations from earlier investigations. The contour lines show that the extraction yield increases for longer extraction times (as seen in both Investigations 21 and 23), and that the extraction yield decreases both for larger and for smaller solvent-to-solid ratios (as seen in Investigation 23). The contour lines are more circular than elliptical, which is consistent with a model that does not have a significant interaction between its factors (as seen in Investigations 21 and 23).
In Investigation 21, we concluded that the optimum condition for extracting danshensu is an extraction time of 7.80 min and a solvent-to-solid ratio of 25.0 mL/g with an extraction yield of 0.817 mg/g. Based on the response surface, the optimum condition for extracting danshensu is an extraction time of 7.80 min and a solvent-to-solid ratio of 35.0 mL/g with an extraction yield of 0.821 mg/g; the difference in the extraction yields, however, is not significant and is consistent with the response surface’s broad contours at longer extraction times.
The relative sensitivity of a response to a change in a factor’s level is indicated by the steepness of the slope along the direction of that change (consider, for example, the closely spaced contour lines on a topographic map for a deep canyon compared to the widely spaced contour lines for a gently sloping field). For danshensu, the optimum extraction yield is equally sensitive to a change in the extraction time and the solvent-to-solid ratio, although the sensitivity is small given that the optimum is on a broad hill with a shallow slope.
Investigation 25
Using Figures 11–15, determine the optimum extraction time and solvent-to-solid ratio for lithospermic acid, salvianolic acid A, cryptotanshinone, tanshinone I, and tanshinone IIA. How sensitive is the extraction of each analyte to a small change in the optimum extraction time and in the optimum solvent-to-solid ratio? Considering your responses here and to Investigation 24, are there combinations of extraction times and solvent-to-solid ratios that will optimize the extraction yield for all six of these analytes?
The following table summarizes the maximum extraction yields and the optimum extraction times and solvent-to-solid ratios from Figures 11–15, with the results for danshensu, from Investigation 24, included as well.
| analyte | extraction time (min) | solvent-to-solid ratio (mL/g) | maximum extraction yield (mg/g) |
|---|---|---|---|
| danshensu | 7.80 | 35.0 | 0.821 |
| lithospermic acid | 7.80 | 39.0 | 2.784 |
| salvianolic acid A | 7.80 | 39.0 | 0.624 |
| cryptotanshinone | 7.80 | 34.5 | 0.920 |
| tanshinone I | 7.80 | 31.5 | 1.353 |
| tanshinone IIA | 6.20 | 34.5 | 2.784 |
The optimum conditions for extracting cryptotanshinone, tanshinone I, and tanshinone IIA, which sit on plateaus or ridges with shallow slopes, are not particularly sensitive to small changes in the extraction time or the solvent-to-solid ratio. The optimum conditions for extracting lithospermic acid and salvianolic acid A are on more steeply rising slopes and, therefore, are more sensitive to a change in either the extraction time or the solvent-to-solid ratio.
For all six analytes, longer extraction times and larger solvent-to-solid ratios favor a greater extraction yield; however, as the table above shows, the optimum extraction yield for tanshinone IIA has a shorter extraction time than the other five analytes, and the optimum extraction yield for lithospermic acid and for salvianolic acid A favors a larger solvent-to-solid ratio. How to determine a single set of extraction conditions is the subject of Part VI.
Note: Figures 11–15 use the regression results reported in Table 2 of the original paper. The value of βab for tanshinone I is reported in Table 2 as 0.247 instead of its more likely value of 0.0247, which was used to generate Figure 14; unlike the corrected value, the reported value does not produce a response surface consistent with that shown in Figure 7b of the original paper. As noted in the case study, the regression models are not significant for rosmarinic acid and for dihydrotanshinone; the extraction yields of 2.317 mg/g for rosmarinic acid and 0.424 mg/g for dihydrotanshinone reported in the exercise are the average of the five replicate trials at the center of the central-composite design.
Investigation 26
To explore the effect of s on individual desirability, calculate di for responses from 0.0 to 1.0, in steps of 0.1, using an upper limit of 0.75 and a lower limit of 0.25, and values of 0.5, 1.0, and 5.0 for s. Examine your results and comment on any trends you see.
The table and figure below summarize the individual desirability for each response at each value of s.
| response | di (s = 0.5) | di (s = 1) | di (s = 5) |
|---|---|---|---|
| 0.0 | 0.000 | 0.000 | 0.000 |
| 0.1 | 0.000 | 0.000 | 0.000 |
| 0.2 | 0.000 | 0.000 | 0.000 |
| 0.3 | 0.316 | 0.100 | 0.000 |
| 0.4 | 0.548 | 0.300 | 0.002 |
| 0.5 | 0.707 | 0.500 | 0.031 |
| 0.6 | 0.837 | 0.700 | 0.168 |
| 0.7 | 0.949 | 0.900 | 0.590 |
| 0.8 | 1.000 | 1.000 | 1.000 |
| 0.9 | 1.000 | 1.000 | 1.000 |
| 1.0 | 1.000 | 1.000 | 1.000 |
When s = 1 there is a linear relationship between Ri and di for responses between the lower limit and the upper limit. For smaller values of s, the individual desirability increases more quickly than the response, and for larger values of s, the individual desirability increases more slowly than the response. Choosing a value of s greater than 1.0 delays the increase in the individual desirability, giving more weight to those responses closer to the lower limit; choosing a value of s less than 1.0 accelerates the increase in individual desirability, giving more weight to those responses closer to the upper limit.
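The values in the table above are reproduced by the one-sided, larger-is-better desirability transform, $d_i = ((R_i - L)/(U - L))^s$, with $d_i$ fixed at 0 below the lower limit and at 1 above the upper limit; the sketch below assumes this form, which is consistent with the tabulated values.

```python
# A minimal sketch of the individual-desirability calculation used to build the table above.
import numpy as np

def desirability(R, lower=0.25, upper=0.75, s=1.0):
    d = (R - lower) / (upper - lower)      # scale the response between the two limits
    return np.clip(d, 0.0, 1.0) ** s       # clip to [0, 1], then apply the exponent s

responses = np.arange(0.0, 1.01, 0.1)
for s in (0.5, 1.0, 5.0):
    print(f"s = {s}:", np.round(desirability(responses, s=s), 3))
```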
Investigation 27
Compare the response surface for danshensu’s individual desirability (Figure 16) to its response surface in terms of extraction yield (Figure 10). In what ways are these response surfaces similar and in what ways are they different?
The two response surfaces are similar in showing how a change in extraction time affects the extraction yield, with longer extraction times resulting in greater extraction yields. Both response surfaces show that the extraction yield increases as the solvent-to-solid ratio increases from its lower limit, although the response surface for danshensu’s individual desirability does not show a decrease in the extraction yield for larger solvent-to-solid ratios. The response surface for danshensu’s individual desirability, with its large plateau, shows more clearly that the optimum condition for extracting danshensu is relatively insensitive to a change in the extraction time and the solvent-to-solid ratio.
Investigation 28
To explore the effect on the global desirability of weighting analytes, let’s assume we have four analytes with individual desirabilities of 0.90, 0.80, 0.70, and 0.60. What is the global desirability if you (a) weight the factors evenly by assigning each an r of 1; (b) assign a weight of 3 to the first analyte and a weight of 1 to the other three analytes; (c) assign a weight of 5 to the first analyte and a weight of 1 to the other three analytes; (d) assign a weight of 3 to the last analyte and a weight of 1 to the other three analytes; and (e) assign a weight of 2 to the second and third analytes and a weight of 1 to the first and last analyte? Examine your results and discuss any trends you see.
The global desirabilities are
(a) \(D=(0.90×0.80×0.70×0.60)^{1/4}=0.742\)
(b) \(D=((0.90)^3×0.80×0.70×0.60)^{1/6}=0.791\)
(c) \(D=((0.90)^5×0.80×0.70×0.60)^{1/8}=0.817\)
(d) \(D=(0.90×0.80×0.70×(0.60)^3 )^{1/6}=0.691\)
(e) \(D=(0.90×(0.80)^2×(0.70)^2×0.60)^{1/6}=0.744\)
Let’s use the global desirability of 0.742 in (a) as a reference as in this case we assign an equal importance to each analyte. In (b) we see that increasing the relative importance of the first analyte, which has the largest individual desirability, increases the global desirability; increasing the first analyte’s relative importance further increases the global desirability, as seen in (c). For (d) we see that increasing the relative importance of the last analyte, which has the smallest individual desirability, decreases the global desirability. In (e) we see that increasing the relative importance of the middle two analytes—one with an individual desirability slightly larger than 0.742 and one with an individual desirability slightly smaller than 0.742—yields a global desirability similar to that in (a); it is slightly larger than 0.742 because 0.80 is further from 0.742 than is 0.70.
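The weighted form used in (a) through (e), $D = \left(\prod_i d_i^{r_i}\right)^{1/\sum_i r_i}$, is easy to script; a minimal sketch (the function name is just for illustration):

```python
# A minimal sketch of the weighted global desirability for cases (a)-(e) above.
import numpy as np

def global_desirability(d, r):
    d, r = np.asarray(d, float), np.asarray(r, float)
    return np.prod(d ** r) ** (1.0 / r.sum())   # weighted geometric mean

d = [0.90, 0.80, 0.70, 0.60]
cases = [("a", [1, 1, 1, 1]), ("b", [3, 1, 1, 1]), ("c", [5, 1, 1, 1]),
         ("d", [1, 1, 1, 3]), ("e", [1, 2, 2, 1])]
for label, r in cases:
    print(label, round(global_desirability(d, r), 3))   # 0.742, 0.791, 0.817, 0.691, 0.744
```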
Investigation 29
A comparison of Figure 16 and Figure 17 shows that the global desirability function has a smaller range of maximum values than does the individual desirability function for danshensu. Which analytes limit the range of optimum values for the global desirability function? Based on Figure 17, what is the range of extraction times and range of solvent-to-solid ratios that result in an optimum global desirability? Given the range of possible values for the extraction time and the solvent-to-solid ratio, what values are the best option? Why?
To evaluate the relative importance of an analyte, recall that its individual desirability has a value of 1.00 when its extraction yield is greater than 95% of its maximum extraction yield. As an example, consider the response surface for danshensu, an annotated version of which is shown here. Danshensu’s maximum extraction yield (see Investigation 25) is 0.821 mg/g, and 95% of this value is 0.78 mg/g. From Figure 10, we see that 0.78 mg/g corresponds to the fifth contour line and that everything to the right of this contour line has an individual desirability of 1.00.
A similar analysis for the other analytes shows that cryptotanshinone has an individual desirability of 1.00 when its extraction yield is greater than 0.84 mg/g, that tanshinone I has an individual desirability of 1.00 when its extraction yield is greater than 1.28 mg/g, and that tanshinone IIA has an individual desirability of 1.00 when its extraction yield is greater than 2.65 mg/g; in all three cases, these encompass large portions of their overall response surfaces (see Figures 13, 14, and 15, respectively). This is not the case for lithospermic acid (its individual desirability is 1.00 when its extraction yield exceeds 2.58 mg/g) or for salvianolic acid A (its individual desirability is 1.00 when its extraction yield exceeds 0.59 mg/g); for both analytes, the maximum individual desirability is limited to a small area in the response surface’s upper right corner (see Figures 11 and 12, respectively); thus, lithospermic acid and salvianolic acid A limit the choice of factor levels.
From Figure 17, the area encompassing a global desirability of 1.00 includes the following combinations of extraction times and solvent-to-solid ratios:
- 7.20 min and 37.0–39.0 mL/g
- 7.40 min and 34.0–39.0 mL/g
- 7.60 min and 33.0–39.0 mL/g
- 7.80 min and 32.0–39.0 mL/g
Any extraction time and solvent-to-solid ratio in this area will work; an extraction time of 7.50 min and a solvent-to-solid ratio of 35.0 mL/g, which is in the center of this area, is a reasonable compromise.
Our final optimized conditions for extracting Danshen are a solvent that is 80% methanol and 20% water (by volume), a temperature of 70°C, a microwave power of 800 W, an extraction time of 7.50 min, and a solvent-to-solid ratio of 35.0 mL/g.
Note: Although this case study does not include salvianolic acid B, which is included in the original paper, the optimized conditions arrived at here are identical to those in the original paper.
Investigation 30
In Part V we found that the empirical model for the extraction of danshensu is
$EY=0.575+0.0225A+0.00905B-0.00125A^2-0.000165B^2+0.000100AB\nonumber$
where EY is the extraction yield (in mg/g), A is the extraction time (in min), and B is the solvent-to-solid ratio (in mL/g). Using this model, calculate danshensu’s predicted extraction yield for an extraction time of 7.50 min and a solvent-to-solid ratio of 35.0 mL/g. Is your predicted extraction yield consistent with the data in Table 2 and your response to Investigation 25?
Substituting into our empirical model for the extraction of danshensu an extraction time of 7.50 min and a solvent-to-solid ratio of 35.0 mL/g gives a predicted extraction yield of 0.814 mg/g. This result is consistent with the data in Table 2 for danshensu’s central-composite design, which suggests that its extraction yield is between 0.805 mg/g (for an extraction time of 7.00 min and a solvent-to-solid ratio of 35.0 mL/g) and 0.820 mg/g (for an extraction time of 7.82 min and a solvent-to-solid ratio of 25.0 mL/g), with its value closer to 0.805 mg/g.
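A one-line check of this substitution, using the model quoted in Investigation 30 (the function name is just for illustration):

```python
# A minimal sketch evaluating the empirical model for danshensu's extraction yield.
def extraction_yield(A, B):
    """A = extraction time (min), B = solvent-to-solid ratio (mL/g); returns EY in mg/g."""
    return (0.575 + 0.0225*A + 0.00905*B
            - 0.00125*A**2 - 0.000165*B**2 + 0.000100*A*B)

print(round(extraction_yield(7.50, 35.0), 3))   # 0.814 mg/g
```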
Investigation 31
Figure 18 shows the chromatogram for a sample of Danshen extracted using the optimized conditions from Part VI. Using this chromatogram, calculate the actual extraction yield for each analyte and report its experimental extraction yield as a percentage of its predicted extraction yield from Table 3. Do your results provide confidence in our analytical method? Why or why not?
From Investigation 19, we know that the extraction yield, EY, is
$EY\left(\dfrac{\text{mg}}{\text{g}}\right)=\dfrac{A\ \text{(mAU)}\times V\ \text{(mL)}}{k\left(\dfrac{\text{mAU}\cdot\text{mL}}{\text{μg}}\right)\times m\ \text{(g)}}\times\dfrac{1\ \text{mg}}{1000\ \text{μg}}\nonumber$
where A is the absorbance, V is the volume of solvent, k is an analyte-specific calibration constant, and m is the sample’s mass. The table below provides the absolute experimental extraction yields, EY, and the experimental extraction yields expressed as a percentage of the predicted extraction yield, %EY, using a volume of 35.0 mL and a sample of 1.000 g.
| analyte | absorbance (mAU) | k (mAU•mL/μg) | EY (mg/g) | %EY |
|---|---|---|---|---|
| danshensu | 36.4 | 1.605 | 0.794 | 97.5 |
| rosmarinic acid | 62.8 | 0.878 | 2.503 | 108.2 |
| lithospermic acid | 39.7 | 0.536 | 2.592 | 97.6 |
| salvianolic acid A | 26.4 | 1.585 | 0.583 | 97.2 |
| dihydrotanshinone | 33.1 | 2.841 | 0.408 | 96.2 |
| cryptotanshinone | 49.6 | 1.882 | 0.922 | 100.5 |
| tanshinone I | 59.4 | 1.599 | 1.300 | 97.3 |
| tanshinone IIA | 115.7 | 1.467 | 2.760 | 99.9 |
The percent extraction yields range from a low of 96.2% for dihydrotanshinone to a high of 108.2% for rosmarinic acid—the two analytes whose extraction yields could not be modeled—with an average percent extraction yield of 99.3%. These results suggest the empirical models for each analyte’s extraction yield provide a good estimation of the actual extraction yields.
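The calculation used to fill the table above is a direct application of the equation from Investigation 19; a minimal sketch, using the danshensu entry as an example (V = 35.0 mL, m = 1.000 g, predicted yield 0.814 mg/g from Investigation 30; the function name is illustrative):

```python
# A minimal sketch of the extraction-yield calculation from a chromatographic peak.
def extraction_yield(A_mAU, k, V_mL=35.0, m_g=1.000):
    """EY in mg/g from peak absorbance A (mAU) and calibration constant k (mAU*mL/ug)."""
    return (A_mAU * V_mL) / (k * m_g) / 1000.0   # ug/g converted to mg/g

ey = extraction_yield(36.4, 1.605)
print(round(ey, 3))                   # 0.794 mg/g for danshensu
print(round(100 * ey / 0.814, 1))     # ≈ 97.5% of the predicted 0.814 mg/g
```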
Note: The predicted extraction yields are derived from Table 2 of the original paper. The chromatogram in Figure 18 is derived from Table 3 of the original paper.
Investigation 32
Compare your results from Investigation 31 with the results reported in Table 4. Do these results support a concern that heat-reflux extractions may distort the apparent composition of Danshen? As you consider this question, you may wish to review the chemical structures of these compounds, which are shown in Part I, and the HPLC data in Figure 19 for samples drawn at different times during an extended heat-reflux extraction of Danshen.
The table below reports the extraction yields for the three heat-reflux extractions as a percentage of the microwave-extraction yields from Investigation 31. With the exception of danshensu and of lithospermic acid using HRE-1, the results for the remaining analytes are significantly less than 100%, ranging from a low of 59.0% for cryptotanshinone using HRE-3 to a high of 85.9% for dihydrotanshinone using HRE-2; this suggests that heat-reflux extractions result in the thermal degradation of the analytes. The percentage extraction yield of 101.2% for lithospermic acid using HRE-1 is inconsistent with its results using HRE-2 and HRE-3 and most likely is an outlier.
**Extraction Yields as % of the Extraction Yield for the Microwave Extraction**

| analyte | HRE-1 | HRE-2 | HRE-3 |
|---|---|---|---|
| danshensu | 205.3 | 104.8 | 133.5 |
| rosmarinic acid | 81.1 | 80.4 | 64.6 |
| lithospermic acid | 101.2 | 67.6 | 85.7 |
| salvianolic acid A | 76.4 | 76.8 | 79.8 |
| dihydrotanshinone | 85.4 | 85.9 | 71.6 |
| cryptotanshinone | 62.0 | 65.0 | 59.0 |
| tanshinone I | 69.6 | 74.8 | 70.6 |
| tanshinone IIA | 72.1 | 84.2 | 64.2 |
The results for danshensu require a closer consideration as we need to determine if they represent an underreporting of danshensu when using a microwave-assisted extraction or if they are the result of thermal degradation of other compounds during a heat-reflux extraction. Two observations lead us to the latter possibility. First, the HPLC chromatograms in Figure 19, which focus on danshensu’s peak, show an increase in its peak height and, therefore, an increase in danshensu’s concentration with longer exposures to an elevated temperature; this suggests that the concentration of danshensu may increase as a result of the thermal degradation of other compounds. Second, the structures of rosmarinic acid, lithospermic acid, and salvianolic acid A support this possibility, as each compound is an ester, one part of which is danshensu. It seems likely that hydrolysis of the ester bond releases danshensu; thus, as the concentrations of rosmarinic acid, lithospermic acid, and salvianolic acid A decrease, the concentration of danshensu increases. This further supports the concern that a heat-reflux extraction distorts our understanding of Danshen’s composition.
Note: The data in Table 4 is taken from Table 3 of the original paper.
Investigation 33
Explain why analyzing a sample before and after adding a known amount of an analyte allows you to evaluate a method’s accuracy. Figure 20 shows the chromatogram for a sample of Danshen spiked prior to the microwave extraction with known amounts of each analyte, the concentrations of which are shown in Table 5. Using this data and your results for the unspiked sample in Investigation 31, how confident are you in the accuracy of our analytical method?
The process of analyzing a sample before and after adding a known amount of analyte is called a spike recovery. We first analyze a sample and determine the concentration of analyte in the sample. Next, we spike an identical sample with a known amount of analyte and determine the concentration of analyte in the spiked sample. The percent recovery is defined as
$\dfrac{C_\ce{spiked}-C_\ce{unspiked}}{C_\ce{added}} \times 100\nonumber$
where Cspiked is the analyte’s concentration in the spiked sample, Cunspiked is the analyte’s concentration in the original, unspiked sample, and Cadded is the known concentration of analyte added to the spiked sample. If we lose some analyte to thermal degradation during the extraction, then we will obtain a spike recovery significantly less than 100%, and if a different analyte converts to our analyte during the extraction (as is the case for the data in Table 4 and in Figure 19), then we will obtain a spike recovery significantly greater than 100%. Obtaining a spike recovery of 100% provides confidence that the analytes are not degraded during the extraction.
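A minimal sketch of this calculation, using the danshensu entries from the table that follows (a spiked concentration of 1.293 mg/g, an unspiked concentration of 0.794 mg/g, and an added concentration of 0.500 mg/g; the function name is illustrative):

```python
# A minimal sketch of the spike-recovery calculation.
def percent_recovery(c_spiked, c_unspiked, c_added):
    return 100.0 * (c_spiked - c_unspiked) / c_added

print(round(percent_recovery(1.293, 0.794, 0.500), 1))   # 99.8% for danshensu
```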
The table below summarizes results for the spike recoveries. The first column gives the absorbance values extracted from Figure 20. The concentrations of analytes in the spiked sample were calculated as in Investigation 31 and the concentrations of analytes in the unspiked sample are taken from Figure 18 and from Investigation 31. Individual spike recoveries range from a low of 93.7% for tanshinone I to a high of 103.4% for tanshinone IIA. The average spike recovery is 99.6%. With the possible exception of the spike recovery for tanshinone I, which is a bit low, these results provide confidence in our analytical method’s accuracy.
| analyte | absorbance (mAU) | Cspiked (mg/g) | Cunspiked (mg/g) | Cadded (mg/g) | % Recovery |
|---|---|---|---|---|---|
| danshensu | 59.3 | 1.293 | 0.794 | 0.500 | 99.8 |
| rosmarinic acid | 126.1 | 5.023 | 2.503 | 2.500 | 100.8 |
| lithospermic acid | 78.8 | 5.146 | 2.592 | 2.500 | 102.2 |
| salvianolic acid A | 49.4 | 1.091 | 0.583 | 0.500 | 101.6 |
| dihydrotanshinone | 73.3 | 0.903 | 0.408 | 0.500 | 99.0 |
| cryptotanshinone | 101.5 | 1.888 | 0.922 | 1.000 | 96.6 |
| tanshinone I | 102.2 | 2.237 | 1.300 | 1.000 | 93.7 |
| tanshinone IIA | 224.0 | 5.344 | 2.760 | 2.500 | 103.4 |
Note: The original paper reports that the spike recoveries range from a low of 94.6% to a high of 106.3%, but does not report the spike recoveries for individual analytes. The data for this investigation were generated artificially.
Investigation 34
Calculate the concentration of danshensu and the concentration of tanshinone I in each sample. For each set of samples—wild samples and cultivated samples—calculate the mean, the standard deviation, and the relative standard deviation for each analyte and comment on your results.
The table below reports the concentration of danshensu and tanshinone I in each sample.
| Danshen source | concentration (mg/g) for danshensu | concentration (mg/g) for tanshinone I |
|---|---|---|
| **Wild Samples (Cities in Shandong Province)** | | |
| Sanshangou | 0.472 | 2.71 |
| Yuezhuang | 0.224 | 1.21 |
| Dazhangzhuang | 0.257 | 1.48 |
| Pingse | 0.812 | 0.92 |
| Mengyin | 0.217 | 2.89 |
| **Cultivated Samples (Lot Number)** | | |
| 020208 | 0.511 | 2.99 |
| 020209 | 0.517 | 3.00 |
| 020210 | 0.509 | 3.01 |
| 020211 | 0.497 | 3.24 |
| 020212 | 0.512 | 3.30 |
For the wild samples of Danshen originating from Shandong Province, the mean concentration for danshensu is 0.396 mg/g with a standard deviation of 0.255 and a relative standard deviation of 64.3%; the mean concentration for tanshinone I is 1.84 mg/g with a standard deviation of 0.899 and a relative standard deviation of 48.8%.
For the cultivated samples of Danshen, the mean concentration for danshensu is 0.509 mg/g with a standard deviation of 0.0074 and a relative standard deviation of 1.5%. The mean concentration for tanshinone I in the cultivated samples is 3.11 mg/g with a standard deviation of 0.150 and a relative standard deviation of 4.8%.
The concentrations of danshensu and tanshinone I in the wild samples of Danshen show substantial variability for plants collected from different locations, as evinced by the large standard deviations and relative standard deviations. The small standard deviations and relative standard deviations for danshensu and for tanshinone I in the cultivated samples show that there is a much smaller variation between plants. These results are not surprising as we might reasonably expect the concentrations of Danshen’s hydrophilic and lipophilic compounds to be sensitive to their local environment and to the consistency in the water and nutrients reaching the plants.
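A short sketch that reproduces the danshensu statistics quoted above from the concentrations in the table (the tanshinone I values can be handled the same way):

```python
# A minimal sketch of the mean, sample standard deviation, and %RSD calculations.
import numpy as np

wild       = np.array([0.472, 0.224, 0.257, 0.812, 0.217])   # mg/g, wild samples
cultivated = np.array([0.511, 0.517, 0.509, 0.497, 0.512])   # mg/g, cultivated lots

for label, c in (("wild", wild), ("cultivated", cultivated)):
    mean, s = c.mean(), c.std(ddof=1)
    print(f"{label}: mean = {mean:.3f} mg/g, s = {s:.4f}, %RSD = {100 * s / mean:.1f}")
```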
Note: The data in Table 6 are drawn from the paper “Simultaneous Determination of Seven Active Compounds in Radix Salviae Miltiorrhizae by Temperature-Controlled Ultrasound-Assisted Extraction and HPLC,” the full reference for which is Qu, H.; Zhai, X.; Shao, Q.; Cheng, Y. Chromatographia 2007, 66, 21–27. (DOI: 10.1365/s10337-007-0244-4).
Part IX
For instructors interested in building into their laboratory curriculum a method development exercise based on the use of response surfaces, the following experiments from the Journal of Chemical Education may be of interest:
“Introduction to the Design and Optimization of Experiments Using Response Surface Methodology. A Gas Chromatography Experiment for the Instrumentation Laboratory,” Lang, P. L.; Miller, B. I.; Nowak, A. T. J. Chem. Educ., 2006, 83, 280–282.
“Experimental Design and Optimization: Application to a Grignard Reaction,” Bouzidi, N.; Gozzi, C. J. Chem. Educ., 2008, 85, 1544–1547.
“Visualizing the Solute Vaporization Interference in Flame Atomic Absorption Spectroscopy,” Dockery, C. R.; Blew, M. J.; Goode, S. R. J. Chem. Educ., 2008, 85, 854–858.
“Attaining Optimal Conditions: An Advanced Undergraduate Experiment that Introduces Experimental Design and Optimization,” Van Ryswyk, H.; Van Hecke, G. R. J. Chem. Educ., 1991, 68, 878–882.
“Optimization of HPLC and GC Separations Using Response Surfaces: Three Experiments for the Instrumental Analysis Laboratory,” Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. J. Chem. Educ., 1991, 68, 162–168.
“Central Composite Experimental Designs: Applied to Chemical Systems,” Palasota, J. A.; Deming, S. N. J. Chem. Educ., 1992, 69, 560–563.
“Mixture Design Experiments Applied to the Formulation of Colorant Solutions,” Gozálvez, J. M.; García-Díaz, J. C. J. Chem. Educ., 2006, 83, 647–650.
“Experimental Design, Near-Infrared Spectroscopy, and Multivariate Calibration: An Advanced Project in Chemometrics,” J. Chem. Educ., 2012, 89, 1566–1571.