Learning Objectives

You should be able to:
• describe the reactions involved in the speciation of metals in the aquatic and soil environments.
• explain the equilibrium approach to modelling metal speciation.
• identify which water and soil properties impact the fate of metals.
• describe how processes such as competition and sorption impact metal bioavailability.
• explain why the environmental fate of metals is dynamic.

Keywords: Metal complexation, redox reactions, equilibrium reactions, water chemistry, soil properties.

Introduction

Metals occur in different physical and chemical forms in the environment, for example as the element (very rare in the environment), as components of minerals, as free cations dissolved in water (e.g. Cd2+), or bound to inorganic or organic molecules in either the solid or dissolved phases (e.g. CH3Hg+ or AgCl2-) (Allen, 1993). The distribution of a metal over these different forms is referred to as metal speciation. Physical processes may also affect the mobility and bioavailability of metals, for example the electrostatic attraction of metal cations to negatively charged mineral surfaces. These processes are in general not referred to as metal speciation in the strict sense, but they are discussed here.

Metal speciation reactions

The speciation of metals is controlled by both the properties of the metals (see the section on Metals and metalloids) and the properties of the environment in which they are present, such as the pH, the redox potential, and the presence, concentrations and properties of molecules that can form complexes with the metals. These complex-forming molecules are often called ligands, and they range from relatively simple anions in solution, such as sulphate or the anions of soluble organic acids, to more complex macromolecules such as proteins and other biomolecules. The adsorption of metals by covalent bond formation to oxide and hydroxide surfaces of minerals, and to oxygen- or nitrogen-containing functional groups of solid organic matter, is also referred to as complexation. Since these metal-binding functional groups are often either acidic or basic, the pH is an important environmental parameter controlling complexation reactions.

In natural systems the speciation of metals is highly complex and determines their mobility in the environment and their bioavailability (i.e. how easily they are taken up by organisms). Metal speciation therefore plays a key role in determining the potential bioaccumulation and toxicity of metals and should be considered when assessing their ecological risks. Metal bioavailability and transport are in particular strongly related to the distribution of metals over the solid and liquid phases of the environmental matrix. The four main chemical reactions determining metal speciation, i.e. the binding of metal ions to ligands and their presence in solid and liquid phases, are (Bourg, 1995):
• adsorption and desorption processes
• ion exchange and dissolution reactions
• precipitation and co-precipitation
• complexation to inorganic and organic ligands

The complexity of these reactions is illustrated in Figure 1.

Figure in preparation

Figure 1. Metal (M) speciation in the environment is determined by a number of reactions, including complexation, precipitation and sorption. These reactions affect the partitioning of metals across solid and liquid phases, and hence their mobility as well as their bioavailability.
Adsorption, desorption and ion exchange processes take place with the reactive components present in soils, sediments and, to a lesser extent, in water. These include:
• clay minerals
• hydroxides (e.g. Fe(OH)3) and/or carbonates (e.g. CaCO3)
• organic matter

Metal ions react with these reactive components in different ways. In soils and sediments, cationic metals bind reversibly to clay minerals via cation-exchange processes (see section on Soil). Metal ions also form complexes with so-called functional groups (mainly carboxylic and phenolic groups) present in organic matter (see section on Soil). In aquatic systems similar binding processes occur, in which dissolved organic matter (or carbon) (DOM or DOC) plays a major role. The "dissolved" fraction of organic matter is operationally defined as the fraction passing a 0.45 µm filter and largely consists of fulvic and humic acids.

As mentioned above and in the section on Soil, negatively charged surfaces of the reactive mineral and organic components present in soil, sediment or water attract positively charged atoms or molecules (cations, e.g. Cd2+), and allow these cations to exchange with other positively charged ions. The competition between cations for binding sites is driven by the binding affinity of each metal species, as well as by the concentration of each metal species. The cation-exchange capacity (CEC) is a property of the sorbent, defined as the density of available negatively charged sites per mass of environmental matrix (soil, sediment). In effect, it is a measure of how many cations can be retained on solid surfaces, and it is usually expressed in cmolc/kg soil (see section on Soil). Increasing the pH (i.e. decreasing the concentration of H+ ions) increases the variable charge of most sorbents (more types of protonated groups on sorbent surfaces release their H+), especially for organic matter, and therefore also increases the cation exchange capacity. Protons (H+) also compete with metal ions for the same binding sites. Conversely, at decreasing pH (increasing H+ concentrations), most sorbents lower their CEC.

Modelling metal speciation

Metal speciation can be modelled if we have sufficient knowledge of the most important reactions involved and of the environmental conditions that control these reactions. This knowledge is expressed in the form of equilibria describing the most important complexation and/or redox reactions. For example, in the general case of a complexation reaction between metal M and ligand L described by the equilibrium:

aMm+ (aq) + bLn- (aq) ↔ MaLbq+ (aq)    (where q = am − bn)

the relationship between the concentrations (or, more accurately, the activities) of the species is given by the equilibrium constant:

K = {MaLbq+} / ({Mm+}a {Ln-}b)

If redox reactions are involved in speciation, we can use the Nernst equation to describe the equilibrium between the reduced and oxidised states of the metal:

Eh = Eh0 − (RT/nF) ln {Red/Ox}

where Eh is the redox potential, Eh0 the standard potential of the redox pair (relative to the hydrogen electrode), R the molar gas constant, T the temperature, n the number of transferred electrons, F the Faraday constant and {Red/Ox} the activity (or concentration) ratio of the reduced and oxidised species. Since many redox reactions involve the transfer of H+, the value of {Red/Ox} for these equilibria will depend on the pH. Note that the redox potential is often expressed as pe, which is defined as the negative logarithm of the electron activity (pe = −log {e-}).
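To make the use of the Nernst equation concrete, the following minimal Python sketch (added for illustration, not part of the original text) computes the {Red}/{Ox} activity ratio of the Fe3+/Fe2+ couple as a function of the redox potential. It assumes a standard potential of 0.77 V versus the standard hydrogen electrode, which corresponds to the peΘ of 13.05 used in Table 1 below; the Eh values are illustrative only.

```python
import math

# Physical constants
R = 8.314     # molar gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 298.15    # temperature, K (25 degrees C)

def red_ox_ratio(Eh, Eh0, n=1):
    """Activity ratio {Red}/{Ox} from the Nernst equation,
    Eh = Eh0 - (R*T)/(n*F) * ln({Red}/{Ox})."""
    return math.exp((Eh0 - Eh) * n * F / (R * T))

# Fe3+/Fe2+ couple; 0.77 V vs. the standard hydrogen electrode corresponds
# to the pe of 13.05 listed in Table 1 below (pe = Eh0*F/(2.303*R*T)).
Eh0_Fe = 0.77
for Eh in (0.90, 0.77, 0.50, 0.20):
    ratio = red_ox_ratio(Eh, Eh0_Fe)
    frac_fe2 = ratio / (1.0 + ratio)  # fraction of dissolved Fe present as Fe2+
    print(f"Eh = {Eh:4.2f} V: {{Fe2+}}/{{Fe3+}} = {ratio:9.3g}, "
          f"fraction Fe2+ = {frac_fe2:5.3f}")
```

At Eh = Eh0 the two species are equally abundant; at lower (more reducing) potentials the reduced form dominates. This is exactly the information that is summarized graphically by the boundary lines of a pe-pH diagram.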
Using these comparatively simple equations for all the relevant reactions involved, it is possible to construct models describing metal speciation as a function of ligand concentrations, pH and redox potential. As an example, Table 1 presents the relevant equilibria for the speciation of iron in water.

Table 1. Equilibrium reactions relevant for Fe in water (adapted from Essington, 2003)

Boundary | Equilibrium reaction | Constant
(1) | Fe3+ + e- ↔ Fe2+ | peΘ = 13.05
(2) | Fe(OH)3(s) + 3H+ ↔ Fe3+ + 3H2O | Ksp = 9.1 x 10^3 L^2 mol^-2
(3) | Fe(OH)2(s) + 2H+ ↔ Fe2+ + 2H2O | K*sp = 8.0 x 10^12 L mol^-1
(4) | Fe(OH)3(s) + H+ + e- ↔ Fe(OH)2(s) + H2O |
(5) | Fe(OH)3(s) + 3H+ + e- ↔ Fe2+ + 3H2O |

Using these equilibria, we can derive equations defining the conditions of pH and pe at which the activity or concentration ratio is one for each equilibrium. These are shown as the continuous boundary lines in Fig. 3. In this Pourbaix or pe-pH (or pE-pH) diagram, the fields separated by the boundary lines are labelled with the dominant species present under the conditions that define each field. (NB: the dotted lines define the conditions of pe and pH under which water is stable.) A code sketch at the end of this section illustrates how such boundary lines can be derived.

Environmental effects on speciation

In the environment, however, there is in general no equilibrium. This means that the speciation, and hence also the fate, of metals is highly dynamic. Large-scale alterations occur when land use changes, e.g. when agricultural land is abandoned and converted into a nature area. Whereas agricultural soil often is 'limed' (addition of CaCO3) to maintain a near-neutral pH and crops are removed by harvesting, in natural ecosystems all produced organic matter remains in the system. Natural soils therefore show an increase in soil organic matter content, while due to microbial decomposition processes the soil pH tends to decrease. As a result, the DOC concentration in the soil porewater will increase, while metal mobility is also increased by the decreasing soil pH (Cu2+ is more mobile than CuCO3). This may cause historical metal pollution to suddenly become available (the "chemical time bomb" effect). Large-scale reconstruction of rivers or deep soil digging for land planning and development may also affect environmental conditions in such a way that metal speciation changes. An example of this is the change in arsenic speciation in groundwater due to the drilling of wells in countries like Bangladesh; the introduction of oxygen and organic matter into the deeper groundwater caused a change of arsenic speciation, enhancing its solubility in water and therefore increasing human exposure (see section on Metals and metalloids).

Dynamic conditions do not only occur on large spatial and temporal scales; nature is also dynamic on smaller scales. Abiotic factors such as rain and flooding events, weather conditions, and redox status may alter metal speciation. In addition, biotic factors may affect metal speciation. Examples of the latter are bioturbation by sediment-dwelling organisms that re-suspend particles into the water, and earthworms that aerate the soil through their digging activities and excrete mucus that may stimulate microbial activity (see Figure 4A). These activities of soil and sediment organisms alter the environmental conditions and hence affect metal speciation (see Figure 4B). The production of acidic root exudates by plants may have similar effects on metal speciation. Another process that alters metal speciation is the uptake of metals itself. Since the ionic metal form seems most prone to root uptake, or to active uptake across cell membranes, this process may affect metal partitioning over the different species.
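As announced above, the following Python sketch (added for illustration, not part of the original text) shows how the boundary lines of the Fe pe-pH diagram can be derived from the constants in Table 1. The assumed dissolved-iron activity of 10^-5 M, which fixes the position of the solid/solution boundaries, is a common illustrative choice and not a value from the table.

```python
import math

# Constants from Table 1 (log10 values)
LOG_K1 = 13.05                  # (1) Fe3+ + e- <-> Fe2+ (peΘ)
LOG_KSP2 = math.log10(9.1e3)    # (2) Fe(OH)3(s) + 3H+ <-> Fe3+ + 3H2O

# Assumed activity of the dissolved Fe species that defines the
# solid/solution boundaries (illustrative choice, not from Table 1):
LOG_FE = -5.0

def boundary_1(pH):
    """Boundary (1), Fe3+/Fe2+ at equal activities: pe = 13.05,
    independent of pH (a horizontal line)."""
    return LOG_K1

def boundary_2():
    """Boundary (2), Fe(OH)3(s)/Fe3+ (a vertical line):
    log{Fe3+} = log Ksp - 3*pH, solved for pH at {Fe3+} = 10**LOG_FE."""
    return (LOG_KSP2 - LOG_FE) / 3.0

def boundary_5(pH):
    """Boundary (5), Fe(OH)3(s) + 3H+ + e- <-> Fe2+ + 3H2O. Adding
    reactions (1) and (2) gives log K = LOG_K1 + LOG_KSP2, and the
    equilibrium condition pe = log K - 3*pH - log{Fe2+}."""
    return (LOG_K1 + LOG_KSP2) - 3.0 * pH - LOG_FE

print(f"boundary (2) lies at pH = {boundary_2():.2f}")
for pH in (2, 4, 6, 8):
    print(f"pH {pH}: boundary (1) pe = {boundary_1(pH):.2f}, "
          f"boundary (5) pe = {boundary_5(pH):.2f}")
```

Plotting these lines against pH reproduces the skeleton of the Fe pe-pH diagram: above boundary (1) Fe3+ dominates over Fe2+, to the right of boundary (2) Fe(OH)3(s) is the stable form, and boundary (5) separates the Fe2+ field from the Fe(OH)3(s) field.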
References

Allen, H.E. (1993). The significance of trace metal speciation for water, sediment and soil quality criteria and standards. Science of the Total Environment 134, 23-45.

Andrews, J.E., Brimblecombe, P., Jickells, T.D., Liss, P.S., Reid, B. (2004). An Introduction to Environmental Chemistry, 2nd Edition, Blackwell, ISBN 0-632-05905-2 (chapter 6).

Blust, R., Fontaine, A., Decleir, W. (1991). Effect of hydrogen ions and inorganic complexing on the uptake of copper by the brine shrimp Artemia franciscana. Marine Ecology Progress Series 76, 273-282.

Bourg, A.C.M. (1995). Speciation of heavy metals in soils and groundwater and implications for their natural and provoked mobility. In: Salomons, W., Förstner, U., Mader, P. (Eds.). Heavy Metals. Springer, Berlin. p. 19-31.

Essington, M.E. (2003). Soil and Water Chemistry, CRC Press, ISBN 0-8493-1258-2 (chapters 5, 7 and 9).

Sparks, D.L. (2003). Environmental Soil Chemistry, 2nd Edition, Academic Press, ISBN 0-12-656446-9 (chapters 5, 6 and 8).

Sposito, G. (2008). The Chemistry of Soils, 2nd Edition, Oxford University Press, ISBN 978-0-19-531369-7 (chapter 4).

3.5. Question 1
What are the most important reactions involved in the speciation of metals in the aquatic and soil environments?

3.5. Question 2
What are the most important environmental parameters controlling these reactions?

3.5. Question 3
In the equilibrium approach to modelling metal speciation, dominance or pe-pH diagrams are used as a visual representation of speciation. What do the lines and fields in such diagrams represent?

• Authors: Martina Vijver, John Parsons
• Reviewers: Kees van Gestel, Ronny Blust, Steven Droge
4.1. Toxicokinetics

Author: Nico van Straalen
Reviewer: Kees van Gestel

Learning objectives: You should be able to
• describe the difference between toxicokinetics and toxicodynamics
• explain the use of different descriptors for the uptake of chemicals by organisms under steady state and dynamic conditions

Keywords: toxicodynamics, toxicokinetics, bioaccumulation, toxicokinetic rate constants, critical body residue, detoxification

Toxicology usually distinguishes between toxicokinetics and toxicodynamics. Toxicokinetics involves all processes related to uptake, internal transport and accumulation inside an organism, while toxicodynamics deals with the interaction of a compound with a receptor, the induction of defence mechanisms, damage repair and toxic effects. Of course the two sets of processes may interact; for instance, defence may feed back on uptake, and damage may change internal transport. Often, however, toxicokinetic analysis just tracks the chemical itself and ignores possible toxic effects. This holds up to a critical threshold, the critical body concentration, above which effects become obvious and the normal toxicokinetic analysis is no longer valid. The assumption that toxicokinetic rate parameters are independent of the internal concentration is due to the limited amount of information that can be obtained from animals in the environment. In so-called physiologically based pharmacokinetic and pharmacodynamic (PBPK) models, however, kinetics and dynamics are analyzed as an integrated whole. The use of such models is mostly limited to mammals and humans.

It must be emphasized that toxicokinetics considers fluxes and rates, i.e. mg of a substance moving per unit of time from one compartment to another. Fluxes may lead to a dynamic equilibrium, i.e. an equilibrium that is due to inflow being equal to outflow; when only the equilibrium conditions are considered, this is called partitioning.

In this Chapter 4.1 we will explore the various approaches in toxicokinetics, including the fluxes of toxicants through individual organisms and through trophic levels, as well as the biological processes that determine such fluxes. We start by comparing the concentrations of toxicants between organisms and their environment (section 4.1.1), and between organisms of different trophic levels (section 4.1.6). This leads to the famous concept of bioaccumulation, one of the properties of a substance that often leads to environmental problems. While in the past dilution was sometimes seen as a solution to pollution, this is not correct for bioaccumulating substances, since they may turn up at the next trophic level of the food chain and reach an even higher concentration. The bioaccumulation factor is one of the best-investigated properties characterizing the environmental behaviour of a substance. It may be predicted from properties of the substance such as the octanol-water partitioning coefficient.

In section 4.1.2 we discuss the classical theory of uptake-elimination kinetics using the one-compartment linear model. This theory is a crucial part of toxicological analysis. One of the first things you want to know about a substance is how quickly it enters an organism and how quickly it is removed. Since toxicity is basically a time-dependent process, the turnover rate of the internal concentration and the build-up of a residue depend upon the exposure time. An understanding of toxicokinetics is therefore critical to any interpretation of a toxicity experiment.
Rate parameters may partly be predicted from substance properties, but properties of the organism play a much greater role here. One of these is simply body mass; the prediction of elimination rate constants from body mass is done by allometric scaling relationships, explored in section 4.1.5. In two sections, 4.1.3 and 4.1.4, we present the biological processes that underlie the turnover of toxicants in an organism. These are very different for metals than for organic substances; hence, two separate sections are devoted to this topic, one on tissue accumulation of metals and one on defence mechanisms for organic xenobiotics. Finally, if we understand all toxicokinetic processes we will also be able to understand whether the concentration inside a target organ will stay below, or just passes, the threshold that can be tolerated. The critical body concentration, explored in section 4.1.7, is an important concept linking toxicokinetics to toxicity.

4.1. Question 1
Define and indicate the differences between
1. Toxicokinetics
2. Toxicodynamics
3. Partitioning
4. Accumulation
5. Bioaccumulation

4.1.1. Bioaccumulation

Author: Joop Hermens
Reviewers: Kees van Gestel and Philipp Mayer

Learning objectives: You should be able to
• define and explain different bioaccumulation parameters.
• mention different biological factors that may affect bioaccumulation.

Key words: Bioaccumulation, lipid content

Introduction: terminology for bioaccumulation

The term bioaccumulation describes the transfer and accumulation of a chemical from the environment into an organism. For a chemical like hexachlorobenzene, the concentration in fish is more than 10,000 times higher than in water, which is a clear illustration of "bioaccumulation". A chemical like hexachlorobenzene is hydrophobic and thus has a very low aqueous solubility. It therefore prefers to escape the aqueous phase and to partition into a more lipophilic phase such as the lipid phase in biota.

Uptake may take place from different sources. Fish mostly take up chemicals from the aqueous phase, organisms living at the sediment-water interface are exposed via the overlying water and sediment particles, organisms living in soil or sediment are exposed via pore water and by ingesting soil or sediment, while predators are exposed via their food. In many cases, uptake is related to more than one source. The different uptake routes are also reflected in the parameters and terminology used in bioaccumulation studies. The different parameters include the bioconcentration factor (BCF), the bioaccumulation factor (BAF), the biomagnification factor (BMF) and the biota-to-sediment or biota-to-soil accumulation factor (BSAF). Figure 1 summarizes the definitions of these parameters (with Cs the concentration in sediment or soil). Bioconcentration refers to uptake from the aqueous phase, bioaccumulation to uptake via both the aqueous phase and the ingestion of sediment or soil particles, while biomagnification expresses the accumulation of contaminants from food.

Please note that the bioaccumulation factor (BAF) is defined in a similar way as the bioconcentration factor (BCF), but that uptake can be both from the aqueous phase and from the sediment or soil, and that the exposure concentration usually is expressed per kg dry sediment or soil. Other definitions of the BAF are possible, but we have followed the one from Mackay et al. (2013):
"The bioaccumulation factor (BAF) is defined here in a similar fashion as the BCF; in other words, BAF is CF/CW at steady state, except that in this case the fish is exposed to both water and food; thus, an additional input of chemical from dietary assimilation takes place". All bioaccumulation factors are steady state constants: the concentration in the organism constant and the organisms is in equilibrium with its surrounding phase. It will take time before such an steady state is reached. Steady state is reached when uptake rate (for example from an aqueous phase) equals the elimination rate. Models that include the factor time in describing the uptake are called kinetic models; see section on Bioaccumulation kinetics. Effect of biological properties on accumulation Uptake of chemicals is determined by properties of both the organism and the chemical. For xenobiotic lipophilic chemicals in water, organism-specific factors usually play a minor role and concentrations in organisms can pretty well be predicted from chemical properties (see section on Structure-property relationships). For metals, on the contrary, uptake is to a large extent determined by properties of the organism, and a direct consequence of its mineral requirements. A chemical with low bioavailability (low uptake compared to concentration in exposure medium) may nevertheless accumulate to high levels when the organism is not capable of excreting or metabolising the chemical. Factors related to the organism are: • Fat content. Because lipophilic chemicals mainly accumulate in the fat of organisms, it is reasonable to assume that lipid rich organisms will have higher concentrations of lipophilic chemicals. See Figure 2 for an example of bioconcentration factors of 1,2,4-trichlorobenzene in a number of organisms with varying lipid content. This was one of the explanations for the high PCB levels in eel (high lipid content) in Dutch rivers and seals from the Wadden Sea. Nevertheless, large differences are still found between species when concentrations are expressed on a lipid basis. This may e.g. be explained by the fact that lipids are not all identical: PCBs seem to dissolve better in lipids from anchovies than in lipids from algae (see data in Table 1). Also more recent research has shown that not all lipids are the same and that differences in bioaccumulation between species may be due to differences in lipid composition (Van der Heijden and Jonker, 2011). Related to this is the development of BCF models that are based on multiple compartments and that make a separation between storage and membrane lipids and also include a protein fraction as additional sink (Armitage et al., 2013). In this model, the overall distribution coefficient DBW (or BCF) is estimated via equation 1. The equation is using distribution coefficients because the model also "accounts for the presence of neutral and charged chemical species" (Armitage et al., 2013). DBW overall organism-water distribution coefficient (or surrogate BCF) at a given pH DSL-W storage lipid-water distribution ratio DML-W membrane lipid-water distribution ratio DNLOM-W sorption coefficient to NLOM (non-lipid organic matter, for example proteins) fSL fraction of storage lipids fML fraction of membrane lipids fNLOM fraction of non-lipid organic matter (e.g. proteins, carbohydrates) fW fraction of water • Sex: Chemicals (such as DDT, PCB) accumulated in milk fat may be transferred to juveniles upon lactation. This was found in marine mammals. 
• Sex. Chemicals (such as DDT and PCBs) accumulated in milk fat may be transferred to juveniles upon lactation; this has been found in marine mammals. In this way, females have an additional excretion mechanism. A typical example is shown in Figure 3, taken from a study by Abarnou et al. (1986) on the levels of organochlorinated compounds in the Antarctic dolphin Cephalorhynchus commersonii. In males, concentrations increase with increasing age, but concentrations in mature females decrease with increasing age.

• Weight: the body mass of the organism relative to the surface area across which exchange with the water phase takes place. Smaller organisms have a larger surface-to-volume ratio, and the exchange with the surrounding aqueous phase is faster. Therefore, although lipid-normalized concentrations will be the same at equilibrium, this equilibrium is reached earlier in smaller than in larger organisms.

• Difference in uptake route: the relative importance of uptake through the skin (in fish e.g. the gills) and (oral) uptake through the digestive system. It is generally accepted that for most free-living organisms direct uptake from the water dominates over uptake after digestion in the digestive tract.

• Metabolic activity. Even at the same weight or the same age, the balance between uptake and excretion may change due to an increased metabolic activity, e.g. in times of fast growth or high reproductive activity.

Table 1. Mean PCB concentrations in algae (Dunaliella spec.), rotifers (Brachionus plicatilis) and anchovy larvae (Engraulis mordax), expressed on a dry-weight basis and on a lipid basis. From Moriarty (1983).

Organism | Lipid content (%) | PCB concentration based on dry weight (µg g-1) | PCB concentration based on lipid weight (µg g-1) | BCF based on concentration in the lipid phase
algae | 6.4 | 0.25 | 3.91 | 0.48 x 10^6
rotifer | 15.0 | 0.42 | 2.80 | 0.34 x 10^6
fish (anchovy) larvae | 7.5 | 2.06 | 27.46 | 13.70 x 10^6

Cited references

Abarnou, A., Robineau, D., Michel, P. (1986). Organochlorine contamination of Commerson's dolphin from the Kerguelen Islands. Oceanologica Acta 9, 19-29.

Armitage, J.M., Arnot, J.A., Wania, F., Mackay, D. (2013). Development and evaluation of a mechanistic bioconcentration model for ionogenic organic chemicals in fish. Environmental Toxicology and Chemistry 32, 115-128.

Geyer, H., Scheunert, I., Korte, F. (1985). Relationship between the lipid content of fish and their bioconcentration potential of 1,2,4-trichlorobenzene. Chemosphere 14, 545-555.

Mackay, D., Arnot, J.A., Gobas, F., Powell, D.E. (2013). Mathematical relationships between metrics of chemical bioaccumulation in fish. Environmental Toxicology and Chemistry 32, 1459-1466.

Moriarty, F. (1983). Ecotoxicology: The Study of Pollutants in Ecosystems. Academic Press, London.

Van der Heijden, S.A., Jonker, M.T.O. (2011). Intra- and interspecies variation in bioconcentration potential of polychlorinated biphenyls: are all lipids equal? Environmental Science and Technology 45, 10408-10414.

Suggested reading

Mackay, D., Fraser, A. (2000). Bioaccumulation of persistent organic chemicals: Mechanisms and models. Environmental Pollution 110, 375-391.

Van Leeuwen, C.J., Vermeire, T.G. (Eds.) (2007). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands. Chapter 3.

4.1.1. Question 1
What is the difference between BCF and BSAF?

4.1.1. Question 2
What are the main uptake routes for a sediment organism?

4.1.1. Question 3
Which biological factors may influence bioaccumulation?
4.1.2. Toxicokinetics

Author: Joop Hermens, Nico van Straalen
Reviewers: Kees van Gestel, Philipp Mayer

Learning objectives: You should be able to
• mention the underlying assumptions of the kinetic models for bioaccumulation
• understand the basic equations of a one-compartment kinetic bioaccumulation model
• explain the differences between one- and two-compartment models
• mention which factors affect the rate constants in a compartment model

Key words: Bioaccumulation, toxicokinetics, compartment models

In the section on Bioaccumulation, the process of bioaccumulation is presented as a steady state process. Differences in bioaccumulation between chemicals are expressed via, for example, the bioconcentration factor BCF. The BCF represents the ratio of the chemical concentration in, for instance, a fish versus the aqueous concentration in a situation where the concentrations in water and fish do not change in time:

BCF = Corg / Caq    (1)

where:
Caq    concentration in water (aqueous phase) (mg/L)
Corg   concentration in organism (mg/kg)

The unit of BCF is L/kg.

Kinetic models

Steady state can be established in a simple laboratory set-up where fish are exposed to a chemical at a constant concentration in the aqueous phase. From the start of the exposure (time 0, or t = 0), it will take time for the chemical concentration in the fish to reach steady state, and in some cases steady state will not be established within the exposure period. In the environment, exposure concentrations may fluctuate and, in such scenarios, constant concentrations in the organism will often not be established. Steady state is reached when the uptake rate (for example from an aqueous phase) equals the elimination rate. Models that include the factor time in describing the uptake of chemicals in organisms are called kinetic models.

Toxicokinetic models for the uptake of chemicals into fish are based on a number of processes for uptake and elimination. An overview of these processes is presented in Figure 1. In the case of fish, the major process of uptake is diffusion from the surrounding water via the gill to the blood. Elimination can occur via different processes: diffusion via the gill from the blood to the surrounding water, transfer to offspring or eggs by reproduction, growth (dilution) and internal degradation of the chemical (biotransformation).

Kinetic models to describe the uptake of chemicals into organisms are relatively simple and rest on the following assumptions:

First order kinetics: rates of exchange are proportional to the concentration. The change in concentration with time (dC/dt) is related to the concentration and a rate constant (k):

dC/dt = k × C    (2)

One compartment: it is often assumed that an organism consists of only one single compartment and that the chemical is homogeneously distributed within the organism. For "simple" small organisms this assumption is intuitively valid, but for large fish it looks unrealistic. Still, this simple model seems to work well even for fish. To describe the internal distribution of a chemical within fish, more sophisticated kinetic models are needed, similar to the ones applied in mammalian studies. These more complex models are the "physiologically based toxicokinetic" (PBTK) models (Clewell, 1995; Nichols et al., 2004).

Equations for the kinetics of the accumulation process

The accumulation process can be described as the sum of the rates of uptake and elimination:

dCorg/dt = kw × Caq − ke × Corg    (3)

Integration of this differential equation leads to equation 4.
Corg = (kw/ke) × Caq × (1 − e^(−ke×t))    (4)

Corg   concentration in organism (mg/kg)
Caq    concentration in aqueous phase (mg/L)
kw     uptake rate constant (L/kg·day)
ke     elimination rate constant (1/day)
t      time (day)

(dimensions used are: amount of chemical: mg; volume of water: L; weight of organism: kg; time: day); see box.

Box: The units of toxicokinetic rate constants

The differential equation underlying toxicokinetic analysis is basically a mass balance equation, specifying conservation of mass. A mass balance implies that the amount of chemical is expressed in absolute units such as mg. If Q is the amount in the animal and F the amount in the environmental compartment, the mass balance reads:

dQ/dt = k1' × F − k2 × Q

where k1' is the uptake rate constant and k2 the elimination rate constant, both with dimension time-1. However, it is often more practical to work with the concentration in the animal (e.g. expressed in mg/kg). This can be achieved by dividing the left and right sides of the equation by w, the body weight of the animal, and defining Cint = Q/w. In addition, we define the external concentration as Cenv = F/V, where V is the volume (L or kg) of the environmental compartment. This leads to the following formulation of the differential equation:

dCint/dt = k1' × (V/w) × Cenv − k2 × Cint

Beware that Cenv is measured in other units (mg per kg of soil, or mg per litre of water) than Cint (mg per kg of animal tissue). To get rid of the awkward factor V/w it is convenient to define a new rate constant, k1:

k1 = k1' × (V/w)

This is the uptake rate constant usually reported in scientific papers. Note that it has other units than k1': it is expressed as kg of soil per kg of animal tissue per time unit (kg kg-1 h-1), and in the case of water exposure as L kg-1 h-1. The dimension of k2 remains the same whether mass or concentrations are used (time-1). We also learn from this analysis that when dealing with concentrations, the body weight of the animal must remain constant.

Equation 4 describes the whole process, with the corresponding graphical representation of the uptake curve shown in Figure 2. The concentration in the organism is the result of the net effect of uptake and elimination. In the initial phase of the accumulation process, elimination is negligible and the uptake rate is given by:

dCorg/dt = kw × Caq    (5)

so that the concentration in the organism initially increases linearly with time:

Corg = kw × Caq × t    (6)

Steady state

After longer exposure times, elimination becomes more substantial and the uptake curve starts to level off. At some point, the uptake rate equals the elimination rate and the ratio Corg/Caq becomes constant. This is the steady state situation. The constant Corg/Caq at steady state is called the bioconcentration factor BCF. Mathematically, the BCF can also be calculated as kw/ke. This follows directly from equation 4: after a long exposure time, e^(−ke×t) becomes 0, leading to:

BCF = Corg/Caq = kw/ke    (7)

Elimination

Elimination is often measured following an uptake experiment. After the organism has reached a certain concentration, the fish are transferred to a clean environment and the concentration in the organism will decrease in time. Because this is also a first order kinetic process, the elimination rate will depend on the concentration in the organism (Corg) and the elimination rate constant (ke) (see equation 8). The concentration will decrease exponentially in time (equation 9), as shown in Figure 3A. Concentrations are often transformed to their natural logarithms (ln Corg), because this results in a linear relationship with slope −ke (equation 10 and Figure 3B).

dCorg/dt = −ke × Corg    (8)

Corg(t) = Corg(t=0) × e^(−ke×t)    (9)

ln Corg(t) = ln Corg(t=0) − ke × t    (10)

where Corg(t=0) is the concentration in the organism when the elimination phase starts.
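The following Python sketch (added for illustration; the rate constants are hypothetical, not taken from the text) simulates the one-compartment model: the uptake curve of equation 4, the steady state BCF of equation 7, the depuration curve of equation 9, and the half-life relation discussed below.

```python
import math

def c_org_uptake(t, c_aq, k_w, k_e):
    """Uptake curve (equation 4): Corg = (kw/ke)*Caq*(1 - exp(-ke*t))."""
    return (k_w / k_e) * c_aq * (1.0 - math.exp(-k_e * t))

def c_org_elimination(t, c_org0, k_e):
    """Depuration in clean water (equation 9): Corg = Corg(t=0)*exp(-ke*t)."""
    return c_org0 * math.exp(-k_e * t)

# Hypothetical parameters: kw = 100 L/kg/day, ke = 0.05 /day, Caq = 0.01 mg/L
k_w, k_e, c_aq = 100.0, 0.05, 0.01

bcf = k_w / k_e                  # equation 7: BCF = kw/ke
half_life = math.log(2) / k_e    # T1/2 = ln(2)/ke
print(f"BCF = {bcf:.0f} L/kg, elimination half-life = {half_life:.1f} days")

steady_state = bcf * c_aq
for t in (1, 7, 28, 120):        # days of exposure
    c = c_org_uptake(t, c_aq, k_w, k_e)
    print(f"day {t:3d}: Corg = {c:5.2f} mg/kg "
          f"({c / steady_state:6.1%} of steady state)")

# Transfer the fish to clean water at day 120 and follow depuration:
c0 = c_org_uptake(120, c_aq, k_w, k_e)
print(f"after 14 days of depuration: {c_org_elimination(14, c0, k_e):.2f} mg/kg")
```

Note how roughly five half-lives (here about 70 days) are needed before the organism is within a few percent of steady state; this is why short exposure tests can underestimate the BCF of slowly eliminated chemicals.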
The half-life (T1/2 or DT50) is the time needed to eliminate half the amount of chemical from the compartment. The relationship between ke and T1/2 is: T1/2 = (ln 2)/ke. The half-life increases when ke decreases.

Multicompartment models

Very often, organisms cannot be considered as one compartment, but rather as two or even more compartments (Figure 4A). Deviations from the one-compartment system usually become apparent when elimination does not follow the expected exponential pattern: no linear relationship is obtained after logarithmic transformation. Figure 4B shows the typical trend of elimination in a two-compartment system. The decrease in concentration (on a logarithmic scale) shows two phases: phase I with a relatively fast decrease and phase II with a relatively slow decrease. According to the linear compartment theory, elimination may then be described as the sum of two (or more) exponential terms:

Corg(t) = Corg(t=0) × (F(I) × e^(−ke(I)×t) + F(II) × e^(−ke(II)×t))    (11)

where ke(I) and ke(II) represent the elimination rate constants for compartments I and II, and F(I) and F(II) are the sizes of the compartments (as fractions). Typical examples of two-compartment systems are:
• blood (I) and liver (II)
• liver tissue (I) and fat tissue (II)

Elimination from fat tissue is often slower than from, for example, the liver. The liver is a well-perfused organ, while the exchange between lipid tissue and blood is much slower. This explains the faster elimination from the liver.

Examples of uptake curves for different chemicals and organisms

Figure 5 gives uptake curves for two different chemicals and the corresponding kinetic parameters. Chemical 2 has a BCF of 1000, chemical 1 a BCF of 10,000. Uptake rate constants (kw) are the same, which is often the case for organic chemicals. Half-lives (here: the time needed to reach 50% of the steady state level) are 14 and 140 hours. This makes sense, because it will take a longer time to reach steady state for a chemical with a higher BCF. The elimination rate constants accordingly also differ by a factor of 10.

In Figure 6, uptake curves are presented for one chemical, but in two organisms of different size/weight. Organism 1 is much smaller than organism 2 and reaches steady state much earlier. T1/2 values for the chemical in organisms 1 and 2 are 14 and 140 hours, respectively. The small size explains this fast equilibration. Rates of uptake depend on the surface-to-volume (S/V) ratio of an organism, which is much higher for a small organism. Kinetics in small organisms are therefore faster, resulting in shorter equilibration times. The effect of size on kinetics is discussed in more detail in Hendriks et al. (2001) and in the section on Allometric Relationships.

Bioaccumulation involving biotransformation and different routes of uptake

In equation 3, elimination only includes gill elimination. If other processes such as biotransformation and growth are considered, the equation can be extended to include these additional processes:

dCorg/dt = kw × Caq − (ke + km + kg) × Corg    (12)

where km and kg are the rate constants for biotransformation and growth dilution, respectively. For organisms living in soil or sediment, different routes of uptake may be of importance: dermal (across the skin), or oral (by ingestion of food and/or soil or sediment particles). Mathematically, the uptake in an organism in sediment can be described as in equation 13.
dCorg/dt = kw × Caq + ks × Cs − ke × Corg    (13)

Corg   concentration in organism (mg/kg)
Caq    concentration in aqueous phase (mg/L)
Cs     concentration in soil or sediment (mg/kg)
kw     uptake rate constant from water (L/kg/day)
ks     uptake rate constant from soil or sediment (kgsoil/kgorganism/day)
ke     elimination rate constant (1/day)
t      time (day)

(dimensions used are: amount of chemical: mg; volume of water: L; weight of organism: kg; time: day)

In this equation, kw and ks are the uptake rate constants from water and sediment, ke is the elimination rate constant, and Caq and Cs are the concentrations in water and in sediment or soil. For soil organisms, such as earthworms, oral uptake appears to become more important with increasing hydrophobicity of the chemical (Jager et al., 2003). This is because the concentration in soil (Cs) will become higher than the porewater concentration (Caq) for the more hydrophobic chemicals (see section on Sorption).

References

Clewell, H.J., 3rd (1995). The application of physiologically based pharmacokinetic modeling in human health risk assessment of hazardous substances. Toxicology Letters 79, 207-217.

Hendriks, A.J., van der Linde, A., Cornelissen, G., Sijm, D. (2001). The power of size. 1. Rate constants and equilibrium ratios for accumulation of organic substances related to octanol-water partition ratio and species weight. Environmental Toxicology and Chemistry 20, 1399-1420.

Jager, T., Fleuren, R., Hogendoorn, E.A., De Korte, G. (2003). Elucidating the routes of exposure for organic chemicals in the earthworm, Eisenia andrei (Oligochaeta). Environmental Science and Technology 37, 3399-3404.

Nichols, J.W., Fitzsimmons, P.N., Whiteman, F.W. (2004). A physiologically based toxicokinetic model for dietary uptake of hydrophobic organic compounds by fish - II. Simulation of chronic exposure scenarios. Toxicological Sciences 77, 219-229.

Van Leeuwen, C.J., Vermeire, T.G. (Eds.) (2007). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands.

4.1.2. Question 1
What are the assumptions in a one-compartment kinetic model?

4.1.2. Question 2
How can you identify when more compartments are involved in a kinetic model?

4.1.2. Question 3
Give examples of compartments in a multi-compartment system.

4.1.2. Question 4
Why is equilibrium reached faster in a small organism compared to a bigger organism?

4.1.2. Question 5
Which two methods can be applied to estimate the bioconcentration factor of a chemical in an organism?

4.1.3. Tissue accumulation of metals

Author: Nico M. van Straalen
Reviewers: Philip S. Rainbow, Henk Schat

Learning objectives: You should be able to
• indicate four types of inorganic metal-binding cellular constituents present in biological tissues and indicate which metals they bind.
• describe how phytochelatin and metallothionein are induced by metals.
• mention a number of organ-metal combinations that are critical to metal toxicity.

Key words: Metal-binding proteins; phytochelatin; metallothionein

Synopsis

The issue of metal speciation, which is crucially important for understanding metal fate in the environment, is equally important for the internal distribution of metals in organisms and their toxicity inside the cell. Many metals tend to accumulate in specific organs, for example the hepatopancreas of crustaceans, the chloragogen tissue of annelids, and the kidney of mammals.
In addition, there is often a specific organ or tissue where dysfunction or toxicity is first observed: in the human body, for example, the primary effects of chronic exposure to mercury are seen in the brain, for lead in the bone marrow and for cadmium in the kidney. This module is aimed at increasing the insight into the different mechanisms by which metals accumulate in biological tissues.

Introduction

Metals are present in biological tissues in a large variety of chemical forms: the free metal ion, various inorganic species with widely varying solubility, such as chlorides or carbonates, plus all kinds of metal species bound to low-molecular and high-molecular weight biotic ligands. The free metal ion is considered the species most relevant to toxicity. To explain the affinities of metals for specific targets, a system has been proposed based on the physical properties of the ion; according to this system, metals are divided into "oxygen-seeking metals" (class A, e.g. lithium, beryllium, calcium and lanthanum) and "sulfur-seeking metals" (class B, e.g. silver, mercury and lead) (see section on Metals and metalloids). However, most of the metals of environmental relevance fall into an intermediate class, called "borderline" (chromium, cadmium, copper, zinc, etc.). This classification is to some extent predictive of the binding of metals to specific cellular targets, such as SH-groups in proteins, nitrogen in histidine, or carbonates in bone tissue.

Not only do metals differ enormously in their physicochemical properties, the organisms themselves also differ widely in the way they deal with metals. The type of ligand to which a metal is bound, and how this ligand is transported or stored in the body, determines to a great extent where the metal will accumulate and cause toxicity. Sensitive targets or critical biochemical processes differ between species, and this may also lead to differential toxicity.

Inorganic metal binding

Many tissues contain "mineral concretions", that is, granules with a specific mineral composition that, due to the nature of the mineral, attract different metals. Especially the gut epithelium of invertebrates, and their digestive glands (hepatopancreas, midgut gland, chloragogen tissue), may be full of such concretions. Four classes of granules are distinguished (Figure 1):
• calcium-pyrophosphate granules with magnesium, manganese, and often also zinc, cadmium, lead and iron
• sulfur granules with copper, sometimes also cadmium
• iron granules, containing exclusively iron
• calcium carbonate granules, containing mostly calcium

The type B granules are assumed to be lysosomal vesicles that have absorbed metal-loaded peptides such as metallothionein or phytochelatin, and have developed into inorganic granules by degrading almost all organic material; the high sulfur content derives from the cysteine residues in these peptides. Tissues or cells that specialize in the synthesis of intracellular granules are also the places where metals tend to accumulate. Well known are the "S cells" in the hepatopancreas of isopods. These cells (small cells, B-type cells sensu Hopkin, 1989) contain very large amounts of copper. Most likely the large stores of copper in woodlice and other crustaceans relate to their use of hemocyanin, a copper-dependent protein, as an oxygen-transporting molecule. Similar tissues with high loadings of mineral concretions have been described for earthworms, snails, collembolans and insects.
Organic metal binding

The second class of metal-binding ligands is of an organic nature. Many plants, but also several animals, synthesize a peptide called phytochelatin (PC). This is an oligomer derived from glutathione, with the three amino acids γ-glutamic acid, cysteine and glycine arranged in the following way: (γ-glu-cys)n-gly, where n can vary from 2 to 11. The thiol groups of several cysteine residues are involved in metal binding.

The other main organic ligand for metals is metallothionein (MT). This is a low-molecular weight protein with hydrophilic properties and an unusually large number of cysteine residues. Several cysteines (usually nine or ten) can bind a number of metal ions (e.g. four or five) in one cluster. There are two such clusters in the vertebrate metallothionein. Metallothioneins occur throughout the tree of life, from bacteria to mammals, but the amino acid sequence, domain structure and metal affinities vary enormously, and it is doubtful whether they represent a single evolutionarily homologous group.

In addition to these two specific classes of organic ligands, MT and PC, metals also bind aspecifically to all kinds of cellular constituents, such as cell wall components, albumin in the blood, etc. Often this represents the largest store of metals; such aspecific binding sites constantly deliver free metal ions to the cellular pool and so are the most important cause of toxicity. Of course metals are also present in molecules with specific metal-dependent functions, such as iron in hemoglobin, copper in hemocyanin and zinc in carbonic anhydrase.

The distinction between inorganic and organic ligands is not as strict as it may seem. After binding to metallothionein or phytochelatin, metals may be transferred to a more permanent storage compartment, such as the intracellular granules mentioned above, or they may be excreted.

Regulation of metal binding

Free metal ions are strong inducers of stress response pathways. This can be due to the metal ion itself, but more often the stress response is triggered by a metal-induced disturbance of the redox state, i.e. an induction of oxidative stress. The stress response often involves the synthesis of metal-binding ligands such as phytochelatin and metallothionein. Because this removes metal ions from the active pool, it is also called metal scavenging.

The binding capacity of phytochelatin is enhanced by activation of the enzyme phytochelatin synthase (PC synthase). According to one model of its action, the C-terminus of the enzyme has a "metal sensor" consisting of a number of cysteines with free SH-groups. Any metal ion reacting with this nucleophilic center (and cadmium is a strong reactant) will activate the enzyme, which then catalyzes the reaction from (γ-glu-cys)n-gly to (γ-glu-cys)n+1-gly, thus increasing the binding capacity of cellular phytochelatin (Figure 2). This reaction of course relies on the presence of sufficient glutathione in the cell. In plants, the PC-metal complex is transported into the central vacuole, where it can be stabilized through the incorporation of acid-labile sulfur (S2-). The PC moiety is degraded, resulting in the formation of inorganic metal-sulfide crystallites. Alternatively, complexes of metals with organic acids may be formed (e.g. citrates or oxalates). The fate of metal-loaded PC in animal cells is not known, but it might be absorbed into the lysosomal system to form B-type granules (see above).
The upregulation of metallothionein (MT) occurs in a quite different manner, since it depends on de novo synthesis of the apoprotein. It is a classic example of gene regulation contributing to the protection of the cell. In a wide variety of animals, including vertebrates and invertebrates, metallothionein genes (Mt) are activated by a transcription factor called metal-responsive transcription factor 1 (MTF-1). MTF-1 binds to so-called metal-responsive elements (MREs) in the promoter of Mt. MREs are short motifs with a characteristic base-pair sequence that form the core of a transcription factor binding site in the DNA. Under normal physiological conditions MTF-1 is inactive and unable to induce Mt. However, it may be activated by Zn2+ ions, which are released from unspecified ligands by metals such as cadmium that can replace zinc (Figure 3).

It must be emphasized that the model discussed above is inspired by work on vertebrates. Arthropods (Drosophila, Orchesella, Daphnia) could have a similar mechanism, since they also have an MTF-1 homolog that activates Mt; however, the situation for other invertebrates such as annelids and gastropods is unclear: their Mt genes seem to lack MREs, despite being inducible by cadmium. In addition, the variability of metallothioneins in invertebrates is extremely large, and not all metal-binding proteins may be orthologs of the vertebrate metallothionein. In snails, a cadmium-binding, cadmium-induced MT functions alongside a copper-binding MT, while the two MTs have different tissue distributions and are also regulated quite differently.

While both phytochelatin and metallothionein sequester essential as well as non-essential metals (e.g. Cd) and so contribute to detoxification, the widespread presence of these systems throughout the tree of life suggests that they did not evolve primarily to deal with anthropogenic metal pollution. The very strong inducibility of these systems by non-essential elements like cadmium may be considered a side-effect of a different primary function, for example regulation of the cellular redox state or binding of essential metals.

Target organs

Any tissue-specific accumulation of metals can be explained by the turnover of metal-binding ligands. For example, accumulation of cadmium in the mammalian kidney is due to the fact that metallothionein loaded with cadmium cannot be excreted. High concentrations of metals in the hind segments of earthworms are due to the presence of "residual bodies" which are fully packed with intracellular granules. Accumulation of cadmium in the human prostate is due to the high concentration of zinc citrate in this organ, which serves to protect contractile proteins in sperm tails from oxidation; cadmium presumably enters the prostate through zinc transporters.

It is often stated that essential metals are subject to regulatory mechanisms, which would imply that their body burden is constant over a large range of external exposures. However, not all "essential" metals are regulated to the extent that the whole-body concentration is kept constant. Many invertebrates have body compartments associated with the gut (midgut gland, hepatopancreas, Malpighian tubules) in which metals, often in the form of mineral concretions, are inactivated and stored permanently or exchanged only very slowly with the active pool. Since these compartments are outside the reach of regulatory mechanisms but usually not separated out in whole-body metal analysis, the body burden as a whole is not constant.
Some invertebrates even carry a "backpack" of metals accumulated over their lifetime. This holds, e.g., for zinc in barnacles, copper in isopods and zinc in earthworms.

Accumulation of metals in target organs may lead to toxicity when the critical binding or excretion capacity is exhausted and metal ions start binding aspecifically to cellular constituents. The organ in which this happens is often called the target organ. The total metal concentration at which toxicity starts to become apparent is called the critical body concentration (CBC) or critical tissue concentration. For example, the critical concentration for cadmium in the kidney, above which renal damage occurs, is estimated to be 50 μg/g. A list of critical organs for metals in the human body is given in Table 1. The concept of CBC assumes that the complete metal load in an organ is in equilibrium with the active fraction causing toxicity and that there is no permanent storage pool. In the case of storage detoxification, the body burden at which toxicity appears will depend on the accumulation history.

Table 1. Critical organs for chronic toxicity of metals in the human body

Metal or metalloid | Critical organ | Symptoms
Al | Brain | Alzheimer's disease
As | Lung, liver, heart, gut | Multisystem energy disturbance
Cd | Kidney, liver | Kidney damage
Cr | Skin, lung, gut | Respiratory system damage
Cu | Liver | Liver damage
Hg | Brain, liver | Mental illness
Ni | Skin, kidney | Allergic reaction, kidney damage
Pb | Bone marrow, blood, brain | Anemia, mental retardation

References

Cobbett, C., Goldsbrough, P. (2002). Phytochelatins and metallothioneins: roles in heavy metal detoxification and homeostasis. Annual Review of Plant Biology 53, 159-182.

Dallinger, R., Berger, B., Hunziker, P., Kägi, J.H.R. (1997). Metallothionein in snail Cd and Cu metabolism. Nature 388, 237-238.

Dallinger, R., Höckner, M. (2013). Evolutionary concepts in ecotoxicology: tracing the genetic background of differential cadmium sensitivities in invertebrate lineages. Ecotoxicology 22, 767-778.

Haq, F., Mahoney, M., Koropatnick, J. (2003). Signaling events for metallothionein induction. Mutation Research 533, 211-226.

Hopkin, S.P. (1989). Ecophysiology of Metals in Terrestrial Invertebrates. Elsevier Applied Science, London.

Nieboer, E., Richardson, D.H.S. (1980). The replacement of the nondescript term "heavy metals" by a biologically and chemically significant classification of metal ions. Environmental Pollution Series B 1, 3-26.

Rainbow, P.S. (2002). Trace metal concentrations in aquatic invertebrates: why and so what? Environmental Pollution 120, 497-507.

4.1.3. Question 1
Mention the four main inorganic cellular constituents that bind metals, their main elemental composition, and indicate which metals are usually bound in each structure.

4.1.3. Question 2
How can the intracellular levels of metal-scavenging molecules such as metallothionein (MT) and phytochelatin (PC) be adjusted to counteract the possible adverse effects of free metal ions? Describe the molecular mechanism for metallothionein and phytochelatin separately.

4.1.3. Question 3
Mention three organs of the human body susceptible to metal intoxication and indicate to which metals they are particularly sensitive.

4.1.4. Xenobiotic defence and metabolism

Author: Nico M. van Straalen
Reviewers: Timo Hamers, Cristina Fossi

Learning objectives: You should be able to:
• recapitulate the phase I, II and III mechanisms of xenobiotic metabolism, and the most important molecular systems involved.
• describe the fate and the chemical changes of an organic compound that is metabolized by the human body, from absorption to excretion.
• explain the principle of metabolic activation and why some compounds can become very reactive upon xenobiotic metabolism.
• develop a hypothesis on the ecological effects of xenobiotic compounds that require metabolic activation.

Keywords: biotransformation; phase I; phase II; phase III; excretion; metabolic activation; cytochrome P450

Synopsis

All organisms are equipped with metabolic defence mechanisms to deal with foreign compounds. The reactions involved, jointly called biotransformation, can be divided into three phases, and usually aim to increase water solubility and excretion. The first step (phase I) is catalyzed by cytochrome P450, which is followed by a variety of conjugation reactions (phase II) and excretion (phase III). The enzymes and transporters involved are often highly inducible, i.e. the amount of protein is greatly enhanced by the xenobiotic compounds themselves. The induction involves binding of the compound to cytoplasmic receptor proteins, such as the arylhydrocarbon receptor (AhR) or the constitutive androstane receptor (CAR). In some cases the intermediate metabolites produced in phase I are extremely reactive and a main cause of toxicity; a well-known example is the metabolic activation of polycyclic aromatic hydrocarbons such as benzo(a)pyrene, which readily forms DNA adducts and causes cancer. In addition, some compounds greatly induce metabolizing enzymes but are hardly degraded by them and cause chronic cellular stress. The various biotransformation reactions are a crucial aspect of both the toxicokinetics and the toxicodynamics of xenobiotics.

Introduction

The term "xenobiotic" ("foreign to biology") is generally used to indicate a chemical compound that does not normally have a metabolic function. We will use the term extensively in this module, despite the fact that it is somewhat problematic (e.g., can a compound be considered "foreign" if it circulates in the body and is metabolized or degraded in the body? And what is "foreign" for one species is not necessarily "foreign" for another species). The body has an extensive defence system to deal with xenobiotic compounds, loosely designated as biotransformation. The ultimate result of this system is excretion of the compound in some form or another. However, many xenobiotics are quite lipophilic, tend to accumulate, and are not easily excreted due to low water solubility. Molecular modifications are usually required before such compounds can be removed from the body, as the main circulatory and excretory systems (blood, urine) are water-based. By introducing hydrophilic groups into the molecule (-OH, =O, -COOH) and by conjugating it to an endogenous compound with good water solubility, excretion is usually accomplished. However, as we will see below, intermediate metabolites may have enhanced reactivity, and it often happens that a compound becomes more toxic while being metabolized. In the case of pesticides, deliberate use is made of such responses, to increase the toxicity of an insecticide once it is inside the target organism.

The study of xenobiotic metabolism is a classical subject not only in toxicology but also in pharmacology. The mode of action of a drug often depends critically on the rate and mode of metabolism. Also, many drugs show toxic side-effects as a consequence of metabolism.
Finally, xenobiotic metabolism is also studied extensively in entomology, as both the toxicity of and resistance to pesticides are often mediated by metabolism.

The most problematic xenobiotics are those with a high octanol-water partition coefficient (Kow), which are strongly lipophilic and very hydrophobic. They tend to accumulate, in proportion to their log Kow, in tissues with a high lipid content such as the subcutis of vertebrates, and may cause tissue damage due to disturbance of membrane functions. This mode of action is called "minimum toxicity". Well-known examples are low-molecular-weight aliphatic petroleum compounds and chlorinated alkanes such as chloroform. These compounds cause their primary damage to cell membranes; neurons are especially sensitive to this effect, hence minimum toxicity is also called narcotic toxicity. Many lipophilic chemicals with high log Kow, however, do not reach concentrations high enough to cause minimum toxicity, because they induce biotransformation at lower concentrations. Their toxicity is then usually due to a reactive metabolite.

Xenobiotic metabolism involves three subsequent phases (Figure 1):
1. Activation (usually oxidation) of the compound by an enzyme known as cytochrome P450, which acts in cooperation with NADPH cytochrome P450 reductase and other factors.
2. Conjugation of the activated product of phase I to an endogenous compound. A host of different enzymes is available for this task, depending on the compound, the tissue and the species. There are also (slightly polar) compounds that enter phase II directly, without being activated in phase I.
3. Excretion of the compound into circulation, urine, or other media, usually by means of membrane-spanning transporters belonging to the class of ATP-binding cassette (ABC) transporters, including the infamous multidrug resistance proteins. Hydrophilic compounds may pass on directly to phase III, without being activated or conjugated.

Phase I reactions

Cytochrome P450 is a membrane-bound enzyme, associated with the smooth endoplasmic reticulum. It carries a porphyrin ring containing an Fe atom, which is the active center of the molecule. The designation P450 is derived from the fact that it shows an absorption maximum at 450 nm when inhibited by carbon monoxide, a now outdated method to demonstrate its presence. Other (outdated) names are MFO (mixed-function oxygenase) and drug-metabolizing enzyme complex. Cytochrome P450 is encoded by a gene called CYP, of which there are many paralogs in the genome, all slightly differing from each other in terms of inducibility and substrate specificity. Three classes of CYP genes are involved in biotransformation, designated CYP1, CYP2 and CYP3 in vertebrates. Each class has several isoforms; the human genome has 57 different CYP genes in total. The CYP complement of invertebrates and plants often involves even more genes; many evolutionary lineages have their own set, arising from extensive gene duplications within that lineage. In humans, the genetic make-up of a person's CYP genes is highly relevant to his or her drug-metabolizing profile (see the section on Genetic variation in toxicant metabolism). Cytochrome P450 operates in conjunction with an enzyme called NADPH cytochrome P450 reductase, which consists of two flavoproteins, one containing flavin adenine dinucleotide (FAD), the other flavin mononucleotide (FMN).
The reduced Fe2+ atom in cytochrome P450 binds molecular oxygen and is oxidized to Fe3+ while splitting O2; one O atom is introduced into the substrate, the other reacts with hydrogen to form water. The enzyme is then reduced again by accepting an electron from cytochrome P450 reductase. The overall reaction can be written as:

RH + O2 + NADPH + H+ → ROH + H2O + NADP+

where R is an arbitrary substrate. Cytochrome P450 is expressed to a great extent in hepatocytes (liver cells), the liver being the main organ for xenobiotic metabolism in vertebrates (Figure 2), but it is also present in epithelia of the lung and the intestine. In insects the activity is particularly high in the Malpighian tubules, in addition to the gut and the fat body. In mollusks and crustaceans the main metabolic organ is the hepatopancreas.

Phase II reactions

After activation by cytochrome P450, the oxidized substrate is ready to be conjugated to an endogenous compound, e.g. a sulphate, glucose, glucuronic acid or glutathione group. These reactions are conducted by a variety of different enzymes, of which some reside in the sER like P450, while others are located in the cytoplasm of the cell (Figure 2). Most of them transfer a hydrophilic group, available from intermediate metabolism, to the substrate, hence the enzymes are called transferases. Usually the compound becomes more polar in phase II; however, not all phase II reactions increase water solubility: methylation (by methyl transferase), for example, decreases reactivity but makes the compound more apolar. Other phase II reactions are conjugation with glutathione, conducted by glutathione-S-transferase (GST), and with glucuronic acid, conducted by UDP-glucuronyl transferase. In invertebrates other conjugations may dominate; e.g. in arthropods and plants, conjugation with malonyl glucose is a common reaction, which is not seen in vertebrates. Conjugation with glutathione in the human body is often followed by splitting off glutamic acid and glycine, leaving only the cysteine residue on the substrate. Cysteine is subsequently acetylated, thus forming a so-called mercapturic acid. This is the most common type of metabolite for many xenobiotics excreted in urine by humans. Like cytochrome P450, the phase II enzymes occur in various isoforms, encoded by different paralogs in the genome. Especially the GST family is quite extensive, and polymorphisms in these genes contribute significantly to the personal metabolic profile (see the section on Genetic variation in toxicant metabolism).

Phase III reactions

In the human body, there are two main pathways for excretion, one from the liver into the bile (and further into the gut and the faeces), the other through the kidney and urine. These two pathways are used by different classes of xenobiotics: very hydrophobic compounds such as high-molecular-weight polycyclic aromatic hydrocarbons are still not readily soluble in water, even after metabolism, but can be emulsified by bile salts and excreted in this way. It sometimes happens that such compounds, once arriving in the gut, are assimilated again, transported to the liver by the portal vein and metabolized again. This is called "entero-hepatic circulation". Lower-molecular-weight compounds and hydrophilic compounds are excreted through urine. Volatile compounds can leave the body through the skin and exhaled air.
Excretion of activated and conjugated compounds from tissues out of the cell usually requires active transport, which is mediated by ABC (ATP-binding cassette) transporters, a very large and diverse family of membrane proteins which have in common a binding cassette for ATP. Different subgroups of ABC transporters transport different types of chemicals, e.g. positively charged hydrophobic molecules, neutral molecules and water-soluble anionic compounds. One well-known group consists of the multidrug resistance proteins or P-glycoproteins. These transporters export drugs intended to attack tumor cells out of the cell. Because their activity is highly inducible, these proteins can enhance excretion enormously, making the cell effectively resistant, and thus cause major problems for cancer treatment.

Induction

All enzymes of xenobiotic metabolism are highly inducible: their activity is normally at a low level but is greatly enhanced in the presence of xenobiotics. This is achieved through a classic case of transcriptional regulation of CYP and other genes, leading to de novo synthesis of protein. In addition, extensive proliferation of the endoplasmic reticulum may occur, and in extreme cases even swelling of the liver (hepatomegaly). The best investigated pathway for transcriptional activation of CYP genes involves the arylhydrocarbon receptor (AhR). Under normal conditions, this protein is stabilized in the cytoplasm by heat-shock proteins; however, when a xenobiotic compound binds to AhR, it is activated and can join with another protein called Ah receptor nuclear translocator (ARNT) to translocate to the nucleus and bind to DNA elements present in the promotor of CYP and other genes. It thus acts as a transcriptional activator or transcription factor on these genes (Figure 3). The DNA motifs to which AhR binds are called xenobiotic responsive elements (XRE) or dioxin-responsive elements (DRE). The compounds acting in this manner are called 3-MC-type inducers, after the (highly carcinogenic) model compound 3-methylcholanthrene. The inducing capacity of a compound is related to its binding affinity to the AhR, which in itself is determined by the spatial structure of the molecule. The lock-and-key fit between AhR and xenobiotics explains why induction of biotransformation by xenobiotics shows a very strong stereospecificity. For example, among the chlorinated biphenyls and chlorinated dibenzodioxins, some compounds are extremely strong inducers of CYP1 genes, while others, even with the same number of chlorine atoms, are not inducers at all. The precise position of the chlorine atoms determines the molecular "fit" in the Ah receptor (see the section on Receptor interaction). In addition to 3-MC-type induction there are other modes in which biotransformation enzymes are induced, but these are less well known. A common class is PB-type induction (named after another model compound, phenobarbital). PB-type induction is not AhR-dependent, but acts through activation of another nuclear receptor, called the constitutive androstane receptor (CAR). This receptor activates CYP2 genes and some CYP3 genes. The high inducibility of biotransformation can be exploited in a reverse manner: if biotransformation is seen to be highly upregulated in a species living in the environment, this indicates that that species is being exposed to xenobiotic compounds. Assays addressing cytochrome P450 activity can therefore be exploited in bioindication and biomonitoring systems.
The EROD (ethoxyresorufin-O-deethylase) assay is often used for this purpose, although it is not fully specific for a single P450 isoform. Another approach is to address CYP expression directly, e.g. through reverse transcription-quantitative PCR, a method to quantify the amount of CYP mRNA.

Secondary effects of biotransformation

Although the main aim of xenobiotic metabolism is to detoxify and excrete foreign compounds, some pathways of biotransformation actually enhance toxicity. This is mostly due to the first step, activation by cytochrome P450. The activation may lead to intermediate metabolites which are highly reactive and the actual cause of toxicity. The best investigated examples are due to bioactivation of polycyclic aromatic hydrocarbons (PAHs), a group of chemicals present in diesel, soot, cigarette smoke and charred food products. Many of these compounds, e.g. benzo(a)pyrene, benz(a)anthracene and 3-methylcholanthrene, are not reactive or toxic as such, but are activated by cytochrome P450 to extremely reactive molecules. Benzo(a)pyrene, for instance, is activated to a diol-epoxide, which readily binds to DNA, especially to the free amino group of guanine (Figure 4). The complex is called a DNA adduct; the double helix is locally disrupted and this results in a mutation. If this happens in an oncogene, a tumor may develop (see the section on Carcinogenesis and genotoxicity). Not all PAHs are carcinogenic. Their activity critically depends on the spatial structure of the molecule, which again determines its "fit" in the Ah receptor. PAHs with a "notch" (often called a bay region) in the molecule tend to be stronger carcinogens than compounds with a symmetric (round or linear) molecular structure. Another mechanism for biotransformation-induced toxicity is due to some very recalcitrant organochlorine compounds such as polychlorinated dibenzodioxins (PCDDs, or dioxins for short) and polychlorinated biphenyls (PCBs). Some of these compounds are very potent inducers of biotransformation, but they are hardly degraded themselves. The consequence is that the highly upregulated cytochrome P450 activity continues to generate large amounts of reactive oxygen species (ROS), causing oxidative stress and damage to cellular constituents. It is assumed that the chronic toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), one of the most toxic compounds emitted by human activity, is due to its high capacity to induce prolonged oxidative stress. On the molecular level, there is a close link between oxidative stress and biotransformation activity. Many toxicants that primarily induce oxidative stress (e.g. cadmium) also upregulate CYP enzymes. The two defence mechanisms, oxidative stress defence and biotransformation, are part of the same integrated stress defence system of the cell.

References

Bui, P.H., Hsu, E.L., Hankinson, O. (2009). Fatty acid hydroperoxides support cytochrome P450 2S1-mediated bioactivation of benzo[a]pyrene-7,8-dihydrodiol. Molecular Pharmacology 76, 1044-1052.
Stroomberg, G.J., Zappey, H., Steen, R.J.C.A., Van Gestel, C.A.M., Ariese, F., Velthorst, N.H., Van Straalen, N.M. (2004). PAH biotransformation in terrestrial invertebrates - a new phase II metabolite in isopods and springtails. Comparative Biochemistry and Physiology Part C 138, 129-137.
Timbrell, J.A. (1982). Principles of Biochemical Toxicology. Taylor & Francis Ltd, London.
Van Straalen, N.M., Roelofs, D. (2012). An Introduction to Ecological Genomics, 2nd Ed. Oxford University Press, Oxford.
Vermeulen, N.P.E., Van den Broek, J.M. (1984). Opname en verwerking van chemicaliën in de mens. Chemisch Magazine, maart: 167-171.

4.1.4. Question 1
Describe the main processes involved in each of the three general phases of xenobiotic metabolism, as well as the molecular systems involved.

4.1.4. Question 2
The upregulation of cytochrome P450 activity is a classical example of regulation through enhanced de novo synthesis of enzyme. The mechanism is known in quite some detail and involves a variety of components. Describe the involvement of the following components:
• Aryl hydrocarbon receptor
• Heat shock protein 70
• Arylhydrocarbon receptor nuclear translocator
• Xenobiotic-response elements
• CYP1A1 gene
• CYP1A1 mRNA
• CYP1A1 protein
• CYP1A1 enzymatic activity

4.1.4. Question 3
In the past, activity of cytochrome P450 was often assessed using the synthetic substrate ethoxyresorufin, which P450 converts into the fluorescent product resorufin; the activity measured in this way is called ethoxyresorufin-O-deethylase (EROD) activity. Discuss the advantages and disadvantages of the use of such an assay in environmental risk assessment.

4.1.5. Allometric relationships
Author: A. Jan Hendriks
Reviewers: Nico van den Brink, Nico van Straalen
Learning objectives:
You should be able to
• explain why allometrics is important in risk assessment across chemicals and species
• summarize how biological characteristics such as consumption, lifespan and abundance scale with size
• describe how toxicological quantities such as uptake rates and lethal concentrations scale with size

Keywords: body size, biological properties, scaling, cross-species extrapolation, size-related uptake kinetics

Introduction

Globally, more than 100,000,000 chemicals have been registered. In the European Union, more than 100,000 compounds are awaiting risk assessment to protect ecosystem and human health, while 1,500,000 contaminated sites potentially require clean-up. Likewise, 8,000,000 species, of which 10,000 are endangered, need protection worldwide, with one species lost per hour (Hendriks, 2013). Because of financial, practical and ethical (animal welfare) constraints, empirical studies alone cannot cover so many substances and species, let alone their combinations. Consequently, the traditional approach of ecotoxicological testing is gradually supplemented or replaced by modelling approaches. Environmental chemists and toxicologists have long developed relationships allowing extrapolation across chemicals. Nowadays, so-called Quantitative Structure-Activity Relationships (QSARs) provide accumulation and toxicity estimates for compounds based on their physical-chemical properties. For instance, bioaccumulation factors and median lethal concentrations have been related to molecular size and octanol-water partitioning, characteristic properties of a chemical that are usually available from its industrial production process. In analogy with the QSAR approach in environmental chemistry, the question may be asked whether it is possible to predict toxicological, physiological and ecological characteristics of species from biological traits, especially traits that are easily measured, such as body size. This approach has gone under the name of "Quantitative Species Sensitivity Relationships" (QSSR) (Notenboom et al., 1995). Among the various traits available, body size is of particular interest. It is easily measured, and a large part of the variability between organisms can be explained from body size, with r² > 0.5. Not surprisingly, body size also plays an important role in toxicology and pharmacology.
For instance, toxic endpoints, such as LC50s, are often expressed per kg body weight. Recommended daily intake values assume a "standard" body weight, often 60 kg. Yet, adult humans can differ in body weight by a factor of 3, and the difference between mouse and human is even larger. Here it will be explored how body-size relationships, which have been studied in comparative biology for a long time, affect extrapolation in toxicology and can be used to extrapolate between species.

Fundamentals of scaling in biology

Do you expect a 10⁴ kg elephant to eat 10⁴ times more than a 1 kg rabbit per day? Or less, or more? On being asked, most people intuitively come up with the right answer. Indeed, daily consumption by the elephant is less than 10⁴ times that of the rabbit. Consequently, the amount of food or water used per kilogram of body weight of the elephant is less than that of the rabbit. Yet, how much less exactly? And why should sustaining 1 kg of rabbit tissue require more energy than 1 kg of elephant flesh in the first place? A century of research (Peters, 1983) has demonstrated that many biological characteristics Y scale to size X according to a power function:

Y = a·Xᵇ

where the independent variable X represents body mass, and the dependent variable Y can be virtually any characteristic of interest, ranging, e.g., from gill area of fish to density of insects in a community. Plotted in a graph, the equation produces a curved line, increasing super-linearly if b > 1 and sub-linearly if b < 1. If b = 1, Y and X are directly proportional and the relationship is called isometric. As curved lines are difficult to interpret, the equation is often simplified by taking the logarithm of the left and right parts. The formula then becomes:

log Y = log a + b·log X

When log Y is plotted against log X, a straight line results with slope b and intercept log a. If data are plotted in this way, the slope parameter b may be estimated by simple linear regression (a numerical sketch of such a fit is given below). Across wide size ranges, slope b often turns out to be a multiple of ¼ or, occasionally, ⅓. Rates [kg∙d⁻¹] of consumption, growth, reproduction, survival and so on increase with mass to the power ¾, while rate constants, sometimes called specific rates [kg∙kg⁻¹∙d⁻¹], decrease with mass to the power -¼. So, while the elephant is 10⁴ times heavier than the 1 kg rabbit, it eats only (10⁴)¾ = 10³ times more each day. Vice versa, sustaining 1 kg of elephant apparently requires (10⁴)⁻¼ = 10⁻¹ times the consumption, i.e., 10 times less per kilogram. Variables with a time dimension [d], like lifespan or predator-prey oscillation periods, scale inversely to rate constants and thus change with body mass to the power ¼. So, an elephant becomes (10⁴)¼ = 10 times older than a rabbit. Abundance, i.e., the number of individuals per surface area [m⁻²], decreases with body mass to the power -¾. Areas, such as gill surface or home range, scale inversely to abundance, typically as body mass to the power ¾. Now, why would sustaining 1 kg of elephant require 10 times less food than 1 kg of rabbit? Biologists, pharmacologists and toxicologists first attributed this difference to area-volume relationships. If objects of the same shape but different size are compared, the volume increases with length to the power 3 and the surface increases with length to the power 2. For a sphere with radius r, for example, area A and volume V increase as A ~ r² and V ~ r³, so area scales to volume as A ~ r² ~ (V^⅓)² ~ V^⅔.
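The scaling arithmetic above is easy to check numerically. The following minimal Python sketch is not part of the original text; the data are synthetic and the coefficients purely illustrative. It shows how the exponent b is recovered as the slope of a log-log regression, and reproduces the elephant/rabbit calculations:

```python
import numpy as np

# Synthetic data following Y = a * X**b with b = 3/4 (illustrative values only).
rng = np.random.default_rng(1)
mass = 10 ** rng.uniform(-3, 4, 50)                  # body masses in kg
a, b = 0.5, 0.75
rate = a * mass**b * 10 ** rng.normal(0, 0.05, 50)   # e.g. consumption, kg/d

# Log-log regression: the allometric exponent is the slope of the fitted line.
slope, intercept = np.polyfit(np.log10(mass), np.log10(rate), 1)
print(f"estimated b = {slope:.2f} (true value 0.75)")

# The elephant/rabbit example: a 10**4-fold difference in body mass gives
print((10**4) ** 0.75)    # ~10**3 times higher daily consumption
print((10**4) ** -0.25)   # ~10 times lower consumption per kg of body mass
```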
Larger animals thus have relatively smaller surfaces, as long as the shape of the organism remains the same. Since many biological processes, such as oxygen and food uptake or heat loss, deal with surfaces, metabolism was long thought to slow down like geometric structures, i.e., with multiples of ⅓. Yet, empirical regressions, e.g. the "mouse-elephant curve" developed by Max Kleiber in the early 1930s, show a universal slope of ¼ (Peters, 1983). This became known as Kleiber's law. While the data leave little doubt that this is the case, it is not at all clear why it should be ¼ and not ⅓. Several explanations for the ¼ slope have been proposed, but the debate on the exact value as well as the underlying mechanism continues.

Application of scaling in toxicology

Since chemical substances are carried by flows of air and water, and inside the organism by sap and blood, toxicokinetics and toxicodynamics are also expected to scale with size. Indeed, data confirm that uptake and elimination rate constants decrease with size, with an exponent of about -¼ (Figure 1). Slopes vary around this value, the more so for regressions that cover small size ranges and physiologically different organisms. The intercept is determined by resistances in unstirred water layers and membranes through which the substances pass, as well as by delays in the flows by which they are carried. The resistances mainly depend on the affinity and molecular size of the chemicals, reflected by, e.g., the octanol-water partition coefficient Kow for organic chemicals or atomic mass for metals. The upper boundary of the intercept is set by the delays imposed by consumption and, subsequently, egestion and excretion. The lower end is determined by growth dilution. Both uptake and elimination scale to mass with the same exponent, so that their ratio, reflecting the bioconcentration or biomagnification factor at equilibrium, is independent of body size. Scaling of rate constants for uptake and elimination, such as in Figure 1, implies that small organisms reach a given internal concentration faster than large ones. Vice versa, the lethal concentrations in water or food needed to reach the same internal level after an equal (short-term) exposure duration are lower in smaller than in larger organisms. Thus, the apparent "sensitivity" of daphnids can, at least partially, be attributed to their small body size. This emphasizes the need to understand simple scaling relationships before turning to more elaborate explanations. Using Figure 1, one can, within strict conditions not elaborated here, theoretically relate median lethal concentrations LC50 [μg∙L⁻¹] to the Kow of the chemical and the size of the organism, with r² > 0.8 (Hendriks, 1995; Table 1). Complicated responses like susceptibility to toxicants can thus be predicted from just Kow and body size, which illustrates the generality and power of allometric scaling. Of course, the regressions describe general trends, and in individual cases the deviations can be large. Still, considering the challenges of risk assessment as outlined above, and in the absence of specific data, the predictions in Table 1 can be considered a reasonable first approximation.
Table 1. Lethal concentrations and doses as a function of test animal body mass

Species | Endpoint | Unit | b (95% CI) | r² | nc | ns | Source
Guppy | LC50 | mg∙L⁻¹ | 0.66 (0.51-0.80) | 0.98 | 6 | 1 | 1
Mammals | LD10 ≈ MTD | mg∙animal⁻¹ | 0.73 (0.69-0.77) | - | 27 | 5 | 2
Birds | Oral LD50 | mg∙animal⁻¹ | 1.19 (0.67-0.82) | 0.76 | 194 | 3…37 | 3
Mammals | Oral LD50 | mg∙animal⁻¹ | 0.94 (1.18-1.20) | 0.89 | 167 | 3…16 | 4
Mammals | Oral LD50 | mg∙animal⁻¹ | 1.01 (1.00-1.01) | - | >5000 | 2…8 | 5

MTD = maximum tolerated dose, repeated dosing; LD50 = single dose; b = slope of the regression line; nc = number of chemicals; ns = number of species. Sources: 1 Anderson & Weber (1975), 2 Travis & White (1988), 4 Sample & Arenal (1999), 5 Burzala-Kowalczyk & Jongbloed (2011).

Allometry is also important when dealing with other levels of biological organisation. Leaf or gill area, the number of eggs in ovaries, the number of cell types and many other cellular and organ characteristics scale to body size as well. Likewise, intrinsic rates of increase (r) of populations and the production-biomass ratios (P/B) of communities can also be obtained from the (average) species mass. Even the area needed by animals in laboratory assays scales to size, i.e., as m¾, approximately the same slope as noted for home ranges of individuals in the field.

Future perspectives

Since almost any physiological and ecological process in toxicokinetics and toxicodynamics depends on species size, allometric models are gaining interest. Such an approach allows one to quantitatively attribute outliers (like apparently "sensitive" daphnids) to simple biological traits, rather than to detailed chemical-toxicological mechanisms. Scaling has been used in risk assessment at the molecular level for a long time. The molecular size of a compound is often a descriptor in QSARs for accumulation and toxicity. If not immediately evident as molecular mass, volume or area often pops up as an indicator of steric properties. Scaling does not only apply to bioaccumulation and toxicity from molecular to community levels; size dependence is also observed in other parts of the environmental cause-effect chain. Emissions of substances, for example, scale non-linearly with the size of engines and cities. Concentrations of chemicals in rivers depend on water discharge, which is itself an allometric function of catchment size. Hence, understanding the principles of cross-disciplinary scaling is likely to pay off in protecting many species against many chemicals.

References

Anderson, P.D., Weber, L.J. (1975). Toxic response as a quantitative function of body size. Toxicology and Applied Pharmacology 33, 471-483.
Burzala-Kowalczyk, L., Jongbloed, G. (2011). Allometric scaling: Analysis of LD50 data. Risk Analysis 31, 523-532.
Hendriks, A.J. (1995). Modelling response of species to microcontaminants: Comparative ecotoxicology by (sub)lethal body burdens as a function of species size and octanol-water partitioning of chemicals. Ecotoxicology and Environmental Safety 32, 103-130.
Hendriks, A.J. (2013). How to deal with 100,000+ substances, sites, and species: Overarching principles in environmental risk assessment. Environmental Science and Technology 47, 3546−3547.
Hendriks, A.J., Van der Linde, A., Cornelissen, G., Sijm, D.T.H.M. (2001). The power of size: 1. Rate constants and equilibrium ratios for accumulation of organic substances. Environmental Toxicology and Chemistry 20, 1399-1420.
Notenboom, J., Vaal, M.A., Hoekstra, J.A. (1995). Using comparative ecotoxicology to develop quantitative species sensitivity relationships (QSSR).
Environmental Science and Pollution Research 2, 242-243.
Peters, R.H. (1983). The Ecological Implications of Body Size. Cambridge University Press, Cambridge.
Sample, B.E., Arenal, C.A. (1999). Allometric models for interspecies extrapolation of wildlife toxicity data. Bulletin of Environmental Contamination and Toxicology 62, 653-66.
Travis, C.C., White, R.K. (1988). Interspecific scaling of toxicity data. Risk Analysis 8, 119-125.

4.1.5. Question 1
Think of a biological trait that you are interested in. How do you think it will scale with organism size?

4.1.5. Question 2
What is the elimination rate constant of a chemical with a Kow of 10⁴ from an insect with a mass of 10⁻⁵ kg?

4.1.5. Question 3
Why is size-scaling more prominent in toxicokinetics than in toxicodynamics?

4.1.6. Food chain transfer
Author: Nico van den Brink
Reviewers: Kees van Gestel, Jan Hendriks
Learning objectives:
You should be able to:
• mention the chemical properties determining the potential of chemicals to accumulate in food chains
• explain the role of biological and ecological factors in the food-chain accumulation of chemicals

Keywords: biomagnification, food-chain transfer, accumulation of chemicals across different trophic levels

Chemicals may be transferred from one organism to another. Grazers will ingest chemicals that are in the vegetation they eat. Similarly, predators are exposed to chemicals in their prey items. This so-called food web accumulation is governed by properties of the chemical, but also by some traits of the receiving organism (e.g. grazer or predator).

Chemical properties driving food web accumulation

Some chemicals are known to accumulate in food webs, reaching the highest concentrations in top predators. Examples of such chemicals are organochlorine pesticides like DDT and brominated flame retardants (e.g. PBDEs; see the section on POPs). Such accumulating chemicals have a few properties in common: they need to be persistent and they need to have affinity for biological tissues. Organic chemicals with a relatively high log Kow, indicating a high affinity for lipids, will enter organisms quite effectively (see the section on Bioconcentration and kinetics modelling). Once in the body, these chemicals will be distributed to lipid-rich tissues, and excretion is rather limited. In the case of persistent chemicals that are not metabolised, concentrations will increase over time as long as uptake is higher than excretion. Furthermore, such chemicals are likely to be passed on to organisms at the next trophic level in prey-predator interactions. Some chemicals may, however, be metabolised by the organism, most often into more water-soluble metabolites (see the section on Xenobiotic metabolism & defence). These metabolites are more easily excreted, so concentrations of metabolizable chemicals do not increase so much over time, and such chemicals will therefore also transfer less to higher trophic levels. The effects of metabolism on internal concentrations are clearly illustrated by a study on the uptake of organic chemicals by different aquatic species (Kwok et al., 2013). In that study, the uptake of persistent chemicals (organochlorine pesticides; OCPs) was compared with the uptake of chemicals that may be metabolised (polycyclic aromatic hydrocarbons; PAHs). The authors compared shrimps with fish, the former having only a limited capacity to metabolise PAHs, whereas fish metabolise them effectively (a simple model of this difference is sketched below).
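Why metabolic capacity lowers steady-state accumulation can be illustrated with a simple first-order model. The sketch below is not taken from the study itself; the rate constants are purely hypothetical and serve only to show the principle:

```python
# One-compartment model with biotransformation: dC/dt = k1*Cm - (k2 + km)*C.
# At steady state, the accumulation factor C_organism / C_medium = k1 / (k2 + km).
def accumulation_factor(k1, k2, km):
    return k1 / (k2 + km)

k1, k2 = 5.0, 0.05   # hypothetical uptake (L/kg/d) and elimination (1/d) constants
print(accumulation_factor(k1, k2, km=0.0))   # no metabolism ("shrimp-like"): 100
print(accumulation_factor(k1, k2, km=0.45))  # with metabolism ("fish-like"): 10
```

The biotransformation rate constant km acts as an extra elimination route, so a species that can metabolise a chemical reaches a roughly tenfold lower steady-state body residue in this (invented) example.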
Figure 1 shows the Biota-to-Sediment Accumulation Factors (BSAFs; see the section on Bioaccumulation), i.e. the ratio between the concentration in the organism and that in the sediment. It shows that OCPs accumulate to a high extent in both species, as expected for persistent, non-metabolizable chemicals. For PAHs the results differ per species: fish are able to metabolise PAHs, and as a result the concentrations of PAHs in fish are low, while in shrimp, with their limited metabolic capacity, the accumulation of PAHs is comparable to that of the OCPs. These results show that not only the properties of the chemicals are of importance, but also certain traits of the organisms involved, in this case their metabolic capacity.

Effects of species traits and food web structure on food web accumulation

Food-web accumulation of chemicals is driven by food uptake. At lower trophic levels, most organisms will acquire relatively low concentrations from the ambient environment. First consumers, foraging on these organisms, will accumulate the chemicals of all of them, and in the case of persistent chemicals that enter the body easily, concentrations in the consumers will be higher than in their diet. Similarly, concentrations will increase when the chemicals are transferred to the next trophic level. This process is called biomagnification: increasing concentrations of persistent and accumulative chemicals along food webs. The most iconic example is the increase of DDT concentrations in the fish-eating American osprey (Figure 2), a case which has led to the ban of many organochlorine chemicals. Since biomagnification along trophic levels is food-driven, it is important to include diet composition in such studies. This can be explained by an example on small mammals in the Netherlands. Two similar small mammal species, the bank vole (Myodes glareolus) and the common vole (Microtus arvalis), co-occur in large parts of the Netherlands. Although the species look very similar, they differ in their diet and habitat use. The bank vole is an omnivorous species inhabiting different types of habitat, while the common vole is strictly herbivorous and lives in pastures. In a study on the species-specific uptake of cadmium, diet items of both species were analysed, indicating nearly three orders of magnitude difference in cadmium concentrations between earthworms and berries from the vegetation (Figure 3A; van den Brink et al., 2010). Stable isotopic ratios of carbon and nitrogen were used to assess the general diets of the organisms. The common vole ate mostly stinging nettle and grass, including seeds, while the bank vole was shown to forage on grasses, herbs and earthworms. This difference in diet was reflected in increased concentrations of cadmium in the bank vole in comparison with the common vole (both inhabiting the same area). The cadmium concentration in one bank vole appeared to be extremely low (red diamond in Figure 3B), and initially this was considered an artefact. However, detailed analysis of the stable isotopic ratios in this individual revealed that it had foraged on stinging nettle and grass, hence a diet more typical of the common vole. This emphasises once more that organisms accumulate through their diet (you accumulate what you eat!).

Case studies

Orcas or killer whales (Orcinus orca) are large marine predatory mammals, which roam all around the oceans, from the Arctic to the Antarctic.
Although they appear ubiquitous around the world, different pods of orcas generally occur in different regions of the marine ecosystem. Often, each pod has developed specialised foraging behaviours targeted at specific prey species. Although orcas are generally apex predators at the top of the (local) food web, the different foraging strategies suggest that exposure to accumulating chemicals may differ considerably between pods. This was indeed shown to be the case in a very elaborate study on different pods of orcas off the west coast of Canada by Ross et al. (2000). In the Vancouver region there is a resident pod, while the region is also often visited by two transient groups of orcas. PCB concentrations were high in all animals, but the transient animals contained significantly higher levels. The transient whales fed mainly on marine mammals, while the resident animals fed mainly on fish, and this difference in diet was thought to be the cause of the differences in PCB levels between the groups. In that study, it was also shown that PCB levels increased with age, due to the persistence of the PCBs, while female orcas contained significantly lower concentrations of PCBs. The latter is caused by lactation: female orcas feed their calves with lipid-rich milk containing relatively high levels of (lipophilic) PCBs. By this process, females offload a large part of their PCB body burden, albeit by transferring these PCBs to their developing calves (see also Figure 3 in the section on Bioaccumulation). A recent study showed that although PCBs have been banned for decades now, they still pose threats to populations of orcas (Desforges et al., 2018). In that study, regional differences in PCB burdens were confirmed, likely due to differences in diet preference, although this was not specifically addressed. It was shown that PCB levels in most of the orca populations were still above toxic threshold levels, and concerns were raised regarding the viability of these populations. This study confirms that 1) orcas are exposed to different levels of PCBs according to their diet, which influences the biomagnification of the PCBs, 2) orca populations are very inefficient in clearing PCBs, from the individual because of limited metabolism, but also from the population because of the efficient maternal transfer from mother to calf, and 3) persistent, accumulating chemicals may pose threats to organisms even decades after their use. Understanding the mechanisms and processes underlying the biomagnification of persistent and toxic compounds is essential for an in-depth risk assessment.

References

Desforges, J.-P., Hall, A., McConnell, B., Rosing-Asvid, A., Barber, J.L., Brownlow, A., De Guise, S., Eulaers, I., Jepson, P.D., Letcher, R.J., Levin, M., Ross, P.S., Samarra, F., Víkingson, G., Sonne, C., Dietz, R. (2018). Predicting global killer whale population collapse from PCB pollution. Science 361, 1373-1376.
Ford, J.K.B., Ellis, G.A., Matkin, D.R., Balcomb, K.C., Briggs, D., Morton, A.B. (2005). Killer whale attacks on minke whales: Prey capture and antipredator tactics. Marine Mammal Science 21, 603-618.
Guinet, C. (1992). Predation behaviour of killer whales (Orcinus orca) around Crozet Islands. Canadian Journal of Zoology 70, 1656-1667.
Kwok, C.K., Liang, Y., Leung, S.Y., Wang, H., Dong, Y.H., Young, L., Giesy, J.P., Wong, M.H. (2013).
Biota-sediment accumulation factor (BSAF), bioaccumulation factor (BAF), and contaminant levels in prey fish to indicate the extent of PAHs and OCPs contamination in eggs of waterbirds. Environmental Science and Pollution Research 20, 8425-8434.
Ross, P.S., Ellis, G.M., Ikonomou, M.G., Barrett-Lennard, L.G., Addison, R.F. (2000). High PCB concentrations in free-ranging Pacific killer whales, Orcinus orca: Effects of age, sex and dietary preference. Marine Pollution Bulletin 40, 504-515.
Samarra, F.I.P., Bassoi, M., Beesau, J., Eliasdottir, M.O., Gunnarsson, K., Mrusczok, M.T., Rasmussen, M., Rempel, J.N., Thorvaldsson, B., Vikingsson, G.A. (2018). Prey of killer whales (Orcinus orca) in Iceland. Plos One 13, 20.
van den Brink, N., Lammertsma, D., Dimmers, W., Boerwinkel, M.-C., van der Hout, A. (2010). Effects of soil properties on food web accumulation of heavy metals to the wood mouse (Apodemus sylvaticus). Environmental Pollution 158, 245-251.

4.1.6. Question 1
In a lake, the ecosystem consists of algae, daphnids feeding on the algae, and fish feeding on the daphnids. Chemical A has a low log Kow but is persistent, Chemical B has a high log Kow but can be metabolised by daphnids and fish, and Chemical C has a high log Kow and is also persistent. Where may you find the highest concentrations of Chemicals A, B and C?

4.1.6. Question 2
Name the two traits of species that determine the potential for chemicals to reach high concentrations in them.

4.1.6. Question 3
This is a more general question, trying to put the information in the text of this section on bioaccumulation across different trophic levels in a broader perspective: What property does a chemical need, besides a high potential for bioaccumulation and persistence, to have a high likelihood of posing serious environmental risks to organisms?

4.1.7. Critical Body Concentration
Author: Martina G. Vijver
Reviewers: Kees van Gestel and Frank Gobas
Learning objectives:
You should be able to
• describe the Critical Body Concentration (CBC) concept for assessing the toxicity of chemicals.
• graphically explain the CBC concept and make a distinction between slow and fast kinetics.
• mention cases in which the CBC approach fails.

Keywords: Time-dependent effects, internal body concentrations, one-compartment model

Introduction

One of the quests in ecotoxicology is how to link toxicity to exposure, and to understand why some organisms experience toxic effects while others do not at the same level of exposure. A generally accepted approach for assessing possible adverse effects on biota, no matter what kind of species, is the Critical Body Concentration (CBC) concept (McCarty 1991). According to this concept, toxicity is determined by the amount of chemical taken up, so by the internal concentration, which depends on the duration of the exposure as well as on the exposure concentration. Figure 1 shows how the internal concentration of a chemical in an organism develops with time, and the time at which mortality occurs at different, constant exposure concentrations. Independent of exposure time or exposure concentration, mortality occurs at a more or less fixed internal concentration. The CBC is defined as the internal concentration of a substance in an organism at which a defined effect, e.g. 50% mortality or a 50% reduction in the number of offspring produced, starts to occur.
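This time-to-effect behaviour follows directly from one-compartment kinetics. The following minimal sketch (with purely illustrative parameter values, not taken from the text) computes how long it takes the internal concentration to reach a given CBC at different constant exposure concentrations:

```python
import numpy as np

k1 = 10.0    # uptake rate constant (L/kg/d), hypothetical value
k2 = 0.1     # elimination rate constant (1/d), hypothetical value
CBC = 50.0   # critical body concentration (e.g. mmol/kg), illustrative

def internal_concentration(c_exposure, t):
    """C(t) under constant exposure: C = c * k1/k2 * (1 - exp(-k2*t))."""
    return c_exposure * (k1 / k2) * (1.0 - np.exp(-k2 * t))

def time_to_cbc(c_exposure):
    """Time at which C(t) reaches the CBC; infinite if steady state stays below it."""
    c_steady = c_exposure * k1 / k2
    if c_steady <= CBC:
        return np.inf            # the CBC is never reached at this exposure level
    return -np.log(1.0 - CBC / c_steady) / k2

for c in (0.4, 1.0, 2.0, 5.0):          # increasing exposure concentrations
    print(c, round(time_to_cbc(c), 1))  # higher exposure -> CBC reached sooner
```

With slower kinetics (smaller k2) the same pattern stretches out in time, which is why the time needed to reach a constant LC50 depends on the kinetics, as discussed next.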
By comparing internal concentrations measured in exposed organisms to CBC values derived in the laboratory, a measure of risk is obtained. The CBC applies to lethality as well as to sub-lethal effects like reproduction or growth inhibition.

Relating toxicity to toxicokinetics

From Figure 1A, it may also become clear that chemicals with fast uptake kinetics will reach the CBC faster than chemicals with slow kinetics (see the section on Bioaccumulation kinetics). As a consequence, the time needed to reach a constant LC50 (indicated as the ultimate LC50: LC50∞; Figure 1B) also depends on kinetics. Hence, both toxic effects and chemical concentration are controlled by the same kinetics. The CBC can be derived from the LC50-time relationship and be linked to the LC50∞ using uptake and elimination rate constants (k1 and k2). It should be noted that the k2 in this case does not reflect the rate of chemical excretion, but rather the rate of elimination of the toxic effects caused by the chemical (so, note the difference here with the section on Bioaccumulation kinetics). The time needed to reach steady state depends on the body size of the organisms, with larger organisms taking a longer time to attain steady state than smaller organisms (McCarty 1991). The time needed to reach steady state also depends on the exposure surface area of the exposed organisms (Pawlisz and Peters 1993), as well as on their metabolic activity. Organisms not capable of excreting or metabolizing a chemical will continue accumulating it with time, and the LC50∞ will be zero. This is, e.g., the case for cadmium in isopods (Crommentuijn et al. 1994), but the kinetics are so slow that cadmium in these animals never reaches lethal concentrations in relatively clean environments, as their life span is too short. The CBC integrates environmentally available fractions with bioavailable concentrations and toxicity at specific receptors (McCarty and MacKay 1993; see also the section on Bioavailability). In this way, the actual exposure concentration in the environment does not need to be known for performing a risk assessment: the internal concentration of the chemical in the organism is the only concentration required. This overcomes many difficulties regarding bioavailability, e.g. it removes some of the disadvantages of expressing the exposure concentration per unit of soil, as well as of dealing with exposures that vary over time or space.

Proof of the CBC concept

A convincing body of evidence has been collected to support the CBC approach. For organic compounds with a narcotic mode of action, effects could be assessed over a wide range of organisms, test compounds and exposure media. For narcotic compounds with octanol-water partition coefficients (Kow) varying from 10 to 1,000,000 (see for details the section on Relevant chemical properties), the concentration of chemical required for lethality through narcosis is approximately 1-10 mmol/kg (Figure 2; McCarty and MacKay 1993). To reduce the variation in bioconcentration factor (BCF) values for the accumulation of chemicals in organisms from water, normalization by lipid content has been suggested, allowing determination of the chemical activity within an organism's body (US EPA 2003). For that reason, lipid extraction protocols are described in detail in the updated OECD Guideline for the testing of chemicals No. 305 for fish bioaccumulation tests, along with a sampling schedule for lipid measurement in fish.
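The arithmetic of such a lipid normalization is simple. The sketch below is only an illustration (the BCF values and lipid fractions are invented); it rescales a whole-body BCF to the 5% reference lipid content mentioned in the next paragraph:

```python
# Lipid normalization of a BCF to a 5% reference lipid fraction (values illustrative).
def lipid_normalized_bcf(bcf, lipid_fraction, reference=0.05):
    """Rescale a whole-body BCF to the reference lipid content."""
    return bcf * reference / lipid_fraction

print(lipid_normalized_bcf(bcf=2000.0, lipid_fraction=0.08))  # fatty fish -> 1250
print(lipid_normalized_bcf(bcf=2000.0, lipid_fraction=0.02))  # lean fish -> 5000
```

Normalizing in this way removes differences in lipid content between individuals or species, so that the remaining variation reflects the chemical rather than the fish; as noted below, this only makes sense for chemicals that partition into lipids.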
Correction of the BCF for differences in lipid content is also described in the same OECD guideline No. 305. If chemical and lipid analyses have been conducted on the same fish, this requires correction for the corresponding lipid content of each individual measured concentration in the fish. This should be done prior to using the data to calculate the kinetic BCF. If lipid content is not measured in all sampled fish, a mean lipid content of approximately 5% must be used to normalize the BCF. It should be noted that this correction holds only for chemicals accumulating in lipids, and not for chemicals that primarily bind to proteins (e.g. perfluorinated substances).

When does the CBC concept not apply?

The CBC concept also has some limitations. Crommentuijn et al. (1994) found that the toxicity of metals to soil invertebrates could not be explained using critical body concentrations. The way different organisms deal with accumulated metals has a large impact on the magnitude of the body concentrations reached and the accompanying metal sensitivity (Rainbow 2002). Moreover, adaptation or the development of metal tolerance limits the application of CBCs for metals. When the internal metal concentration does not show a monotonic relationship with the exposure concentration, it is not possible to derive CBCs. This means that whenever organisms are capable of trapping a portion of the metal in forms that are not biologically reactive, a direct relationship between body metal concentrations and toxicity may be absent or less evident (Luoma and Rainbow 2005, Vijver et al. 2004). Consequently, for metals a wide range of body concentrations with different biological significance exists. It therefore remains an open question whether the approach is applicable to modes of toxic action other than narcosis. Another important point is the question to what extent the CBC approach is applicable to assessing the effects of chemical mixtures, especially in cases where the chemicals have different modes of action.

References

Crommentuijn, T., Doodeman, C.J.A.M., Doornekamp, A., Van der Pol, J.J.C., Bedaux, J.J.M., Van Gestel, C.A.M. (1994). Lethal body concentrations and accumulation patterns determine time-dependent toxicity of cadmium in soil arthropods. Environmental Toxicology and Chemistry 13, 1781-1789.
Luoma, S.N., Rainbow, P.S. (2005). Why is metal bioaccumulation so variable? Biodynamics as a unifying concept. Environmental Science and Technology 39, 1921-1931.
McCarty, L.S. (1991). Toxicant body residues: implications for aquatic bioassays with some organic chemicals. In: Mayes, M.A., Barron, M.G. (Eds.), Aquatic Toxicology and Risk Assessment: Fourteenth Volume. ASTM STP 1124. American Society for Testing and Materials, Philadelphia. pp. 183-192. DOI: 10.1520/STP23572S
McCarty, L.S., Mackay, D. (1993). Enhancing ecotoxicological modeling and assessment. Environmental Science and Technology 27, 1719-1727.
Pawlisz, A.V., Peters, R.H. (1993). A test of the equipotency of internal burdens of nine narcotic chemicals using Daphnia magna. Environmental Science and Technology 27, 2801-2806.
Rainbow, P.S. (2002). Trace metal concentrations in aquatic invertebrates: why and so what? Environmental Pollution 120, 497-507.
U.S. EPA (2003). Methodology for Deriving Ambient Water Quality Criteria for the Protection of Human Health: Technical Support Document. Volume 2: Development of National Bioaccumulation Factors. United States Environmental Protection Agency, Washington, D.C.
Vijver, M.G., Van Gestel, C.A.M., Lanno, R.P., Van Straalen, N.M., Peijnenburg, W.J.G.M. (2004). Internal metal sequestration and its ecotoxicological relevance: a review. Environmental Science and Technology 38, 4705-4712.

4.1.7. Question 1
Explain why the CBC approach integrates chemical and biological availability.

4.1.7. Question 2
How are time dynamics involved in the CBC approach?

4.1.7. Question 3
Under what conditions can the CBC approach not be applied?
4.2. Toxicodynamics & Molecular Interactions
Author: Timo Hamers
Reviewers: Frank van Belleghem and Ludek Blaha
Learning goals
You should be able to
• explain that a toxic response requires a molecular interaction between a toxic compound and its target
• name at least three different types of biomolecular targets
• name at least three functions of proteins that can be hampered by toxic compounds
• explain in general terms the consequences of molecular interaction with a receptor protein, an enzyme, a transporter protein, a DNA molecule, and a membrane lipid bilayer.

Key words: Receptor, Transcription factor, DNA adducts, Membrane, Oxidative stress

Description

Toxicodynamics describes the dynamic interactions between a compound and its biological target, leading ultimately to an (adverse) effect. In this Chapter 4.2, toxicodynamics are described for processes leading to diverse adverse effects. Any adverse effect of a toxic substance is the result of an interaction between the toxicant and its biomolecular target (i.e. its mechanism of action). Biomolecular targets include proteins, DNA or RNA molecules, and phospholipid bilayer membranes, but also small molecules that have specific functions in maintaining cellular homeostasis. Both endogenous and xenobiotic compounds that bind to proteins are called ligands. The consequence of a protein interaction depends on the role of the target protein, e.g.:
1. receptor
2. enzyme
3. transporter protein

Receptor proteins specifically bind and respond to endogenous signalling ligands such as hormones, prostaglandins, growth factors, or neurotransmitters, by causing a typical cellular response. Receptor proteins can be located in the cell membrane, in the cytosol, and in the nucleus of a cell. Agonistic receptor ligands activate the receptor protein, whereas antagonistic ligands inactivate the receptor and prevent (endogenous) agonists from activating the receptor. Based on the role of the receptor protein, ligand binding may interfere with ion channels, G-protein coupled receptors, enzyme-linked receptors, or nuclear receptors. Xenobiotic ligands can interfere with these cellular responses by acting as agonistic or antagonistic ligands (see the section on Receptor interaction). Compounds that bind to an enzyme usually cause inhibition of the enzyme activity, i.e. a decrease in the conversion rate of the endogenous substrate(s) of the enzyme into its/their corresponding product(s). Compounds that bind non-covalently to an enzyme cause reversible inhibition, while compounds that bind covalently to an enzyme cause irreversible inhibition (see the section on Protein inactivation). Similarly, compounds that bind to a transporter protein usually inhibit the transport of the natural, endogenous ligand. Such transporter proteins may be responsible for local transport of endogenous ligands across the cell membrane, but also for peripheral transport of endogenous ligands through the blood from one organ to the other (see the section on Endocrine disruption). Apart from interactions with functional receptor, enzyme, or transporter proteins, toxic compounds may also interact with structural proteins. For instance, the cytoskeleton may be damaged by toxic compounds that block the polymerization of actin, thereby preventing the formation of filaments. In addition to proteins, DNA and RNA macromolecules can be targets for compound binding. Especially the guanine base can be covalently bound by electrophilic compounds, such as reactive metabolites.
Such DNA adducts may cause copy errors during DNA replication, leading to point mutations (see the section on Genotoxicity). Compounds may also interfere with phospholipid bilayer membranes, especially with the outer cell membrane and with mitochondrial membranes. Compounds disturb membrane integrity and functioning by partitioning into the lipid bilayer. Loss of membrane integrity may ultimately lead to leakage of electrolytes and loss of membrane potential.

Narcosis and Membrane Damage

Partitioning into the lipid bilayer is a non-specific process. Therefore, the concentrations in biological membranes that cause effects through this mode of action do not differ between compounds. As such, this type of toxicity is considered a "baseline toxicity" (also called "narcosis"), which is exerted by all chemicals. For instance, the chemical concentration in a target membrane causing 50% mortality in a test population is around 50 mmol/kg lipid, irrespective of the species or compound under consideration. Based on external exposure levels, however, compounds do have different narcotic potencies. After all, to reach similar lipid-based internal concentrations, different exposure concentrations are required, depending on the lipid-water partitioning coefficient, which is an intrinsic property of the compound, and not of the species. Narcotic action is not the only mechanism by which compounds may damage membrane integrity. Compounds called "ionophores", for instance, act as ion carriers that transport ions across the membrane, thereby disrupting the electrolyte gradient across the membrane. Ionophores should not be confused with compounds that open or close ion channels, although both types of compounds may disrupt the electrolyte gradient across the membrane. The difference is that ionophores dissolve in the bilayer membrane and shuttle ions across the membrane themselves, whereas ion channel inhibitors or stimulators close or open, respectively, a protein channel in the membrane that acts as a gate for ion transport. Finally, it should be mentioned here that some compounds may cause oxidative stress by increasing the formation of reactive oxygen species (ROS), such as H2O2, O3, O2•-, •OH, NO•, or RO•. ROS are oxygen metabolites that are found in any aerobic living organism. Compounds may directly cause an increase in ROS formation by undergoing redox cycling or by interfering with the electron transport chain. Alternatively, compounds may cause an indirect increase in ROS formation by interfering with ROS-scavenging antioxidants, ranging from small molecules (e.g. glutathione) to proteins (e.g. catalase or superoxide dismutase). For compounds causing either direct or indirect oxidative stress, it is not the compound itself that has a molecular interaction with the target, but the ROS, which may bind covalently to DNA, proteins, and lipids (see the section on Oxidative Stress).

4.2. Question 1
Name three biomolecular targets that can be affected by a compound.

4.2. Question 2
Name three different mechanisms by which a compound can affect ion transport across the cell membrane.

4.2. Question 3
What is the difference between a receptor agonist and a receptor antagonist?

4.2.1.
Protein Inactivation
Author: Timo Hamers
Reviewers: Frank van Belleghem and Ludek Blaha
Learning objectives:
You should be able to
• discuss how a compound that binds to a protein may inhibit ligand binding, and thereby hamper the function of the protein
• explain the mechanism of action of organophosphate insecticides inhibiting acetylcholinesterase
• explain the mechanism of action of halogenated phenols inhibiting thyroid hormone transport by transthyretin
• distinguish between reversible and irreversible protein inactivation
• distinguish between competitive, non-competitive, and uncompetitive enzyme inhibition

Key words: enzyme inhibition, acetylcholinesterase, transthyretin, competitive inhibition, non-competitive inhibition, uncompetitive inhibition

Introduction

Proteins play an important role in essential biochemical processes, including catalysis of metabolic reactions, DNA replication and repair, transport of messengers (e.g. hormones), and receptor responses to such messengers. Many toxic compounds exert their toxic action by binding to a protein and thereby disturbing these vital protein functions.

Inhibition of the protein transport function

Binding of xenobiotic compounds to a transporter protein may hamper binding of the natural ligand of the protein, thereby inhibiting the transporter function of the protein. An example of such inhibition is the binding of halogenated phenols to transthyretin (TTR). TTR is a transport protein for thyroid hormones, present in the blood. It has two binding sites for the transport of thyroid hormone, i.e. mainly thyroxine (T4) in mammals and mainly triiodothyronine (T3) in other vertebrates (Figure 1). Compounds with a high structural resemblance to thyroid hormone (especially halogenated phenols, such as hydroxylated metabolites of PCBs or PBDEs) are capable of competing with thyroid hormone for TTR binding. Apart from the fact that this enhances the distribution of the toxic compounds, it also causes an increase of unbound thyroid hormone in the blood, which is then freely available for uptake in the liver, metabolic conjugation, and urinary excretion. Ultimately, this may lead to decreased thyroid hormone levels in the blood.

Inhibition of enzymatic activity

Proteins involved in the catalysis of a metabolic reaction are called enzymes. The general formula of such a reaction is:

substrate(s) + enzyme → product(s) + enzyme

Binding of a toxic compound to an enzyme usually causes an inhibition of the enzyme activity, i.e. a decrease in the conversion rate of the endogenous substrate(s) of the enzyme into its/their corresponding product(s). In practice, this causes a toxic response due to a surplus of substrate and/or a deficit of product. One of the classical examples of enzyme inhibition by toxic compounds is the inhibition of the enzyme acetylcholinesterase (AChE) by organophosphate insecticides. AChE catalyzes the hydrolysis of the neurotransmitter acetylcholine (ACh) in the cholinergic synapses. During transfer of an action potential from one cell to the other, ACh is released in these synapses from the presynaptic cell into the synaptic cleft in order to stimulate the acetylcholine receptor (AChR) on the membrane of the postsynaptic cell. AChE, which is also present in these synapses, is then responsible for breaking down the ACh into acetic acid and choline:

acetylcholine + H2O → choline + acetic acid

By covalent binding to serine residues in the active site of the AChE enzyme, organophosphate insecticides can inhibit this reaction, causing accumulation of the ACh neurotransmitter in the synapse (Figure 2).
Irreversible vs reversible enzyme inhibition Organophosphate insecticides bind covalently to the AChE enzyme, thereby causing irreversible enzyme inhibition. Irreversible enzyme inhibition progressively increases in time following first-order kinetics (link to section on Bioaccumulation and kinetic modelling). Recovery of enzyme activity can only be obtained by de novo synthesis of enzymes. In contrast to AChE inhibition, inhibition of the T4 transport function of TTR is reversible, because the halogenated phenols bind to TTR in a non-covalent way. Similarly, non-covalent binding of a toxic compound to an enzyme causes reversible inhibition of the enzyme activity. In addition to covalent and non-covalent enzyme binding, irreversible enzyme inhibition may occur when toxic compounds cause an error during enzyme synthesis. For instance, ions of essential metals, which are present as cofactors in the active site of many enzymes, may be replaced by ions of other metals during enzyme synthesis, yielding inactive enzymes. A classic example of such decreased enzyme activity is the inhibition of δ-aminolevulinic acid dehydratase (δ-ALAD) by lead. In this case, lead replaces zinc in the active site of the enzyme, thereby inhibiting a catalytic step in the synthesis of a precursor of heme, a cofactor of the protein hemoglobin (link to section on Toxicity mechanisms of metals). With respect to reversible enzyme inhibition, three types of inhibition can be distinguished, i.e. competitive, non-competitive, and uncompetitive inhibition (Figure 3). Competitive inhibition refers to a situation where the chemical competes with the substrate for binding to the active site of the enzyme. Competitive inhibition is very specific, because it requires that the inhibitor resembles the substrate and fits in the same binding pocket of the active site. The TTR-binding example described above is a typical example of competitive inhibition between thyroid hormone and halogenated phenols for occupation of the TTR binding site. A more classic example of competitive inhibition is the inhibition of bacterial transpeptidase by penicillin. Transpeptidase is the enzyme responsible for the cross-linking of peptidoglycan strands, which is the final step in bacterial cell wall synthesis. By causing defective cell wall synthesis, penicillin acts as an antibiotic causing bacterial death. Non-competitive inhibition refers to a situation where the chemical binds to an allosteric site of the enzyme (i.e. not the active site), thereby causing a conformational change of the active site. As a consequence, the substrate cannot enter the active site, or the active site becomes inactive, or the product cannot be released from the active site. For instance, echinocandin antifungal drugs non-competitively inhibit the enzyme 1,3-beta glucan synthase, which is responsible for the synthesis of beta-glucan, a major constituent of the fungal cell wall. Lack of beta-glucan in fungal cell walls prevents fungal resistance against osmotic forces, leading to cell lysis. Uncompetitive inhibition refers to a situation where the chemical can only bind to the enzyme if the substrate is bound at the same time. Substrate binding leads to a conformational change of the enzyme, which leads to the formation of an allosteric binding site for the inhibitor. Uncompetitive inhibition is more common in two-substrate enzyme reactions than in one-substrate enzyme reactions. An example of uncompetitive inhibition is the inhibition by lithium of the enzyme inositol monophosphatase (IMPase), which is involved in recycling of the second messenger inositol-3-phosphate (I3P) (link to section on Receptor interaction). IMPase catalyzes the final step of dephosphorylating inositol monophosphate into inositol. Since lithium is the primary treatment for bipolar disorder, this observation has led to the inositol depletion hypothesis, which proposes that inhibition of inositol phosphate metabolism offers a plausible explanation for the therapeutic effects of lithium.
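The three reversible inhibition types can be compared by how they modify Michaelis-Menten kinetics. The sketch below is a standard textbook formulation, not something specific to this section; all parameter values are illustrative.

```python
# Hedged sketch of the three reversible inhibition modes in Michaelis-Menten
# form; alpha = 1 + [I]/Ki is the inhibition factor. All parameter values
# (Vmax, Km, Ki, concentrations) are illustrative assumptions.

def rate(S, I=0.0, Vmax=1.0, Km=1.0, Ki=0.5, mode="none"):
    """Reaction rate v as a function of substrate S and inhibitor I."""
    a = 1.0 + I / Ki                      # inhibition factor alpha
    if mode == "competitive":             # inhibitor competes for active site
        return Vmax * S / (a * Km + S)    # apparent Km increases
    if mode == "non-competitive":         # inhibitor binds an allosteric site
        return Vmax * S / (a * (Km + S))  # apparent Vmax decreases
    if mode == "uncompetitive":           # inhibitor binds the ES complex only
        return Vmax * S / (Km + a * S)    # apparent Km and Vmax both decrease
    return Vmax * S / (Km + S)            # no inhibition

for mode in ("none", "competitive", "non-competitive", "uncompetitive"):
    print(f"{mode:16s} v(S=1) = {rate(1.0, I=1.0, mode=mode):.2f}, "
          f"v(S=100) = {rate(100.0, I=1.0, mode=mode):.2f}")
```

Note that at high substrate concentrations the competitive case approaches the uninhibited maximal rate (the substrate outcompetes the inhibitor), whereas the non-competitive case does not; this is also the key to Questions 4 and 5 below.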
4.2.1. Question 1 Explain how binding of organophosphate insecticides to acetylcholinesterase enzymes may cause neurotoxicity. 4.2.1. Question 2 Explain how organohalogenated phenols may cause decreased blood levels of thyroid hormone T4. 4.2.1. Question 3 What is the difference between a competitive and a non-competitive enzyme inhibitor? 4.2.1. Question 4 Is it possible to outcompete a competitive enzyme inhibitor by increasing the substrate concentration? 4.2.1. Question 5 Is it possible to outcompete a non-competitive enzyme inhibitor by increasing the substrate concentration? 4.2.2. Receptor interaction Author: Timo Hamers Reviewers: Frank van Belleghem and Ludek Blaha Learning objectives You should be able to • explain the possible effects of compound interference with ion channels. • explain the possible effects of compound interference with G-protein coupled receptors (GPCRs). • explain the possible effects of compound interference with enzyme-linked receptors. • explain the possible effects of compound interference with nuclear receptors. • understand what signalling pathways are and how they can be affected by toxic compounds Key words: Ion channels, G-protein coupled receptors, enzyme-linked receptors, nuclear receptors Introduction Receptor proteins specifically bind and respond to endogenous signalling ligands such as hormones, prostaglandins, growth factors, or neurotransmitters, by causing a typical cellular response. Receptor proteins can be located in the cell membrane, in the cytosol, and in the nucleus of a cell. Agonistic receptor ligands activate the receptor protein, whereas antagonistic ligands inactivate the receptor and prevent (endogenous) agonists from activating the receptor (Figure 1). Based on the role of the receptor protein, ligand binding may interfere with: 1. ion channels 2. G-protein coupled receptors 3. enzyme-linked receptors 4. nuclear receptors. Xenobiotic ligands can interfere with these cellular responses by acting as agonistic or antagonistic ligands. 1. Ion channels Ion channels are transmembrane protein complexes that transport ions across a phospholipid bilayer membrane. Ion channels are especially important in neurotransmission: stimulating neurotransmitters (e.g. acetylcholine, ACh) bind to the (so-called ionotropic) receptor part of the ion channel and open the ion channel for a very short (i.e. millisecond) period of time. As a result, ions can cross the membrane, causing a change in transmembrane potential (see Figure). On the other hand, receptor binding by inhibitory neurotransmitters (e.g. gamma-aminobutyric acid, GABA) prevents the opening of ion channels. Compounds interfering with sodium channels, for instance, are neurotoxic compounds (see section on Neurotoxicity).
They can either block the ion channels or keep them in a prolonged or permanently open state. Many compounds known to interfere with ion channels are natural toxins. For instance, tetrodotoxin (TTX), which is produced by marine bacteria and highly accumulated in puffer fish, and saxitoxin, which is produced by dinoflagellates and accumulated in shellfish, are capable of blocking voltage-gated sodium channels in nerve cells. In contrast, ciguatoxin, another persistent toxin produced by dinoflagellates, which accumulates in predatory fish positioned high in the food chain, causes prolongation of the opening of voltage-gated sodium channels. Some pesticides like DDT and pyrethroid insecticides also prevent closure of voltage-gated sodium channels in nerve cells. As a consequence, full repolarization of the membrane potential is not achieved, so the nerve cells do not reach the resting potential, and any new stimulus that would be too low to reach the threshold for depolarization under normal conditions will now cause a new action potential. In other words, the nerve cells become hyperexcitable and undergo a series of action potentials (repetitive firing), causing tremors and hyperthermia. 2. G-protein coupled receptors (GPCRs) GPCRs are transmembrane receptors that transfer an extracellular signal into an activated G-protein that is connected to the receptor on the intracellular side of the membrane. G-proteins are heterotrimeric proteins consisting of three subunits alpha, beta, and gamma, of which the alpha subunit - in inactivated form - contains a guanosine diphosphate (GDP) molecule. Upon binding of endogenous ligands such as hormones, prostaglandins, or neurotransmitters (i.e. the signal or "first messenger") to the (so-called metabotropic) receptor, a conformational change in the GPCR complex leads to an exchange of the GDP for a guanosine triphosphate (GTP) molecule in the alpha monomer part of the G-protein, causing release of the activated alpha subunit from the beta/gamma dimer part. The activated alpha monomer can interact with several target enzymes, causing an increase in "second messengers" that start signal transduction pathways (see point 3, Enzyme-linked receptors). The remaining beta/gamma complex may also move along the inner membrane surface and affect the activity of other proteins (Figure 2). Two major enzymes that are activated by the alpha monomer are adenylyl cyclase, causing an increase in the second messenger cyclic AMP (cAMP), and phospholipase C, causing an increase in the second messenger diacylglycerol (DAG). In turn, cAMP and DAG activate protein kinases, which can phosphorylate many other enzymes. Activated phospholipase C also causes an increase in levels of the second messenger inositol-3-phosphate (I3P), which opens ion channels in the endoplasmic reticulum, causing a release of calcium from the endoplasmic store; this calcium also acts as a second messenger. On the other hand, the increase in cytosolic calcium levels is simultaneously tempered by the beta/gamma dimer, which can inhibit voltage-gated calcium channels in the cell membrane. Ultimately, the GPCR signal is extinguished by slow hydrolysis of GTP into GDP by the activated alpha monomer, causing it to rearrange with the beta/gamma dimer into the original inactivated trimeric G-protein (see also courses.washington.edu/conj/bess/gpcr/gpcr.htm). The most well-known example of disruption of GPCR signalling is by cholera toxin (see text block Cholera toxin below).
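The logic of this molecular on/off switch can be illustrated with a toy simulation: active G-alpha drives cAMP synthesis, and GTP hydrolysis terminates the signal, so blocking hydrolysis (as cholera toxin does, see the text block below) keeps cAMP persistently elevated. The model and all rate constants are illustrative assumptions, not measured values.

```python
# Hedged toy model of GPCR second-messenger dynamics (Euler integration):
# active G-alpha(GTP) stimulates cAMP synthesis, GTP hydrolysis switches
# G-alpha off. All rate constants are illustrative assumptions.

def camp_after_stimulus(k_hydrolysis, t_end=20.0, dt=0.01):
    g_active, camp = 1.0, 0.0          # state just after receptor activation
    for _ in range(int(t_end / dt)):
        g_active -= k_hydrolysis * g_active * dt    # G-alpha GTP -> GDP
        camp += (2.0 * g_active - 0.5 * camp) * dt  # synthesis - degradation
    return camp

print(f"normal G-protein:   cAMP = {camp_after_stimulus(1.0):.2f}")
print(f"hydrolysis blocked: cAMP = {camp_after_stimulus(0.0):.2f}")
```

With normal GTP hydrolysis the cAMP signal is transient; with hydrolysis blocked it settles at a persistently high level, which is exactly the situation created by cholera toxin.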
Despite the recognized importance of GPCRs in medicine and pharmacology, little attention has so far been paid in toxicology to the interaction of xenobiotics with GPCRs. Although a limited number of studies have demonstrated that endocrine disrupting compounds including PAHs, dioxins, phthalates, bisphenol-A, and DDT can interact with GPCR signalling, the toxicological implications of these interactions (especially with respect to disturbed energetic metabolism) remain subject for further research (see review by Le Ferrec and Øvrevik, 2018). Cholera toxin Cholera toxin is a so-called AB exotoxin produced by Vibrio cholerae bacteria, consisting of an "active" A-part and a "binding" B-part (see http://www.sumanasinc.com/webcontent/animations/content/diphtheria.html). Upon binding of the B-part to the intestinal epithelium membrane, the entire AB complex is internalized into the cell via endocytosis, and the active A-part is released. This A-part adds an ADP-ribose group to G-proteins, making the GTP hydrolysis by activated G-proteins impossible. As a consequence, activated G-proteins remain in a permanently active state, adenylyl cyclase is permanently activated and cAMP levels rise, which in turn causes an imbalance in ion housekeeping, i.e. an excessive secretion of chloride ions to the gut lumen and a decreased uptake of sodium ions from the gut lumen. Due to the increased osmotic pressure, water is released to the gut lumen, causing dehydration and severe diarrhoea ("rice-water stool"). 3. Enzyme-linked receptors Enzyme-linked receptors are transmembrane receptors that transfer an extracellular signal into an intracellular enzymatic activity. Most enzyme-linked receptors belong to the family of receptor tyrosine kinase (RTK) proteins. Upon binding of endogenous ligands such as hormones, cytokines, or growth factors (i.e. the signal or primary messenger) to the extracellular domain of the receptors, the receptor monomers dimerize and develop kinase activity, i.e. become capable of transferring a phosphate group from a high-energy donor molecule (ATP) to an acceptor protein. The first substrate for this phosphorylation activity is the dimerized receptor itself, which accepts phosphate groups donated by ATP on its intracellular tyrosine residues. This autophosphorylation is the first step of a signalling pathway consisting of a cascade of subsequent phosphorylation steps of other kinase proteins (i.e. signal transduction), ultimately leading to transcriptional activation of genes followed by a cellular response (Figure 3). Figure in preparation Figure 3: Upon ligand binding, tyrosine kinase receptor (TKR) proteins become autophosphorylated and may phosphorylate (i.e. activate) other proteins, including other kinases. Xenobiotic compounds can interfere with these signalling pathways in many different ways. Compounds may prevent binding of the endogenous ligand, by blocking the receptor or by chelating the endogenous ligands. Most RTK inhibitors inhibit the kinase activity directly by acting as competitive inhibitors of ATP binding. Many RTK inhibitors are used in cancer treatment, because RTK overactivity is typical for many types of cancer. This overactivity may for instance be caused by increased levels of receptor-activating growth factors, or by spontaneous dimerization when the receptor is overexpressed or mutated.
4. Nuclear receptors Nuclear receptors are proteins that are activated by endogenous compounds (often hormones), ultimately leading to expression of genes specifically regulated by these receptors. Apart from ligand binding, activation of most nuclear receptors requires dimerization with a coactivating transcription factor. While some nuclear receptors are located in the nucleus in inactive form (e.g. the thyroid hormone receptor), most nuclear receptors are located in the cytosol, where they are bound to co-repressor proteins (often heat-shock proteins) keeping them in an inactive state. Upon ligand binding to the ligand binding domain (LBD) of the receptor, the co-repressor proteins are released and the receptor either forms a homodimer with a similar activated nuclear receptor or forms a heterodimer with a different nuclear receptor, which for nuclear hormone receptors is often the retinoid-X receptor (RXR). Before or after dimerization, activated nuclear receptors are translocated to the nucleus. In the nucleus, they bind through their DNA-binding domain (DBD, or "zinc finger") to a responsive element in the DNA located in the promotor region of receptor-responsive genes. Consequently, these genes are transcribed to mRNA in the nucleus, which is further translated into proteins in the cell cytoplasm (see Figure 4). Xenobiotic compounds may act as agonists or antagonists of nuclear receptor activation. Chemicals that act as a nuclear receptor agonist mimic the action of the endogenous activator(s), whereas chemicals that act as a nuclear receptor antagonist basically block the LBD of the receptor, preventing the binding of the endogenous activator(s). Over the past decades, interaction of xenobiotics with nuclear receptors involved in signalling of both steroid and non-steroid hormones has gained a lot of attention from researchers investigating endocrine disruption (link to section on Endocrine Disruption). Nuclear receptor activation is also the key mechanism in dioxin-like toxicity (see text block Dioxin-like toxicity below). Dioxin-like toxicity The term dioxins refers to polyhalogenated dibenzo-[p]-dioxin (PHDD) compounds, which are planar molecules consisting of two halogenated aromatic rings, which are connected by two ether bridges. The most potent and well-studied dioxin is 2,3,7,8-tetrachloro-[p]-dibenzodioxin (2,3,7,8-TCDD), which is often too simply referred to as TCDD or even just "dioxin". Other compounds with similar properties (dioxin-like compounds) include polyhalogenated dibenzo-[p]-furan (PHDF) compounds (often too simply referred to as "furans"), which are planar molecules consisting of two halogenated aromatic rings connected by one ether bridge and one carbon-carbon bond. A third major class of dioxin-like compounds belongs to the polyhalogenated biphenyls (PHBs), which consist of two halogenated aromatic rings connected only by a carbon-carbon bond. The most well-known compounds belonging to this latter category are the polychlorinated biphenyls (PCBs). Of all PHDD, PHDF or PHB compounds, only the persistent and planar compounds are considered dioxin-like compounds. For the PHBs, this implies that they should contain at most one halogen substitution in any of the four ortho positions (see examples below). Non-ortho-substituted PHBs can easily obtain a planar conformation with the two aromatic rings in one plane, whereas mono-ortho-substituted PHBs can obtain such a conformation only at higher energetic cost.
• 2,3,7,8-tetrachlorodibenzo-[p]-dioxin (2,3,7,8-TCDD) is the most potent and well-studied dioxin-like compound, usually too simply referred to as "dioxin". • 2,3,7,8-tetrachlorodibenzo-[p]-furan (2,3,7,8-TCDF) is a potent dioxin-like compound, usually too simply referred to as "furan". • 3,3',4,4',5-pentachlorinated biphenyl (PCB-126) is the most potent dioxin-like PCB compound, with no chlorine substitution in any of the four ortho positions next to the carbon-carbon bridge. • 2,3',4,4',5-pentachlorinated biphenyl (PCB-118) is a weak dioxin-like PCB compound, with one chlorine substitution among the four ortho positions next to the carbon-carbon bridge. • 2,2',4,4',5,5'-hexachlorinated biphenyl (PCB-153) is a non-dioxin-like (NDL) PCB compound, with two chlorine substitutions among the four ortho positions next to the carbon-carbon bridge. The planar conformation is required for the dioxin-like compounds to fit as a key in the lock of the aryl hydrocarbon receptor (AhR, also known as the "dioxin receptor" or DR), present in the cytosol. The activated AhR then dissociates from its repressor proteins, is translocated to the nucleus, and forms a heterodimer with the AhR nuclear translocator (ARNT). The AhR-ARNT complex binds to dioxin-response elements (DRE) in the promotor regions of dioxin-responsive genes in the DNA, ultimately leading to transcription and translation of these genes (see Figure 1 in Denison & Nagy, 2003). Famous examples of such genes belong to the CYP1, UGT, and GST families, which encode Phase I and Phase II metabolic enzymes whose induction by the AhR-ARNT complex is a natural response triggered by the need to remove xenobiotics (link to section on Xenobiotic metabolism and defence). Other genes with a DRE in their promotor region include genes involved in protein phosphorylation, such as the proto-oncogene c-raf and the cyclin dependent kinase inhibitor p27. This classical mechanism of ligand:AhR:ARNT:DRE complex-dependent induction of gene expression, however, cannot explain all the different types of toxicity observed for dioxins, including immunotoxicity, reproductive toxicity and developmental toxicity. Still, these effects are known to be mediated through the AhR as well, as they were not observed in AhR knockout mice. This can partly be explained by the fact that not all genes that are under transcriptional control of a DRE are known yet. Moreover, AhR-dependent mechanisms other than this classical mechanism have been described. For instance, AhR activation may have anti-estrogenic effects because activated AhR (1) binds to the estrogen receptor (ER) and targets it for degradation, (2) binds (with ARNT) to inhibitory DREs in the promotor of ER-dependent genes, and (3) competes with the ER dimer for common coactivators. Although dioxin-like compounds absolutely require the AhR to exert their major toxicological effects, several AhR-independent effects have been described as well, such as AhR-independent alterations in gene expression and changes in Ca2+ influx related to changes in protein kinase activity. Apart from the persistent halogenated dioxin-like compounds described above, other compounds may also activate the AhR, including natural AhR agonists (nAhRAs) found in food (e.g. indolo[3,2-b]carbazole (ICZ) in cruciferous vegetables, bergamottin in grapefruits, tangeretin in citrus fruits), and other planar aromatic compounds, including polycyclic aromatic hydrocarbons (PAHs) produced by incomplete combustion of organic fuels.
Upon activation of the AhR, these non-persistent compounds are metabolized by the induced CYP1A biotransformation enzymes. In addition, an endogenous AhR ligand called 6-formylindolo[3,2-b]carbazole (FICZ) has been identified. FICZ is a mediator in many physiological processes, including immune responses, cell growth and differentiation. Endogenous FICZ levels are regulated by a negative feedback FICZ/AhR/CYP1A loop, i.e. FICZ activates AhR and is metabolized by the subsequently induced CYP1A. Dysregulation of this negative feedback loop by other AhR agonists may disrupt FICZ functioning, and could possibly explain some of the effects observed for dioxin-like compounds. Further reading: Denison, M.S., Soshilov, A.A., He, G., De Groot, D.E., Zhao, B. (2011). Exactly the same but different: promiscuity and diversity in the molecular mechanisms of action of the Aryl hydrocarbon (Dioxin) Receptor. Toxicological Sciences 124, 1-22. Boelsterli, U.A. (2009). Mechanistic Toxicology (2nd edition). Informa Healthcare, New York, London. Le Ferrec, E., Øvrevik, J. (2018). G-protein coupled receptors (GPCR) and environmental exposure. Consequences for cell metabolism using the β-adrenoceptors as example. Current Opinion in Toxicology 8, 14-19. courses.washington.edu/conj/bess/gpcr/gpcr.htm 4.2.2. Question 1 Why are compounds interfering with ion channels mainly neurotoxic compounds? 4.2.2. Question 2 GPCR signalling is not only disrupted through interaction with the receptor. What alternative mechanisms can play a role? 4.2.2. Question 3 What is the main effect of activating enzyme-linked receptors? 4.2.2. Question 4 What happens if a compound binds to a nuclear receptor? 4.2.3. Oxidative stress - I. Reactive oxygen species and antioxidants Author: Frank van Belleghem Reviewers: Raymond Niesink, Kees van Gestel, Éva Hideg Learning objectives: You should be able to • explain what oxidative stress is, under which circumstances it arises and why it is important in toxicology. • describe what reactive oxygen species are and how they are produced. • describe how levels of reactive oxygen species are kept under control. • make a distinction between enzymatic and non-enzymatic antioxidants. Keywords: Reactive oxygen species, Fenton reaction, Enzymatic antioxidants, Non-enzymatic antioxidants, Lipid peroxidation. Reactive oxygen species Molecular oxygen (O2) is a byproduct of photosynthesis and essential to all heterotrophic cells because it functions as the terminal electron acceptor during the oxidation of organic substances in aerobic respiration. This process results in the reduction of O2 to water, leading to the formation of chemical energy and reducing power. The reason why O2 can be reduced with relative ease in biological systems can be found in the physicochemical properties of the oxygen molecule (in the triplet ground state, i.e. as it occurs in the atmosphere). Because of its electron configuration, O2 is actually a biradical that can act as an electron acceptor. The outer molecular orbitals of O2 each contain one electron, and the spins of these electrons are parallel (Figure 1). As a result, oxygen (in the ground state) is not very reactive, because according to the Pauli exclusion principle its two parallel-spin electrons cannot pair with the antiparallel electron pairs of most molecules, so that O2 can accept only one electron at a time.
As a consequence, oxygen can only undergo univalent reductions, and the complete reduction of oxygen to water requires the sequential addition of four electrons, leading to the formation of one-, two-, and three-electron reduced oxygen intermediates (Figure 1). These oxygen intermediates are, in sequence, the superoxide anion radical (O2•−), hydrogen peroxide (H2O2) and the hydroxyl radical (•OH). Another reactive oxygen species of importance is singlet oxygen (1O2 or 1Δg). Singlet oxygen is formed by converting ground-state molecular oxygen into an excited energy state, which is much more reactive than the normal ground-state molecular oxygen. Singlet oxygen is typically generated by a process called photosensitization, for example in the lens of the eye. Photosensitization occurs when light (UV) absorption by an endogenous or xenobiotic substance lifts the compound to a higher energy state (a high-energy triplet intermediate), which can transfer its energy to oxygen, forming highly reactive singlet oxygen. Apart from oxygen-dependent photodynamic reactions, singlet oxygen is also produced by neutrophils, and this has been suggested to be important for bacterial killing through the formation of ozone (O3) (Onyango, 2016). Because these oxygen intermediates are potentially deleterious products that can damage cellular components, they are referred to as reactive oxygen species (ROS). ROS are also often termed 'free radicals', but this is incorrect because not all ROS are radicals (e.g. H2O2, 1O2 and O3). Moreover, as all radicals are (currently) considered as unattached, the prefix 'free' is actually unnecessary (Koppenol & Traynham, 1996). ROS are byproducts of aerobic metabolism in the different organelles of cells, for instance respiration or photosynthesis, or are formed as part of defenses against pathogens. Endogenous sources of reactive oxygen species include oxidative phosphorylation, P450 metabolism, peroxisomes and inflammatory cell activation. For example, superoxide anion radicals are endogenously formed from the reduction of oxygen by the semiquinone of ubiquinone (coenzyme Q), a coenzyme widely distributed in plants, animals, and microorganisms. Ubiquinones function in conjunction with enzymes in cellular respiration (i.e., oxidation-reduction processes). The superoxide anion radical is formed when one electron is taken up by one of the antibonding π*-orbitals (formed by two 2p atomic orbitals) of molecular oxygen. A second example of an endogenous source of superoxide anion radicals is the auto-oxidation of reduced heme proteins. It is known, for example, that oxyferrocytochrome P-450 substrate complexes may undergo auto-oxidation and subsequently split into (ferri) cytochrome P-450, a superoxide anion radical and the substrate (S). This process is known as the uncoupling of the cytochrome P-450 (CYP) cycle and is also referred to as the oxidase activity of cytochrome P-450. However, it should be mentioned that this is not the normal functioning of CYP: it occurs only when the transfer of an oxygen atom to a substrate is not tightly coupled to NADPH utilization, so that electrons derived from NADPH are transferred to oxygen to produce O2•− (and also H2O2). Table 1 shows the key oxygen species, their biological half-life, their migration distance, their endogenous sources and their reactions with biological compounds.
Table 1. The key oxygen species and their characteristics (table adapted from Das & Roychoudhury, 2014). • Superoxide anion radical (O2•−): half-life 1-4 µs; migration distance ~30 nm; endogenous sources: mitochondria, cytochrome P450, macrophage/inflammatory cells, membranes, chloroplasts; mode of action: reacts with compounds containing double bonds. • Hydroxyl radical (•OH): half-life 1 µs; migration distance ~1 nm; endogenous sources: mitochondria, membranes, chloroplasts; mode of action: reacts vigorously with all biomolecules. • Hydrogen peroxide (H2O2): half-life 1 ms; migration distance ~1 µm; endogenous sources: mitochondria, membranes, peroxisomes, chloroplasts; mode of action: oxidizes proteins by reacting with Cys residues. • Singlet oxygen (1O2): half-life 1-4 µs; migration distance ~30 nm; endogenous sources: mitochondria, membranes, chloroplasts; mode of action: oxidizes proteins, polyunsaturated fatty acids and DNA. Because of their reactivity, at elevated levels ROS can indiscriminately damage cellular components such as lipids, proteins and nucleic acids. In particular the superoxide anion radical and the hydroxyl radical, which possess an unpaired electron, are very reactive. In fact, the hydroxyl radical has the highest one-electron reduction potential, making it the single most reactive radical known. Hydroxyl radicals (Figure 1) can arise from hydrogen peroxide in the presence of redox-active transition metals, notably Fe2+/3+ or Cu+/2+, via the Fenton reaction: Fe2+ + H2O2 → Fe3+ + OH− + •OH In the case of iron, for this reaction to take place, the oxidized form (Fe3+) has to be reduced to Fe2+. This means that Fe2+ only becomes available in an acidic environment (local hypoxia) or in the presence of superoxide anion radicals, which can reduce Fe3+: Fe3+ + O2•− → Fe2+ + O2 The reduction of Fe3+, followed by the interaction of Fe2+ with hydrogen peroxide, leading to the generation of the hydroxyl radical, is called the iron-catalyzed Haber-Weiss reaction, with the net reaction: O2•− + H2O2 → O2 + OH− + •OH Keeping reactive oxygen species under control In order to keep ROS concentrations at low physiological levels, aerobic organisms have evolved complex antioxidant defense systems that include both enzymatic and non-enzymatic components. These are cellular mechanisms that have evolved to inhibit oxidation by quenching ROS. Three classes of enzymes are known to provide protection against reactive oxygen species: the superoxide dismutases, which catalyze the dismutation of the superoxide anion radical, and the catalases and peroxidases, which react specifically with hydrogen peroxide. These antioxidant enzymes can be seen as a first-line defense, as they prevent the conversion of the less reactive oxygen species, superoxide anion radical and hydrogen peroxide, to more reactive species such as the hydroxyl radical. The second line of defense largely consists of non-enzymatic substances that eliminate radicals, such as glutathione and vitamins E and C. An overview of the cellular defense system is provided in Figure 3. Enzymatic antioxidants Superoxide dismutases (SODs) are metal-containing proteins (metalloenzymes) that catalyze the dismutation of the superoxide anion radical to molecular oxygen in the ground state and hydrogen peroxide, as illustrated by the following reactions, where M is the redox-active metal in the enzyme: (a) SOD-M(n+1)+ + O2•− → SOD-Mn+ + O2 (b) SOD-Mn+ + O2•− + 2H+ → SOD-M(n+1)+ + H2O2 In this dismutation, the superoxide anion radical thus acts as a reducing agent in the first part of the reaction (a), and as an oxidant in the second part (b). Different types of SOD are located in different cellular locations: Cu-Zn-SOD is mainly located in the cytosol of eukaryotes, Mn-SOD in mitochondria and prokaryotes, Fe-SOD in chloroplasts and prokaryotes, and Ni-SOD in prokaryotes. Mn, Fe, Cu and Ni are the redox-active metals in these enzymes, whereas Zn is not catalytic in Cu-Zn-SOD.
H2O2 is further degraded by catalase and peroxidases. Catalase (CAT) contains four iron-containing heme groups that allow the enzyme to react with hydrogen peroxide, and it is usually located in peroxisomes, which are organelles with a high rate of ROS production. Catalase converts hydrogen peroxide to water and oxygen: 2 H2O2 → 2 H2O + O2 In fact, catalase cooperates with superoxide dismutase in the removal of the hydrogen peroxide resulting from the dismutation reaction. Catalase acts only on hydrogen peroxide, not on organic hydroperoxides. Peroxidases (Px) are hemoproteins that utilize H2O2 to oxidize a variety of endogenous and exogenous substrates. An important peroxidase enzyme family is the selenocysteine-containing glutathione peroxidase (GPx), present in the cytosol and mitochondria. It catalyzes the conversion of H2O2 to H2O via the oxidation of reduced glutathione (GSH) into its disulfide form, glutathione disulfide (GSSG): H2O2 + 2 GSH → 2 H2O + GSSG Glutathione peroxidase catalyzes not only the conversion of hydrogen peroxide, but also that of organic peroxides, e.g. the hydroperoxides of lipids. In the cytosol, the enzyme is present in special vesicles. Another group of enzymes, not further described here, are the peroxiredoxins (Prxs); present in the cytosol, mitochondria and endoplasmic reticulum, they use a pair of cysteine residues to reduce and thereby detoxify hydrogen peroxide and other peroxides. It has to be mentioned that no enzymes detoxify the hydroxyl radical or singlet oxygen. Non-enzymatic antioxidants The second line of defense largely consists of non-enzymatic substances that eliminate radicals. The major antioxidant is glutathione (GSH), which acts as a nucleophilic scavenger of toxic compounds, trapping electrophilic metabolites by forming a thioether bond between the cysteine residue of GSH and the electrophile. The result generally is a less reactive and more water-soluble conjugate that can easily be excreted (see also phase II biotransformation reactions). GSH also is a co-substrate for the enzymatic (glutathione peroxidase-catalyzed) degradation of H2O2; in addition, it keeps cells in a reduced state and is involved in the regeneration of oxidized proteins. Other important radical scavengers of the cell are the vitamins E and C. Vitamin E (α-tocopherol) is lipophilic and is incorporated in cell membranes and subcellular organelles (endoplasmic reticulum, mitochondria, cell nuclei), where it reacts with lipid peroxides. α-Tocopherol can be divided into two parts: a lipophilic phytyl tail (intercalating with fatty acid residues of phospholipids) and a more hydrophilic chroman head with a phenolic group (facing the cytoplasm). This phenolic group can reduce radicals (e.g. lipid peroxyl radicals, LOO•; see Figure 2, and for an explanation of lipid peroxidation, see the section on Oxidative stress II: induction by chemical exposure and possible effects) and is thereby itself oxidized to the tocopheryl radical, which is relatively unreactive because it is stabilized by resonance. Tocopherol is regenerated from this radical by vitamin C or by reduced glutathione (Figure 4); more generally, oxidized non-enzymatic antioxidants are regenerated enzymatically, e.g. oxidized glutathione (GSSG) is reduced back to GSH by glutathione reductase. Vitamin C (ascorbic acid) is a water-soluble antioxidant and is present in the cytoplasm.
Ascorbic acid is an electron donor that reacts quite rapidly with the superoxide anion radical and with peroxyl radicals, but it is generally ineffective in detoxifying hydroxyl radicals: these are so extremely reactive that they react with other molecules before they can reach the antioxidant (see Klaassen, 2013). Moreover, ascorbic acid regenerates α-tocopherol, in combination with reduced GSH or other compounds capable of donating reducing equivalents (Nimse & Pal, 2015) (Figure 5). References Bolton, J.L., Dunlap, T. (2016). Formation and biological targets of quinones: cytotoxic versus cytoprotective effects. Chemical Research in Toxicology 30, 13-37. Das, K., Roychoudhury, A. (2014). Reactive oxygen species (ROS) and response of antioxidants as ROS-scavengers during environmental stress in plants. Frontiers in Environmental Science 2, 53. Edreva, A. (2005). Generation and scavenging of reactive oxygen species in chloroplasts: a submolecular approach. Agriculture, Ecosystems & Environment 106, 119-133. Klaassen, C.D. (2013). Casarett & Doull's Toxicology: The Basic Science of Poisons, Eighth Edition, McGraw-Hill Professional. Koppenol, W.H., Traynham, J.G. (1996). Say NO to nitric oxide: nomenclature for nitrogen- and oxygen-containing compounds. In: Methods in Enzymology (Vol. 268, pp. 3-7). Academic Press. Bolton, J.L. (2014). Quinone methide bioactivation pathway: contribution to toxicity and/or cytoprotection? Current Organic Chemistry 18, 61-69. Nimse, S.B., Pal, D. (2015). Free radicals, natural antioxidants, and their reaction mechanisms. RSC Advances 5, 27986-28006. Onyango, A.N. (2016). Endogenous generation of singlet oxygen and ozone in human and animal tissues: mechanisms, biological significance, and influence of dietary components. Oxidative Medicine and Cellular Longevity 2016. Niesink, R.J.M., De Vries, J., Hollinger, M.A. (1996). Toxicology: Principles and Applications. CRC Press. Smart, R.C., Hodgson, E. (Eds.). (2018). Molecular and Biochemical Toxicology. John Wiley & Sons. 4.2.3.I. Question 1 What are the chances of hydroxyl radicals being formed inside the cell? On what factors does such formation depend? 4.2.3.I. Question 2 Given: Two oxygen species: I atmospheric oxygen (O2) II singlet oxygen (1O2) Which oxygen species contains one or more unpaired electrons, and therefore has radical properties? • I and II • only I • only II • neither I nor II 4.2.3.I. Question 3 Which of the following radicals are detoxified by α-tocopherol (vitamin E)? I hydroxyl radical (•OH) II superoxide anion radical (O2•−) III lipid radical (L•) IV lipid peroxyl radical (LOO•) • I and II • III and IV • I, II and III • II, III and IV 4.2.3.I. Question 4 Given: Three enzymes: I catalase II peroxidase III superoxide dismutase Which enzyme(s) remove(s) hydrogen peroxide? • I and II • I and III • II and III • only III 4.2.3. Oxidative stress - II. Induction by chemical exposure and possible effects Author: Frank van Belleghem Reviewers: Raymond Niesink, Kees van Gestel, Éva Hideg Learning objectives: You should be able to • explain how xenobiotic compounds can lead to an increased production of reactive oxygen species (ROS). • explain what oxidative stress does with • proteins, • lipids, • DNA and • gene regulation. Keywords: prooxidant-antioxidant balance, bioactivation, oxidative damage How xenobiotic compounds induce generation of ROS The formation of reactive oxygen species (ROS; see section on Oxidative stress I) may involve endogenous substances and chemical-physiological processes as well as xenobiotics.
Experimental evidence has shown that oxidative stress can be considered one of the key mechanisms contributing to the cellular damage caused by many toxicants. Oxidative stress has been defined as "a disturbance in the prooxidant-antioxidant balance in favour of the former", leading to potential damage. It is the point at which the production of ROS exceeds the capacity of antioxidants to prevent damage (Klaassen et al., 2013). Xenobiotics involved in the formation of the superoxide anion radical are mainly substances that can be taken up in so-called redox cycles. These include quinones and hydroquinones in particular. In the case of quinones the redox cycle starts with a one-electron reduction step, as in the case of benzoquinone (Figure 1). The resulting benzosemiquinone subsequently passes the received electron on to molecular oxygen. The reduction of quinones is catalyzed by the NADPH-dependent cytochrome P-450 reductase. Conversely, hydroquinones can enter a redox cycle via an oxidative step. This step may be catalyzed by enzymes, for example prostaglandin synthase. Another type of xenobiotic that can be taken up in a redox cycle are the bipyridyl derivatives. A well-known example is the herbicide paraquat, which causes injury to lung tissue in humans and animals. Figure 2 schematically shows its bioactivation: paraquat (PQ2+) is reduced in a one-electron step to the paraquat radical cation (PQ2+ + e− → PQ•+), which is re-oxidized by molecular oxygen under formation of the superoxide anion radical (PQ•+ + O2 → PQ2+ + O2•−). Other compounds that are taken up in a redox cycle are nitroaromatics, azo compounds, aromatic hydroxylamines and certain metal (particularly Cu and Zn) chelates. Xenobiotics can also enhance ROS production if they are able to enter mitochondria, microsomes, or chloroplasts and interact with the electron transport chains, thus blocking the normal electron flow. As a consequence, and especially if the compounds are electron acceptors, they divert the normal electron flow and increase the production of ROS. A typical example is the cytostatic drug doxorubicin, a well-known chemotherapeutic agent, which is used in the treatment of a wide variety of cancers. Doxorubicin has a high affinity for cardiolipin, an important component of the inner mitochondrial membrane, and therefore accumulates at that subcellular location. Finally, xenobiotics can cause oxidative damage indirectly, by interfering with the antioxidative mechanisms. For instance, it has been suggested that, as a non-Fenton metal, cadmium (Cd) is unable to directly induce ROS. Indirectly, however, Cd induces oxidative stress by displacement of redox-active metals, depletion of redox scavengers (glutathione) and inhibition of antioxidant enzymes (by binding to protein sulfhydryl groups) (Cuypers et al., 2010; Thévenod et al., 2009).
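A feature shared by the redox cyclers discussed above is that they act catalytically: because the parent compound is regenerated in every cycle, a small amount of xenobiotic can generate a disproportionately large amount of superoxide while draining cellular reducing equivalents. A deliberately simple sketch (all numbers illustrative):

```python
# Hedged sketch of why redox cyclers are so effective: the parent compound
# (e.g. paraquat, PQ2+) is regenerated every cycle, so superoxide production
# is catalytic, and each cycle also consumes one reducing equivalent (NADPH).
# The cycle number is an illustrative assumption.

def redox_cycling(amount_compound: float, n_cycles: int):
    """Return (superoxide formed, NADPH consumed) after n_cycles;
    the parent compound itself is not used up."""
    produced = amount_compound * n_cycles  # one O2 radical anion per cycle
    consumed = amount_compound * n_cycles  # one NADPH equivalent per cycle
    return produced, consumed

o2_rad, nadph = redox_cycling(1.0, 100)
print(f"superoxide formed: {o2_rad:.0f} units, NADPH consumed: {nadph:.0f} units")
```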
The mechanisms of oxidative stress As mentioned before, oxidative stress has been defined as "a disturbance in the prooxidant-antioxidant balance in favour of the former". ROS can damage proteins, lipids and DNA via direct oxidation, or through redox sensors that transduce signals, which in turn can activate cell-damaging processes like apoptosis. Oxidative protein damage Xenobiotic-induced generation of ROS can damage proteins through the oxidation of side chains of amino acid residues, the formation of protein-protein cross-links and the fragmentation of proteins due to peptide backbone oxidation. The sulfur-containing amino acids cysteine and methionine are particularly susceptible to oxidation. An example of side chain oxidation is the direct interaction of the superoxide anion radical with sulfhydryl (thiol) groups, forming thiyl radicals as intermediates, for example (in simplified form): R-SH + O2•− + H+ → R-S• + H2O2 As a consequence, glutathione, composed of three amino acids (cysteine, glycine, and glutamate) and an important cellular reducing agent, can be damaged in this way. This means that if the oxidation cannot be compensated or repaired, oxidative stress can lead to depletion of reducing equivalents, which may have detrimental effects on the cell. Fortunately, antioxidant defence mechanisms limit the oxidative stress and the cell has repair mechanisms to reverse the damage. For example, heat shock proteins (hsp) are able to renature damaged proteins, and oxidatively damaged proteins are degraded by the proteasome. Oxidative lipid damage Increased concentrations of reactive oxygen radicals can cause membrane damage due to lipid peroxidation (oxidation of polyunsaturated lipids). This damage may result in altered membrane fluidity, enzyme activity, and membrane permeability and transport characteristics. An important feature characterizing lipid peroxidation is the fact that the initial radical-induced damage at a certain site in a membrane lipid is readily amplified and propagated in a chain-reaction-like fashion, thus dispersing the damage across the cellular membrane. Moreover, the products arising from lipid peroxidation (e.g. alkoxy radicals or toxic aldehydes) may be equally reactive as the original ROS themselves and damage cells by additional mechanisms. The chain reaction of lipid peroxidation consists of three steps: 1. Abstraction of a hydrogen atom from a polyunsaturated fatty acid chain (LH) by reactive oxygen radicals (radical formation, initiation), e.g. LH + •OH → L• + H2O 2. Reaction of the resulting fatty acid radical with molecular oxygen (oxygenation or, more specifically, peroxidation, propagation): L• + O2 → LOO•, followed by LOO• + LH → LOOH + L• 3. These events may be followed by a detoxification process in which the reaction chain is stopped, e.g. by two radicals reacting to non-radical products (LOO• + LOO• → non-radical products). This process, which may proceed in several steps, is sometimes referred to as termination. Figure 3 summarizes the various stages in lipid peroxidation. In step II, the peroxidation of biomembranes generates a variety of reactive species, such as lipid peroxyl radicals (LOO•), epoxides and aldehydes, including malondialdehyde (MDA). MDA is a highly reactive aldehyde which exhibits reactivity toward nucleophiles and can form MDA-MDA dimers. Both MDA and the MDA-MDA dimers are mutagenic and indicative of oxidative damage of lipids by a variety of toxicants. A classic example of xenobiotic bioactivation to a free radical that initiates lipid peroxidation is the cytochrome P450-dependent conversion of carbon tetrachloride (CCl4) to the trichloromethyl radical (•CCl3) and subsequently the trichloromethyl peroxyl radical (CCl3OO•). Also the cytotoxicity of free iron is attributed to its function as an electron donor for the Fenton reaction (see the section on Oxidative stress I), for instance fuelled by the superoxide anion radicals generated through paraquat redox cycling, leading to the formation of the highly reactive hydroxyl radical, a known initiator of lipid peroxidation. Oxidative DNA damage ROS can also oxidize DNA bases and sugars, producing single- or double-stranded DNA breaks, purine, pyrimidine or deoxyribose modifications, and DNA crosslinks. A common modification to DNA is the hydroxylation of DNA bases, leading to the formation of oxidized DNA adducts.
Although these adducts have been identified in all four DNA bases, guanine is the most susceptible to oxidative damage because it has the lowest oxidation potential of all of the DNA bases. The oxidation of guanine by hydroxyl radicals leads to the formation of 8-hydroxy-2'-deoxyguanosine (8-OH-dG) (Figure 4). Oxidation of guanine has a detrimental effect on base pairing, because instead of hydrogen bonding with cytosine, as guanine normally does, the oxidized base can form a base pair with adenine. As a result, during DNA replication, DNA polymerase may mistakenly insert an adenosine opposite an 8-oxo-2'-deoxyguanosine (8-oxo-dG), resulting in a stable change in DNA sequence (a G:C to T:A transversion), a process known as mutagenesis (Figure 5). Fortunately, there is an extensive repair mechanism that keeps mutations to a relatively low level. Nevertheless, persistent DNA damage can result in replication errors, transcription induction or inhibition, induction of signal transduction pathways and genomic instability, events that are possibly involved in carcinogenesis (Figure 6). It has to be mentioned that mitochondrial DNA is more susceptible to oxidative base damage than nuclear DNA, due to its proximity to the electron transport chain (a source of ROS) and the fact that mitochondrial DNA is not protected by histones and has a limited DNA repair system. Figure in preparation Figure 6. Oxidative damage by ROS leading to mutations and eventually to tumour formation. Figure adapted from Boelsterli (2002). One group of xenobiotics that has clearly been associated with eliciting oxidative DNA damage and cancer are the redox-active metals, including Fe(III), Cu(II), Ag(I), Cr(III) and Cr(VI), which may, as seen before, drive the production of hydroxyl radicals. Other (non-redox-active) metals that can induce ROS formation themselves, or participate in the reactions leading to endogenously generated ROS, are Pb(II), Cd(II), Zn(II), and the metalloid arsenic (As(III) and As(V)). Compounds like polycyclic aromatic hydrocarbons (PAHs), likely the largest family of pollutants with genotoxic effects, require activation by endogenous metabolism to become reactive and capable of modifying DNA. This activation is brought about by the so-called Phase I biotransformation (see the section on Xenobiotic metabolism and defence). Detoxifying enzymes such as cytochrome P450 1A1 (CYP1A1) are able to hydroxylate hydrophobic substrates. Whereas this reaction normally facilitates the excretion of the modified substance, some polycyclic aromatic hydrocarbons (PAHs), like benzo[a]pyrene, generate semi-stable epoxides that can ultimately react with DNA, forming mutagenic adducts (see the section on Xenobiotic metabolism and defence). The main regulator of Phase I metabolism in vertebrates, the aryl hydrocarbon receptor (AhR), is a crucial player in this process. Some PAHs, dioxins, and some PCBs (the so-called coplanar congeners; see section on Complex mixtures) bind and activate the AhR and increase the activity of Phase I enzymes, including CYP1A1, several-fold. This increased oxidative metabolism enhances the toxic effects of the substances, leading to increased DNA damage and inflammation (Figure 7). Oxidative effects on cell growth regulation ROS production and oxidative stress can affect both cell proliferation and apoptosis. It has been demonstrated that low levels of ROS influence signal transduction pathways and alter gene expression. Figure in preparation Figure 8. Role of ROS in altered gene expression. Figure adapted from Klaassen (2013).
Many xenobiotics, by increasing cellular levels of oxidants, alter gene expression through activation of signaling pathways, including cAMP-mediated cascades, calcium-calmodulin pathways, transcription factors such as AP-1 and NF-κB, as well as signaling through mitogen-activated protein (MAP) kinases (Figure 8). Activation of these signaling cascades ultimately leads to altered expression of a number of genes, including those affecting proliferation, differentiation, and apoptosis. References Boelsterli, U.A. (2002). Mechanistic toxicology: the molecular basis of how chemicals disrupt biological targets. CRC Press. Cuypers, A., Plusquin, M., Remans, T., Jozefczak, M., Keunen, E., Gielen, H., ..., Nawrot, T. (2010). Cadmium stress: an oxidative challenge. Biometals 23, 927-940. Furue, M., Takahara, M., Nakahara, T., Uchi, H. (2014). Role of AhR/ARNT system in skin homeostasis. Archives of Dermatological Research 306, 769-779. Klaassen, C.D. (2013). Casarett & Doull's Toxicology: The Basic Science of Poisons, Eighth Edition, McGraw-Hill Professional. Niesink, R.J.M., De Vries, J., Hollinger, M.A. (1996). Toxicology: Principles and Applications. CRC Press. Thévenod, F. (2009). Cadmium and cellular signaling cascades: to be or not to be? Toxicology and Applied Pharmacology 238, 221-239. 4.2.3.II. Question 1 The herbicide paraquat induces oxidative stress due to: • its interaction with the electron transport chain • its involvement in a redox cycle • its interaction with glutathione • its involvement in the Fenton reaction 4.2.3.II. Question 2 Which biopolymers can undergo damage from reactive oxygen species? • only DNA and proteins • only DNA and membranes • only proteins and membranes • DNA, proteins and membranes 4.2.3.II. Question 3 Given: The three steps of lipid peroxidation: I initiation II propagation III termination Question: In which step(s) is O2 involved as a reagent or as a product? • only I • only II • I and II • II and III 4.2.3.II. Question 4 Mitochondrial DNA, compared to nuclear DNA, is relatively susceptible to oxidative base damage. Which of the given alternatives is not correct? The increased susceptibility of mitochondrial DNA is due to: • the proximity of mitochondrial DNA to the electron transport chain • the fact that mitochondrial DNA is not protected by histones • the limited levels of antioxidative compounds inside mitochondria • the limited mitochondrial DNA repair system 4.2.4. Cytotoxicity: xenobiotic compounds causing cell death Authors: Frank Van Belleghem, Karen Smeets Reviewers: Timo Hamers, Bas J. Blaauboer Learning objectives: You should be able to: • name the main factors that cause cell death, • describe the processes of necrosis and apoptosis, • describe the morphological differences between apoptosis and necrosis, • explain what form of cell death is caused by chemical substances. Keywords: cell death, apoptosis, necrosis, caspase activation, mitochondrial permeability transition Description Cytotoxicity or cell toxicity is the result of chemical-induced macromolecular damage (see the section on Protein inactivation) or receptor-mediated disturbances (see the section on Receptor interactions). Initial events such as covalent binding to DNA or proteins, loss of calcium control, or oxidative stress (see the sections on Oxidative stress I and II) can compromise key cellular functions or trigger cell death. Cell death is the ultimate endpoint of lethal cell injury and can be caused by chemical compounds, mediator cells (e.g. natural killer cells) or physical/environmental conditions (e.g.
radiation, pressure, etc.). The multistep process of cell death involves several regulated processes and checkpoints that have to be passed before the cell eventually reaches a point of no return, leading either to programmed cell death (apoptosis) or to a more accidental form of cell death, called necrosis. This section describes the cytotoxic process itself; in vitro cytotoxicity testing is dealt with in the section on Human toxicity testing - II. In vitro tests. Chemical toxicity leading to cell death Cells can actively maintain the intracellular environment within a narrow range of physiological parameters despite changes in the conditions of the surrounding environment. This internal steady state is termed cellular homeostasis. Exposure to toxic compounds can compromise homeostasis and lead to injury. Cell injury may be direct (primary) when a toxic substance interacts with one or more target molecules of the cell (e.g. damage to enzymes of the electron transport chain), or indirect (secondary) when a toxic substance disturbs the microenvironment of the cell (e.g. decreased supply of oxygen or nutrients). The injury is called reversible when cells can undergo repair or adaptation to achieve a new viable steady state. When the injury persists or becomes too severe, it becomes irreversible and the cell eventually perishes, thereby terminating cellular functions like respiration, metabolism, growth and proliferation, resulting in cell death (Niesink et al., 1996). The main factors determining the occurrence of cell death are: • the nature and concentration of the active toxic compound - in some cases a reactive intermediate - and the availability of that agent at the site of the target molecules; • the role of the target molecules in the functioning of the cell and/or maintaining the microenvironment; • the effectiveness of the cellular defence mechanisms in the detoxication and elimination of active agents, in repairing (primary) damage, and in the ability to induce proteins that either promote or inhibit the cell death process. It is important to realize that, at sufficiently high concentrations, even "harmless" substances such as glucose or salt may lead to cell injury and cell death by disrupting the osmotic homeostasis. Even an essential molecule such as oxygen causes cell injury at sufficiently high partial pressures (see the sections on Oxidative stress I and II). Apart from that, all chemicals exert "baseline toxicity" (also called "narcosis"), as described in the text block "Narcosis and membrane damage" in the section on Toxicodynamics & Molecular Interactions. The main types of cell death: necrosis and apoptosis The two most important types of cell death are necrosis, or accidental cell death (ACD), and apoptosis, a form of programmed cell death (PCD) or cell suicide. Cellular imbalances that, alone or in combination, initiate or promote cell death are oxidative stress, mitochondrial injury and disturbed calcium fluxes. These alterations are reversible at first but, after progressive injury, result in irreversible cell death. Cell death can also be initiated via receptor-mediated signal transduction processes. Apoptotic and necrotic cells differ in both morphological appearance and biochemical characteristics. Necrosis is associated with cell swelling and a rapid loss of membrane integrity. Apoptotic cells shrink into small apoptotic bodies.
Cell contents leaking during necrosis induce inflammatory responses, although inflammation is not entirely excluded during the apoptotic process either (Rock & Kono, 2008). Necrosis Necrosis has been termed accidental cell death because it is a pathological response to cellular injury after exposure to severe physical, chemical, or mechanical stressors. Necrosis is an energy-independent process that is accompanied by damage to cell membranes and a subsequent loss of ion homeostasis (in particular of Ca2+). Essentially, the loss of membrane integrity allows enzymes to leak out of the lysosomes, destroying the cell from the inside. Necrosis is characterized by swelling of cytoplasm and organelles, rupture of the plasma membrane and chromatin condensation (see Figure 1). These morphological appearances are associated with ATP depletion, defects in protein synthesis, cytoskeletal damage and DNA damage. In addition, cell organelles and cellular debris leak via the damaged membranes into the extracellular space, leading to activation of the immune system and inflammation (Kumar et al., 2015). In contrast to apoptosis, the fragmentation of DNA is a late event. In a subsequent stage, injury is propagated across the neighbouring tissues via the release of proteolytic and lipolytic enzymes, resulting in larger areas of necrotic tissue. Although necrosis is traditionally considered an uncontrolled form of cell death, emerging evidence points out that the process can also occur in a regulated and genetically controlled manner, termed regulated necrosis (Berghe et al., 2014). Moreover, it can also be an autolytic process of cell disintegration occurring after the apoptotic program has been completed in the absence of scavengers (phagocytes), termed post-apoptotic or secondary necrosis (Silva, 2010). Apoptosis Apoptosis is a regulated (programmed) physiological process whereby superfluous or potentially harmful cells (for example infected or pre-cancerous cells) are removed in a tightly controlled manner. It is an important process in embryonic development, the immune system and, in fact, all living tissues. Apoptotic cells shrink and break into small fragments that are phagocytosed by adjacent cells or macrophages without producing an inflammatory response (Figure 3). It can be seen as a form of cellular suicide, because cell death is the result of induction of active processes within the cell itself. Apoptosis is an energy-dependent process (it requires ATP) that involves the activation of caspases (cysteine-aspartyl proteases), pro-apoptotic proteins present as zymogens (i.e. inactive enzyme precursors that are activated by proteolytic cleavage). Once activated, they function as cysteine proteases and activate other caspases. Caspases can be divided into two groups: the initiator caspases, which start the process, and the effector caspases, which specifically lyse molecules that are essential for cell survival (Blanco & Blanco, 2017). Apoptosis can be triggered by stimuli coming from within the cell (intrinsic pathway) or from the extracellular medium (extrinsic pathway), as shown in Figure 2. The extrinsic pathway activates apoptosis in response to external stimuli, namely extracellular ligands binding to cell-surface death receptors (e.g. the Tumour Necrosis Factor Receptor, TNFR), leading to the formation of the death-inducing signalling complex (DISC) and the caspase cascade leading to apoptosis.
The intrinsic pathway is activated by cell stressors such as DNA damage, lack of growth factors, endoplasmic reticulum (ER) stress, reactive oxygen species (ROS) burden, replication stress, microtubular alterations and mitotic defects (Galluzzi et al., 2018). These cellular events cause the release of cytochrome c and other pro-apoptotic proteins from the mitochondria into the cytosol via the mitochondrial permeability transition (MPT) pore, a megachannel in the inner membrane of the mitochondria composed of several protein complexes that facilitate the release of death proteins such as cytochrome c. The opening of this pore is tightly regulated by anti-apoptotic proteins, such as B-cell lymphoma-2 (Bcl-2), and pro-apoptotic proteins, such as Bax (Bcl-2 associated X protein) and Bak (Bcl-2 antagonist killer). Both the intrinsic and the extrinsic pathway are regulated by the apoptosis inhibitor protein (AIP), which directly interacts with caspases and suppresses apoptosis. The release of the death protein cytochrome c induces the formation of a large protein structure, the apoptosome complex, which activates the caspase cascade leading to apoptosis. Other pro-apoptotic proteins oppose Bcl-2 (SMAC/Diablo) or stimulate caspase activity by interfering with AIP (HtrA2/Omi). HtrA2/Omi also activates caspases and endonuclease G (responsible for DNA degradation, chromatin condensation and DNA fragmentation). The apoptosis-inducing factor (AIF) is likewise involved in chromatin condensation and DNA fragmentation. Many xenobiotics interfere with the MPT pore, and the fate of a cell depends on the balance between pro- and anti-apoptotic agents (Blanco & Blanco, 2017).

What determines the form of cell death caused by chemical substances?
Traditionally, toxic cell death was considered to be uniquely of the necrotic type. The classic example of necrosis is the liver toxicity of carbon tetrachloride (CCl4), caused by the biotransformation of CCl4 into the highly reactive radicals CCl3• and CCl3OO•. Several environmental contaminants, including heavy metals (Cd, Cu, CH3Hg, Pb), organotin compounds and dithiocarbamates, can exert their toxicity via induction of apoptosis, likely mediated by disruption of the intracellular Ca2+ homeostasis or induction of mild oxidative stress (Orrenius et al., 2011). In addition, some cytotoxic substances (e.g. arsenic trioxide, As2O3) tend to induce apoptosis at low exposure levels or early after exposure at high levels, whereas they cause necrosis later at high exposure levels. This implies that the severity of the insult determines the mode of cell death (Klaassen, 2013). In these cases, both apoptosis and necrosis involve the dysfunction of mitochondria, with a central role for the mitochondrial permeability transition (MPT). Normally, the mitochondrial membrane is impermeable to all solutes except those having specific transporters. MPT, caused by the opening of mitochondrial permeability transition pores (MPTP) in the inner mitochondrial membrane, allows solutes with a molecular weight below 1500 Da to enter the mitochondria. As these small-molecular-mass solutes equilibrate across the inner mitochondrial membrane, the mitochondrial membrane potential (ΔΨmt) vanishes (mitochondrial depolarization), leading to uncoupling of oxidative phosphorylation and subsequent adenosine triphosphate (ATP) depletion.
Moreover, since proteins remain within the matrix at high concentration, the increasing colloidal osmotic pressure will result in movement of water into the matrix, which causes swelling of the mitochondria and rupture of the outer membrane. This results in the loss of intermembrane components (like cytochrome c, AIF, HtrA2/Omi, SMAC/Diablo and endonuclease G) to the cytoplasm. When MPT occurs in a few mitochondria, the affected mitochondria are phagocytosed and the cell survives. When more mitochondria are affected, the release of pro-apoptotic compounds will lead to caspase activation, resulting in apoptosis. When all mitochondria are affected, ATP becomes depleted and the cell will eventually undergo necrosis, as shown in Figure 3 (Klaassen et al., 2013).

References
Berghe, T.V., Linkermann, A., Jouan-Lanhouet, S., Walczak, H., Vandenabeele, P. (2014). Regulated necrosis: the expanding network of non-apoptotic cell death pathways. Nature Reviews Molecular Cell Biology 15, 135. https://doi.org/10.1038/nrm3737
Blanco, G., Blanco, A. (2017). Chapter 32 - Apoptosis. Medical Biochemistry. (pp. 791-796) Academic Press. https://doi.org/10.1016/B978-0-12-803550-4.00032-X
Galluzzi, L., Vitale, I., Aaronson, S.A., Abrams, J.M., Adam, D., Agostinis, P., ... & Annicchiarico-Petruzzelli, M. (2018). Molecular mechanisms of cell death: recommendations of the Nomenclature Committee on Cell Death 2018. Cell Death & Differentiation, 1. https://doi.org/10.1038/s41418-017-0012-4
Klaassen, C.D., Casarett, L.J., & Doull, J. (2013). Casarett and Doull's Toxicology: The Basic Science of Poisons (8th ed.). New York: McGraw-Hill Education / Medical. ISBN: 978-0-07-176922-8
Kumar, V., Abbas, A.K., & Aster, J.C. (2015). Robbins and Cotran Pathologic Basis of Disease, professional edition. Elsevier Health Sciences. ISBN 978-0-323-26616-1
Niesink, R.J.M., De Vries, J., Hollinger, M.A. (1996). Toxicology: Principles and Applications (1st ed.). CRC Press. ISBN 0-8493-9232-2
Orrenius, S., Nicotera, P., Zhivotovsky, B. (2011). Cell death mechanisms and their implications in toxicology. Toxicological Sciences 119, 3-19. https://doi.org/10.1093/toxsci/kfq268
Rock, K.L., Kono, H. (2008). The inflammatory response to cell death. Annual Review of Pathology: Mechanisms of Disease 3, 99-126. https://doi.org/10.1146/annurev.pathmechdis.3.121806.151456
Silva, M.T. (2010). Secondary necrosis: the natural outcome of the complete apoptotic program. FEBS Letters 584, 4491-4499. https://doi.org/10.1016/j.febslet.2010.10.046
Toné, S., Sugimoto, K., Tanda, K., Suda, T., Uehira, K., Kanouchi, H., ... & Earnshaw, W.C. (2007). Three distinct stages of apoptotic nuclear condensation revealed by time-lapse imaging, biochemical and electron microscopy analysis of cell-free apoptosis. Experimental Cell Research 313, 3635-3644. https://doi.org/10.1016/j.yexcr.2007.06.018

4.2.4. Question 1
Which of the following is a characteristic of necrosis?
• The morphological changes are caused by release of lysosomal enzymes.
• It is an energy-dependent process.
• It is a programmed response to cellular injury.
• It does not lead to inflammation.

4.2.4. Question 2
Which of the following is a characteristic of apoptosis?
• Membrane bleb formation
• Rapid loss of membrane integrity
• Swelling of mitochondria
• Cell shrinking

4.2.4. Question 3
Which cellular organelles are involved in the initiation of the intrinsic pathway of apoptosis?
• Ribosomes
• Lysosomes
• Mitochondria
• Peroxisomes

4.2.4. Question 4
Consider the following statements:
I. Secondary necrosis is a form of accidental cell death.
II. MPT allows the entry of solutes, leading to an increase of the volume of the cytoplasm.
Which statement is correct?
• Only I
• Only II
• Neither I nor II
• Both I and II

4.2.5. Neurotoxicity
Author: Jessica Legradi
Reviewers: Timo Hamers, Ellen Fritsche

Learning objectives
You should be able to
• describe the structure of the nervous system
• explain how neurotransmission works
• mention some modes of action (MoA) by which pesticides and drugs cause neurotoxicity
• understand the relevance of species sensitivity to pesticides
• describe what developmental neurotoxicity (DNT) is

Keywords: Nervous system, Signal transmission, Pesticides, Drugs, Developmental Neurotoxicity

Neurotoxicity
Neurotoxicity is defined as the capability of agents to cause adverse effects on the nervous system. Environmental neurotoxicity describes neurotoxicity caused by exposure to chemicals from the environment and mostly refers to human exposure and human neurotoxicity. Ecological neurotoxicity (eco-neurotoxicity) is defined as neurotoxicity resulting from exposure to environmental chemicals in species other than humans (e.g. fish, birds, invertebrates).

The nervous system
The nervous system consists of the central nervous system (CNS), including the brain and the spinal cord, and the peripheral nervous system (PNS). The PNS is divided into the somatic system (voluntary movements), the autonomic (sympathetic and parasympathetic) system and the enteric (gastrointestinal) system. The CNS and PNS are built from two types of nerve cells, i.e. neurons and glial cells. Neurons are cells that receive, process and transmit information through electrical and chemical signals. Neurons consist of the soma with the surrounding dendrites and one axon with an axon terminal where the signal is transmitted to another cell (Figure 1A). Compared to neurons, glial cells can have very different appearances (Figure 1B), but they are always found in the tissue surrounding neurons, where they provide metabolites, support and protection to neurons without being directly involved in signal transmission.

Figure in preparation
Figure 1. Structures of a neuron (left; source: https://simple.Wikipedia.org/wiki/Neuron) and of glial cells (right)

Neurons are connected to each other via synapses. The sending neuron is called the presynaptic neuron, whereas the receiving neuron is the postsynaptic neuron. In the synapse, a small space exists between the axon terminal of the presynaptic neuron and a dendrite of the postsynaptic neuron. This space is named the synaptic cleft. Both neurons have ion channels that can be opened and closed in the area of the synapse. There are channels selective for chloride, sodium, calcium, potassium or protons, as well as non-selective channels. The channels can be voltage gated (i.e. they open and close depending on the membrane potential), ligand gated (i.e. they open and close depending on the presence of other molecules binding to the ion channel), or stress activated (i.e. they open and close due to physical stress such as stretching). Ligands that can open or close ion channels are called neurotransmitters. Depending on the ion channel and on whether it opens or closes upon neurotransmitter binding, a neurotransmitter can inhibit or stimulate membrane depolarization (i.e. an inhibitory or excitatory neurotransmitter, respectively). The ligands bind to the ion channel via receptors (link to section on Receptor interaction).
Neurotransmitters have very distinct functions and are linked to physical processes like muscle contraction and body heat, and to emotional/cognitive processes like anxiety, pleasure, relaxation and learning. The signal transmission via the synapse (i.e. neurotransmission) is illustrated in Figure 2. The cell membrane of a neuron contains channels that allow ions to enter and exit the neuron. This flow of ions is used to send signals from one neuron to the other. The difference in concentration of negatively and positively charged ions on the inner and outer side of the neuronal membrane creates a voltage across the membrane called the membrane potential. When a neuron is at rest (i.e. not signalling), the inside charge of the neuron is negative relative to the outside; the cell membrane is then at its resting potential. When a neuron is signalling, however, changes in the inflow and outflow of ions lead to a quick depolarization followed by a repolarization of the membrane potential, called an action potential.
Neurons can be damaged by substances that damage the cell body (neuronopathy), the axon (axonopathy), or the myelin sheath or glial cells (myelopathy). Aluminium, arsenic, methanol, methylmercury and lead can cause neuronopathy. Acrylamide is known to specifically affect axons and cause axonopathy.

Neurotransmitter system-related modes of action of neurotoxicity
Some of the modes of action relevant for neurotoxicity are disturbance of electrical signal transmission and inhibition of chemical signal transmission, mainly through interference with neurotransmitters. Pesticides are mostly designed to interfere with neurotransmission.

1. Interfering with ion channels (see section on Receptor interaction)
Pesticides such as DDT bind to open sodium channels in neurons, which prevents closing of the channels and leads to over-excitation. Pyrethroids, such as permethrin, increase the opening time of the sodium channels, leading to similar symptoms. Lindane, cyclodiene insecticides like aldrin, dieldrin and endrin ("drins"), and phenyl-pyrazols such as fipronil block GABA-mediated chloride channels and prevent hyperpolarization. GABA (gamma-aminobutyric acid) is an inhibitory neurotransmitter linked to relaxation and calming. It stimulates the opening of chloride channels, causing the transmembrane potential to become more negative (i.e. hyperpolarization), thereby increasing the depolarization threshold for a new action potential. Blockers of GABA-mediated chloride channels prevent the hyperpolarizing effect of GABA, thereby decreasing its inhibitory effect. Neonicotinoids (e.g. imidacloprid) mimic the action of the excitatory neurotransmitter acetylcholine (ACh) by activating the nicotinic acetylcholine receptors (nAChR) in the postsynaptic membrane. These compounds are specifically designed to display a high affinity for insect nAChR. Many human drugs, like sedatives, also bind to neuro-receptors. Benzodiazepine drugs enhance the action of GABA at GABA receptors, causing hyperpolarization. Tetrahydrocannabinol (THC), the active ingredient of cannabis, activates the cannabinoid receptors, also causing hyperpolarization. Compounds activating the GABA or cannabinoid receptors induce a strong feeling of relaxation. Nicotine binds to and activates the AChR, which can improve concentration.

2. AChE inhibition
Another very common neurotoxic mode of action is the inhibition of acetylcholinesterase (AChE).
Organophosphate insecticides like dichlorvos and carbamate insecticides like propoxur bind to AChE and hence prevent the degradation of acetylcholine in the synaptic cleft, leading to overexcitation of the post-synaptic cell membrane (see also section on Protein interaction).

3. Blocking neurotransmitter uptake
MDMA (3,4-methylenedioxymethamphetamine, also known as ecstasy or XTC) and cocaine block the re-uptake of serotonin, norepinephrine and, to a lesser extent, dopamine into the pre-synaptic neuron, thereby increasing the amount of these neurotransmitters in the synaptic cleft. Amphetamines also increase the amount of dopamine in the cleft, by stimulating the release of dopamine from the vesicles. Dopamine is a neurotransmitter involved in pleasure and reward feelings. Serotonin, or 5-hydroxytryptamine, is a monoamine neurotransmitter linked to feelings of happiness, learning, reward and memory.

Long-term exposure
When receptors are continuously activated or when neurotransmitter levels are continuously elevated, the nervous system adapts by becoming less sensitive to the stimulus. This explains why drug addicts have to increase the dose taken to reach the desired state. If no stimulant is taken, withdrawal symptoms occur from the lack of stimulus. In most cases, the nervous system can recover from drug addiction.

Species Sensitivity in Neurotoxicity
Differences in species sensitivity can be explained by differences in metabolic capacities between species. Most compounds need to be bio-activated, i.e. biotransformed into a metabolite that causes the actual toxic effect. For example, most organophosphate insecticides are thio-phosphoesters that require oxidation before they can inhibit AChE. As detoxification is the dominant pathway in mammals and oxidation is the dominant pathway in invertebrates, organophosphate insecticides are typically more toxic to invertebrates than to vertebrates (see Figure 3). Other factors important for species sensitivity are uptake and depuration rates.

Figure 3: Mechanism of action of an AChE inhibitor, illustrated by the insecticide diazinon. After oxidation catalyzed by cytochrome P450 monooxygenases, diazinon is metabolized into diazoxon, which can inhibit acetylcholinesterase. Via further phase I and phase II metabolization steps the molecule is eliminated. Drawn by Steven Droge.

Developmental neurotoxicity
Developmental neurotoxicity (DNT) refers particularly to the effects of toxicants on the developing nervous system of organisms. The developing brain and nervous system are considered to be more sensitive to toxic effects than the mature brain and nervous system. DNT studies must consider the temporal and regional occurrence of critical developmental processes of the nervous system, and the fact that early-life exposure can lead to long-lasting neurotoxic effects or delays in neurological development. Species differences are also relevant for DNT; here, developmental timing, speed, or cellular specificities might determine toxicity.

4.2.5. Question 1
What are the two major cell types found in the nervous system?

4.2.5. Question 2
What does GABA do?

4.2.5. Question 3
How does AChE inhibition work?

4.2.5. Question 4
What makes invertebrates more sensitive to organophosphate insecticides?

4.2.5. Question 5
Why is DNT important to study?

4.2.6. Effects of herbicides
Author: Nico M.
van Straalen
Reviewers: Cornelia Kienle, Henk Schat

Learning objectives
You should be able to
• Explain the different ways in which herbicides are applied in modern agriculture
• Enumerate the eight major modes of action of herbicides
• Provide some examples of side-effects of herbicides

Keywords: Amino acid inhibitor, growth regulator, photosynthesis inhibitor, pre-emergence application, selectivity

Introduction
Herbicides are pesticides (see section on Crop protection products) that aim to kill unwanted weeds in agricultural systems, and weeds growing on infrastructure such as pavements and train tracks. Herbicides are also applied to the crop itself, e.g. as a pre-harvest treatment in crops like potato and oilseed rape, to prevent growth of pathogens on older plants or to ease mechanical harvest. In a similar fashion, herbicides are used to destroy the grass of pastures in preparation of their conversion to cropland. These applications are designated "desiccation". Finally, herbicides are used to kill broad-leaved weeds in pure grass fields (e.g. golf courses). Herbicides represent the largest volume of pesticides applied to date (about 60%), partly because mechanical and hand-executed weed control has declined considerably. The tendency to limit soil tillage (as a strategy to maintain a diverse and healthy soil life) has also stimulated the use of chemical herbicides.
Herbicides are obviously designed to kill plants and therefore act upon biochemical targets that are specific to plants. As the crop itself is also a plant, selectivity is a very important issue in herbicide application. Selectivity is achieved in several ways.
• Application of herbicides before emergence of the crop (pre-emergence application). This keeps the field free of weeds before germination, while the closed canopy of the crop prevents the later growth of weeds. This strategy is often applied in fast-growing crops that form a tall canopy with strong shading at ground level, such as maize. Examples of herbicides used in pre-emergence application are glyphosate and metolachlor. The selectivity of seedling growth inhibitors such as EPTC is likewise due to the fact that these compounds are applied as part of soil preparation and act on germinating plants before the crop emerges.
• Broad-leaved plants are more susceptible to herbicides that rely on contact with the leaves, because they intercept more of a herbicide spray than small-leaved plants such as grasses. This type of selectivity allows some herbicides to be used in grassland and cereal crops to control broad-leaved weeds; the herbicide itself is not intercepted by the crop. Examples are the chlorophenoxy-acetic acids such as MCPA and 2,4-D.
• In some cases the crop plant is naturally tolerant to a herbicide due to specific metabolic pathways. The selectivity of ACCase inhibitors such as diclofop-methyl, fenoxaprop-ethyl and fluazifop-butyl is mostly due to this mechanism. These compounds inhibit acetyl-CoA carboxylases, a group of enzymes essential to fatty acid synthesis. In wheat, however, the herbicidal compounds are quickly hydrolysed to non-toxic metabolites, while weeds are not capable of such detoxification. This allows such herbicides to be used in wheat fields. Another type of physiological selectivity is due to differential translocation: some plants quickly transport the herbicide throughout the plant, enabling it to exert toxicity in the leaves, while others keep the substance in the roots and so remain less susceptible.
• Several crops have been genetically modified (gm) to become resistant to herbicides; one of the best-known modifications is the insertion of an altered version of the enzyme EPSP synthase. This enzyme is part of the shikimate pathway and is specifically inhibited by glyphosate (Figure 1). The modified version of the enzyme renders the plant insensitive to glyphosate, allowing herbicide use without damage to the crop. Various plant species have been modified in this way, although their culture is limited to countries that allow gm crops (the USA and many other countries, but not European countries).

Classification by mode of action
The diversity of chemical compounds that have been synthesized to attack specific biochemical targets in plants is enormous. In attempts to classify herbicides by mode of action, a system of 22 different categories is often used (Sherwani et al., 2015). Here we present a simplified classification specifying only eight categories (Plant & Soil Sciences eLibrary 2019, Table 1).

Table 1. Classification of herbicides by mode of action
1. Amino acid synthesis inhibitors. Chemical groups: sulfonylureas, imidazolones, triazolopyrimidines, EPSP synthase inhibitors. Example of active ingredient: glyphosate.
2. Seedling growth inhibitors. Chemical groups: carbamothioates, acetamides, dinitroanilines. Example: EPTC.
3. Growth regulators (interfere with plant hormones). Chemical groups: phenoxy-acetic acids, benzoic acids, carboxylic acids, picolinic acids. Example: 2,4-D.
4. Inhibitors of photosynthesis. Chemical groups: triazines, uracils, phenylureas, benzothiadiazoles, nitriles, pyridazines. Example: atrazine.
5. Lipid synthesis inhibitors. Chemical groups: aryloxyphenoxypropionates, cyclohexanediones. Example: sethoxydim.
6. Cell membrane disrupters. Chemical groups: diphenylethers, aryl triazolinones, phenylphthalamides, bipyridilium compounds. Example: paraquat.
7. Inhibitors of protective pigments. Chemical groups: isoxazolidones, isoxazoles, pyridazinones. Example: mesotrione.
8. Unknown. Chemical compounds with proven herbicide efficacy but unknown mode of action. Example: ethofumesate.

To illustrate the diversity of herbicidal modes of action, two examples of well-investigated mechanisms are highlighted here. Plants synthesize aromatic amino acids using the shikimate pathway. Bacteria and fungi also avail of this pathway, but it is not present in animals, which must therefore obtain aromatic amino acids through their diet. The first step in this pathway is the conversion of shikimate-3-phosphate and phosphoenolpyruvate (PEP) to 5-enolpyruvylshikimate-3-phosphate (EPSP), by the enzyme EPSP synthase (Figure 1). EPSP is subsequently dephosphorylated and forms the substrate for the synthesis of aromatic amino acids such as phenylalanine, tyrosine and tryptophan. Glyphosate bears a structural resemblance to PEP and competes with PEP as a substrate for EPSP synthase. However, in contrast to PEP, it binds firmly to the active site of the enzyme and blocks its activity. The ensuing metabolic deficiency quickly leads to loss of growth potential of the plant.
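This substrate competition can be made concrete with a minimal Michaelis-Menten sketch of competitive inhibition, written here in Python. All kinetic constants and concentrations below are illustrative placeholders, not measured values for EPSP synthase; moreover, because glyphosate binds almost irreversibly, a simple reversible model like this one understates the real in vivo effect.

```python
# Minimal sketch of competitive enzyme inhibition, the mechanism by which
# glyphosate outcompetes PEP at the active site of EPSP synthase.
# All constants (vmax, km, ki) and concentrations are illustrative
# placeholders, not measured values for the real enzyme.

def rate_competitive(s, i, vmax=1.0, km=50.0, ki=1.0):
    """Michaelis-Menten rate in the presence of a competitive inhibitor.

    s -- substrate (PEP) concentration (arbitrary units)
    i -- inhibitor (glyphosate) concentration (same units)
    """
    return vmax * s / (km * (1.0 + i / ki) + s)

pep = 20.0  # fixed, arbitrary PEP concentration
for glyphosate in (0.0, 1.0, 10.0, 100.0):
    v = rate_competitive(pep, glyphosate)
    print(f"[glyphosate] = {glyphosate:6.1f} -> relative EPSP synthase rate = {v:.3f}")
```

Running the sketch shows the enzyme rate collapsing as the inhibitor concentration rises, which is consistent with the rapid loss of growth potential described above.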
Another well-investigated mode of herbicidal action is the inhibition of photosynthesis by atrazine and other symmetrical triazines. In contrast to glyphosate, atrazine can only act in aboveground plants with active photosynthesis; sunny weather therefore stimulates the action of such herbicides. The action of atrazine is due to binding to the D1 quinone protein of the electron transport complex of photosystem II, sitting in the thylakoid membrane of the chloroplast (see Figure 1 in Giardi and Pace, 2005). Photosystem II (PSII) is a complex of macromolecules with light-harvesting and antenna units, chlorophyll P680, and reaction centers that capture light energy and use it to split water, produce oxygen and transfer electrons to photosystem I, which uses them to eventually produce reduction equivalents. The D1 quinone has a "herbicide binding pocket", and binding of atrazine to this site blocks the function of PSII. A single amino acid in the binding pocket is critical for this; alterations in this amino acid provide a relatively easy way for the plant to become resistant to triazines.

Side-effects
Most herbicides are polar compounds with good water solubility, which is a crucial property for them to be taken up by plants. This implies that herbicides, especially the more persistent ones, tend to leach to groundwater and surface water, and are sometimes also found in drinking water resources. Given the large volumes applied in agriculture, concern has arisen that such compounds, despite being designed to affect only plants, might harm other, so-called "non-target" organisms.
In agricultural systems and their immediate surroundings, complete removal of weeds will reduce plant biodiversity, with secondary effects on plant-feeding insects and insectivorous birds. In the short term, however, herbicides will increase the amount of dead plant remains on the soil, which may benefit invertebrates that are less susceptible to the herbicidal effect and that find shelter in plant litter and feed on dead organic matter. Studies show that there is often a positive effect of herbicides on Collembola, mites and other surface-active arthropods (e.g. Fratello et al., 1985). Other secondary effects may occur when herbicides reach field-bordering ditches, where suppression of macrophytes and algae can affect populations of macro-invertebrates such as gammarids and snails.
Direct toxicity to non-target organisms is expected from broad-spectrum herbicides that kill plants through a general mechanism of toxicity. This holds for paraquat, a bipyridilium herbicide (cf. Table 1) that acts as a contact agent and rapidly damages plant leaves by redox cycling; enhanced by sunshine, it generates oxygen radicals that disrupt biological membranes. Paraquat is toxic to virtually all life and represents an acute hazard to humans. Consequently, its use as a herbicide has been forbidden in the EU since 2007.
In other cases the situation is more complex. Glyphosate, the herbicide with by far the largest application volume worldwide, is suspected of ecological side-effects and has even been labelled "a probable carcinogen" by the IARC (Tarazona et al., 2017). However, glyphosate is an active ingredient contained in various herbicide formulations, e.g. Roundup Ready, Roundup 360 plus, etc. Evidence indicates that most of the toxicity attributed to glyphosate is actually due to adjuvants in the formulation, specifically polyethoxylated tallowamines (Mesnage et al., 2013).
Another case of an unexpected side-effect of a herbicide is due to atrazine. In 2002 a group of American ecologists (Hayes et al., 2002) reported that the incidence of developmental abnormalities in wild frogs was correlated with the volume of atrazine sold in the area where the frogs were monitored, across a large number of sites in the U.S. Male Rana pipiens exposed to atrazine in concentrations higher than 0.1 µg/L during their larval stages showed an increased rate of feminization, i.e. the development of oocytes in the testis.
This is thought to be due to induction of aromatase, a cytochrome P450 activity responsible for the conversion of testosterone to estradiol.
Finally, the development of resistance may also be considered an undesirable side-effect. There are currently (2018) 499 unique cases of herbicide resistance (255 plant species, combined with 167 active ingredients), indicating the agronomical seriousness of this issue. A full discussion of this topic falls, however, beyond the scope of this module.

Conclusions
Herbicides are currently an indispensable, high-volume component of modern agriculture. They represent a very large number of chemical groups and different modes of action, often plant-specific. While some of the older herbicides (paraquat, atrazine, glyphosate) have raised concern regarding their adverse effects on non-plant targets, the development of new chemicals and the discovery of new biochemical targets in plant-specific metabolic pathways remains an active field of research.

References
Fratello, B. et al. (1985). Effects of atrazine on soil microarthropods in experimental maize fields. Pedobiologia 28, 161-168.
Giardi, M.T., Pace, E. (2005). Photosynthetic proteins for technological applications. Trends in Biotechnology 23, 257-263.
Hayes, T., Haston, K., Tsui, M., Hoang, A., Haeffele, C., Vonk, A. (2002). Feminization of male frogs in the wild. Nature 419, 895-896.
Mesnage, R., Bernay, B., Séralini, G.-E. (2013). Ethoxylated adjuvants of glyphosate-based herbicides are active principles of human cell toxicity. Toxicology 313, 122-128.
Plant & Soil Sciences eLibrary (2019). https://passel.unl.edu
Sherwani, S.I., Arif, I.A., Khan, H.A. (2015). Modes of action of different classes of herbicides. In: Price, J., Kelton, E., Suranaite, L. (Eds.). Herbicides. Physiological Action and Safety. Chapter 8, IntechOpen.
Tarazona, J.V., Court-Marques, D., Tiramani, M., Reich, H., Pfeil, R., Istace, F., Crivellente, F. (2017). Glyphosate toxicity and carcinogenicity: a review of the scientific basis of the European Union assessment and its differences with IARC. Archives of Toxicology 91, 2723-2743.

4.2.6. Question 1
Define what is meant by a pre-emergence herbicide and explain why this is useful in agronomy.

4.2.6. Question 2
With a herbicide application in agriculture you want to kill unwanted plants among a crop that is in itself also a plant. How is this possible?

4.2.6. Question 3
Enumerate the eight major modes of action of herbicides.

4.2.6. Question 4
Can herbicides cause adverse effects on non-plant species?

4.2.7. Chemical carcinogenesis and genotoxicity
Author: Timo Hamers
Reviewer: Frederik-Jan van Schooten

Learning objectives
You should be able to
• describe the three different phases in cancer development and understand how compounds can stimulate the corresponding processes in these phases
• explain the difference between base-pair substitutions and frameshift mutations, both at the DNA and at the protein level
• describe the principle of bioactivation, which distinguishes indirect from direct mutagenic substances
• explain the difference between mutagenic and non-mutagenic carcinogens

Key words: Bioactivation; Mutation; Tumour promotion; Tumour progression; Ames test

Chemical carcinogenesis
Cancer is a collective name for multiple diseases sharing the common phenomenon that cell division has escaped control by growth-regulating processes.
The resulting, autonomously growing cells are usually concentrated in a neoplasm (often referred to as a tumour) but can also be diffusely dispersed, for instance in the case of leukaemia or mesothelioma. Benign tumours are neoplasms that are encapsulated and do not spread through the body, whereas malignant tumours cause metastasis, i.e. the spreading of cancerous cells through the body, causing new neoplasms at distant sites. The term benign sounds friendlier than it actually is: benign tumours can be very damaging to organs which are limited in available space (e.g. the brain in the skull) or to organs that can be obstructed by the tumour (e.g. the gut system).
The process of developing cancer (carcinogenesis) is traditionally divided into three phases:
1. the initiation phase, in which the DNA of a cell is permanently changed, resulting in daughter cells that genetically differ from their parent cells;
2. the promotion phase, in which the cell loses its differentiation and gains new characteristics causing increased proliferation;
3. the progression phase, in which the tumour invades surrounding tissues and causes metastasis.
Chemical carcinogenesis means that a chemical substance is capable of stimulating one or more of these phases. Carcinogenic compounds are often named after the phase that they affect, i.e. initiators (also called mutagens), tumour promoters, and tumour progressors. It is important to realize that many substances and processes naturally occurring in the body can also stimulate the different phases: inflammation and exposure to sunlight may cause mutations, some endogenous hormones can act as very active promoters in hormone-sensitive cancers, and spontaneous mutations may stimulate the tumour progression phase.

Point mutations
Gene mutations (aka point mutations) are permanent changes in the order of the nucleotide base-pairs in the DNA. Based on what happens at the DNA level, point mutations can be divided into three types: the replacement of an original base-pair by another base-pair (base-pair substitution), the insertion of an extra base-pair, or the deletion of an original base-pair (Figure 1). In a coding part of the DNA, three adjacent nucleotides on a DNA strand (i.e. a triplet) form a codon that encodes for an amino acid in the ultimate protein. Because insertions and deletions shift the triplet reading frame by one nucleotide, these point mutations are also called frameshift mutations. Based on what happens at the level of the protein for which a gene encodes, point mutations can also be divided into three types. A missense mutation means that the mutated gene encodes for a different protein than the wildtype gene; a nonsense mutation means that the mutation introduces a STOP codon that prematurely terminates translation, resulting in a truncated protein; and a silent mutation means that the mutated gene still encodes for exactly the same protein, despite the fact that the genetic code has been changed. Silent mutations are always base-pair substitutions, because the triplet structure of the DNA has not been damaged.
A very illustrative example of the difference between a base-pair substitution and a frameshift mutation at the level of protein expression is the following "wildtype" sentence, consisting of only three-letter words representing the triplets in the genomic DNA: The fat cat ate the hot dog. Imagine that the letter t in cat is replaced by an r due to a base-pair substitution.
The sentence then reads: The fat car ate the hot dog. This sentence clearly has another meaning, i.e. it contains missense information. Imagine now that the letter a in fat is replaced by an e due to a base-pair substitution. The sentence then reads: The fet cat ate the hot dog. This sentence clearly contains a spelling error (i.e. a mutation), but its meaning has not changed, i.e. it contains a silent mutation. Imagine now that an additional letter m causes a frameshift in the word fat, due to an insertion. The sentence then reads: The fma tca tat eth eho tdo g. This sentence clearly has another meaning, i.e. it contains missense information. Similarly, leaving out the letter a in fat causes a frameshift mutation due to a deletion. The sentence then reads: The ftc ata tet heh otd og. Again, this sentence clearly has another meaning, i.e. it contains missense information. This example suggests that the consequences of a frameshift mutation are more dramatic than those of a base-pair substitution. Please keep in mind, though, that the replacement of a cat by a car may also have huge consequences in daily life!

Mutagenic compounds
Base-pair substitutions are often caused by electrophilic substances that take up an electron from, especially, the nucleophilic guanine base, which readily donates an electron to form an electron pair. The resulting guanine addition product (adduct) forms a base-pair with thymine, causing a base-pair substitution from G-C to A-T. Alternatively, the guanine adduct may split from the phosphate-sugar backbone of the DNA, leaving an "empty" nucleotide spot in the triplet that can be taken by any nucleotide during DNA replication. Base-pair substitutions may also be caused by reactive oxygen species (ROS), radicals that likewise take up an electron from guanine and form guanine oxidation products (for instance hydroxyl adducts). It should be realized that a DNA adduct can only cause an error in the order of nucleotides (i.e. a mutation) if it is present during DNA replication. Before a cell goes into the DNA synthesis phase of the cell cycle, however, the DNA is thoroughly checked, and possible errors are repaired by DNA repair systems.
Exposure to direct mutagenic electrophilic agents rarely occurs, because these substances are so reactive that they immediately bind to proteins and DNA in our food and environment. Therefore, DNA damage by such substances in most cases originates from indirect mutagenic compounds, which are activated into DNA-binding agents during Phase I biotransformation. This process of bioactivation is a side-effect of biotransformation, which actually aims at rapid detoxification and elimination of toxic compounds.
Frameshift mutations are often caused by intercalating agents. Unlike electrophilic agents and ROS, intercalating agents do not form covalent bonds with the DNA bases. Instead, due to their planar structure, intercalating agents fit exactly between two adjacent nucleotides in the DNA helix. As a consequence, they hinder DNA replication, causing the insertion of an extra nucleotide or the deletion of an original nucleotide in the replicated DNA strand.
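The reading-frame logic of the cat-and-dog sentence above is easy to reproduce computationally. The following minimal Python sketch (purely illustrative; it uses the book's letters rather than a real genetic code) splits a "sequence" into triplets and shows how a substitution changes one codon while an insertion or deletion scrambles every codon downstream of the mutation.

```python
# Minimal sketch of how the triplet reading frame turns point mutations into
# missense or frameshift errors, using the "fat cat" sentence from the text
# as a stand-in for codons. Purely illustrative; no real genetic code is used.

def read_codons(sequence):
    """Split a string of 'bases' into consecutive triplets (codons)."""
    return " ".join(sequence[i:i + 3] for i in range(0, len(sequence), 3))

wildtype = "thefatcatatethehotdog"

substitution = wildtype.replace("cat", "car")   # base-pair substitution
insertion    = wildtype.replace("fat", "fmat")  # insertion of one 'base'
deletion     = wildtype.replace("fat", "ft")    # deletion of one 'base'

print("wildtype:    ", read_codons(wildtype))      # the fat cat ate the hot dog
print("substitution:", read_codons(substitution))  # only one codon changed (missense)
print("insertion:   ", read_codons(insertion))     # frameshift: all downstream codons differ
print("deletion:    ", read_codons(deletion))      # frameshift in the opposite direction
```

Running the sketch shows that the substitution leaves six of the seven codons intact, whereas the insertion and the deletion corrupt every codon after the mutation site, mirroring why frameshift mutations tend to be the more disruptive type.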
Ames test for mutagenicity
Mutagenicity of a compound can be tested in the Ames test, named after Bruce Ames, who developed the assay in the early 1970s. The assay makes use of a Salmonella bacterial strain that contains a mutation in a gene encoding an enzyme involved in the synthesis of the amino acid histidine. Consequently, the bacteria can no longer produce histidine (they become "his-") and become auxotrophic, i.e. they depend on their culture medium for histidine. In the assay, the bacteria are exposed to the test compound in a medium that does not contain histidine. If the test compound is not mutagenic, the bacteria cannot grow and will die. If the test compound is mutagenic, it may cause a back-mutation (reversion) of the original mutation in a few bacteria, restoring the prototrophic capacity of the bacteria (i.e. their capacity to produce their own histidine). Growth of mutated bacteria on the histidine-depleted medium can be followed by counting colonies (on an agar plate) or by measuring metabolic activity (in a fluctuation assay). Direct mutagenic compounds can be tested in the Ames test without extra treatment. Indirect mutagenic compounds, however, have to be bio-activated before they exert their mutagenic action. For this purpose, a liver homogenate containing all enzymes and cofactors required for Phase-I biotransformation of the test compound is added to the culture medium. This liver homogenate with induced cytochrome P450 (cyp) activity is usually obtained from rats exposed to mixed-type inducers (i.e. of cyp1a, cyp2b and cyp3a), such as the PCB mixture Aroclor 1254.

Compounds involved in tumour promotion and tumour progression
As stated above, non-mutagenic carcinogens are involved in stimulating tumour promotion. Tumour promoting substances stimulate cell proliferation and inhibit cell differentiation and apoptosis. Unlike mutagenic compounds, tumour promoting compounds do not interfere directly with DNA, and their effect is reversible. Many endogenous substances (e.g. hormones) may act as tumour promoting agents.
The first illustration that chemicals may induce cancer comes from the case of the chimney sweepers in London around 1775. The surgeon Percival Pott (1714-1788) noticed that many adolescent male patients who had developed scrotal cancer had worked as chimney sweepers during their childhood. Pott made a direct link between exposure to soot during childhood and the development of cancer at a later age. Based on this discovery, taking a shower after work became mandatory for children working as chimney sweepers, and the observed incidence of scrotal cancer decreased. As such, Percival Pott was the first person (i) to link cancer development to chemical substances, (ii) to link early exposure to later cancer development, and (iii) to improve occupational health by decreasing exposure through better hygiene. In retrospect, we now know that the mutagens involved were polycyclic aromatic hydrocarbons (PAHs) that were bio-activated into highly reactive diol-epoxide metabolites. The delay in cancer development after the early childhood exposure can be attributed to the absence of a tumour promoter: only after the chimney sweepers had gone through puberty did they have sufficient levels of testosterone, which stimulates scrotal tissue growth and in this case acted as an endogenous tumour promoting agent.
Tumour progression is the result of aberrant transcriptional activity arising from either genetic or epigenetic alterations. Genetic alterations can be caused by substances that damage the DNA (called genotoxic substances) and thereby introduce strand breaks and incorrect chromosomal division after mitosis. This results in the typically unstable chromosomal characteristics of a malignant tumour cell, i.e.
a karyotype consisting of reduced and increased numbers of chromosomes (called aneuploidy and polyploidy, respectively) and damaged chromosomal structures (aberrations). Chemical substances causing aneuploidy are called aneugens, and substances causing chromosomal aberrations are called clastogens. Genotoxic substances are very often also mutagenic compounds. Multiple mutations in so-called proto-oncogenes and tumour suppressor genes are necessary to transform a normal cell into a tumour cell. In a healthy cell, cell proliferation is controlled by proto-oncogenes, which stimulate cell proliferation, and tumour suppressor genes, which inhibit cell proliferation. In a cancer cell, the balance between proto-oncogenes and tumour suppressor genes is disturbed: proto-oncogenes act as oncogenes, meaning that they continuously stimulate cell proliferation due to mutations and polyploidy, whereas tumour suppressor genes have become inactive due to mutations and aneuploidy.
Epigenetic alterations are changes in the DNA, but not in its order of nucleotides. Typical epigenetic changes include changes in DNA methylation, histone modifications, and microRNA expression. Compounds that change the epigenome may stimulate tumour progression, for instance by stimulating the expression of oncogenes and inhibiting the expression of tumour suppressor genes. The role in tumour promotion and progression of substances that are capable of inducing epigenetic changes is a field of ongoing study.

4.2.7. Question 1
What are the three characteristics of the different cancer development stages?

4.2.7. Question 2
What is the difference between a direct and an indirect mutagenic substance?

4.2.7. Question 3
Explain the principle of the Ames test.

4.2.7. Question 4
What is the difference between a base-pair substitution and a frameshift mutation?

4.2.8. Endocrine disruption
Author: Majorie van Duursen
Reviewers: Timo Hamers, Andreas Kortenkamp

Learning objectives
You should be able to
• explain how xenobiotics can interact with the endocrine system and hormonal actions;
• describe the thyroid system and molecular targets for thyroid hormone disruption;
• explain the concept "it's the timing of the dose that makes the poison".

Keywords: Endocrine system; Endocrine Disrupting Chemical (EDC); DES; Thyroid hormone disruption; Multi- and transgenerational effects

Short history
The endocrine system plays an essential role in the short- and long-term regulation of a variety of biochemical and physiological processes, such as behavior, reproduction and growth, as well as nutritional aspects, gut, cardiovascular and kidney function, and the response to stress. As a consequence, chemicals that cause changes in hormone secretion or in hormone receptor activity may target many different organs and functions, and may result in disorders of the endocrine system and adverse health effects. The nature and the size of endocrine effects caused by chemicals depend on the type of chemical, the level and duration of exposure, as well as on the timing of exposure.
The "DES drug disaster" is one of the most striking examples that endocrine-active chemicals can have severe adverse health effects in humans. There was a time when the synthetic estrogen diethylstilbestrol (DES) was considered a miracle drug (Figure 1). DES was prescribed from the 1940s to the 1970s to millions of women around the world to prevent miscarriages, abortion and premature labor.
However, in the early 1970s it was found that daughters of mothers who took DES during pregnancy have an increased risk of developing a specific type of vaginal and cervical cancer. Other studies later demonstrated that women who had been exposed to DES in the womb (in utero) also had other health problems, like an increased risk of breast cancer, an increased incidence of genital malformations, infertility, miscarriages, and complicated pregnancies. Now, even two generations later, babies are born with reproductive tract malformations that are suspected to be caused by this drug their great-grandmothers took during pregnancy. The effects of DES are attributed to the fact that it is a synthetic estrogen (i.e. a xenobiotic compound with properties similar to those of the natural estrogen 17β-estradiol), thereby disrupting normal endocrine regulation as well as epigenetic processes during development (link to section on Developmental Toxicity).
Around the same time as the DES drug disaster, Rachel Carson wrote a New York Times best-seller called Silent Spring. The book focused on the endocrine disruptive properties of persistent environmental contaminants, such as the insecticide DDT (dichlorodiphenyltrichloroethane). She wrote that these environmental contaminants were poorly degradable in the environment and caused reproductive failure and population decline in a variety of wildlife. At the time the book was published, endocrine disruption was a controversial scientific theory that was met with much scepticism, as empirical evidence was largely lacking. Still, Rachel Carson's book encouraged scientific, societal and political discussions about endocrine disruption. In 1996, another popular scientific book was published that presented more scientific evidence to warn against the effects of endocrine disruption: Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival? A Scientific Detective Story by Theo Colborn, Dianne Dumanoski and John Peterson Myers.
Currently, endocrine disruption is a widely accepted concept, and many scientific studies have demonstrated a wide variety of adverse health effects that are attributed to exposure to endocrine-active compounds in our environment. Human epidemiological studies have shown dramatic increases in the incidence of hormone-related diseases, such as breast, ovarian, testicular and prostate cancer, endometrial diseases, infertility, decreased sperm quality, and metabolic diseases. Considering that hormones play a prominent role in the onset of these diseases, it is highly likely that exposure to endocrine disrupting compounds contributes to these increased disease incidences in humans. In wildlife, the effects of endocrine disruption include feminization and demasculinization, leading to deviant sexual behaviour and reproductive failure in many species, such as fish, frogs, birds and panthers. A striking example of endocrine disruption can be found in the Lake Apopka alligator population. Lake Apopka is the third largest lake in the state of Florida, located a few kilometres north-west of Orlando. In July 1980, heavy rainfall caused a local pesticide manufacturer to spill huge amounts of DDT into the lake. After that, the alligator population in Lake Apopka started to show a dramatic decline. Upon closer examination, these alligators had higher estradiol and lower testosterone levels in their blood, causing poorly developed testes and extremely small penises in the male offspring and severely malformed ovaries in the females.
What's in a name: EDC definition
Since the early discussions on endocrine disruption, the World Health Organisation (WHO) has published several reports presenting the state of the art in scientific evidence on endocrine disruption, associated adverse health effects and the underlying mechanisms. In 2002, the WHO proposed a definition for an endocrine disrupting compound (EDC) that is still being used. According to the WHO, an EDC can be defined as "an exogenous substance or mixture that alters function(s) of the endocrine system and consequently causes adverse health effects in an intact organism, or its progeny, or (sub) populations." In 2012, the WHO stated that "EDCs have the capacity to interfere with tissue and organ development and function, and therefore they may alter susceptibility to different types of diseases throughout life. This is a global threat that needs to be resolved." The European Environment Agency concluded in 2012 that "chemically induced endocrine disruption likely affects human and wildlife endocrine health the world over." A recent report (Demeneix & Slama, 2019) commissioned by the European Parliament concluded that the lack of EDC consideration in regulatory procedures is "clearly detrimental for the environment, human health, society, sustainability and most probably for our economy".

The endocrine system
Higher animals, including humans, have developed an endocrine system that allows them to regulate their internal environment. The endocrine system is interconnected with, and communicates bidirectionally with, the nervous and immune systems. The endocrine system consists of glands that secrete hormones, the hormones themselves, and the targets that respond to the hormones. Glands that secrete hormones include the pituitary, thyroid, adrenals, gonads and pancreas. There are three major classes of hormones: amino acid-derived hormones (e.g. the thyroid hormones T3 and T4), peptide hormones (e.g. pancreatic hormones) and steroid hormones (e.g. testosterone and estradiol). Hormones elicit a wide variety of biological responses, which almost always start with the binding of a hormone to a receptor in its target tissue. This triggers a chain of intracellular events and eventually a physiological response. Understanding the chemical characteristics of a hormone and its function may help explain the mechanisms by which chemicals can interact with the endocrine system and subsequently cause adverse health effects.

Mechanism of action
Inherent to the complex nature of the endocrine system, endocrine disruption comes in many shapes and forms. It can occur at the receptor level (link to section on Receptor Interaction), but endocrine disruptors can also disturb the synthesis, metabolism or transport of hormones (locally or throughout the body), or display a combination of multiple mechanisms. For example, DDT can decrease testosterone levels via increased testosterone conversion by the aromatase enzyme, but it also acts as an anti-androgen by blocking the androgen receptor and as an estrogen by activating the estrogen receptor. PCBs (polychlorinated biphenyls) are well-characterized thyroid hormone disrupting chemicals. PCBs are industrial chemicals that were widely used in transformers until their ban in the 1970s but, due to their persistence, can still be found ubiquitously in the environment and in human and wildlife blood and tissue samples (link to section on POPs).
PCBs are known to interfere with the thyroid system via inhibition of thyroid hormone synthesis and/or increasing thyroid hormone metabolism, inhibiting the binding of thyroid hormones to serum binding proteins, or blocking the binding of thyroid hormones to thyroid hormone receptors. These thyroid disrupting effects can occur in different organs throughout the body (see Figure 1 in Gilbert et al., 2012).

The dose concept
In the 16th Century, the physician and alchemist Paracelsus phrased the toxicological paradigm: "Everything is a poison. Only the dose makes that something is not a poison" (link to section Concentration-response relationships, and to Introduction). Generally, this is understood as "the effect of the poison increases with the dose". According to this paradigm, once the exposure levels where the toxic response begins and ends have been determined, safety levels can be derived to protect humans, animals and their environment. However, the interpretation and practical implementation of this concept is challenged by issues that have arisen in modern-day toxicology, especially with EDCs, such as non-monotonic dose-response curves and the timing of exposure.
To establish a dose-response relationship, toxicological experiments are traditionally conducted in which adult animals are exposed to very high doses of a chemical. To determine a safe level, the highest test dose at which no toxic effect is seen (the NOAEL, or no observed adverse effect level) is divided by an additional "safety" or "uncertainty" factor of usually 100. This factor of 100 accounts for differences between experimental animals and humans, and for differences within the human population (see chapter 6 on Risk assessment). For example, a NOAEL of 50 mg per kg body weight per day would translate into a safety level of 0.5 mg per kg body weight per day. Exposures below the safety level are generally considered safe. However, over the past years, studies measuring the effects of hormonally active chemicals have begun to show biological effects at extremely low concentrations that were presumed to be safe and that are in the range of human exposure levels.
There are several physiological explanations for this phenomenon. It is important to realize that endogenous hormone responses do not act in a linear, monotonic fashion (i.e. with the effect going in one direction), as can be seen in Figure 2 for thyroid hormone levels and IQ. There are feedback loops to regulate the endocrine system in case of over- or understimulation of a receptor, and there are clear tissue differences in receptor expression and sensitivity to hormonal actions. Moreover, hormones are messengers, designed to transfer a message across the body. They do this at extremely low concentrations, and small changes in hormone concentrations can cause large changes in receptor occupancy and receptor activity, whereas at high concentrations the change in receptor occupancy is only minimal. This means that the effects at high doses do not always predict the effects of EDCs at lower doses and vice versa, as the sketch below illustrates.
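A minimal sketch of simple one-site receptor binding makes this saturation effect concrete: fractional occupancy equals [L] / ([L] + Kd). The Kd of 1 nM and the concentrations used below are illustrative placeholders, not the affinity or physiological levels of any particular hormone receptor.

```python
# Minimal sketch of receptor occupancy under simple one-site binding.
# Fractional occupancy = [L] / ([L] + Kd); the Kd of 1 nM is an illustrative
# placeholder, not the affinity of any real hormone receptor.

def occupancy(ligand_nM, kd_nM=1.0):
    """Fraction of receptors bound at a given ligand concentration."""
    return ligand_nM / (ligand_nM + kd_nM)

for low, high in [(0.01, 0.02), (10.0, 20.0)]:
    fold = occupancy(high) / occupancy(low)
    print(f"doubling {low:5.2f} -> {high:5.2f} nM multiplies occupancy by {fold:.2f}")

# Near the bottom of the binding curve, doubling the concentration nearly
# doubles receptor occupancy; near saturation, the same doubling changes
# occupancy by only about 5%.
```

This is one reason why dose-response curves measured at high doses can miss biologically relevant changes in the low, physiological concentration range.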
It is becoming increasingly clear that not only the dose, but also the timing of exposure plays an important role in determining the health effects of EDCs. Multi-generational studies show that EDC exposure in utero can affect future generations (Figure 3). Studies on the grandsons and granddaughters whose mothers were exposed prenatally to DES are limited, as they are just beginning to reach the age at which relevant health problems, such as fertility, can be studied. However, rodent studies with DES, bisphenol-A and DEHP show that perinatally exposed mothers have grandchildren with malformations of the reproductive tract, as well as an increased susceptibility to mammary tumors in female offspring and testicular cancer and poor semen quality in male offspring. Some studies even show effects in the great-grandchildren (F3 generation), which indicates that endocrine disrupting effects have been passed on to next generations without direct exposure of these generations. These are called transgenerational effects. Long-term, delayed effects of EDCs are thought to arise from epigenetic modifications in (germ) cells and can be irreversible and transgenerational (link to section on Developmental Toxicity). Consequently, safe levels of EDC exposure may vary depending on the timing of exposure. Adult exposure to EDCs is often considered activational: e.g. an estrogen-like compound such as DES can stimulate the proliferation of estrogen-sensitive breast cells in an adult, leading to breast cancer. When exposure to EDCs occurs during development, the effects are considered organizational: e.g. DES changes germ cell development in perinatally exposed mothers and subsequently leads to genital tract malformations in their grandchildren. Multi-generational effects are clear in rodent studies, but are less clear in humans. This is because it is difficult to characterize EDC exposure in previous generations (which may span over 100 years in humans), and it is challenging to filter out the effect of one specific EDC, as humans are exposed to a myriad of chemicals throughout their lives.

EDCs in the environment
Some well-known examples of EDCs are pesticides (e.g. DDT), plastic softeners (e.g. phthalates, like DEHP), plastic precursors (e.g. bisphenol-A), industrial chemicals (e.g. PCBs), water- and stain-repellents (perfluorinated substances such as PFOS and PFOA) and synthetic hormones (e.g. DES). Exposure to EDCs can occur via air, house dust, leaching into food and feed, and waste- and drinking water. Exposure is often unintentional and at low concentrations, except for hormonal drugs. Clearly, synthetic hormones can also have beneficial effects. Hormonal cancers like breast and prostate cancer can be treated with synthetic hormones. And think of the contraceptive pill, which has changed the lives of many women around the world since the 1960s. Nowadays, no other method is so widely employed in so many countries around the world as the birth control pill, with an estimated 75 million users among reproductive-age women with a partner. An unfortunate side effect of this is the increase in hormonal drug levels in our environment, leading to feminization of male fish swimming in the polluted waters. Pharmaceutical hormones, along with naturally produced hormones, are excreted by women and men and are not fully removed by conventional wastewater treatment. In addition, several pharmaceuticals that are not considered to act via the endocrine system can in fact display endocrine activity and cause reproductive failure in fish, for example the beta-blocker atenolol, the antidiabetic drug metformin and the analgesic paracetamol.

Further reading:
WHO-IPCS (2002). Global Assessment of the State-of-the-Science of Endocrine Disruptors. https://www.who.int/ipcs/publications/new_issues/endocrine_disruptors/en/
State of the Science of Endocrine Disrupting Chemicals. 2012. www.who.int/iris/bitstream/10665/78101/1/9789241505031_eng.pdf European Environment Agency (EEA) (2012). The impacts of endocrine disrupters on wildlife, people and their environments. The Weybridge+15 (1996-2011) report. EEA Technical report No 2/2012, EEA, Copenhagen. ISSN 1725-2237. https://www.eea.europa.eu/publications/the-impacts-of-endocrine-disrupters Demeneix, B., Slama, R. (2019). Endocrine Disruptors: from Scientific Evidence to Human Health Protection. Policy Department for Citizens' Rights and Constitutional Affairs, Directorate General for Internal Policies of the Union, PE 608.866. http://www.europarl.europa.eu/RegData/etudes/STUD/2019/608866/IPOL_STU(2019)608866_EN.pdf 4.2.8. Question 1 What are possible mechanisms to reduce the action of a certain hormone? 4.2.8. Question 2 Give three target sites for thyroid hormone disruption and name the biological process at each target site that can be affected by an EDC. 4.2.8. Question 3 What mechanism can cause demasculinization of male alligators by DDT? 4.2.8. Question 4 Why is timing of exposure important when assessing the risk of EDC exposure? 4.2.9. Developmental toxicity Authors: Jessica Legradi, Marijke de Cock Reviewer: Paul Fowler Learning objectives You should be able to • explain the six principles of teratology • name the difference between deformation, malformation and syndrome • describe the principle of DOHaD • indicate what epigenetics is and what epigenetic mechanisms could lead to transgenerational effects Keywords: Teratogenicity, developmental toxicity, DOHaD, epigenetics, transgenerational Developmental toxicity Developmental toxicity refers to any adverse effect, caused by environmental factors, that interferes with homeostasis, normal growth, differentiation, or development before conception (in either parent), during prenatal development, or postnatally until puberty. The effects can be reversible or irreversible. Environmental factors that can have an impact on development are lifestyle factors like alcohol, diet, smoking and drugs, environmental contaminants, or physical factors. Anything that can disturb the development of the embryo or foetus and produce a malformation is called a teratogen. Teratogens can terminate a pregnancy or produce adverse effects called congenital malformations (birth defects, anomalies). A malformation refers to any effect on the structural development of a foetus (e.g. delay, misdirection or arrest of developmental processes). Malformations occur mostly early in development and are permanent. Malformations should not be confused with deformations, which are mostly temporary effects caused by mechanical forces (e.g. moulding of the head after birth). One teratogen can induce several different malformations. All malformations caused by one teratogen together are called a syndrome (e.g. fetal alcohol syndrome). Six principles of teratology (by James G. Wilson) In 1959 James G. Wilson published the six principles of teratology. These principles are still regarded as the basis of developmental toxicology. The principles are: 1. Susceptibility to teratogenesis depends on the genotype of the conceptus and the manner in which this interacts with adverse environmental factors Species differences: different species can react differently (with different sensitivities) to the same teratogen. For example, thalidomide (Softenon), a drug used to treat morning sickness in pregnant women, causes severe limb malformations in humans, whereas such effects were not seen in rats and mice.
Strain and intra-litter differences: the genetic background of individuals within one species can cause differences in the response to a teratogen. Interaction of genome and environment: organisms of the same genetic background can react differently to a teratogen in different environments. Multifactorial causation: the summary of the above. The severity of a malformation depends on the interplay of several genes (inter- and intra-species) and several environmental factors. 2. Susceptibility to teratogenesis varies with the developmental stage at the time of exposure to an adverse influence During development there are periods in which the foetus is specifically sensitive to a certain malformation (Figure 1). In general, the very early (embryonic) development is more susceptible to teratogenic effects. 3. Teratogenic agents act in specific ways (mechanisms) on developing cells and tissues to initiate sequences of abnormal developmental events (pathogenesis) Every teratogenic agent produces a distinctive malformation pattern. One example is the foetal alcohol syndrome, which comprises abnormal appearance, short height, low body weight, small head size, poor coordination, low intelligence, behaviour problems, problems with hearing or seeing, and very characteristic facial features (increased distance between the eyes). 4. The access of adverse influences to developing tissues depends on the nature of the influence (agent) Teratogens can be radiation, infections or chemicals, including drugs. The teratogenic effect depends on the concentration of the teratogen that reaches the embryo. The concentration at the embryo is influenced by maternal absorption, metabolisation and elimination, and by the time the agent needs to reach the embryo. This can differ greatly between teratogens. For example, strong radiation is also a strong teratogen as it easily reaches all tissues of the embryo. This also means that a compound tested to be teratogenic in an in vitro test with embryos in a tube might not be teratogenic to an embryo in the uterus of a human or mouse, as the teratogen may not reach the embryo at a critical concentration. 5. The four manifestations of deviant development are death, malformation, growth retardation and functional deficit A teratogen can cause minor effects like functional deficits (e.g. reduced IQ) and growth retardation, or severe effects like malformations or death. Depending on the timing of exposure and the degree of genetic sensitivity, an embryo will have a greater or lesser risk of death or malformations. Very early in development, during the first cell divisions, an embryo will be more likely to die rather than be implanted and develop further. 6. Manifestations of deviant development increase in frequency and degree as dosage increases, from the no-effect to the 100% lethal level The number of effects and the severity of the effects increase with the concentration of a teratogen. This means that there is a threshold concentration below which no teratogenic effects occur (no effect concentration), as illustrated in the sketch below.
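A minimal sketch in Python of such a thresholded dose-response relationship; the threshold, Hill slope and ED50 are hypothetical numbers, and a Hill-type curve is just one common way to describe this behaviour, not the only one:

```python
# Thresholded, sigmoidal dose-response for a teratogenic effect.
# All parameter values are hypothetical and purely illustrative.

def affected_fraction(dose, threshold=1.0, hill=4.0, ed50=5.0):
    """Fraction of embryos showing the effect at a given dose
    (arbitrary units); zero at or below the threshold."""
    if dose <= threshold:
        return 0.0
    d = dose - threshold
    half = ed50 - threshold   # dose above the threshold giving 50% effect
    return d**hill / (d**hill + half**hill)

for dose in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0):
    print(f"dose {dose:5.1f} -> fraction affected {affected_fraction(dose):.2f}")
```

Below the threshold no embryos are affected; above it, both the frequency and (in reality) the severity of effects rise steeply with dose, which is exactly the pattern principle 6 describes.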
Developmental Origins of Health and Disease (DOHaD) The concept of Developmental Origins of Health and Disease (DOHaD) describes how environmental factors early in life contribute to health and disease later in life. The basis of this concept was the Barker hypothesis, which was formulated as an explanation for the rise in cardiovascular disease (CVD) related mortality in the United Kingdom between 1900 and 1950. Barker and colleagues observed that the prevalence of CVD and stroke was correlated with neo- and post-natal mortality (Figure 2). This led them to formulate the hypothesis that poor nutrition early in life leads to an increased risk of cardiovascular disease and stroke later in life. Later, this was developed into the thrifty phenotype hypothesis, stating that poor nutrition in utero programs for adaptive mechanisms that allow the individual to deal with nutrient-poor conditions in later life, but may also result in greater susceptibility to metabolic syndrome. This thrifty phenotype hypothesis was finally developed into the DOHaD concept. The effect of early life nutrition on adult health is clearly illustrated by the Dutch Famine Birth Cohort Study. In this cohort, women and men who were born during or just after the Dutch famine were studied retrospectively. The Dutch famine took place in the Western part of the German-occupied Netherlands in the winter of 1944-1945. Its 3-month duration creates the possibility to study the effect of poor nutrition during each individual trimester of pregnancy. Effects on birth weight, for example, may be expected if caloric intake during pregnancy is restricted. This was, however, only the case when the famine occurred during the second or third trimester. Higher glucose and insulin levels in adulthood were only seen in those exposed in the third trimester, whereas those exposed during the second trimester showed a higher prevalence of obstructive airways disease. These effects were not observed for the other trimesters, which can be explained by the timing of caloric restriction during pregnancy: in a normal pregnancy the pancreatic islets develop during the third trimester, while during the second trimester the number of lung cells is known to double. The DOHaD concept does not merely focus on early life nutrition, but includes all kinds of environmental stressors during the developmental period that may contribute to adult disease, including exposure to chemical compounds. Chemicals may elicit effects such as endocrine disruption or neurotoxicity, which can lead to permanent morphological and physiological changes when they occur early in life. Well-known examples of such chemicals are diethylstilbestrol (DES) and dichlorodiphenyltrichloroethane (DDT). DES was an estrogenic drug given to women between 1940 and 1970 to prevent miscarriage. It was withdrawn from the market in 1971 because of carcinogenic effects as well as an increased risk of infertility in children who were exposed in utero (link to section on Endocrine disruption). DDT is a pesticide that has been banned in most countries, but is still used in some for malaria control. Several studies, including a pooled analysis of seven European cohorts, found associations between in utero DDT exposure levels and infant growth and obesity. The ubiquitous presence of chemicals in the environment makes it extremely relevant to study health effects in humans, but also makes it very challenging, as virtually no perfect control group exists. This emphasizes the importance of prevention, which is the key message of the DOHaD concept. Adult lifestyle and the corresponding exposure to toxic compounds remain important modifiable factors for both treatment and prevention of disease. However, as developmental plasticity, and therefore the potential for change, is highest early in life, it is important to focus on exposure in the early phases: during pregnancy, infancy, childhood and adolescence.
This is reflected by regulators frequently imposing lower tolerable exposure levels for infants compared to adults. Epigenetics For some compounds, in utero exposure is known to cause effects later in life (see DOHaD above) or even to induce effects in the offspring or grand-offspring of the exposed embryo (transgenerational effects). Diethylstilbestrol (DES) is a compound for which transgenerational effects have been reported. DES was given to pregnant women to reduce the risk of spontaneous abortion and other pregnancy complications. Women who took DES during pregnancy have a slightly increased risk of breast cancer. Daughters exposed in utero, on the other hand, had a high tendency to develop rare vaginal tumours. In the third generation, higher incidences of infertility and ovarian cancer and an increased risk of birth defects were observed. However, the data available for the third generation are limited and therefore provide only limited evidence so far. Another compound suspected of causing transgenerational effects is the fungicide vinclozolin, an anti-androgenic endocrine disrupting chemical. It has been shown that exposure to vinclozolin leads to transgenerational effects on testis function in mice. Transgenerational effects can be induced via genetic alterations (mutations) in the DNA. In that case the order of nucleotides in the genome of the parental gametocyte is altered and this alteration is passed on to the offspring. Alternatively, transgenerational effects can be induced via epigenetic alterations. Epigenetics is defined as the study of changes in gene expression that occur without changes in the DNA sequence, and which are heritable in the progeny of cells or organisms. Epigenetic changes occur naturally but can also be influenced by lifestyle factors, diseases or environmental contaminants. Epigenetic alterations are a special form of developmental toxicology, as the effects might not cause immediate teratogenic malformations. Instead, the effects may become visible only later in life or in subsequent generations. It is assumed that compounds can induce epigenetic changes and thereby cause transgenerational effects. For DES and vinclozolin, epigenetic changes in mice have been reported and these might explain the transgenerational changes seen in humans. Two main epigenetic mechanisms are generally described as being responsible for transgenerational effects, i.e. DNA methylation and histone modification. DNA methylation DNA methylation is the most studied epigenetic modification and describes the methylation of cytosine nucleotides in the genome (Figure 3) by DNA methyltransferases (DNMTs). Gene activity generally depends on the degree of methylation of the promoter region: if the promoter is methylated, the gene is usually repressed. One peculiarity of DNA methylation is that it can be erased and re-established during epigenetic reprogramming events to set up cell- and tissue-specific gene expression patterns. Epigenetic reprogramming occurs very early in development. During this phase epigenetic marks, such as methylation marks, are erased and remodelled. Epigenetic reprogramming is necessary because the maternal and paternal genomes are differentially marked and must be reprogrammed to assure proper development. Histone modification Within the chromosome the DNA is densely packed around histone proteins. Gene transcription can only take place if the DNA packaging around the histones is loosened.
Several histone modification processes are involved in loosening this packaging, such as acetylation, methylation, phosphorylation or ubiquitination of the histone molecules (Figure 4). Figure in preparation Figure 4: (a) The DNA is wrapped around the histone molecules. The histone molecules are arranged in such a way that their amino acid tails point out of the package. These tails can be altered, for example via acetylation. (b) If the tails are acetylated, the DNA is packed less tightly and genes can be transcribed. If the tails are not acetylated, the DNA is packed very tightly and gene transcription is hampered. References Barker, D.J., Osmond, C. (1986). Infant mortality, childhood nutrition, and ischaemic heart disease in England and Wales. Lancet 1 (8489), 1077-1081. 4.2.9. Question 1 Describe the difference between malformation and deformation. 4.2.9. Question 2 What factors can influence the susceptibility of an organism to a teratogen? 4.2.9. Question 3 Describe how DOHaD differs from the Barker hypothesis. 4.2.9. Question 4 What are the two epigenetic mechanisms mostly responsible for transgenerational effects? 4.2.9. Question 5 Name a teratogenic compound. 4.2.10. Immunotoxicity Author: Nico van den Brink Reviewer: Manuel E. Ortiz-Santaliestra Learning objectives: You should be able to: • understand the complexity of potential effects of chemicals on the immune system • explain the difference between the innate and acquired parts of the immune system • explain the most important modes of toxicity that may affect immune cells Keywords: Immunotoxicology, pathogens, innate and adaptive immune system, Lyme disease Introduction The immune system of organisms is very complex, with different cells and other components interacting with each other. The function of the immune system is to protect the organism from pathogens and infections. It consists of an innate part, which is active from birth, and an acquired part, which is adaptive to exposure to pathogens. The immune system may include different components depending on the species (Figure 1). The main organs involved in the immune system of mammals are the spleen, thymus, bone marrow and lymph nodes. In birds, besides all of the above, there is also the bursa of Fabricius. These organs all play specific roles in the immune defence: e.g. the spleen synthesises antibodies and plays an important role in the dynamics of monocytes; the thymus is the organ where T-cells develop; while in bone marrow lymphoid cells are produced, which are transported to other tissues for further development. The bursa of Fabricius is specific to birds and is essential for B-cell development. Blood is an important tissue to be considered because of its role in transporting cells. The innate system generally provides the first response to infections and pathogens; however, it is not very specific. It consists of several cell types with different functions, like macrophages, neutrophils and mast cells. Macrophages and neutrophils may act against pathogens by phagocytosis (engulfing the pathogen in cellular lysosomes and destroying it). Neutrophils are relatively short-lived, act fast and can produce a respiratory burst to destroy the pathogen/microbe. This involves a rapid production of Reactive Oxygen Species (ROS) which may destroy the pathogens. Macrophages generally have a longer life span, react more slowly but in a more prolonged fashion, and attack mainly via the production of nitric oxide and less via ROS.
Macrophages produce cytokines to communicate with other members of the immune system, especially cell types of the acquired system. Other members of the innate immune system are mast cells, which can excrete e.g. histamine upon detection of antigens. Cells of the acquired, or adaptive, immune system mount responses that are more specific to the immune insult, and are therefore generally more effective. Lymphocytes are the cells of the adaptive immune system; they can be classified into B-lymphocytes and T-lymphocytes. B-lymphocytes produce antibodies which can serve as cell surface antigen receptors, essential in the recognition of e.g. microbes. B-lymphocytes facilitate humoral (extracellular) immune responses against extracellular microbes (in the respiratory and gastrointestinal tracts and in the blood/lymph circulation). Upon recognition of an antigen, B-lymphocytes produce specific antibodies which bind to that antigen. On the one hand this may decrease the infectivity of pathogens (e.g. microbes, viruses) directly, but it also marks them for recognition by phagocytic cells. T-lymphocytes are active against intracellular pathogens and microbes. Once inside cells, pathogens are out of reach of the B-lymphocytes. T-lymphocytes may activate macrophages or neutrophils to destroy phagocytosed pathogens, or may even destroy infected cells. Both B- and T-lymphocytes are capable of producing an extreme diversity of clones, specific for antigen recognition. Communication between the different immune cells occurs through the production of e.g. cytokines, including interleukins (ILs), chemokines, interferons (IFs), and also Tumour Necrosis Factors (TNFs). Cytokines and TNFs are related to specific responses in the immune system; for instance, IL6 is involved in activating B-cells to produce immunoglobulins, while TNF-α is involved in the early onset of inflammation and is therefore one of the cytokines inducing acute immune responses. Inflammation is a generic response to pathogens mounted by cells of the innate part of the immune system. It generally results in increased temperature and swelling of the affected tissue, caused by the infiltration of the tissue by leukocytes and other cells of the innate system. A proper acute inflammatory response is not only essential as a first line of defence but will also facilitate the activation of the adaptive immune system. Communication between immune cells, via cytokines, not only directs cells to the place of infection but also activates, for instance, cells of the acquired immune system. This is a very short and non-exhaustive description of the immune system; for more details on its functioning see for instance Abbas et al. (2018). Chemicals may affect the immune system in different ways. Exposure to lead, for instance, may result in immune suppression in waterfowl and raptors (Fairbrother et al. 2004, Vallverdú-Coll et al., 2019). Decreasing spleen weights, lower numbers of white blood cells and a reduced ability to mount a humoral response against a specific antigen (e.g. sheep red blood cells) indicated a lower potential of exposed birds to mount proper immune responses upon infection. Exposure to mercury resulted in decreased proliferation of B-cells in zebra finches (Taeniopygia guttata), affecting the acquired part of the immune system (Lewis et al., 2013). However, augmentation of the immune system upon exposure to e.g. cadmium has also been reported, for instance in small mammals, indicating an enhancement of the immune response (Demenesku et al., 2014).
Both immune suppression and immune enhancement may have negative impacts on the organisms involved; the former may decrease the ability of the organism to deal with pathogens or other infections, while immune enhancement may increase the energy demands of the organism and may also result in, for instance, hypersensitivity or even auto-immunity. Chemicals may affect immune cells via toxicity to mechanisms that are not specific to the immune system. Since many different cell types are involved in the immune system, the sensitivity to these modes of toxicity may vary considerably among cells and among chemicals. This implies that the immune system as a whole may inherently include cells that are sensitive to different chemicals, and as such may be quite sensitive to a range of toxicants. For instance, induction of apoptosis, programmed cell death, is essential to clear the activated cells involved in an immune response after the infection has been minimised and the system is returning to a state of homeostasis (see Figure 2). Chemicals may induce apoptosis, and thus interfere with the kinetics of adaptive immune responses, potentially reducing the longevity of cells. Toxic effects on mechanisms specific to the immune system may be related to its functioning. Since the production of ROS and nitric oxide are effector pathways along which neutrophils and macrophages of the innate system combat pathogens (via a high production of reactive oxygen species, i.e. an oxidative burst, to attack pathogens), impacts on the oxidative status of these cells may not only result in general toxicity, potentially affecting a range of cell types, but may also affect the responsiveness of the (innate) immune system in particular. For instance, cadmium has a high affinity for glutathione (GSH), a prominent anti-oxidant in cells, and has been shown to affect acute immune responses in the thymus and spleen of mice via this mechanism (Pathak and Khandelwal, 2007). A decrease of GSH caused by the binding of chemicals (like cadmium) may modulate macrophages towards a pro-inflammatory response through changes in the redox status of the cells involved, changing not only their activities against pathogens but potentially also their production and release of cytokines (Dong et al., 1998). GSH is also involved in the modulation of the acquired immune system by affecting so-called antigen-presenting cells (APCs, e.g. dendritic cells). APCs capture microbial antigens that enter the body, transport these to specific immune-active tissues (e.g. lymph nodes) and present them to naive T-lymphocytes, thereby inducing their differentiation into so-called T-helper cells and a proper immune response. T-helper cells include subsets, e.g. T-helper 1 cells (Th1-cells) and T-helper 2 cells (Th2-cells). Th1 responses are important in the defence against intracellular infections, by activating macrophages to ingest microbes. Th2 responses may be initiated by infections with organisms too large to be phagocytosed, and are mediated by e.g. allergens. As mentioned, GSH depletion may result in changes in cytokine production by APCs (Dong et al., 1998), generally affecting the release of Th1-response promoting cytokines. Exposure to chemicals interfering with GSH kinetics may therefore result in an imbalance between Th1 and Th2 responses and as such affect the responsiveness of the immune system.
Cadmium and other metals have a high affinity for GSH and may therefore reduce Th1 responses, while, in contrast, GSH-promoting chemicals may reduce the organism's ability to initiate Th2 responses (Pathak & Khandelwal, 2008). The overview of potential effects of chemicals on the immune system presented here is far from exhaustive. Matters are complicated further because effects may be contextual, meaning that chemicals may have different impacts depending on the situation an organism is in. For instance, the magnitude of immunotoxic effects may depend on the general condition of the organism, and hence some infected animals may show effects of chemical exposure while others may not. Impacts may also differ between types of infection (e.g. Th1- versus Th2-responsive infections). This, together with the complex and dynamic composition of the immune system, limits the development of general dose-response relationships and hazard predictions for chemicals. Furthermore, most of the research on the effects of chemicals on the immune system is focussed on humans, based on studies on rats and mice. Little is known about differences among species, especially non-mammalian species, which may have completely differently structured immune systems. Some studies on wildlife have shown effects of trace metals on small mammals (Tersago et al., 2004, Rogival et al., 2006, Tête et al., 2015) and of lead on birds (Vallverdú-Coll et al., 2015). However, specific modes of action are still to be resolved under field conditions. Research on immunotoxicity in wildlife is nevertheless essential, not only from a conservation point of view (to protect the organisms and species involved) but also from the perspective of human health. Wildlife plays an important role in the kinetics of zoonotic diseases: for instance, small mammals are the prime reservoir for Borrelia spirochetes, the causative pathogens of Lyme disease, while migrating waterfowl are thought to drive the spread of e.g. avian influenza. The role of wildlife in the environmental spread of zoonotic diseases is therefore evident, and this role may be seriously affected by chemically induced alterations of their immune systems. References and further reading Abbas, A.K., Lichtman, A.H., Pillai, S. (2018). Cellular and Molecular Immunology. 9th Edition. Elsevier, Philadelphia, USA. ISBN: 978-0-323-52324-0. Demenesku, J., Mirkov, I., Ninkov, M., Popov Aleksandrov, A., Zolotarevski, L., Kataranovski, D., Kataranovski, M. (2014). Acute cadmium administration to rats exerts both immunosuppressive and proinflammatory effects in spleen. Toxicology 326, 96-108. Dong, W., Simeonova, P.P., Gallucci, R., Matheson, J., Flood, L., Wang, S., Hubbs, A., Luster, M.I. (1998). Toxic metals stimulate inflammatory cytokines in hepatocytes through oxidative stress mechanisms. Toxicology and Applied Pharmacology 151, 359-366. Fairbrother, A., Smits, J., Grasman, K.A. (2004). Avian immunotoxicology. Journal of Toxicology and Environmental Health, Part B 7, 105-137. Galloway, T., Handy, R. (2003). Immunotoxicity of organophosphorous pesticides. Ecotoxicology 12, 345-363. Lewis, C.A., Cristol, D.A., Swaddle, J.P., Varian-Ramos, C.W., Zwollo, P. (2013). Decreased immune response in Zebra Finches exposed to sublethal doses of mercury. Archives of Environmental Contamination & Toxicology 64, 327-336. Pathak, N., Khandelwal, S. (2007). Role of oxidative stress and apoptosis in cadmium induced thymic atrophy and splenomegaly in mice.
Toxicology Letters 169, 95-108. Pathak, N., Khandelwal, S. (2008). Impact of cadmium in T lymphocyte subsets and cytokine expression: differential regulation by oxidative stress and apoptosis. Biometals 21, 179-187. Rogival, D., Scheirs, J., De Coen, W., Verhagen, R., Blust, R. (2006). Metal blood levels and hematological characteristics in wood mice (Apodemus sylvaticus L.) along a metal pollution gradient. Environmental Toxicology & Chemistry 25, 149-157. Tersago, K., De Coen, W., Scheirs, J., Vermeulen, K., Blust, R., Van Bockstaele, D., Verhagen, R. (2004). Immunotoxicology in wood mice along a heavy metal pollution gradient. Environmental Pollution 132, 385-394. Tête, N., Afonso, E., Bouguerra, G., Scheifler, R. (2015). Blood parameters as biomarkers of cadmium and lead exposure and effects in wild wood mice (Apodemus sylvaticus) living along a pollution gradient. Chemosphere 138, 940-946. Vallverdú-Coll, N., López-Antia, A., Martinez-Haro, M., Ortiz-Santaliestra, M.E., Mateo, R. (2015). Altered immune response in mallard ducklings exposed to lead through maternal transfer in the wild. Environmental Pollution 205, 350-356. Vallverdú-Coll, N., Mateo, R., Mougeot, F., Ortiz-Santaliestra, M.E. (2019). Immunotoxic effects of lead on birds. Science of the Total Environment 689, 505-515. 4.2.10. Question 1 Name two general reasons why immunomodulation in organisms may be very sensitive to exposure to environmental chemicals. 4.2.10. Question 2 Why is it that immunomodulatory chemicals to which humans are not exposed may still have an impact on human health? 4.2.11. Toxicity mechanisms of metals Author: Nico M. van Straalen Reviewers: Philip S. Rainbow, Henk Schat Learning objectives You should be able to • list five biochemical categories of metal toxicity mechanisms and describe an example for each case • interpret biochemical symptoms of metal toxicity (e.g. functional categories of gene expression profiles) and explain these in terms of the mode of action of a particular metal Keywords: Reactive oxygen species, protein binding, DNA binding, ion pumps Synopsis Toxicity of metals at the biochemical level is due to a wide variety of mechanisms, which may be classified as follows, although the categories are not mutually exclusive: (1) generation of reactive oxygen species (Fe, Cu), (2) binding to nucleophilic groups in proteins (Cd, Pb), (3) binding to DNA (Cr, Cd), (4) binding to ion channels or membrane pumps (Pb, Cd), (5) interaction with the function of essential cellular moieties such as phosphate, sulfhydryl groups, iron or calcium (As, Cd, Al, Pb). In addition, these mechanisms may act simultaneously and interact with each other. There are interesting species patterns of susceptibility to metals: e.g. mammals are hardly susceptible to zinc, while plants and crustaceans are; earthworms, gastropods and fungi are quite sensitive to copper, but terrestrial vertebrates are not. In this section we discuss five different categories of metal toxicity as well as some patterns of species differences in sensitivity to metals. Generation of reactive oxygen species Reactive oxygen species (ROS) are activated forms of oxygen that have one or more unpaired electrons in the outer orbit. The best known are superoxide anion (O2-), singlet oxygen (1ΔgO2), hydrogen peroxide (H2O2) and hydroxyl radical (•OH) (see the section on Oxidative stress). Several metals, most notably iron and copper, are effective catalyzers of the formation of reactive oxygen species. This relates to their capacity to engage in redox reactions with transfer of one electron.
One of the most famous of these reactions is the so-called Fenton reaction, catalyzed by reduced iron and copper ions:

Fe2+ + H2O2 → Fe3+ + •OH + OH-
Cu+ + H2O2 → Cu2+ + •OH + OH-

Both reactions produce the highly reactive hydroxyl radical (•OH), which may trigger severe cellular damage by peroxidation of membrane lipids (see the section on Oxidative Stress). Very low concentrations of metal ions can keep this reaction running, because the reduced forms of the metal ions are restored by a second reaction with hydrogen peroxide:

Fe3+ + H2O2 → Fe2+ + O2- + 2H+
Cu2+ + H2O2 → Cu+ + O2- + 2H+

The overall reaction is a metal-catalyzed degradation of hydrogen peroxide, producing superoxide anion and hydroxyl radical as intermediates. Oxidative stress is one of the most important mechanisms of metal toxicity. This can also be deduced from the metal-induced transcriptome: gene expression profiling has shown that it is not uncommon for more than 10% of the genome to respond to sublethal concentrations of cadmium.
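The catalytic character of the Fenton cycle above can be made concrete with a minimal numerical sketch in Python; the rate constants and concentrations are illustrative orders of magnitude only (not measured values), and the integration is a simple forward-Euler scheme:

```python
# Minimal sketch of the catalytic Fenton cycle with illustrative
# rate constants; the point is the turnover, not the exact numbers.

k1 = 76.0   # step 1: Fe2+ + H2O2 -> Fe3+ + hydroxyl radical + OH-
k2 = 1.0    # step 2: Fe3+ + H2O2 -> Fe2+ + superoxide + 2H+

fe2, fe3 = 1.0e-6, 0.0    # 1 uM total iron, initially all reduced
h2o2 = 1.0e-3             # 1 mM hydrogen peroxide
oh_total = 0.0            # cumulative hydroxyl radical produced (M)
dt = 0.01                 # time step (s)

for _ in range(int(3600 / dt)):       # simulate about one hour
    r1 = k1 * fe2 * h2o2              # radical-producing step
    r2 = k2 * fe3 * h2o2              # regeneration of Fe2+
    fe2 += (r2 - r1) * dt
    fe3 += (r1 - r2) * dt
    h2o2 -= (r1 + r2) * dt
    oh_total += r1 * dt

print(f"cumulative hydroxyl radical: {oh_total:.1e} M, "
      f"i.e. {oh_total / 1.0e-6:.1f}x the total iron pool")
```

With these illustrative numbers the cumulative hydroxyl radical production over one hour amounts to several times the total iron pool: each iron ion is oxidized and re-reduced over and over, which is why trace amounts of redox-active metals suffice to sustain oxidative stress.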
Protein binding Several metals have a great affinity for sulfhydryl (-SH) groups in the cysteine residues of proteins. Binding to such groups may distort the secondary structure of a protein at sites where SH-groups coordinate to form S-S bridges. The SH-group is a typical example of a nucleophile, that is, a group that easily donates an electron pair to form a chemical bond. The group that accepts the electron pair is called an electrophile. Another amino acid in a protein to which metals preferentially bind is the imidazole side-chain of histidine. This heterocyclic aromatic group with two nitrogen atoms easily engages in chemical bonds with metal ions. In fact, histidine residues are often used in metalloproteins to coordinate metals at the active site and to transport metals from the roots upwards through the xylem vessels of plants. A classical case of metal-protein interaction with subsequent toxicity is the binding of lead to δ-aminolevulinic acid dehydratase (δ-ALAD). This is an enzyme involved in the synthesis of hemoglobin. It catalyzes the second step in the biosynthetic pathway, the condensation of two molecules of δ-aminolevulinic acid to one molecule of porphobilinogen, which is a precursor of porphyrin, the functional unit binding iron in hemoglobin (Figure 1). The enzyme has several sulfhydryl groups that are susceptible to lead. In the erythrocyte more than 80% of lead is in fact bound to the δ-ALAD protein (much more than is bound to hemoglobin). Inhibition of δ-ALAD leads to decreased porphyrin synthesis, insufficient hemoglobin, loss of oxygen uptake capacity, and eventually anemia. Because the inhibition of δ-ALAD by lead occurs already at very low exposure levels, it makes a very good biomarker for lead exposure. Measurement of δ-ALAD activity in blood has been conducted extensively in workers of metal-processing industries and in people living in metal-polluted environments. Also in fish, birds and several invertebrates (earthworms, planarians) the δ-ALAD assay has been shown to be a useful biomarker of lead exposure. In addition to lead, mercury is known to inhibit δ-ALAD, while the inhibition by both lead and mercury can be alleviated to some extent by zinc. DNA binding Chromium, especially as the trivalent (Cr3+) and hexavalent (Cr6+) ions, is the most notorious metal known to bind to DNA. Both trivalent and hexavalent chromium may cause mutations, and hexavalent chromium is also a known carcinogen. Although the salts of Cr6+ are only slightly soluble, the reactivity of the Cr6+ ion is so pronounced that even a small amount of hexavalent chromium salt is dangerous. The genotoxicity of trivalent chromium is due to the formation of crosslinks between proteins and DNA. Any DNA molecule is surrounded by proteins (histones, regulatory proteins, chromatin). Cr3+ binds to amino acids such as cysteine, histidine and glutamic acid on the one hand, and to the phosphate groups in DNA on the other, without any preference for a specific nucleotide (base). The result is a covalent bond between DNA and a protein that will inhibit transcription or regulatory functions of the DNA segment involved. Another metal known to interact with DNA is nickel. Although the primary effect of nickel is the induction of allergic reactions, it is also a known carcinogen. The exact molecular mechanism is not as well known as in the case of chromium. Nickel could crosslink proteins and DNA in the same way as chromium, but it has also been argued that nickel's carcinogenicity is due to oxidative stress, resulting in DNA damage. Another suggested mechanism is that nickel could interfere with the DNA repair system. Inhibition of ion pumps Many metals may compete with essential metals during uptake or transport across membranes. A well-known case is the competition between calcium and cadmium at the Ca2+-ATPase pump in the basal membrane of fish gills (Figure 2). The gills of fish are a target for many water-borne toxic compounds because of their large contact area with the water, consisting of several membranes, each with infoldings to increase the surface area, and also because of their high metabolic activity, which stems from their important regulatory functions (uptake of oxygen, uptake of nutrients and osmoregulation). The single-layered epithelium has two types of cells, one active in osmoregulation (chloride cells) and one active in the transport of nutrients and oxygen (respiratory cells). There are strong tight junctions between these cells to ensure complete impermeability of the epithelium to ions. The apical membrane of the respiratory cells has many uptake pumps and channels (Figure 2). Calcium enters the cells through a calcium channel (without energetic costs, following the gradient). The intracellular calcium concentration is regulated by a calcium-ATPase in the basal membrane, which pumps calcium out of the epithelial cells into the blood. Figure in preparation Figure 2. Schematic representation of the cells in a fish gill epithelium, showing the fluxes of calcium and cadmium. Calcium enters the cell through calcium channels on the apical side, and is pumped out of the cells into the circulation by a calcium ATPase in the basal membrane. Cadmium ions enter the cells also through the calcium channels, but inhibit the basal calcium ATPase, causing hypocalcemia in the rest of the body. m = mucosa (apical side), s = serosa (basal side), BP = binding protein, mito = mitochondrion, ER = endoplasmic reticulum. From Verbost et al. (1989). Water-borne cadmium ions, which resemble calcium ions in their ionic radius, enter the cell through the same apical calcium channels, but subsequently inhibit the basal membrane calcium transporter by direct competition with calcium for the binding site on the ATPase. The consequence is an accumulation of calcium in the respiratory cells, and a lack of calcium in the body of the fish, which causes a variety of secondary effects, among others hormonal disturbance, while a severe decline of plasma calcium is a direct cause of mortality. This effect of cadmium occurs at very low concentrations (nanomolar range), and it explains the high toxicity of this metal to fish. Similar cadmium-induced hypocalcemia mechanisms are present in the gill membranes of crustaceans and most likely also in the gut epithelium cells of many other species.
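How nanomolar cadmium can cripple the pump follows directly from competitive inhibition kinetics. A minimal sketch in Python, assuming simple Michaelis-Menten kinetics with a competitive inhibitor; the Vmax, Km and Ki values are assumed for illustration only, not measurements from fish gills:

```python
# Competitive inhibition of the basolateral Ca2+-ATPase by Cd2+.
# All constants are assumed, illustrative values (mol/L).

def pump_rate(ca, cd, vmax=1.0, km=0.25e-6, ki=1.0e-9):
    """Relative Ca2+ extrusion rate with Cd2+ competing with Ca2+
    for the same binding site."""
    return vmax * ca / (km * (1.0 + cd / ki) + ca)

ca = 0.1e-6  # intracellular free Ca2+, 100 nM (typical order of magnitude)
for cd in (0.0, 1e-9, 10e-9, 100e-9):
    print(f"Cd2+ = {cd * 1e9:5.1f} nM -> relative pump rate "
          f"{pump_rate(ca, cd):.3f}")
```

With the assumed nanomolar inhibition constant, 1 nM Cd2+ already reduces the extrusion rate by roughly 40%, and 100 nM nearly abolishes it, consistent with hypocalcemia developing at very low water-borne cadmium concentrations.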
Interaction with essential cellular constituents There are various cellular ligands outside proteins and DNA that may bind metals. Among these are organic acids (malate, citrate), free amino acids (histidine, cysteine), and glutathione. Metals may also interfere with the cellular functions of phosphate, iron, calcium or zinc, for example by replacing these elements at their normal binding sites in enzymes or other molecules. To illustrate a case of interaction with phosphate we shortly discuss the toxicity of arsenic. Arsenic is strictly speaking not a metal, since arsenic oxide may engage in both base-forming and acid-forming reactions. Together with antimony and four other, lesser-known elements, arsenic is classified as a "metalloid". Arsenic is a potent toxicant; arsenic trioxide (As2O3) is well known for its high mammalian toxicity and its use as a rodenticide and wood preservative. There are also therapeutic applications of arsenic trioxide, against certain leukemias, and arsenic is often used in homeopathic preparations. Arsenic compounds are easily transported throughout the body, also across the placental barrier in pregnant women. Arsenic can occur in two different valency states: arsenate (As5+) and arsenite (As3+). The terms are also used to indicate the oxy-salts, such as ferric arsenate, FeAsO4, and ferric arsenite, FeAsO3. Inside the body, arsenic may be present in the oxidized as well as the reduced state, depending on the conditions in the cell, and it is enzymatically converted from one state to the other by reductases and oxidases. It may also be methylated by methyltransferases. The two different forms of arsenic have quite different toxicity mechanisms. Arsenate, AsO43-, is a powerful analog of phosphate, while arsenite (AsO33-) reacts with SH-groups in proteins, like the metals discussed above. Arsenite is also a known carcinogen; the mechanism seems not to rely on DNA binding, as in the case of chromium, but on the induction of oxidative stress and interference with cellular signaling. The most common cause of chronic arsenic poisoning is inhibition of the enzyme glyceraldehyde phosphate dehydrogenase (GAPDH). This is a critical enzyme of glycolysis, converting glyceraldehyde-3-phosphate into 1,3-bisphosphoglycerate. However, in the presence of arsenate, GAPDH converts glyceraldehyde-3-phosphate into 1-arseno-3-phosphoglycerate. Arsenate in effect acts as a phosphate analog to "fool" the enzyme. The product 1-arseno-3-phosphoglycerate does not engage in the next glycolytic reaction, which normally produces one ATP molecule, but falls back to arsenate and 3-phosphoglycerate without the production of ATP, while the released arsenate can act again on the enzyme in a cyclical manner. The result is that the glycolytic pathway is uncoupled from ATP production. Needless to say, this signifies a severe and often fatal inhibition of energy metabolism.
Species patterns of metal susceptibility Animals, plants, fungi, protists and prokaryotes all differ greatly in their susceptibility to metals. To give a few examples: • Earthworms and snails are known to be quite sensitive to copper; the absence of earthworms in orchards and vineyards where copper-containing fungicides are used is well documented. Snails cannot be cultured in water that runs through copper-containing linings. Fungi are also sensitive to copper, which explains the use of copper in fungicides. Many plants are likewise sensitive to copper, due to its effects on root growth. Among vertebrates, sheep are quite sensitive to copper, unlike most other mammals. • Crustaceans as well as fish are relatively sensitive to zinc. Mammals, however, are hardly sensitive to zinc at all. • Humans are relatively sensitive to lead, because high lead exposure disturbs the development of the brain in children and is correlated with low IQ scores. Most invertebrates, however, are quite insensitive to lead. • Although many invertebrates are quite sensitive to cadmium, the interspecies variation in sensitivity to this element is particularly high, even within the same phylogenetic lineage. The soil-living oribatid mite Platynothrus peltifer is one of the most sensitive invertebrates with respect to the effect of cadmium on reproduction, whereas Oppia nitens, also an oribatid, is extremely tolerant to cadmium. In the end, such patterns must be explained in terms of the presence of susceptible biochemical targets, different strategies for storage and excretion, and differing mechanisms of defence and sequestration. However, at the moment there is no general framework by which to compare the variation in sensitivity across species. Also, there is no relation between accumulation and susceptibility; some species that accumulate metals to a large degree (e.g. copper in isopods) are not sensitive to the same metal, while others, which do not accumulate the metal, are quite sensitive. Accumulation seems to be partly related to a species' feeding strategy (e.g. spiders absorb almost all the (fluid) food they take in, and any metals in the food will accumulate in the midgut gland); accumulation is also related to specific nutrient requirements (e.g. copper in isopods, manganese in some oribatid mites). Finally, some populations of some species have evolved specific tolerances in response to living in a metal-contaminated environment, on top of the already existing accumulation and detoxification strategies. Conclusion Metals do not form a homogeneous group. Their toxicity involves reactivity towards a great variety of biochemical targets. Often several mechanisms act simultaneously and interact with each other. Induction of oxidative stress is a common denominator, as is reactivity towards nucleophilic groups in macromolecules. The great variety of metal-induced responses makes them interesting model compounds for toxicological studies. References Cameron, K.S., Buchner, V., Tchounwou, P.B. (2011). Exploring the molecular mechanisms of nickel-induced genotoxicity and carcinogenicity: a literature review. Reviews of Environmental Health 26, 81-92. Ernst, W.H.O., Joosse-van Damme, E.N.G. (1983). Umweltbelastung durch Mineralstoffe. Fischer Verlag, Jena. Singh, A.P., Goel, R.K., Kaur, T. (2011). Mechanisms pertaining to arsenic toxicity. Toxicology International 18, 87-93. Verbost, P.M. (1989). Cadmium toxicity: interaction of cadmium with cellular calcium transport mechanisms. Ph.D. thesis, Radboud Universiteit Nijmegen.
4.2.11. Question 1 Mention three different classes of primary lesions caused by free metal ions that lead to metal toxicity. Include the metals that are best known for causing each type of lesion. 4.2.11. Question 2 Is it possible to decide to which type of metal a cell is exposed, based on the kind of cellular disturbance that is observed? 4.2.11. Question 3 Several invertebrates accumulate metals to a very high degree. Mention a few examples and the metals they accumulate. Are these animals also among the most sensitive to metal toxicity? Please explain. 4.2.11. Question 4 "Essential metals can be regulated and are therefore less toxic than xenobiotic metals" - Please comment on this thesis. 4.2.12. Metal tolerance Author: Nico M. van Straalen Reviewers: Henk Schat, Jaco Vangronsveld Learning objectives You should be able to • describe which mechanisms of changes in metal trafficking can contribute to metal tolerance and hyperaccumulation • explain the molecular factors associated with the evolution of metal tolerance in plants and in animals • develop an opinion on the issue of "rescue from pollution by evolution" in the risk assessment of heavy metals Keywords: hyperaccumulation, metal uptake mechanisms, microevolution Synopsis Some species of plants and animals have evolved metal-tolerant populations that can survive exposures that are lethal to other populations of the same species. Best known is the heavy metal vegetation that grows on metalliferous soils. The study of these cases of "evolution in action" has revealed many aspects of metal trafficking in plants, transport across membranes, metal-scavenging molecules in the cell, and subcellular distribution of metals, and has shown how these processes have been adapted by natural selection for tolerance. Metal-tolerant plant varieties are usually dependent upon high metal concentrations in the soil and do not grow well in reference soils. In addition, some plant species show an extreme degree of metal accumulation. In animals, metal tolerance has been demonstrated in some invertebrates that live in close contact with metal-containing soils; it is usually achieved by altered regulation of metal-scavenging proteins such as metallothioneins, or by duplication of the corresponding genes. Genomics studies are broadening our perspective, as the adaptation normally does not rely on a single gene but also involves hypostatic factors and modifiers. Introduction As metals cannot be degraded or metabolized, the only way to deal with a potentially toxic excess is to store or excrete it. Often both mechanisms are operational, excretion being preceded by storage or scavenging, but animals and plants differ greatly in the emphasis on one or the other mechanism. Both essential and nonessential metals are subject to all kinds of trafficking mechanisms that aim to keep the biologically active, free ion concentration of the metal extremely low. Still, there is hardly any relationship between accumulation and tolerance. Some species have low tissue concentrations and are sensitive, others have low tissue concentrations and are tolerant, some accumulate metals and suffer from the high concentrations, others accumulate and are extremely tolerant. Like the mechanisms of biotransformation (see the section on Genetic Variation), metal trafficking mechanisms show genetic variation, and such variation may be subject to evolution. However, it has to be noted that metal-tolerant populations have evolved in only a limited number of plant and animal species.
This may be due to the fact that the evolution of metal tolerance makes use of already existing, moderately efficient, metal trafficking mechanisms in the ancestral species. This interpretation is suggested by the observation that the non-metal-tolerant varieties of metal-tolerant plants already have a certain degree of metal tolerance (larger than that of species that never evolve metal-tolerant varieties). So the mutational distance to metal tolerance was smaller in the ancestors of metal-tolerant plants than it is in "normal" plants. Real metal tolerance, where the metal-tolerant population can withstand exposures orders of magnitude larger than reference populations can, and has become dependent on metal-rich soils, is only found in plants. Metal tolerance in animals is a matter of degree, rather than of kind, and does not come with externally recognizable phenotypes. Most likely the combination of strong selection pressure, the impossibility to escape by locomotion and the right pre-existing genetic variation explains why metal tolerance in plants is so much more prominent than in animals. In this section we will discuss the various mechanisms that have been shown to underlie metal tolerance. The evolutionary response to environmental metal exposure is one of the classical examples of "evolution in action", next to insecticide resistance in mosquitoes and industrial melanism in butterflies. Metal tolerance in plants For many years, most likely already since humans started to dig ores and use metals for the manufacture of utensils, pottery and tools, it has been known that naturally metal-rich soils harbour a specific metal-tolerant vegetation. This "Schwermetallvegetation", described in the classical book by the German-Dutch botanist W.H.O. Ernst, consists of a designated collection of plant species, with representatives from various families. Several species also have metal-sensitive populations living in normal soils, but some, like the European zinc violet, Viola calaminaria, are restricted to metal-rich soils. This is also seen in the metal-tolerant vegetations of New Caledonia, Cuba, Zimbabwe and Congo, which to a large degree consist of endemic metal-tolerant species (true metallophytes) that are never found in normal soils. However, some common species have also developed metal-tolerant ecotypes. Metal-tolerant plant species expanded their range when humans started to dig the metal ores, and can now also be found extensively at mining sites, on metal-enriched stream banks, and around metal smelters. Naturally metal-enriched soils differ from reference soils not only in metal concentration but also in other aspects, e.g. calcium content and moisture, so the selection for metal tolerance goes hand-in-hand with selection by several other factors. Metal tolerance is mainly restricted to herbs and forbs and, except on some tropical serpentine soils, does not extend to trees. A heavy metal vegetation is recognizable in the landscape as a "meadow", lacking trees, with relatively few plant species and an abundance of metallophytes. In the past, metal ores were discovered from the presence of such metallophytes, an activity called bioprospecting. We know from biochemistry that different metals are bound to different ligands and follow different biochemical pathways in biological tissues (see the section on metal accumulation). Some metals (cadmium, copper, mercury) are "sulphur-seekers", others have an affinity for organic acids (zinc), and still others tend to be associated with calcium-rich tissues (lead).
Essential metals such as copper, zinc and iron have their own, metal-specific, transport mechanisms. From these observations one may conclude that metal tolerance will also be specific to the metal, and that cross-tolerance (tolerance directed to one metal causing tolerance to another metal as a side-effect) is relatively rare. This is indeed the case. In many cases metal-tolerant plants do not show the same growth characteristics as the non-tolerant varieties of the same species. Loss of growth potential has often been interpreted as a "cost of tolerance". However, genetic research has shown that the lower growth potential of metallophytes is a separate adaptation, to deal with the usually infertile metalliferous soils, and that there is no mechanistic link to tolerance. Metabolic costs or negative pleiotropic effects of metal tolerance have not been described. The fact that metal-tolerant plants do not grow well in clean soils is explained by the constitutive upregulation of trafficking and compartmentalization mechanisms, causing increased metal requirements that cannot be met on non-metalliferous soils. Another striking fact is that metal tolerances in the same plant species at different sites have evolved independently from each other. The various metal-tolerant populations of a species do not all descend from a single ancestral population, but result from repeated local evolution. That sometimes the same loci are nevertheless affected by natural selection in different populations is ascribed to the fact that, given the species' genetic background, there are only a limited number of avenues to metal tolerance. A final general principle is that metal tolerance in plants is often targeted at proteins that transport metals across membranes (cell membrane, tonoplast). The genes of such transporters may be duplicated, the balance between high-affinity transporters and low-affinity versions may be altered, their expression may be upregulated or downregulated, or the proteins may be targeted to different cellular compartments. Although many details of the genetic changes responsible for tolerance in plants are still lacking, the work on copper tolerance in bladder campion, Silene vulgaris, illustrates many of the points listed above. This plant has many metal-tolerant populations, of which one found at Imsbach, Germany, shows an extreme degree of copper tolerance and also some (independently evolved) zinc and cadmium tolerance. The area is known for its historical mining activities ("Bergbau") for copper, silver and cobalt, but also has some older calamine deposits, which explains the zinc and cadmium tolerance. Genetic work by H. Schat and colleagues has shown that two ATP-driven copper transporters, designated HMA5I and HMA5II, are involved in copper tolerance of Silene. The HMA5I protein resides in the tonoplast to relocate copper into the vacuole, while HMA5II resides in the endoplasmic reticulum. When free copper ions appear in the cell, HMA5II relocates from the ER to the cell membrane and starts pumping copper out of the cell. During transport from roots to shoot (in the xylem vessels) copper is bound as a nicotianamine complex. In addition, plant metallothioneins play a role in copper binding and transport in the phloem and during redistribution from senescent leaves. Copper tolerance in Silene illustrates the principle referred to above, that metal tolerance is achieved by enhancing the transport mechanisms already present, not by evolving new genes.
Metal hyperaccumulation Some plants accumulate metals to an extreme degree. Well known are metallophytes growing on serpentine soils, which accumulate very large amounts of nickel. Copper and cobalt accumulation is also observed in several plant species. Hyperaccumulators do not exclude metals but preferentially accumulate them when the concentration in the soil is extremely high (> 50,000 mg of copper per kg soil). The copper concentration of the leaves may then reach values of more than 1000 μg/g. A very extreme example is a tree species, Sebertia acuminata, growing on the island of New Caledonia in ultramafic soil with 0.85% of nickel, which produces a latex containing 11% of nickel by weight. Such extraordinarily high concentrations impose extreme demands on the efficiency of metal trafficking and so have attracted the attention of biological investigators. In Western Europe's heavy metal vegetation, zinc accumulators are present in several species of the genera Agrostis, Brassica, Thlaspi and Silene. Most of the experimental research is conducted on the brassicacean species Noccaea (Thlaspi) caerulescens and Arabidopsis halleri, with Arabidopsis thaliana as a non-accumulating reference model. The transport of metals in a plant involves a number of distinct steps, where each step is upregulated in the metal hyperaccumulator. This is illustrated in Figure 1 in Verbruggen et al. (2009) for zinc hyperaccumulation in Thlaspi caerulescens: • Uptake in root epithelial cells; this involves ZIP4 and IRT1 zinc transporters • Transport between root tissues • Loading of the root xylem, by means of HMA4 and other metal transporters • In the xylem zinc may be chelated by citrate, histidine or nicotianamine, or may just be present as free ions • Unloading of the xylem in the leaves; this involves YSL proteins and others • Transport into vacuoles and chelation to vacuole-specific chelators such as malate, involving metal transporters such as HMA3, MTP1 and MHX. While the basic components of the system are beginning to be known, the question how the whole machinery is upregulated in a coherent fashion has not yet been answered. Metal tolerance in animals Also in animals, metal-tolerant populations of the same species have been reported; however, there is no specific metal-tolerant community with a designated set of species, as in plants. There are, however, obvious metal accumulators among animals. Best known are terrestrial isopods, which accumulate very high concentrations of copper in designated cells in their hepatopancreas, and some species of oribatid mites, which accumulate very high amounts of manganese and zinc. One of the factors investigated to explain metal tolerance in animals is a metal-binding protein, metallothionein (MT). Gene duplication of an MT gene has been implicated in the tolerance of Daphnia and Drosophila to copper. In addition, metal tolerance may be due to altered transcriptional regulation. The latter mechanism underlies the evolution of cadmium tolerance in the soil-living springtail Orchesella cincta. Detailed genetic analysis of this model system has revealed that the MT promoter of O. cincta shows a very large degree of polymorphism, with some alleles affecting the transcription factor binding sites and causing overexpression of MT. The promoter allele conferring strong overexpression of MT upon exposure to cadmium had a significantly higher frequency in O. cincta populations from metal-contaminated soils (Figure 2).
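Such a frequency shift is what simple population genetics predicts. A minimal sketch in Python of one-locus selection; the selection coefficient and starting frequency are illustrative choices, not estimates for O. cincta:

```python
# One-locus (genic) selection: the tolerance allele has relative
# fitness 1 + s; all values are illustrative, not field estimates.

p = 0.05   # starting frequency of the tolerance-conferring allele
s = 0.10   # selection advantage in the contaminated environment

generations = 0
while p < 0.90:
    p = p * (1 + s) / (1 + s * p)   # standard recursion; mean fitness 1 + s*p
    generations += 1

print(f"from 5% to 90% in {generations} generations")
```

With these numbers the allele rises from 5% to 90% in about 50 generations; for a soil arthropod with roughly one generation per year, this is well within the timescale of historical metal pollution.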
In addition to springtails, evolution of metal tolerance has also been described for the earthworm, Lumbricus rubellus. In a population living in a lead-contaminated, deserted mining area in Wales, two lineages were distinguished on the basis of the COI gene and RFLPs. Interestingly, the two lineages had colonized different microhabitats of the area, one of them being unable to survive high lead concentrations. Differential expression was noted for genes involved in phosphate and calcium metabolism. Two crucial mutations in a calcium transport protein suggested that lead tolerance in L. rubellus is due to modification of calcium transport, a logical target since lead and calcium are often found to interact with each other's transport (see the section on metal accumulation).
Conclusions
The study of metal tolerance is a rewarding topic of evolutionary ecotoxicology. Several crucial genetic mechanisms have been identified, but in none of the study systems is a complete picture of the evolved tolerance mechanisms available. It may be expected that genome-wide studies will be able to identify the full network responsible for tolerance, which most likely includes not only major genes, but also hypostatic factors and modifiers.
References
Andre, J., King, R.A., Stürzenbaum, S.R., Kille, P., Hodson, M.E., Morgan, A.J. (2010). Molecular genetic differentiation in earthworms inhabiting a heterogeneous Pb-polluted landscape. Environmental Pollution 158, 883-890.
Ernst, W.H.O. (1974). Schwermetallvegetation der Erde. Gustav Fischer Verlag, Stuttgart.
Janssens, T.K.S., Roelofs, D., Van Straalen, N.M. (2009). Molecular mechanisms of heavy metal tolerance and evolution in invertebrates. Insect Science 16, 3-18.
Krämer, U. (2010). Metal hyperaccumulation in plants. Annual Review of Plant Biology 61, 517-534.
Li, X., Iqbal, M., Zhang, Q., Spelt, C., Bliek, M., Hakvoort, H.W.J., Quatrocchio, F.M., Koes, R., Schat, H. (2017). Two Silene vulgaris copper transporters residing in different cellular compartments confer copper hypertolerance by distinct mechanisms when expressed in Arabidopsis thaliana. New Phytologist 215, 1102-1114.
Lopes, I., Baird, D.J., Ribeiro, R. (2005). Genetically determined resistance to lethal levels of copper by Daphnia longispina: association with sublethal response and multiple/coresistance. Environmental Toxicology and Chemistry 24, 1414-1419.
Van Straalen, N.M., Janssens, T.K.S., Roelofs, D. (2011). Micro-evolution of toxicant tolerance: from single genes to the genome's tangled bank. Ecotoxicology 20, 574-579.
Verbruggen, N., Hermans, C., Schat, H. (2009). Molecular mechanisms of metal hyperaccumulation in plants. New Phytologist 181, 759-776.
4.2.12. Question 1
Describe the pathway of zinc ions taken up by plants from the soil solution to their final destination in the plant, and how the various steps have been modified in hyperaccumulating plant species such as Arabidopsis halleri.
4.2.12. Question 2
Discuss the difference between trans-regulatory change and cis-regulatory change in the evolution of metal tolerance.
4.2.13. Adverse Outcome Pathways
Author: Dick Roelofs
Reviewers: Nico van Straalen, Dries Knapen
Learning objectives: You should be able to
• explain the concept of adverse outcome pathway
• interpret a graphical representation of an AOP
• search the AOPwiki database for molecular initiating events, key events and adverse outcomes
Keywords: Molecular initiating event, key event, in vitro assay, high throughput assay, pathway
Introduction
Over the past two decades the availability of molecular, biochemical and genomics data has increased exponentially. Data are now available for a phylogenetically broad range of living organisms, from prokaryotes to humans. This has tremendously advanced our knowledge and mechanistic understanding of biological systems, which is highly beneficial for different fields of biological research such as genetics, evolutionary biology and agricultural sciences. Being an applied biological science, toxicology has not yet tapped this wealth of data, because it is difficult to incorporate mechanistic data when assessing chemical safety in relation to human health and the environment. However, society is increasingly concerned about the release of industrial chemicals with little or no hazard or risk information. Consequently, a much larger number of chemicals needs to be considered for potential adverse effects on human health and ecosystem functioning. To meet this challenge it is necessary to deploy fast, cost-effective and high-throughput approaches that can predict the potential toxicity of substances and replace traditional tests based on survival and reproduction, which run for weeks or months and often are quite labour-intensive. A major challenge, however, is to link these fast in vitro and in vivo assays to endpoints used in current risk assessment. This challenge was picked up by defining the adverse outcome pathway (AOP) framework, first proposed by Gerald Ankley and co-workers from the United States Environmental Protection Agency, US-EPA (Ankley et al., 2010).
The framework
The AOP framework is defined as an evolution of prior pathway-based concepts, most notably mechanisms and modes of action, for assembling and depicting toxicological data across biological levels of organization (Ankley and Edwards, 2018). An AOP is a graphical representation of a series of measurable key events (KEs). A key event is a measurable directional change in the state of a biological process. KEs can be linked to one another through key event relationships (KERs; see Figure 1). The first KE is depicted as the "molecular initiating event" (MIE), and represents the interaction of the chemical with a biological receptor that activates subsequent key events. The key event relationships should ideally be based on causal evidence. A cascade of key events can eventually result in an adverse outcome (AO) at the individual or population level. The MIE and AO are specialized KEs, but are treated like any other KE in the AOP framework. The aim of an AOP is to represent and describe, in a simplified way, how responses at the molecular and cellular level are translated into impacts on development, reproduction and survival, which are relevant endpoints in risk assessment (Villeneuve et al., 2014). Five core concepts have been defined in the development of AOPs:
1. AOPs are not chemical-specific; they are biological pathways;
2. AOPs are modular; they refer to a designated and defined metabolic cascade, even if that cascade interacts with other biological processes;
3. individual AOPs are developed as pragmatic units;
4. networks of multiple AOPs sharing KEs and KERs are the functional units of prediction for real-world scenarios; and
5. AOPs are living documents that may change over time based on new scientific insights.
Generally, AOPs are simplified linear pathways, but different AOPs can be organized in networks with shared nodes. The AOP networks are actually the functional units of prediction, because they represent the complex biological interactions that occur in response to exposure to a toxicant or a mixture of toxicants. Analysis of the intersections (shared key events) of different AOPs making up a network can reveal unexpected biological connections (Villeneuve et al., 2014).
Molecular initiating events and key events
Typically, an AOP consists of only one MIE and one AO, connected to each other by a potentially unlimited number of KEs and KERs. The MIE is considered to be the first anchor of an AOP at the molecular level, where stressors directly interact with the biological receptor. Identification of the MIE mostly relies on chemical analysis, in silico analysis or in chemico and in vitro data. For instance, the MIE for AOPs related to estrogen receptor activation involves the binding of chemicals to the estrogen receptor, thereby triggering a cascade of effects in hormone-related metabolism (see the section on Endocrine disruption). The MIE for AOPs related to skin sensitization (see below) involves the covalent binding of chemicals to proteins in skin cells, an event called haptenization (Vinken, 2013). A wide range of biological data can support the understanding of KEs. Usually, early KEs (directly linked to MIEs) are assessed using in vitro assays, but may include in vivo data at the cellular level, while intermediate and late KEs rely on tissue-, organ- or whole-organism measurements (Figure 1). Key event measurements are also related to data from high-throughput screening and/or data generated by different -omics technologies. This is actually where the true value of the AOP framework comes in, since it is currently the only framework able to reach such a high level of data integration in the context of risk assessment. It is even possible to integrate comparative data from phylogenetically divergent organisms into key event measurements, valid across species, which could facilitate the evaluation of species sensitivity (LaLone et al., 2018). The final AO is usually represented by apical responses that are already described in accepted standard test guidelines and instrumental in regulatory decision-making, including endpoints such as development, growth, reproduction and survival. Development of the AOP framework is currently supported by the US Environmental Protection Agency, the EU Joint Research Centre (JRC) and the Organization for Economic Co-operation and Development (OECD). Moreover, OECD has sponsored the development of an open-access searchable database, AOPWiki (https://aopwiki.org/), comprising over 250 AOPs with associated MIEs, KEs and KERs, and more than 400 stressors. New AOPs are added regularly. The database also has a system for specifying the confidence to be placed in an AOP. Where KEs and KERs are supported by direct, specifically designed experimental evidence, high confidence is placed in them. In other cases confidence is considered moderate or low, e.g. when there is a lack of supporting data or conflicting evidence.
Case example: covalent protein binding leading to skin sensitization (AOP40)
Skin sensitization is characterized by a two-step process, a sensitization phase and an elicitation phase.
Upon first contact, electrophilic compounds covalently modify skin proteins and generate an immunological memory through the formation of antigen/allergen-specific T-cells. During the elicitation phase, repeated contact with the compound elicits the allergic reaction defined as allergic contact dermatitis, which usually develops into a lifelong effect. This is an important endpoint for the safety assessment of personal care products, traditionally evaluated by in vivo assays. In response to changing public opinion, the European Chemicals Agency (ECHA) decided to move away from whole-animal skin tests and developed alternative assessment strategies. During sensitization, the MIE takes place when the chemical enters the skin, where it forms a stable complex with skin-specific carrier proteins (hapten complexes), which are immunogenic. A subsequent KE comprises inflammation and oxidative defense via the Keap1/Nrf2 signalling pathway (Kelch-like ECH-associated protein 1 / nuclear factor erythroid 2 related factor 2). At the same time, a second KE is defined as dendritic cell activation and maturation. This results in movement of dendritic cells to the lymph nodes, where the hapten complex is presented to naive T-cells. The third KE describes the proliferation of hapten-specific T-cells and the subsequent movement of antigen-specific memory cells that circulate in the body. Upon a second contact with the compound, these memory T-cells secrete cytokines that cause an inflammation reaction leading to the AO, including red rash, blisters and burning skin (Vinken et al., 2017). This AOP is designated AOP40 in the database of adverse outcome pathways. A suite of high-throughput in vitro assays has now been developed to quantify the intermediate KEs in AOP40. These data formed the basis for the development of a Bayesian network analysis that can predict the potential for skin sensitization. This example highlights the use of pathway-derived data organized in an AOP, ultimately leading to an alternative fast screening method that may replace a conventional method using animal experiments.
References
Ankley, G.T., Bennett, R.S., Erickson, R.J., Hoff, D.J., Hornung, M.W., Johnson, R.D., Mount, D.R., Nichols, J.W., Russom, C.L., Schmieder, P.K., Serrano, J.A., Tietge, J.E., Villeneuve, D.L. (2010). Adverse outcome pathways: A conceptual framework to support ecotoxicology research and risk assessment. Environmental Toxicology and Chemistry 29, 730-741.
Ankley, G.T., Edwards, S.W. (2018). The adverse outcome pathway: A multifaceted framework supporting 21st century toxicology. Current Opinion in Toxicology 9, 1-7.
LaLone, C.A., Villeneuve, D.L., Doering, J.A., Blackwell, B.R., Transue, T.R., Simmons, C.W., Swintek, J., Degitz, S.J., Williams, A.J., Ankley, G.T. (2018). Evidence for cross species extrapolation of mammalian-based high-throughput screening assay results. Environmental Science and Technology 52, 13960-13971.
Villeneuve, D.L., Crump, D., Garcia-Reyero, N., Hecker, M., Hutchinson, T.H., LaLone, C.A., Landesmann, B., Lettieri, T., Munn, S., Nepelska, M., Ottinger, M.A., Vergauwen, L., Whelan, M. (2014). Adverse Outcome Pathway development I: Strategies and principles. Toxicological Sciences 142, 312-320.
Vinken, M. (2013). The adverse outcome pathway concept: A pragmatic tool in toxicology. Toxicology 312, 158-165.
Vinken, M., Knapen, D., Vergauwen, L., Hengstler, J.G., Angrish, M., Whelan, M. (2017). Adverse outcome pathways: a concise introduction for toxicologists. Archives of Toxicology 91, 3697-3707.
4.2.13. Question 1
What is a molecular initiating event?
4.2.13. Question 2
Where in the chain of events do high-throughput in vitro assays feed into AOPs?
4.2.13. Question 3
What is the ultimate goal of the AOP concept?
4.2.13. Question 4
How many intermediate key events are captured in the AOP for skin sensitization?
4.2.13. Question 5
Is an AOP always represented as a linear chain between MIE and AO?
4.2.14. Genetic variation in toxicant metabolism
Author: Nico M van Straalen
Reviewers: Andrew Whitehead, Frank van Belleghem
Learning objectives: You should be able to
• explain four different classes of CYP gene variation and expression, contributing to species differences
• explain the associations between biotransformation activity and specific ecologies
• explain how genetic variation in biotransformation enzymes may lead to evolution of toxicant tolerance
• describe the relevance of human genetic polymorphisms for personalized medicine
Keywords: toxicant susceptibility; genetic variation; biotransformation; evolution of toxicant tolerance
Assumed prior knowledge and related modules
• Biotransformation and internal processing of chemicals
• Defence mechanisms
• Genetic erosion
In addition, a basic knowledge of genetics and evolutionary biology is needed to understand this module.
Synopsis
Susceptibility to toxicants often shows inter-individual differences associated with genetic variation. While such differences are considered a nuisance in laboratory toxicity testing, they are an inextricable aspect of toxicant effects in the environment. Variation may be due to polymorphisms in the target site of toxicant action, but more often differences in metabolic enzymes and rates of excretion contribute to inter-individual variation. Variation in the structure of genes encoding metabolic enzymes, as well as polymorphisms in the promoter regions of such genes, are common sources of genetic variation. Under strong selection pressure, species may evolve toxicant-tolerant populations, for example insects becoming resistant to insecticides and bacteria to antibiotics. In human populations, polymorphisms in drug-metabolizing enzymes are mapped to provide a basis for personalized therapies. This module aims to illustrate some of the genetic principles explaining inter-individual variation in toxicant susceptibility and its evolutionary consequences.
Introduction
For a long time it has been known that human subjects may differ markedly in their responses to drugs: while some patients hardly respond to a certain dosage, others react vehemently. Similar differences exist between the sexes and between ethnic groups. To avoid failure of treatment on the one hand and overdosing on the other, such personal differences have attracted the interest of pharmacological scientists. The tendency to develop cancer upon exposure to mutagenic chemicals is also partly due to genetics. Since the rise of molecular ecology in the 1990s, ecotoxicologists have noted that inter-individual differences in toxicant responses also exist in the environment. Due to this genetic variation, environmental pollution may trigger evolutionary change in the wild.
From quantitative genetics we know that when a trait is due to many genes, each with an independent additive effect on the trait value, the response to selection, R, is linearly related to the selection differential, S, according to the formula: R = h2S, where h2 is a measure of the heritability of the selected trait (the fraction of additive genetic variance relative to total phenotypic variance). For example, if pollution removes the most sensitive individuals, such that the surviving parents exceed the original population mean by one unit of tolerance (S = 1), and h2 = 0.4, the mean tolerance of the offspring generation will shift upwards by R = 0.4 units. Since anthropogenic toxicants can act as very strong selective agents (large S), it is expected that whenever h2 > 0 there will be adaptation. However, the effectiveness of "evolutionary rescue" from pollution is limited to those species that have the appropriate genetic variation and the ability to quickly increase in population size.
Polymorphisms of drug metabolizing enzymes in humans
One of the most important enzyme systems contributing to the metabolism of xenobiotic chemicals is the cytochrome P450 family, a class of proteins located in the smooth endoplasmic reticulum of the cell and acting in co-operation with several other proteins. Cytochrome P450 will oxidize the substrate and enhance its water solubility (a phase I reaction), and in many cases activate it for further reactions involving conjugation with an endogenous compound (phase II reactions). These processes generally lead to detoxification and increased excretion of toxic substances. The biochemistry of drug metabolism is discussed in detail in the section on Xenobiotic metabolism and defence. The human genome has 57 genes encoding a P450 protein. The genes are commonly designated as "CYP". Other organisms, especially insects and plants, have many more CYPs. For example, the Drosophila genome encodes 83 functional P450 genes and the genome of the model plant Arabidopsis has 244 CYPs. Based on sequence similarity, CYPs are classified in 18 families and 43 subfamilies, but there is no agreement yet about the position of various CYP genes in lower invertebrates. The complexity is enhanced by duplications specific to certain evolutionary lineages, creating a complicated pattern of orthologs (homologs by descent from a common ancestor) and paralogs (homologs due to duplication in the same genome). In addition to functional enzymes it is also common to find many CYP pseudogenes in a genome. Pseudogenes are DNA sequences that resemble functional genes but are mutated such that they do not result in functional proteins. The expression of CYP enzymes is markedly tissue-specific. Often CYP expression is high in epithelial tissues (lung, intestine) and organs with designated metabolic activity (liver, kidney). In the human body, the liver is the main metabolic organ and is known for its extensive CYP expression. P450 enzymes also differ in their inducibility by classes of chemicals and in their substrate specificity. It is often assumed that the versatility of an organism's CYP genes is a reflection of its ecology. For example, herbivorous insects that consume plants of different kinds, with many different feeding repellents, must have a wide diversity of CYP genes at their disposal. It has also been shown that the activity of CYP enzymes among terrestrial organisms is, in general, higher than among aquatic organisms and that plant-eating birds have higher biotransformation activities than predatory birds. One of the best-investigated CYP genes, especially due to its strong inducibility and involvement in xenobiotic metabolism, is mammalian CYP1A1.
In humans, induction of this gene is associated with increased lung cancer risk from smoking, and with other cancers, such as breast cancer and prostate cancer. Human CYP1A1 is located on chromosome 15 and encodes a protein of 512 amino acids in seven exons (see Figure 2 in Zhou et al., 2009). About 133 single-nucleotide polymorphisms (SNPs, variations in a single nucleotide that occur at a specific position in the genome) have been described for this gene, of which 23 are non-synonymous (causing a substitution of an amino acid in the protein). Many of these SNPs have medical relevance. For example, a rather common SNP in exon 7 changes codon 462 from isoleucine into valine. The substituted allele is called CYP1A1*2A, and this occurs at a frequency of 19% in the Caucasian part of the human population. The allelic variant of the enzyme has a higher activity towards 17β-estradiol and is a risk factor for several types of cancer. However, the expression of such traits may vary from one population to another, and may also interact with other risk factors. For example, CYP1A1*2A is a risk factor for cervical cancer in women with a history of smoking in the Polish population, but the same SNP may not be a risk factor in another population or among people with a non-smoking lifestyle. In genetics these effects are known as epistasis: the phenotypic effect of genetic variation at one locus depends on the genotype at another locus. This is also an example of a genotype-by-environment interaction, where the phenotypic effect of a genetic variant depends on the environment (smoking habit). In toxicology it is known that polymorphisms of phase II biotransformation enzymes may significantly contribute to epistatic interactions with CYP genes. Unraveling all these complicated interactions is a very active field of research in human medical genetics.
Cytochrome P450 variation across species
Comparison of CYP genes in different species has revealed an enormously rapid evolution of this gene family, with many lineage-specific duplications. This indicates strong selective pressures imposed by the need to detoxify substances ingested with the diet. Herbivorous animals especially are constantly exposed to such compounds, synthesized by plants to deter feeding. We also see profound changes in CYP genes associated with evolutionary transitions such as the colonization of terrestrial habitats by the various lineages of arthropods. Such natural variation, induced by plant toxins and habitat requirements, is also relevant in the responses to toxicants. In general, variation of biotransformation enzymes can be classified into four main categories:
1. Variation in the structure of the genes, e.g. substitutions that alter the binding affinity to substrates; such variation discriminates the various CYP genes.
2. Copy number variation; duplication usually leads to an increase in enzymatic capacity; this process has been enormously important in CYP evolution. Because CYP gene duplications are often specific to the evolutionary lineage, a complicated pattern of paralogs (duplicates within the same genome) and orthologs (genes common by descent, shared with other species) arises.
3. Promoter variation, e.g. due to insertion of transposons or changes in the number or arrangement of transcription factor binding sites. This changes the amount of protein produced from one gene copy by altered transcriptional regulation.
4. Variation in the structure, action or activation of transcriptional regulators.
The transcription of biotransformation enzymes is usually induced by a signaling pathway activated by the compound to be metabolized (see the section on Xenobiotic metabolism and defence), and this pathway may show genetic variation. To illustrate the complicated evolution of biotransformation genes, we briefly discuss the CYPs of the common cormorant, Phalacrocorax carbo. This is a bird known for its narrow diet (fish) and extraordinary potential for accumulation of dioxin-related compounds (PCBs, PCDDs and PCDFs). Environmental toxicologists have identified two CYP1A genes in the cormorant, called CYP1A4 and CYP1A5. It turns out that CYP1A4 is homologous by descent (orthologous) to mammalian CYP1A1, while CYP1A5 is an ortholog of mammalian CYP1A2. However, the orthologies are not revealed by common phylogenetic analysis if the whole coding sequence is used in the alignment (see Figure 3 in Kubota et al., 2006). This is a consequence of a process called interparalog gene conversion, which tends to homogenize the DNA sequences of gene copies located on the same chromosome. This diminishes sequence variation between the paralogs and creates chimeric gene structures that are more similar to each other than expected from their phylogenetic relations. If a phylogenetic tree is made using a section of the gene that remained outside the gene conversion, the true phylogenetic relations are revealed (see Figure 3 in Kubota et al., 2006).
Cytochrome P450-mediated resistances
Cytochrome P450 polymorphisms are also implicated in certain types of insecticide resistance. There are many ways in which insects and other arthropods can become resistant, and several mechanisms may even be present in the same resistant strain. Target site alteration (making the target less susceptible to the insecticide, e.g. altered acetylcholinesterase, substitutions in the GABA receptor, etc.) seems to be the most likely mechanism for resistance; however, such changes often come with substantial costs, as they may diminish the natural function of the target (in genetics this is called pleiotropy). Increased metabolism does not usually entail such costs, and this is where cytochromes P450 come into play. A model system for investigating the genetics of such mechanisms is DDT resistance in the fruit fly, Drosophila melanogaster. In a DDT-resistant Drosophila strain, all CYP genes were screened for enhanced expression, and it was shown that DDT resistance was due to a highly upregulated variant of only a single gene, Cyp6g1. Further analysis showed that the gene's promoter carried an insertion with strong similarity to a transposable element of the Accord family. The insertion of this element causes a significant overexpression and a high rate of protein synthesis that allows the fly to quickly degrade a DDT dose. The fact that a simple change, in only one allele, can underlie such a distinctive phenotype as pesticide resistance is a remarkable lesson for molecular toxicology. A recent study on killifish, Fundulus heteroclitus, along the east coast of the United States has revealed a much more complicated pattern of resistance. Populations of these fish live in estuaries, some with severely polluted sediments containing high concentrations of polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs). Killifish from the polluted environments are much more resistant to toxicity from the model compounds PCB126 and benzo(a)pyrene. This resistance is related to mutations in the gene encoding the aryl hydrocarbon receptor (AHR), the protein that binds PAHs and certain PCB metabolites and activates CYP expression. Mutations in the aryl hydrocarbon receptor-interacting protein (AIP), a protein that combines with AHR to ensure binding of the ligand, also contribute to down-regulation of the CYP1A1 pathway. The net result is that killifish CYP1A1 shows only moderate induction by PCBs and PAHs, and the damaging effects of reactive metabolites are avoided. However, since direct knockdown of CYP1A1 does not provide resistance, it is still unclear whether the beneficial effects of the mutations in AHR actually act through an effect on CYP1A1. Interestingly, the various killifish populations show at least three different deletions in the AHR genes (Figure 1). In addition, the tolerant populations show various degrees of CYP1A1 duplication; in one population even eight paralogs are present. This can be interpreted as a compensatory adaptation ensuring a basal constitutive level of CYP1A1 protein to conduct routine metabolic activities. The killifish example shows a wonderful case of interplay between genetic tinkering and strong selection emanating from a polluted environment.
Conclusion
In this module we have focused on genetic variation in the phase I enzyme, cytochrome P450. A similar complexity lies behind the phase II enzymes and the various xenobiotic-induced transporters (phase III). Still, the P450 examples suffice to demonstrate that the machinery of xenobiotic metabolism shows a very large degree of genetic variation, as well as species differences due to duplications, deletions, gene conversion and lineage-specific selection. The variation resides in copy number, in coding sequences and in promoter or enhancer sequences affecting the expression of the enzymes. Such genetic variation is the template for evolution. In polluted environments enhanced expression is sometimes selected for (to neutralize toxic compounds), but sometimes attenuated expression is selected (to avoid production of toxic intermediates). In the human genome, many of the polymorphisms have a medical significance, determining a personal profile of drug metabolism and tendencies to develop cancer.
References
Bell, G. (2012). Evolutionary rescue and the limits of adaptation. Philosophical Transactions of the Royal Society B 368, 20120080.
Daborn, P.J., Yen, J.L., Bogwitz, M.R., Le Goff, G., Feil, E., Jeffers, S., Tijet, N., Perry, T., Heckel, D., Batterham, P., Feyereisen, R., Wilson, T.G., Ffrench-Constant, R.H. (2002). A single P450 allele associated with insecticide resistance in Drosophila. Science 297, 2253-2256.
Feyereisen, R. (1999). Insect P450 enzymes. Annual Review of Entomology 44, 507-533.
Goldstone, H.M.H., Stegeman, J.J. (2006). A revised evolutionary history of the CYP1A subfamily: gene duplication, gene conversion and positive selection. Journal of Molecular Evolution 62, 708-717.
Kubota, A., Iwata, H., Goldstone, H.M.H., Kim, E.-Y., Stegeman, J.J., Tanabe, S. (2006). Cytochrome P450 1A1 and 1A5 in common cormorant (Phalacrocorax carbo): evolutionary relationships and functional implications associated with dioxin and related compounds. Toxicological Sciences 92, 394-408.
Reid, N.M., Proestou, D.A., Clark, B.W., Warren, W.C., Colbourne, J.K., Shaw, J.R., Hahn, M., Nacci, D., Oleksiak, M.F., Crawford, D.L., Whitehead, A. (2016). The genomic landscape of rapid repeated evolutionary adaptation to toxic pollution in wild fish. Science 354, 1305-1308.
Preissner, S.C., Hoffmann, M.F., Preissner, R., Dunkel, R., Gewiess, A., Preissner, S. (2013). Polymorphic cytochrome P450 enzymes (CYPs) and their role in personalized therapy. PLoS ONE 8, e82562.
Roszak, A., Lianeri, M., Sowinska, A., Jagodzinski, P.P. (2014). CYP1A1 Ile462Val polymorphism as a risk factor in cervical cancer development in the Polish population. Molecular Diagnosis and Therapy 18, 445-450.
Taylor, M., Feyereisen, R. (1996). Molecular biology and evolution of resistance to toxicants. Molecular Biology and Evolution 13, 719-734.
Walker, C.H., Ronis, M.J. (1989). The monooxygenases of birds, reptiles and amphibians. Xenobiotica 19, 1111-1121.
Zhou, S.-F., Liu, J.-P., Chowbay, B. (2009). Polymorphism of human cytochrome P450 enzymes and its clinical impact. Drug Metabolism Reviews 41, 89-295.
4.2.14. Question 1
Comment upon: "In the human cytochrome P450 gene 1A1 133 single nucleotide polymorphisms have been described, of which 23 are non-synonymous".
• What is a "single nucleotide polymorphism" (SNP)?
• What is the difference between a synonymous SNP and a non-synonymous SNP?
• What could be the different consequences of nucleotide substitutions (1) in the promoter of CYP1A1, (2) in one of the introns of the CYP1A1 gene, and (3) in one of the exons of CYP1A1?
• Does a single amino acid change in the cytochrome P450 protein have any medical relevance?
4.2.14. Question 2
Some populations of killifish along the Atlantic coast of the U.S. show very high resistance against organic contaminants such as dioxin-like PCBs. Genetic research has shown that these resistant populations have a deletion in the gene encoding the aryl hydrocarbon receptor (AHR). Explain why such a mutation can cause resistance to dioxin-like PCBs.
4.2.14. Question 3
Comment upon the concept of "evolutionary rescue from pollution": the evolution of tolerance as a way to diminish the toxic effects of pollution.
4.3. Toxicity testing
Author: Kees van Gestel
Reviewer: Michiel Kraak
Learning objectives: You should be able to
• Mention the two general types of endpoints in toxicity tests
• Mention the main groups of test organisms used in environmental toxicology
• Mention different criteria determining the validity of toxicity tests
• Explain why toxicity testing may need a negative and a positive control
Keywords: single-species toxicity tests, test species selection, concentration-response relationships, endpoints, bioaccumulation testing, epidemiology, standardization, quality control, transcriptomics, metabolomics
Introduction
Laboratory toxicity tests may provide insight into the potential of chemicals to bioaccumulate in organisms and into their hazard, the latter usually being expressed as toxicity values derived from concentration-response relationships. Section 4.3.1 on Bioaccumulation testing describes how to perform tests to assess the bioaccumulation potential of chemicals in aquatic and terrestrial organisms, under static and dynamic exposure conditions. Basic to toxicity testing is the establishment of a concentration-response relationship, which relates the endpoint measured in the test organisms to exposure concentrations. Section 4.3.2 on Concentration-response relationships elaborates on the calculation of relevant toxicity parameters, like the median lethal concentration (LC50) and the median effective concentration (EC50), from such toxicity tests. It also discusses the pros and cons of different methods for analyzing data from toxicity tests. Several issues have to be addressed when designing toxicity tests that should enable assessing the environmental or human health hazard of chemicals. This concerns, among others, the selection of test organisms (see section 4.3.4 on the Selection of test organisms for ecotoxicity testing), exposure media, test conditions, test duration and endpoints, but also requires clear criteria for checking the quality of the toxicity tests performed (see below). Different whole-organism endpoints that are commonly used in standard toxicity tests, like survival, growth, reproduction or avoidance behaviour, are discussed in section 4.3.3 on Endpoints. Sections 4.3.4 to 4.3.7 focus on the selection and performance of tests with organisms representative of aquatic and terrestrial ecosystems. This includes microorganisms (section 4.3.6), plants (section 4.3.5), invertebrates (section 4.3.4) and vertebrate test organisms (e.g. fish: section 4.3.4 on ecotoxicity tests, and birds: section 4.3.7). Testing of vertebrates, including fish (section 4.3.4) and birds (section 4.3.7), is subject to strict regulations aimed at reducing the use of test animals. Data on the potential hazard of chemicals to human health therefore preferably have to be obtained in other ways, for example by using in vitro test methods (section 4.3.8), by using data from post-registration monitoring of exposed humans (section 4.3.9 on Human toxicity testing), or from epidemiological analyses of exposed humans (section 4.3.10).
Inclusion of novel endpoints in toxicity testing
Traditionally, toxicity tests focus on whole-organism endpoints, with survival, growth and reproduction being the most commonly measured parameters (section 4.3.3). In case of vertebrate toxicity testing, other endpoints may also be used, addressing effects at the level of organs or tissues (section 4.3.9 on human toxicity testing).
Behavioural endpoints (e.g. avoidance behaviour) and biochemical endpoints, like enzyme activity, are also regularly included in toxicity testing with vertebrates and invertebrates (sections 4.3.3, 4.3.4, 4.3.7, 4.3.9). With the rise of molecular biology, novel techniques have become available that may provide additional information on the effects of chemicals. Molecular tools may, for instance, be applied in molecular epidemiology (section 4.3.11) to find causal relationships between health effects and the exposure to chemicals. Toxicity testing may also use gene expression responses (transcriptomics; section 4.3.12) or changes in metabolism (metabolomics; section 4.3.13) in relation to chemical exposures to help unravel the mechanism(s) of action of chemicals. A major challenge still is to explain whole-organism effects from such molecular responses.
Standardization of tests
The standardization of tests is organized by international bodies like the Organization for Economic Co-operation and Development (OECD), the International Organization for Standardization (ISO), and ASTM International (formerly known as the American Society for Testing and Materials). Standardization aims at reducing variation in test outcomes by carefully describing the methods for culturing and handling the test organisms, the procedures for performing the test, the properties and composition of test media, the exposure conditions and the analysis of the data. Standardized test guidelines are usually based on extensive testing of a method by different laboratories in a so-called round-robin test. Regulatory bodies generally require that toxicity tests supporting the registration of new chemicals are performed according to internationally standardized test guidelines. In Europe, for instance, all toxicity tests submitted within the framework of REACH have to be performed according to the OECD guidelines for the testing of chemicals (see section on Regulation of chemicals).
Quality control of toxicity tests
Since toxicity tests are performed with living organisms, this inevitably leads to (biological) variation in outcomes. Coping with this variation requires the use of sufficient replication, careful test designs and a good choice of endpoints (section 4.3.3) to enable proper estimates of relevant toxicity data. In order to control the quality of the outcome of toxicity tests, several criteria have been developed, which mainly apply to the performance of the test organisms in the non-exposed controls. These criteria may, for example, require a minimum percentage survival of control organisms, a minimum growth rate or number of offspring produced by the controls, and limited variation (e.g. <30%) of the replicate control growth or reproduction data (sections 4.3.4, 4.3.5, 4.3.6, 4.3.7). When tests do not meet these criteria, the outcome is open to doubt, as, for instance, poor control survival makes it hard to draw sound conclusions on the effect of the test chemical on this endpoint. As a consequence, tests that do not meet these validity criteria may not be accepted by other scientists and by regulatory authorities. In case the test chemical is added to the test medium using a solvent, toxicity tests should also include a solvent control, in addition to a regular non-exposed control (see section 4.3.4 on the selection of test organisms for ecotoxicity testing).
In case the response in the solvent control differs significantly from that in the negative control, the solvent control will be used as the control for analyzing the effects of the test chemical. The negative control will then only be used to check if the validity criteria have been met and to monitor the condition of the test organisms. In case the responses in the negative control and the solvent control do not differ significantly, both controls can be pooled for the data analysis. Most test guidelines also require frequent testing of a positive control, a chemical with known toxicity, to check that the long-term culturing of the test organisms does not lead to changes in their sensitivity.
4.3. Question 1
What are the main endpoints in toxicity testing?
4.3. Question 2
Which are the main groups of organisms used in toxicity testing?
4.3. Question 3
Why is standardization of methods for toxicity testing required?
4.3. Question 4
Which elements are included in the quality control of toxicity tests?
4.3.1. Bioaccumulation testing
Author: Kees van Gestel
Reviewers: Joop Hermens, Michiel Kraak, Susana Loureiro
Learning objectives: You should be able to
• describe methods for determining the bioaccumulation of chemicals in terrestrial and aquatic organisms
• describe a test design suitable for assessing the bioaccumulation kinetics of chemicals in organisms
• mention the pros and cons of static and dynamic bioaccumulation tests
Keywords: bioconcentration, bioaccumulation, uptake and elimination kinetics, test methods, soil, water
Bioaccumulation is defined as the uptake of chemicals in organisms from the environment. The degree of bioaccumulation is usually indicated by the bioconcentration factor (BCF) in case the exposure is via water, or the biota-to-soil/sediment accumulation factor (BSAF) for exposure in soil or sediment (see section on Bioaccumulation). Because of the potential risk of food-chain transfer, experimental determination of the bioaccumulation potential of chemicals is usually required in case of a high lipophilicity (log Kow > 3), unless the chemical has a very low persistency. For very persistent chemicals, experimental determination of bioaccumulation potential may already be triggered at log Kow > 2. The experimental determination of BCF and BSAF values makes use of static or dynamic exposure systems. In static tests, the medium is dosed once with the test chemical, and organisms are exposed for a certain period of time, after which both the organisms and the test medium are analyzed for the test chemical. The BCF or BSAF is calculated from the measured concentrations. There are a few concerns with this way of bioaccumulation testing. First, exposure concentrations may decrease during the test, e.g. due to (bio)degradation, volatilization, sorption to the walls of the test container, or uptake of the test compound by the test organisms. As a consequence, the concentration in the test medium measured at the start of the test may not be indicative of the actual exposure during the test. To take this into account, exposure concentrations can be measured at the start and the end of the test and also at some intermediate time points. Body concentrations in the test organisms may then be related to time-weighted average (TWA) exposure concentrations; for a concentration declining exponentially from C0 to Ct over a period T, for instance, the TWA equals (C0 - Ct)/ln(C0/Ct). Alternatively, to overcome the problem of decreasing concentrations in aquatic test systems, continuous flow systems or passive dosing techniques can be applied.
Such methods, however, are not applicable to soil or sediment tests, where repeated transfer of organisms to freshly spiked medium is the only way to guarantee more or less constant exposure concentrations in case of rapidly degrading compounds. To avoid the uptake of the test chemical by the test organisms itself leading to decreasing exposure concentrations, the amount of biomass per volume or mass of test medium should be kept sufficiently low. Second, it is uncertain whether steady state or equilibrium is reached at the end of the exposure period. If this is not the case, the resulting BSAF or BCF values may underestimate the bioaccumulation potential of the chemical. To tackle this problem, a dynamic test may be run to assess the uptake and elimination rate constants, from which a BSAF or BCF value can be derived (see below). Such uncertainties also apply to BCF and BSAF values obtained by analyzing organisms collected from the field and comparing body concentrations with exposure levels in the environment. Data from field-exposed organisms carry, on the one hand, a large uncertainty, as it remains unclear whether equilibrium was reached; on the other hand, they do reflect exposure over time under fluctuating but realistic exposure conditions. Dynamic tests, also indicated as uptake/elimination or toxicokinetic tests, may overcome some, but not all, of the disadvantages of static tests. In dynamic tests, organisms are exposed for a certain period of time in spiked medium to assess the uptake of the chemical, after which they are transferred to clean medium for determining the elimination of the chemical. During both the uptake and the elimination phase, organisms are sampled at different points in time and analyzed for the test chemical. The medium is also sampled frequently to check for a possible decline of the exposure concentration during the uptake phase. Also in dynamic tests, keeping exposure concentrations as constant as possible is a major challenge, requiring frequent renewal (see above). Toxicokinetic tests should also include controls, consisting of test organisms incubated in clean medium and transferred to clean medium at the same time the organisms from the treated medium are transferred. Such controls may help identify possible irregularities in the test, such as poor health of the test organisms or unexpected (cross-)contamination occurring during the test. The concentrations of the chemical measured in the test organisms are plotted against the exposure time, and a first-order one-compartment model is fitted to the data to estimate the uptake and elimination rate constants. The (dynamic) BSAF or BCF value is then determined as the ratio of the uptake and elimination rate constants (see section on Bioconcentration and kinetic models). In a toxicokinetic test, replicate samples are usually taken at each point in time, both during the uptake and the elimination phase. The frequency of sampling may be higher at the beginning than at the end of both phases; a typical sampling scheme is shown in Figure 1. Since the analysis of toxicokinetic data using the one-compartment model is regression-based, it is generally preferred to have more points in time rather than many replicates per sampling time. From that perspective, often no more than 3-4 replicates are used per sampling time, and 5-6 sampling times for each of the uptake and elimination phases. Preferably, replicates are independent, i.e. destructively sampled at a specific sampling time.
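The fitting itself is straightforward nonlinear regression. The following minimal sketch, written in Python and assuming the SciPy library is available, shows how the uptake rate constant (k1) and elimination rate constant (k2) of the first-order one-compartment model could be estimated from such a test, with the kinetic BCF following as their ratio; all numerical values are invented for illustration and do not come from a real experiment.

```python
# Minimal sketch: fitting a first-order one-compartment model to data from
# a dynamic (uptake/elimination) bioaccumulation test. Hypothetical values.
# Uptake phase (t <= t_c):     C(t) = (k1/k2) * Cw * (1 - exp(-k2*t))
# Elimination phase (t > t_c): C(t) = (k1/k2) * Cw * (1 - exp(-k2*t_c)) * exp(-k2*(t - t_c))
import numpy as np
from scipy.optimize import curve_fit

Cw = 10.0   # exposure concentration in the medium (e.g. ug/L), assumed constant
t_c = 7.0   # time (days) at which organisms are transferred to clean medium

def one_compartment(t, k1, k2):
    uptake = (k1 / k2) * Cw * (1.0 - np.exp(-k2 * t))
    elimination = (k1 / k2) * Cw * (1.0 - np.exp(-k2 * t_c)) * np.exp(-k2 * (t - t_c))
    return np.where(t <= t_c, uptake, elimination)

# Body concentrations (e.g. ug/g) measured at each sampling time (days)
t_obs = np.array([0.25, 0.5, 1, 2, 4, 7, 7.5, 8, 9, 11, 14])
C_obs = np.array([2.5, 4.6, 9.0, 15.3, 25.8, 32.5, 29.4, 25.2, 20.5, 11.8, 5.9])

(k1, k2), cov = curve_fit(one_compartment, t_obs, C_obs, p0=[1.0, 0.2])
bcf = k1 / k2  # kinetic BCF: ratio of uptake and elimination rate constants
print(f"k1 = {k1:.2f} L/kg/d, k2 = {k2:.2f} /d, BCF = {bcf:.0f} L/kg")
```

In practice, replicate measurements per sampling time are simply included as separate data points, and the covariance matrix returned by the fit can be used to derive confidence intervals for the rate constants and the BCF.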
Especially in aquatic ecotoxicology, mass exposures are sometimes used, having all test organisms in one or a few replicate test containers. In this case, at each sampling time some replicate organisms are taken from the test container(s), and at the end of the uptake phase all organisms are transferred to (a) container(s) with clean medium. Figure 2 shows the result of a test on the uptake and elimination kinetics of molybdenum in the earthworm Eisenia andrei. From the ratio of the uptake rate constant (k1) and elimination rate constant (k2), a BSAF of approx. 1.0 could be calculated, suggesting a low bioaccumulation potential of Mo in earthworms in the soil tested. Another way of assessing the bioaccumulation potential of chemicals in organisms involves the use of radiolabeled chemicals, which may facilitate easy detection of the test chemical. The use of radiolabeled chemicals may, however, overestimate bioaccumulation potential when no distinction is made between the parent compound and potential metabolites. In case of metals, stable isotopes may also offer an opportunity to assess bioaccumulation potential. Such an approach was applied to distinguish the contribution of dissolved (ionic) Zn from that of ZnO nanoparticles to the bioaccumulation of Zn in earthworms. Earthworms were exposed to soils spiked with mixtures of 64ZnCl2 and 68ZnO nanoparticles. The results showed that dissolution of the nanoparticles was fast and that the earthworms mainly accumulated Zn present in ionic form in the soil solution (Laycock et al., 2017). Standard test guidelines for assessing the bioaccumulation (kinetics) of chemicals have been published by the Organization for Economic Cooperation and Development (OECD) for sediment-dwelling oligochaetes (OECD, 2008), for earthworms/enchytraeids in soil (OECD, 2010) and for fish (OECD, 2012).
References
Diez-Ortiz, M., Giska, I., Groot, M., Borgman, E.M., Van Gestel, C.A.M. (2010). Influence of soil properties on molybdenum uptake and elimination kinetics in the earthworm Eisenia andrei. Chemosphere 80, 1036-1043.
Laycock, A., Romero-Freire, A., Najorka, J., Svendsen, C., Van Gestel, C.A.M., Rehkämper, M. (2017). Novel multi-isotope tracer approach to test ZnO nanoparticle and soluble Zn bioavailability in joint soil exposures. Environmental Science and Technology 51, 12756-12763.
OECD (2008). Guidelines for the testing of chemicals No. 315: Bioaccumulation in Sediment-dwelling Benthic Oligochaetes. Organization for Economic Cooperation and Development, Paris.
OECD (2010). Guidelines for the testing of chemicals No. 317: Bioaccumulation in Terrestrial Oligochaetes. Organization for Economic Cooperation and Development, Paris.
OECD (2012). Guidelines for the testing of chemicals No. 305: Bioaccumulation in Fish: Aqueous and Dietary Exposure. Organization for Economic Cooperation and Development, Paris.
4.3.1. Question 1
Why may BSAF and BCF values obtained from static tests not reflect the real bioaccumulation potential of chemicals, even if it was possible to keep exposure concentrations constant?
4.3.1. Question 2
Describe the experimental design of a test for assessing the uptake and elimination kinetics of chemicals in test organisms, in soil or water.
4.3.1. Question 3
a. What experimental problem may be encountered when determining the bioaccumulation of chemicals in terrestrial or aquatic organisms?
b. And how may this problem be overcome in case of aquatic organisms?
c. Is such a solution also possible for terrestrial organisms?
4.3.2. Concentration-response relationships
Author: Kees van Gestel
Reviewers: Michiel Kraak, Thomas Backhaus
Learning goals: You should be able to
• understand the concept of the concentration-response relationship
• define measures of toxicity
• distinguish quantal and continuous data
• mention the reasons for preferring ECx values over NOEC values
Keywords: concentration-related effects, measure of lethal effect, measure of sublethal effect, regression-based analysis
A key paradigm in human and environmental toxicology is that the dose determines the effect. This paradigm goes back to Paracelsus, who stated that any chemical is toxic, but that the dose determines the severity of the effect. In practice, this paradigm is used to quantify the toxicity of chemicals. For that purpose, toxicity tests are performed in which organisms (microbes, plants, invertebrates, vertebrates) or cells are exposed to a range of concentrations of a chemical. Such tests also include incubations in non-treated control medium. The response of the test organisms is determined by monitoring selected endpoints, like survival, growth, reproduction or other parameters (see section on Endpoints). Endpoints can increase (e.g. mortality) or decrease (e.g. survival, reproduction, growth) with increasing exposure concentration. The response of the endpoints is plotted against the exposure concentration, and so-called concentration-response curves (Figure 1) are fitted, from which measures of the toxicity of the chemical can be calculated. The unit of exposure, the concentration or dose, may be expressed differently depending on the exposed subject. Dose is expressed as mg/kg body weight in human toxicology and following single (oral or dermal) exposure events in mammals or birds. For other orally or dermally exposed (invertebrate) organisms, like honey bees, the dose may be expressed per animal, e.g. µg/bee. Environmental exposures are generally expressed as the concentration in mg/kg food, mg/kg soil, mg/L surface, drinking or ground water, or mg/m3 air. Ultimately, it is the concentration (number of molecules of the chemical) at the target site that determines the effect. Consequently, expressing exposure concentrations on a molar basis (mol/L, mol/kg) is preferred, but less frequently applied. At low concentrations or doses, the endpoint measured is not affected by exposure. At increasing concentrations, the endpoint shows a concentration-related decrease or increase. From this decrease or increase, different measures of toxicity can be calculated:
ECx/EDx: the "effective concentration" or "effective dose", respectively; "x" denotes the percentage effect relative to an untreated control. This should always be accompanied by naming the selected endpoint.
LCx/LDx: same, but specified for one specific endpoint: lethality.
EC50/ED50: the median effect concentration or dose, with "x" set to 50%. This is the most common estimate used in environmental toxicology. This should always be accompanied by naming the selected endpoint.
LC50/LD50: same, but specified for one specific endpoint: lethality.
The terms LCx and LDx refer to the fraction of animals responding (dying), while the ECx and EDx indicate the degree of reduction of the measured parameter. The ECx/EDx describe the overall average performance of the test organisms in terms of the parameter measured (e.g. growth, reproduction). The meaning of an LCx/LDx seems obvious: it refers to lethality of the test chemical.
The use of ECx/EDx, however, always requires explicit mentioning of the endpoint it concerns. Concentration-response models usually distinguish quantal and continuous data. Quantal data refer to constrained ("yes/no") responses and include, for instance, survival data, but may also be applicable to avoidance responses. Continuous data refer to parameters like growth, reproduction (number of juveniles or eggs produced) or biochemical and physiological measurements. A crucial difference between quantal and continuous responses is that quantal responses are population-level responses, while continuous responses can also be observed at the level of individuals. An organism cannot be half-dead, but it can certainly grow at only half the control rate. Concentration-response models are usually sigmoidal on a log scale and are characterized by four parameters: minimum, maximum, slope and position. The minimum response is often set to the control level or to zero. The maximum response is often set to 100%, in relation to the control or the biologically plausible maximum (e.g. 100% survival). The slope identifies the steepness of the curve and determines the distance between the EC50 and EC10. The position parameter indicates where on the x-axis the curve is placed. The position may equal the EC50, in which case it is named the turning point, but this holds only for the small fraction of models that are symmetrical around the EC50. In environmental toxicology, the parameter values are usually presented with 95% confidence intervals indicating the margins of uncertainty; statistical software packages are used to calculate these confidence intervals. Regression-based test designs require several test concentrations, and the results depend on the statistical model used, especially in the low-effect region. Sometimes it is simply impossible to use a regression-based design because the endpoint does not cover a sufficiently high effect range (>50% effect is typically needed for an accurate fit). In case of quantal responses, especially survival, the slope of the concentration-response curve is an indication of the sensitivity distribution of the individuals within the population of test organisms. For a very homogeneous population of laboratory test animals having the same age and body size, a steeper concentration-response curve is expected than when using field-collected animals representing a wider range of ages and body sizes (Figure 2).
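To make the regression-based approach concrete, the following sketch fits a log-logistic concentration-response model to a hypothetical continuous endpoint and derives the EC50 and EC10 from the fitted parameters. It is a minimal illustration in Python assuming the SciPy library, with the minimum fixed at zero and the control response as the maximum, and with all data values invented; dedicated statistical packages offer more complete implementations, including proper confidence intervals.

```python
# Minimal sketch: regression-based estimation of ECx values for a
# hypothetical continuous endpoint (e.g. juveniles per female).
# Model: y = y0 / (1 + (c / EC50)**b), a log-logistic curve with the
# minimum fixed at 0 and the control response y0 as the maximum.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, y0, ec50, b):
    return y0 / (1.0 + (c / ec50) ** b)

# Hypothetical test: a control (represented by a very low concentration to
# keep the model defined) plus five concentrations, three replicates each.
conc = np.repeat([1e-3, 1, 3.2, 10, 32, 100], 3)  # mg/kg
resp = np.array([52, 48, 50, 49, 51, 47, 45, 48, 46,
                 36, 33, 34, 8, 10, 9, 1, 2, 1])

(y0, ec50, b), cov = curve_fit(log_logistic, conc, resp, p0=[50, 10, 2])

# For this model, ECx = EC50 * (x / (100 - x))**(1/b)
ec10 = ec50 * (10 / 90) ** (1 / b)
se_ec50 = np.sqrt(np.diag(cov))[1]  # asymptotic standard error of the EC50
print(f"EC50 = {ec50:.1f} mg/kg (s.e. {se_ec50:.1f}), EC10 = {ec10:.1f} mg/kg")
```

For quantal data such as mortality, the same logic applies, but the fit is preferably done by maximum likelihood on the observed fractions (e.g. a logit or probit model) rather than by least squares.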
Most available toxicity data are NOECs, hence they are the most common values found in databases and are therefore used for regulatory purposes. From a scientific point of view, however, there are quite a few disadvantages related to the use of NOECs. The NOEC is:
• obtained by hypothesis testing rather than by regression analysis;
• equal to one of the test concentrations, so it does not use all data from the toxicity test;
• sensitive to the number of replicates used per exposure concentration and control;
• sensitive to variation in response, i.e. to differences between replicates;
• dependent on the statistical test chosen, and on the variance (σ);
• not accompanied by confidence intervals;
• hard to compare between laboratories and between species.

The NOEC may, due to its sensitivity to variation and test design, sometimes be equal to or even higher than the EC50. Because of these disadvantages, it is recommended to use measures of toxicity derived by fitting a concentration-response curve to the data obtained from a toxicity test. As an alternative to the NOEC, usually an EC10 or EC20 is used, which has the advantages that it is obtained using all data from the test and that it has a 95% confidence interval indicating its reliability. Having a 95% confidence interval also allows a statistical comparison of ECx values, which is not possible for NOEC values.

4.3.2. Question 1
Which four parameters describe a dose-response curve?

4.3.2. Question 2
What would be the preferred unit of measures of toxicity (e.g. EC20 or EC50) describing the effect of a chemical on the survival of soil invertebrates exposed in a standardized test soil?

4.3.2. Question 3
Why would you expect that using an age-synchronized laboratory population of test organisms results in a much steeper concentration-response curve for effects on survival of a chemical than a field-collected population of non-synchronized individuals?

4.3.2. Question 4
Why are EC10 values preferred over NOECs when using the outcomes of toxicity tests for the risk assessment of chemicals?

4.3.3. Endpoints

Author: Michiel Kraak
Reviewers: Kees van Gestel, Carlos Barata

Learning objectives: You should be able to
• list the available whole organism endpoints in toxicity tests.
• motivate the importance of sublethal endpoints in acute and chronic toxicity tests.
• describe how sublethal endpoints in acute and chronic toxicity tests are measured.

Keywords: Mortality, survival, sublethal endpoints, growth, reproduction, behaviour, photosynthesis

Introduction
Most toxicity tests are short-term, high-dose experiments: acute tests in which mortality is often the only endpoint. Mortality, however, is a crude parameter that responds only to relatively high and therefore often environmentally irrelevant toxicant concentrations. At much lower and therefore environmentally more relevant toxicant concentrations, organisms may suffer from a wide variety of sublethal effects. Hence, toxicity tests gain ecological realism if sublethal endpoints are addressed in addition to mortality.

Mortality
Mortality can be determined in both acute and chronic toxicity tests. In acute tests, mortality is often the only feasible endpoint, although some acute tests take long enough to also measure sublethal endpoints, especially growth.
Generally though, this is restricted to chronic toxicity tests, in which a wide variety of sublethal endpoints can be assessed in addition to mortality (Table 1). Mortality at the end of the exposure period is assessed by simply counting the number of surviving individuals, but it can also be expressed either as a percentage of the initial number of individuals or as a percentage of the corresponding control. The increasing mortality with increasing toxicant concentrations can be plotted in a concentration-response relationship from which the LC50 can be derived (see section on Concentration-response relationships). If assessing mortality is non-destructive, for instance when it can be done by visual inspection, mortality can be scored at different time intervals during a toxicity test. Although repeated observations may take some effort, they generally generate valuable insights into the course of the intoxication process over time.

Sublethal endpoints in acute toxicity tests
In acute toxicity tests it is difficult to assess endpoints other than mortality, since effects of toxicants on sublethal endpoints like growth and reproduction need much longer exposure times to become expressed (see section on Chronic toxicity). Incorporating sublethal endpoints in acute toxicity tests thus requires rapid responses to toxicant exposure. Photosynthesis of plants and behaviour of animals are elegant, sensitive and rapidly responding endpoints that can be incorporated into acute toxicity tests (Table 1).

Behavioural endpoints
Behaviour is an understudied but sensitive and ecologically relevant endpoint in ecotoxicity testing, since subtle changes in animal behaviour may affect trophic interactions and ecosystem functioning. Several studies have reported effects on animal behaviour at concentrations orders of magnitude lower than lethal concentrations. Van der Geest et al. (1999) showed that changes in ventilation behaviour of fifth instar larvae of the caddisfly Hydropsyche angustipennis occurred at approximately 150 times lower Cu concentrations than mortality of first instar larvae. Avoidance behaviour of the amphipod Corophium volutator exposed to contaminated sediments was 1,000 times more sensitive than survival (Hellou et al., 2008). Chevalier et al. (2015) tested the effect of twelve compounds covering different modes of action on the swimming behaviour of daphnids and observed that most compounds induced an early and significant increase in swimming speed at concentrations near or below the 10% effective concentration (48-h EC10) of the acute immobilization test. Barata et al. (2008) reported that the short-term (24 h) Daphnia magna feeding inhibition assay was on average 50 times more sensitive than acute standardized tests when assessing the toxicity of a mixture of 16 chemicals in different water types. These and many other examples all show that organisms may exhibit altered behaviour at relatively low and therefore often environmentally relevant toxicant concentrations. Behavioural responses to toxicant exposure can also be very fast, allowing organisms to avoid further exposure and subsequent bioaccumulation and toxicity.
A wide array of such avoidance responses has been incorporated in ecotoxicity testing (Araújo et al., 2016), including the avoidance of contaminated soil by earthworms (Eisenia fetida) (Rastetter & Gerhardt, 2018), feeding inhibition of the bivalve Corbicula fluminea (Castro et al., 2018), the aversive swimming response of the unicellular green alga Chlamydomonas reinhardtii to silver nanoparticles (Mitzel et al., 2017) and of daphnids to twelve compounds covering different modes of toxic action (Chevalier et al., 2015).

Photosynthesis
Photosynthesis is a sensitive and well-studied endpoint that can be applied to identify hazardous effects of herbicides on primary producers. In bioassays with plants or algae, photosynthesis is often quantified using pulse amplitude modulation (PAM) fluorometry, a rapid measurement technique suitable for quick screening purposes. Algal photosynthesis is preferably quantified in light-adapted cells as effective photosystem II (PSII) efficiency (ΦPSII) (Ralph et al., 2007; Sjollema et al., 2014). This endpoint responds most sensitively to herbicide activity, as the most commonly applied herbicides either directly or indirectly affect PSII (see section on Herbicide toxicity).

Sublethal endpoints in chronic toxicity tests
Besides mortality, growth and reproduction are the most commonly assessed endpoints in ecotoxicity tests (Table 1). Growth can be measured in two ways, as an increase in length and as an increase in weight. Often only the length or weight at the end of the exposure period is determined. This, however, includes both the growth before and during exposure. It is therefore more distinctive to measure length or weight at the beginning as well as at the end of the exposure, and then to subtract the individual or average initial length or weight from the final individual length or weight. Growth during the exposure period may subsequently be expressed as a percentage of the initial length or weight. Ideally the initial length or weight is measured on the same individuals that will be exposed. When organisms have to be sacrificed to measure the initial length or weight, which is especially the case for dry weight, this is not feasible. In that case a subsample of the individuals is set apart at the beginning of the test.

Reproduction is a sensitive and ecologically relevant endpoint in chronic toxicity tests. It is an integrated parameter, incorporating many different aspects of the reproduction process that can be assessed one by one. The first reproduction parameter is the day of first reproduction. This is an ecologically very relevant parameter, as delayed reproduction obviously has strong implications for population growth. The next reproduction parameter is the number of offspring. In this case the number of eggs, seeds, neonates or juveniles can be counted. For organisms that produce egg ropes or egg masses, both the number of egg masses and the number of eggs per mass can be determined. Lastly, the quality of the offspring can be quantified. This can be achieved by determining their physiological status (e.g. fat content), their size, their survival and finally their chance of reaching adulthood.
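To make the growth calculation described above concrete, here is a minimal sketch expressing growth during exposure as a percentage of the mean initial weight. The dry weights are invented, and the initial weights are assumed to come from a subsample set apart at the start of the test because dry-weight determination is destructive.

```python
# Minimal sketch: growth during exposure as % of the mean initial dry weight.
initial_mg = [5.1, 4.8, 5.0, 5.3]   # dry weights of the start subsample
final_mg = [7.9, 8.4, 7.5, 8.1]     # dry weights after the exposure period

mean_initial = sum(initial_mg) / len(initial_mg)
growth_pct = [(w - mean_initial) / mean_initial * 100 for w in final_mg]
print([f"{g:.0f}%" for g in growth_pct])  # growth of each exposed individual
```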
Table 1. Whole organism endpoints often used in toxicity tests. Quantal refers to a yes/no endpoint, while graded refers to a continuous endpoint (see section on Concentration-response relationships).

Endpoint | Acute/Chronic | Quantal/Graded
mortality | both | quantal
behaviour | acute | graded
avoidance | acute | quantal
photosynthesis | acute | graded
growth (length and weight) | mostly chronic | graded
reproduction | chronic | graded

A wide variety of other, less commonly applied sublethal whole organism endpoints can be assessed upon chronic exposure. The possibilities are endless, with some endpoints being designed for the effect of a single compound only, and some species-specific endpoints being described for only one organism. Sub-organismal endpoints are described in a separate chapter (see section on Molecular endpoints in toxicity tests).

References
Araújo, C.V.M., Moreira-Santos, M., Ribeiro, R. (2016). Active and passive spatial avoidance by aquatic organisms from environmental stressors: A complementary perspective and a critical review. Environment International 92-93, 405-415.
Barata, C., Alanon, P., Gutierrez-Alonso, S., Riva, M.C., Fernandez, C., Tarazona, J.V. (2008). A Daphnia magna feeding bioassay as a cost effective and ecological relevant sublethal toxicity test for environmental risk assessment of toxic effluents. Science of the Total Environment 405(1-3), 78-86.
Castro, B.B., Silva, C., Macario, I.P.E., Oliveira, B., Gonçalves, F., Pereira, J.L. (2018). Feeding inhibition in Corbicula fluminea (O.F. Müller, 1774) as an effect criterion to pollutant exposure: Perspectives for ecotoxicity screening and refinement of chemical control. Aquatic Toxicology 196, 25-34.
Chevalier, J., Harscoët, E., Keller, M., Pandard, P., Cachot, J., Grote, M. (2015). Exploration of Daphnia behavioral effect profiles induced by a broad range of toxicants with different modes of action. Environmental Toxicology and Chemistry 34, 1760-1769.
Hellou, J., Cheeseman, K., Desnoyers, E., Johnston, D., Jouvenelle, M.L., Leonard, J., Robertson, S., Walker, P. (2008). A non-lethal chemically based approach to investigate the quality of harbor sediments. Science of the Total Environment 389, 178-187.
Mitzel, M.R., Lin, N., Whalen, J.K., Tufenkji, N. (2017). Chlamydomonas reinhardtii displays aversive swimming response to silver nanoparticles. Environmental Science: Nano 4, 1328-1338.
Ralph, P.J., Smith, R.A., Macinnis-Ng, C.M.O., Seery, C.R. (2007). Use of fluorescence-based ecotoxicological bioassays in monitoring toxicants and pollution in aquatic systems: Review. Toxicological and Environmental Chemistry 89, 589-607.
Rastetter, N., Gerhardt, A. (2018). Continuous monitoring of avoidance behaviour with the earthworm Eisenia fetida. Journal of Soils and Sediments 18, 957-967.
Sjollema, S.B., Van Beusekom, S.A.M., Van der Geest, H.G., Booij, P., De Zwart, D., Vethaak, A.D., Admiraal, W. (2014). Laboratory algal bioassays using PAM fluorometry: Effects of test conditions on the determination of herbicide and field sample toxicity. Environmental Toxicology and Chemistry 33, 1017-1022.
Van der Geest, H.G., Greve, G.D., De Haas, E.M., Scheper, B.B., Kraak, M.H.S., Stuijfzand, S.C., Augustijn, C.H., Admiraal, W. (1999). Survival and behavioural responses of larvae of the caddisfly Hydropsyche angustipennis to copper and diazinon. Environmental Toxicology and Chemistry 18, 1965-1971.

4.3.3. Question 1
What is the importance of incorporating sublethal endpoints in acute and chronic toxicity tests?

4.3.3. Question 2
Name one animal-specific and one plant-specific sublethal endpoint that can be incorporated in acute toxicity tests.

4.3.3. Question 3
Name the two most commonly assessed endpoints in chronic toxicity tests.
4.3.4. Selection of test organisms - Eco animals

Author: Michiel Kraak
Reviewers: Kees van Gestel, Jörg Römbke

Learning objectives: You should be able to
• name the requirements for suitable laboratory ecotoxicity test organisms.
• list the most commonly used standard test organisms per environmental compartment.
• argue the need for more than one test species and the need for non-standard test organisms.

Keywords: Test organism, standardized laboratory ecotoxicity tests, environmental compartment, habitat, different trophic levels

Introduction
Standardized laboratory ecotoxicity tests require constant test conditions, standardized endpoints (see section on Endpoints) and good performance in control treatments. In a reliable, reproducible and easy-to-perform toxicity test, the test compound should be the only variable. This sets high demands on the choice of the test organisms. For a proper risk assessment, it is crucial that test species are representative of the community or ecosystem to be protected.

Criteria for the selection of organisms to be used in toxicity tests have been summarized by Van Gestel et al. (1997). They include:
1. Practical arguments, including feasibility, cost-effectiveness and rapidity of the test,
2. Acceptability and standardisation of the tests, including the generation of reproducible results, and
3. Ecological significance, including sensitivity, biological validity etc.

The most practical requirement is that the test organism should be easy to culture and maintain, but equally important is that the test species should be sensitive towards different stressors. These two main requirements are, however, frequently conflicting. Species that are easy to culture are often less sensitive, simply because they are mostly generalists, while sensitive species are often specialists, making it much harder to culture them. For scientific and societal support of the choice of the test organisms, they should preferably be both ecologically and economically relevant or serve as flagship species, but again, these are opposing requirements. Economically relevant species, like crops and cattle, hardly play any role in natural ecosystems, while ecologically highly relevant species have no obvious economic value. This is reflected in the research efforts on these species, since much more is known about economically relevant species than about ecologically relevant species.

There is no species that is most sensitive to all pollutants. Which species is most sensitive depends on the mode of action and possibly also other properties of the chemical, the exposure route, its availability and the properties of the organism (e.g. presence of specific targets, physiology, etc.). It is therefore important to always test a number of species, with different life-history traits, functions and positions in the food web. According to Van Gestel et al. (1997) such a battery of test species should be:
1. Representative of the ecosystem to be protected, so including organisms having different life-histories, representing different functional groups, different taxonomic groups and different routes of exposure;
2. Representative of responses relevant for the protection of populations and communities; and
3. Uniform, so all tests in a battery should be applicable to the same test media and apply the same test conditions, e.g. the same range of pH values.

Representation of environmental compartments
Each environmental compartment, water, air, soil and sediment, requires its specific set of test organisms.
The most commonly applied test organisms are daphnids (Daphnia magna) for water, chironomids (Chironomus riparius) for sediments and earthworms (Eisenia fetida) for soil. For air, in the field of inhalation toxicology, humans and rodents are actually the most studied organisms; in ecotoxicology, air testing is mostly restricted to studies on the effects of toxic gases on plants. Besides the most commonly applied organisms, there is a long list of other standard test organisms for which test protocols are available (Table 1; OECD site).

Table 1. Non-exhaustive list of standard ecotoxicity test species.

Environmental compartment(s) | Organism group | Test species
Water | Plant |
Water | Plant | Lemna
Water | Algae | Species of choice
Water | Cyanobacteria | Species of choice
Water | Fish |
Water | Fish |
Water | Amphibian |
Water | Insect |
Water | Crustacean |
Water | Snail |
Water | Snail |
Water-sediment | Plant |
Water-sediment | Insect |
Water-sediment | Oligochaete worm | Lumbriculus variegatus
Sediment | Anaerobic bacteria | Sewage sludge
Soil | Plant | Species of choice
Soil | Oligochaete worm |
Soil | Oligochaete worm |
Soil | Collembolan |
Soil | Mite | Hypoaspis (Geolaelaps) aculeifer
Soil | Microorganisms | Natural microbial community
Dung | Insect |
Dung | Insect | Musca autumnalis
Air-soil | Plant | Species of choice
Terrestrial | Bird | Species of choice
Terrestrial | Insect |
Terrestrial | Insect |
Terrestrial | Insect |
Terrestrial | Mite |

Non-standard test organisms
The use of standard test organisms in standard ecotoxicity tests performed according to internationally accepted protocols strongly reduces the uncertainties in ecotoxicity testing. Yet, there are good reasons for deviating from these protocols. The species in Table 1 are listed according to their corresponding environmental compartment, but this listing ignores differences between ecosystems and habitats. Soils may differ extensively in composition, depending on e.g. the sand, clay or silt content, and in properties, e.g. pH and water content, with each soil type harbouring different species. Likewise, stagnant and flowing waters have few species in common. This implies that there may be good ecological reasons to select non-standard test organisms. Effects of compounds in streams can be better estimated with riverine insects than with daphnids, which inhabit stagnant water, while the compost worm Eisenia fetida is not necessarily the most appropriate species for sandy soils. The list of non-standard test organisms is of course endless, but as long as the methods are well documented in the open literature, there are no limitations to employing these alternative species. They do involve experimental challenges, however, since non-standard test organisms may be hard to culture and maintain under laboratory conditions and no protocols are available for the ecotoxicity test. Thus increasing the ecological relevance of ecotoxicity tests also increases the logistical and experimental constraints (see chapter 6 on Risk assessment).

Increasing the number of test species
The vast majority of toxicity tests is performed with a single test species, resulting in large margins of uncertainty concerning the hazardousness of compounds. To reduce these uncertainties and to increase ecological relevance it is advised to incorporate more test species belonging to different trophic levels, for water e.g. algae, daphnids and fish. For deriving environmental quality standards from Species Sensitivity Distributions (see section on SSDs), toxicity data are required for a minimum of eight species belonging to different taxonomic groups.
This obviously causes tension between the scientific requirements and the available financial resources.

References
Van Gestel, C.A.M., Léon, C.D., Van Straalen, N.M. (1997). Evaluation of soil fauna ecotoxicity tests regarding their use in risk assessment. In: Tarradellas, J., Bitton, G., Rossel, D. (Eds). Soil Ecotoxicology. CRC Press, Inc., Boca Raton, pp. 291-317.

4.3.4. Question 1
Name the requirements for suitable laboratory ecotoxicity test organisms.

4.3.4. Question 2
List the most commonly used standard test organisms per environmental compartment.

4.3.4. Question 3
Argue 1] the need for more than one test species, and 2] the need for non-standard test organisms.

4.3.5. Selection of test organisms - Eco plants

Author: J. Arie Vonk
Reviewers: Michiel Kraak, Gertie Arts, Sergi Sabater

Learning objectives: You should be able to
• name the requirements for suitable laboratory ecotoxicity tests with primary producers
• list the most commonly used primary producers and endpoints in standardized ecotoxicity tests
• argue the need for selecting primary producers from different environmental compartments as test organisms for ecotoxicity tests

Keywords: Test organism, standardized laboratory ecotoxicity test, primary producers, algae, plants, environmental compartment, photosynthesis, growth

Introduction
Photo-autotrophic primary producers use chlorophyll to convert CO2 and H2O into organic matter through photosynthesis under (sun)light. These primary producers are the basis of the food web and form an essential component of ecosystems. Besides serving as a food source, multicellular photo-autotrophs also form a habitat for other primary producers (epiphytes) and many fauna species. Primary producers are a very diverse group, ranging from tiny unicellular pico-plankton up to gigantic trees. In standardized ecotoxicity tests, primary producers are represented by (micro)algae, aquatic macrophytes and terrestrial plants.

Since herbicides are the largest group of pesticides used globally to maintain high crop production in agriculture, it is important to assess their impact on primary producers (Wang & Freemark, 1995). In terms of testing intensity, however, primary producers are understudied in comparison to animals. Standardized laboratory ecotoxicity tests with primary producers require good control over test conditions, standardized endpoints (Arts et al., 2008; see the section on Endpoints) and growth in the controls (i.e. doubling of cell counts, length and/or biomass within the experimental period). Since the metabolism of primary producers is strongly influenced by light conditions, the availability of water and inorganic carbon (CO2 and/or HCO3- and CO32-), temperature and dissolved nutrient concentrations, all these conditions should be monitored closely. The general criteria for the selection of test organisms are described in the previous chapter (see the section on the Selection of ecotoxicity test organisms). For primary producers, the choice is mainly based on the available test guidelines, the available test species and the environmental compartment of concern.

Standardized ecotoxicity testing with primary producers
A number of ecotoxicity tests with a variety of primary producers have been standardized by different organizations, including the OECD and the USEPA (Table 1). Characteristic for most primary producers is that they grow in more than one environmental compartment (soil/sediment; water; air).
As a result, toxicant uptake by these photo-autotrophs can occur via different routes, depending on the chemical and the compartment where exposure occurs (air, water, sediment/soil).

For both marine and freshwater ecosystems, standardized ecotoxicity tests are available for microalgae (unicellular micro-organisms sometimes forming larger colonies), including the prokaryotic Cyanobacteria (blue-green algae) and the eukaryotic Chlorophyta (green algae) and Bacillariophyceae (diatoms). Macrophytes (macroalgae and aquatic plants) are multicellular organisms, the latter consisting of differentiated tissues, with a number of species included in standardized ecotoxicity tests. While macroalgae grow in the water compartment only, aquatic plants are divided into groups related to their growth form (emergent; free-floating; submerged and sediment-rooted; floating and sediment-rooted) and can extend from the sediment (roots and root-stocks) through the water into the air. Both macroalgae and aquatic plants comprise a wide range of taxa and are present in both marine and freshwater ecosystems. Terrestrial higher plants are very diverse, ranging from small grasses to large trees. Plants included in standardized ecotoxicity tests comprise crop and non-crop species. An important distinction among terrestrial plants is that between dicots and monocots, since the two groups differ in their metabolic pathways and may differ in their sensitivity to contaminants.

Table 1. Open-source standard guidelines for testing the effect of compounds on primary producers. All tests are performed in (micro)cosms, except those marked with *.

Primary producer | Species | Compartment | Test number | Organisation
Microalgae & cyanobacteria | various species | Freshwater | 201 | OECD 2011
Microalgae & cyanobacteria | Anabaena flos-aquae | Freshwater | 850.4550 | USEPA 2012
Microalgae & cyanobacteria | Pseudokirchneriella subcapitata, Skeletonema costatum | Freshwater, Marine water | 850.4500 | USEPA 2012
Floating macrophytes | Lemna spp. | Freshwater | 221 | OECD 2006
Floating macrophytes | Lemna spp. | Freshwater | 850.4400 | USEPA 2012
Submerged macrophytes | | Freshwater | 238 | OECD 2014
Submerged macrophytes | Myriophyllum spicatum | Sediment (Freshwater) | 239 | OECD 2014
Aquatic plants* | not specified | Freshwater | 850.4450 | USEPA 2012
Terrestrial plants | wide variety of species | Air | 227 | OECD 2006
Terrestrial plants | wide variety of species | Air | 850.4150 | USEPA 2012
Terrestrial plants | wide variety of species (crops and non-crops) | Soil & Air | 850.4230 | USEPA 2012
Terrestrial plants | legumes and rhizobium symbiont | Soil & Air | 850.4600 | USEPA 2012
Terrestrial plants | wide variety of species (crops and non-crops) | Soil | 208 | OECD 2006
Terrestrial plants | wide variety of species (crops and non-crops) | Soil | 850.4100 | USEPA 2012
Terrestrial plants | various crop species | Soil & Air | 850.4800 | USEPA 2012
Terrestrial plants* | not specified | Terrestrial | 850.4300 | USEPA 2012

Representation of environmental compartments
Since primary producers can take up many compounds directly through their cells and thalli (algae) or through their leaves, stems, roots and rhizomes (plants), different environmental compartments need to be included in ecotoxicity testing, depending on the chemical characteristics of the contaminants. Moreover, the chemical characteristics of the compound under consideration determine whether and how the compound enters the primary producers and how it is transported through the organism. For all aquatic primary producers, exposure through the water phase is relevant. Air exposure occurs in emergent and floating aquatic plants, while rooting plants and algae with rhizoids may be exposed through the sediment.
Sediment exposure introduces additional challenges for standardized testing, since changes in redox conditions and in the organic matter content of sediments can alter the behaviour of compounds in this compartment. All terrestrial plants are exposed through air, soil and water (soil moisture, rain, irrigation). Air exposure and water deposition (rain or spraying) directly expose the aboveground parts of terrestrial plants, while belowground plant parts and seeds are exposed through soil and soil moisture. Soil exposure likewise introduces additional challenges for standardized testing, since changes in the water content or organic matter content of soils can alter the behaviour of compounds in this compartment.

Test endpoints
Bioaccumulation after uptake and translocation to specific cell organelles or plant tissues can result in the incorporation of compounds in primary producers. This has been observed for heavy metals, pesticides and other organic chemicals. The accumulated compounds in primary producers can then enter the food chain and be transferred to higher trophic levels (see the section on Biomagnification). Although concentrations in primary producers are indicative of the presence of bioavailable compounds, these concentrations do not necessarily imply adverse effects on these organisms. Bioaccumulation measurements are therefore best combined with one or more of the following endpoint assessments.

Photosynthesis is the most essential metabolic pathway for primary producers. The mode of action of many herbicides is therefore photosynthesis inhibition, whereby different metabolic steps can be targeted (see the section on Herbicide toxicity). This endpoint is relevant for assessing acute effects on chlorophyll electron transport using pulse amplitude modulation (PAM) fluorometry, or as a measure of oxygen or carbon production by primary producers.

Growth represents the accumulation of biomass (microalgae) or mass (multicellular primary producers). Growth inhibition is the most important endpoint in tests with primary producers, since this endpoint integrates a wide range of metabolic effects into a whole-organism or population response. It does, however, take longer to assess, especially for larger primary producers. Cell counts, increase in size over time of leaves, roots or whole organisms, and (bio)mass (fresh weight and dry weight) are the growth endpoints mostly used.

Seedling emergence reflects the germination and early development of seedlings into plants. This endpoint is especially relevant for perennial and biennial plants, which depend on seed dispersal and successful germination to maintain healthy populations. Other endpoints include elongation of different plant parts (e.g. roots), necrosis of leaves, or disturbances in plant-microbial symbiont relationships.

Current limitations and challenges for using primary producers in ecotoxicity tests
For terrestrial vascular plants, many crop and non-crop species can be used in standardized tests; for the aquatic and marine compartments, however, few species are covered by standardized test guidelines. Moreover, not all environmental compartments are currently covered by standardized tests with primary producers: there are few tests for aquatic sediments and there is a total lack of tests for marine sediments.
Finally, not all major groups of primary producers are represented in standardized toxicity tests; for example, mosses and some major groups of algae are absent. A challenge for improving ecotoxicity tests with plants is to include more sensitive and early-response endpoints. For soil and sediment exposure of plants to contaminants, the development of endpoints related to root morphology and root metabolism could provide insight into the early impact of substances on exposed plant parts. Also, the development of ecotoxicogenomic endpoints (e.g. metabolomics) (see the section on Metabolomics) in the field of plant toxicity testing would enable us to determine effects on a wider range of plant metabolic pathways.

References
Arts, G.H.P., Belgers, J.D.M., Hoekzema, C.H., Thissen, J.T.N.M. (2008). Sensitivity of submersed freshwater macrophytes and endpoints in laboratory toxicity tests. Environmental Pollution 153, 199-206.
Wang, W.C., Freemark, K. (1995). The use of plants for environmental monitoring and assessment. Ecotoxicology and Environmental Safety 30, 289-301.

4.3.5. Question 1
Which conditions need to be controlled carefully in laboratory ecotoxicity tests with primary producers?

4.3.5. Question 2
List the different groups of primary producers used in standardized tests for each environmental compartment.

4.3.5. Question 3
Argue why testing of primary producers is relevant in relation to [A] environmental exposure of ecosystems to pesticides and [B] the role of primary producers in ecosystems.

4.3.6. Selection of test organisms - Microorganisms

Author: Patrick van Beelen
Reviewers: Kees van Gestel, Erland Bååth, Maria Niklinska

Learning objectives: You should be able to
• describe the vital role of microorganisms in ecosystems.
• explain the difference between toxicity tests for protecting biodiversity and for protecting ecosystem services.
• explain why short-term microbial tests can be more sensitive than long-term ones.

Keywords: microorganisms, processes, nitrogen conversion, test methods

The importance of microorganisms
Most organisms are microorganisms, which means they are generally too small to see with the naked eye. Nevertheless, microorganisms affect almost all aspects of our lives. Viruses are the smallest of microorganisms, the prokaryotic bacteria and archaea are bigger (in the micrometer range), and the sizes of eukaryotic microorganisms range from three to a hundred micrometers. These microscopic eukaryotes have larger cells with a nucleus and come in different shapes, like green algae, protists and fungi. Cyanobacteria and eukaryotic algae perform photosynthesis in the oceans, seas, and brackish and freshwater ecosystems. They fix carbon dioxide into biomass and form the basis of the largest aquatic ecosystems. Bacteria and fungi degrade complex organic molecules into carbon dioxide and minerals, which are needed for plant growth. Plants often live in symbiosis with specialized microorganisms on their roots, which speed up plant growth by enhancing the uptake of water and nutrients. Invertebrate and vertebrate animals, including humans, have bacteria and other microorganisms in their intestines to facilitate the digestion of food. Cows, for example, cannot digest grass without the microorganisms in their rumen, and termites would not be able to digest lignin, a hard-to-digest wood polymer, without the aid of gut fungi. Leaf cutter ants transport leaves into their nest to feed the fungi on which they depend.
Humans, too, consume many foodstuffs in which yeasts, fungi or bacteria serve to preserve the food or give it a pleasant taste. Beer, wine, cheese, yogurt, sauerkraut, vinegar, bread, tempeh, sausage and many other foodstuffs need the right type of microorganisms to be palatable. Having the right type of microorganisms is also vital for human health. Human mother's milk contains oligosaccharides, which are indigestible for the newborn child. These serve as a major food source for the intestinal bacteria of the baby, which reduce the risk of dangerous infections. This shows that the interactions between specific microorganisms and higher organisms are often highly specific. Marine viruses are very abundant and can limit algal blooms, promoting a more diverse marine phytoplankton. Pathogenic viruses, bacteria, fungi and protists enhance the biodiversity of plants and animals by the following mechanism: the densest populations are the most susceptible to diseases, since transmission of the disease becomes more frequent. When the most abundant species becomes less frequent, there is more room for the other species and biodiversity is enhanced. In agriculture, this enhanced biodiversity is unwanted, since the livestock and the crop are the most abundant species. That is why disease control becomes more important in high-intensity livestock farming and in large monocultures of crops. Microorganisms are at the base of all ecosystems and are vital for human health and the environment. The Microbiology Society has a nice video explaining why microbiology matters.

Protection goals
The functioning of natural ecosystems on earth is threatened by many factors, such as habitat loss, habitat fragmentation, global warming, species extinction, over-fertilization, acidification and pollution. Natural and man-made chemicals can exhibit toxic effects on the different organisms in natural ecosystems. Toxic chemicals released into the environment may have negative effects on biodiversity or on microbial processes. In ecosystems strongly affected by such stressors, the abundance of many species may decline. The loss of biodiversity in a specific ecosystem can therefore be used as a measure of the degradation of that ecosystem. Humans benefit from the presence of properly functioning ecosystems. These benefits can be quantified as ecosystem services, to which microbial processes contribute heavily. Groundwater, for example, is often a suitable source of drinking water because microorganisms have removed pollutants and pathogens from the infiltrating water. See the section on Ecosystem services and protection goals.

Environmental toxicity tests
Most environmental toxicity tests are single species tests. Such tests typically determine the toxicity of a chemical to a specific biological species, like the inhibition of bioluminescence of the bacterium Aliivibrio fischeri in the Microtox test or the growth inhibition test on freshwater algae and cyanobacteria (see the section on Selection of test organisms - Eco plants). These tests are relatively simple, using a specific toxic chemical and a specific biological species in an optimal setting. The OECD guidelines for the testing of chemicals, section 2 (effects on biotic systems), give a list of standard tests. Table 1 lists different tests with microorganisms standardized by the Organization for Economic Cooperation and Development (OECD).
Table 1. Generally accepted environmental toxicity tests using microorganisms, standardized by the Organization for Economic Cooperation and Development (OECD).

OECD test No | Title | Medium | Test type
201 | Freshwater algae and cyanobacteria, growth inhibition test | Aquatic | Single species
209 | Activated sludge, respiration inhibition test | Sediment | Process
224 (draft guideline) | Determination of the inhibition of the activity of anaerobic bacteria | Sediment | Process
217 | Soil microorganisms: carbon transformation test | Soil | Process
216 | Soil microorganisms: nitrogen transformation test | Soil | Process

The outcome of these tests can be summarized as EC10 values (see the section on Concentration-response relationships), which can be used in risk assessment (see the sections on Predictive risk assessment approaches and tools and on Diagnostic risk assessment approaches and tools). Basically, there are three types of tests: single species tests, community tests and tests using microbial processes.

Single species tests
The ecological relevance of a single species test can be a matter of debate. In most cases it is not practical to work with ecologically relevant species, since these can be hard to maintain under laboratory conditions. Each ecosystem also has its own ecologically relevant species, which would require an extremely large battery of different test species and tests that are difficult to perform in a reproducible way. As a solution to these problems, the test species are assumed to exhibit a sensitivity to toxicants similar to that of the ecologically relevant species, an assumption that has been confirmed in a number of cases. If the sensitivity distribution of a given toxicant for a number of test species is similar to the sensitivity distribution for the relevant species in a specific ecosystem, a statistical method can be used to estimate a concentration that is safe for most of the species.

Toxicity tests with short incubation times are often disputed because it takes time for toxicants to accumulate in the test animals. This is not a problem in microbial toxicity tests, since the small size of the test organisms allows a rapid equilibration of the toxicant concentrations in the water and in the test organism. In contrast, long incubation times under conditions that promote growth can lead to the occurrence of resistant mutants, which decreases the apparent sensitivity of the test organism. This selection and growth of resistant mutants cannot, however, be regarded as a positive thing, since these mutants differ from the parent strain and might also have different ecological properties. In fact, the selection of antibiotic-resistant microorganisms in the environment is considered a problem, since resistance might transfer to pathogenic (disease-promoting) microorganisms, which causes problems for patients treated with antibiotics. The OECD test No. 201, which uses freshwater algae and cyanobacteria, is a well-known and sensitive single species microbial ecotoxicity test. It is explained in more detail in the section on Selection of test organisms - Eco plants.

Community tests
Microorganisms have a very wide range of metabolic diversity. This makes it difficult to extrapolate from a single species test to all possible microbial species, including fungi, protists, bacteria, archaea and viruses.
One solution is to test a multitude of species (a whole community) in a single toxicity experiment. In such community tests, however, it becomes more difficult to attribute the decline or increase of species to toxic effects, since the rise and decline of species can also be caused by other factors, including species interactions. The method of pollution-induced community tolerance (PICT) is used for the detection of toxic effects on communities. Organisms survive in polluted environments only when they can tolerate the toxic chemical concentrations in their habitat. During exposure to pollution the sensitive species become extinct and tolerant species take over their place and role in the ecosystem (Figure 1). This takeover can be monitored with very simple toxicity tests using a part of the community extracted from the environment. Some tests use the incorporation of building blocks for DNA (thymidine) and protein (leucine); other tests use different substrates for microbial growth. The observation that this part of the community has become more tolerant, as measured by these simple toxicity tests, reveals that the pollutant really affects the microbial community. This is especially helpful when complex and diverse environments like biofilms, sediments and soils are studied.

Tests using microbial processes
The protection of ecosystem services is fundamentally different from the protection of biodiversity. When one wants to protect biodiversity, all species are equally important and worth protecting. When one wants to protect ecosystem services, only the species that perform the process have to be protected; many contributing species can be intoxicated without much impact on the process. An example is nitrogen transformation, which is tested by measuring the conversion of ammonium into nitrite and nitrate (see box). The inactivation of the most sensitive species can be compensated by the prolonged activity or growth of less sensitive species. The test design of microbial process tests aims to protect the process and not the contributing species. Consequently, the process tests from Table 1 seldom play a decisive role in reducing the maximum tolerable concentration of a chemical. The reason is that single species toxicity tests generally are more sensitive, since they use a specific biological species as test organism instead of a process.

Box: Nitrogen transformation test
The OECD test No. 216 Soil Microorganisms: Nitrogen Transformation Test is a very well-known toxicity test using the soil process of nitrogen transformation. The test for non-agrochemicals is designed to detect persistent adverse effects of a toxicant on the process of nitrogen transformation in soils. Powdered clover meal contains nitrogen mainly in the form of proteins, which can be degraded and oxidized to produce nitrate. Soil is amended with clover meal and treated with different concentrations of a toxicant. The soil provides both the test organisms and the test medium. A sandy soil with a low organic carbon content is used to minimize sorption of the toxicant to the soil, since sorption can decrease the toxicity of a toxicant in soil. According to the guideline, the soil microorganisms should not have been exposed to fertilizers, crop protection products, biological materials or accidental contamination for at least three months before the soil is sampled. In addition, the microbial biomass should amount to at least 1% of the soil organic carbon, which indicates that the microorganisms are still alive.
The soil is incubated with the clover meal and the toxicant under favourable growth conditions (optimal temperature and moisture) for the microorganisms. The quantities of nitrate formed are measured after 7 and 28 days of incubation. This allows for the growth of microorganisms resistant to the toxicant during the test, which can make the longer incubation time less sensitive. The nitrogen in the proteins of the clover meal is converted to ammonia by general degradation processes. This conversion can be performed by a multitude of species and is therefore not very sensitive to inhibition by toxic compounds. The conversion of ammonia to nitrate generally proceeds in two steps: first, ammonia-oxidizing bacteria or archaea oxidize ammonia into nitrite; second, nitrite-oxidizing bacteria oxidize nitrite into nitrate. These two steps are generally much slower than ammonia production, since they require specialized microorganisms, which also have a lower growth rate than the common microorganisms involved in the general degradation of proteins into amino acids. This makes the nitrogen transformation test much more sensitive than the carbon transformation test, which uses more common microorganisms. Under the optimal conditions of the nitrogen transformation test, some minor ammonia- or nitrite-oxidizing species might seem unimportant, since they do not contribute much to the overall process. Nevertheless, these minor species can become of major importance under less optimal conditions. Under acid conditions, for example, only the archaea oxidize ammonia into nitrite, while the ammonia-oxidizing bacteria are inhibited. The nitrogen transformation test has a minimum duration of 28 days at 20°C under optimal moisture conditions, but can be prolonged to 100 days. Shorter incubation times would make the test more sensitive.

4.3.6. Question 1
What is the disadvantage of growth during a toxicity test using a microbial process?

4.3.6. Question 2
Mention the intermediates during the degradation of clover meal to nitrate.

4.3.6. Question 3
Why are shorter microbial tests often more sensitive than longer ones?

4.3.6. Question 4
When one microbial species in the environment is replaced by another one, can this have an effect on animals or plants?

4.3.7. Selection of test organisms - Birds

Author: Annegaaike Leopold
Reviewers: Nico van den Brink, Kees van Gestel, Peter Edwards

Learning objectives: You should be able to
• understand and argue why birds are an important model in ecotoxicology;
• understand and argue the objective of avian toxicity testing performed for regulatory purposes;
• list the most commonly used avian species;
• list the endpoints used in avian toxicity tests;
• name examples of how uncertainty in assessing the risk of chemicals to birds can be reduced.

Keywords: birds, risk assessment, habitats, acute, reproduction

Introduction
Birds are seen as important models in ecotoxicology for a number of reasons:
• they are a diverse, abundant and widespread order inhabiting many human-altered habitats, such as agricultural land;
• they have physiological features that make them different from other vertebrate classes and that may affect their sensitivity to chemical exposure;
• they play specific ecological roles and fulfill essential functions in ecosystems (e.g. in seed dispersal, as biological control agents through eating insects, and in the removal of carcasses, e.g.
by vultures);
• protection goals are frequently focused on iconic species that appeal to the public.

A few specific physiological features will be discussed here. Birds are oviparous, laying eggs with hard shells. This leads to concentrated exposure of the offspring to maternally transferred material (as opposed to exposure via the bloodstream, as in most other vertebrate species) and, where relevant, its metabolites. It also means that offspring receive a single supply of nutrients, not a continuous supply through the blood stream. This makes birds sensitive to contaminants in a different way than non-oviparous vertebrates, since the embryos develop without physiological maternal interference. The bird embryo starts to regulate its own hormone homeostasis early in its development, in contrast to mammalian embryos. As a result, contaminants deposited in the egg by the female bird may disturb the regulation of these embryonic processes (Murk et al., 1996). Birds have a high body temperature (40.6 ºC) and a relatively high metabolic rate, which can affect their response to chemicals. As chicks, birds generally have a rapid growth rate compared to many other vertebrate species. Chicks of precocial (or nidifugous) species leave the nest upon hatching and, while they may follow the parents around, they are fully feathered and feed independently; they typically need a few months to grow to full size. Altricial species are naked, blind and helpless at hatch and require parental care until they fledge the nest; they often grow faster - passerines (such as swallows) can reach full size and fledge 14 days after hatching. Many bird species migrate seasonally over long distances, and the adaptations required for migration change their physiology and biochemical processes. Internal concentrations of organic contaminants, for example, may increase significantly due to the use of lipid stores during migration, while the accompanying changes in biochemistry may increase the sensitivity of birds to the chemical.

Birds function as good biological indicators of environmental quality, largely because of their position in the food chain and their habitat dependence. Protection goals are frequently focused on iconic species, for example the Atlantic puffin, the European turtle dove and the common barn owl (Birdlife International, 2018). It was recognized early on that exposure of birds to pesticides can take place through many routes of dietary exposure. Given their association with a wide range of habitats, exposure can take place by feeding on the crop itself, on weeds or (treated) weed seeds, on ground-dwelling or foliar-dwelling invertebrates, by feeding on invertebrates in the soil, such as earthworms, by drinking water from contaminated streams, or by feeding on fish living in contaminated streams (Figure 1, Brooks et al., 2017). Following the introduction of persistent and highly toxic synthetic pesticides in the 1950s, and prior to safety regulations, the use of many synthetic organic pesticides led to losses of birds, fish and other wildlife (Kendall and Lacher, 1994). As a result, national and international guidelines for assessing first acute and subacute effects of pesticides on birds were developed in the 1970s; in the early 1980s, tests were developed to study long-term or reproductive effects of pesticides. Current bird testing guidelines focus primarily on active ingredients used in plant protection products, veterinary medicines and biocides.
In Europe, the industrial chemicals regulation REACH only requires information on long-term or reproductive toxicity to birds for substances manufactured or imported in quantities of at least 1000 tonnes per annum. These data may be needed to assess the risks of secondary poisoning by a substance that is likely to bioaccumulate and does not degrade rapidly. Secondary poisoning may occur, for example, when raptors consume contaminated fish. In the United States, no bird tests are required under the industrial chemicals legislation.

The objective of performing avian toxicity tests is to inform an avian effects assessment (Hart et al., 2001) in order to:
• provide scientifically sound information on the type, size, frequency and pattern over time of effects expected from defined exposures of birds to chemicals;
• reduce uncertainty about potential effects of chemicals on birds;
• provide information in a form suitable for use in risk assessment;
• provide this information in a way that makes efficient use of resources and avoids unnecessary use and suffering of animals.

Bird species used in toxicity testing
Bird species are selected for toxicity testing primarily on the basis of their ecological relevance, their availability and their ability to adjust to laboratory conditions for breeding and testing. This means that most test species have been domesticated over many years. They should have been shown to be relatively sensitive to chemicals through previous experience or published literature, and ideally historical control data should be available. The bird species most commonly used in toxicity testing have all been domesticated:
• the waterfowl species mallard duck (Anas platyrhynchos) is in the mid range of sensitivity to chemicals, an omnivorous feeder, abundant in many parts of the world, and a precocial species; it is raised commercially and test birds show wild-type plumage;
• the ground-dwelling game species bobwhite quail (Colinus virginianus) is common in the USA and similar in sensitivity to the mallard; it feeds primarily on seeds and invertebrates and is a precocial species; it is raised commercially and test birds show wild-type plumage;
• the ground-dwelling Japanese quail (Coturnix coturnix japonica) occurs naturally in East Asia and feeds on plant material and terrestrial invertebrates. It has been domesticated to a far greater extent than the mallard or bobwhite quail, and birds raised commercially (for eggs or for meat) are further removed genetically from the wild type. This species is unique in that the young of the year mature and breed within 12 months;
• the passerine, altricial zebra finch (Taeniopygia guttata) occurs naturally in Australia and Indonesia; zebra finches eat seeds, are kept and sold as pets, and are not far removed from the wild type;
• the budgerigar (Melopsittacus undulatus), also altricial, occurs naturally in Australia; budgerigars eat seeds, are bred in captivity, and are kept and sold as pets.

Other species of birds are sometimes used for specific, often tailor-designed studies. These species include:
• the canary (Serinus canaria domestica),
• the rock pigeon (Columba livia),
• the house sparrow (Passer domesticus),
• the red-winged blackbird (Agelaius phoeniceus) - US only,
• the ring-necked pheasant (Phasianus colchicus),
• the grey partridge (Perdix perdix).

Most common avian toxicity tests
Table 1 provides an overview of the avian toxicity tests that have been developed over the past approximately 40 years, the most commonly used guidelines, the recommended species, the endpoints recorded in each of these tests, the typical age of birds at the start of the test, the test duration and the length of exposure.

Table 1. Most common avian toxicity tests with their recommended species and key characteristics.

Avian toxicity test | Guideline | Recommended species | Endpoints | Age at start of test | Length of study | Length of exposure
Acute oral gavage - sequential testing, on average 26 birds | OECD 223 | bobwhite quail, Japanese quail, zebra finch, budgerigar | mortality, clinical signs, body weight, food consumption, gross necropsy | young birds not yet mated, at least 16 weeks old | at least 14 days | single oral dose at beginning of test
Acute oral gavage - 60 bird design | USEPA OCSPP 850.2100 | bobwhite quail; a single passerine species recommended | see above | young birds not yet mated, at least 16 weeks old | 14 days | single oral dose at beginning of test
Sub-acute dietary toxicity* | OCSPP 850.2200 | bobwhite quail, mallard | see above | mallard: 5 days old; bobwhite quail: 10-14 days old | 8 days | 5 days
One-generation reproduction | OECD 206; OCSPP 850.2300 | bobwhite quail, mallard, Japanese quail** | adult body weight, food consumption, egg production, fertility, embryo survival, hatch rate, chick survival | approaching first breeding season: mallard 6 to 12 months old, bobwhite quail 20-24 weeks | 20-22 weeks | 10 weeks
Avoidance testing (pen trials) | OECD report | as closely related to the species at risk as possible, e.g. sparrow, rock dove, pheasant, grey partridge | food intake, mortality, sublethal effects | young adults if possible (depends on study design) | one to several days, depending on the study design | one to several days, depending on the study design
Two-generation endocrine disruptor test | OCSPP 890.2100 | Japanese quail | in addition to the endpoints listed for the one-generation study: male sexual behaviour, biochemical, histological and morphological endpoints | 4 weeks post hatch | 38 weeks | 8 weeks for the adult (F0) generation + 14 weeks for the F1 generation
Field studies to refine food residues in higher tier bird risk assessments | Appendix N of the EFSA Bird and Mammal Guidance | depends on the species at risk in the area of pesticide use | depends on the study design developed | uncontrollable in a field study | depends on the study design developed | depends on the study design developed

* This study is hardly ever asked for anymore.
** Only in the OECD guideline.

Acute toxicity testing
To assess the short-term risk to birds, acute toxicity tests must be performed for all pesticides (i.e. their active ingredients) to which birds are likely to be exposed, resulting in an LD50 (mg/kg body weight) (see section on Concentration-response relationships). The acute oral toxicity test involves gavage or capsule dosing at the start of the study (Figure 2). Care must be taken when dosing birds by oral gavage: some species, including the mallard duck, pigeons and some passerine species, can readily regurgitate, leading to uncertainty about the dose actually received. Table 1 gives the bird species recommended in the OECD and USEPA guidelines, respectively.
Gamebirds and passerines are a good combination to take account of phylogeny, and a good starting point for understanding the distribution of species sensitivity. The OECD 223 guideline uses on average 26 birds in a sequential design (Edwards et al., 2017). Responses of birds at each stage of the test are combined to estimate, and then improve the estimate of, the LD50 and the slope of the dose-response curve. Testing can be stopped at any stage once the accuracy of the LD50 estimate meets the requirements of the risk assessment, thereby using far fewer birds, in compliance with the 3Rs (reduction, refinement and replacement). If toxicity is expected to be low, 5 birds are dosed at the limit dose of 2000 mg/kg body weight (the highest dose considered humanely acceptable for oral gavage). If there is no mortality in the limit test after 14 days, the study is complete and the LD50 is greater than 2000 mg/kg body weight. If there is mortality, a single individual is treated at each of 4 different doses in Stage 1. From these results a working estimate of the LD50 is derived, which is used to select 10 further dose levels for a better estimate of the LD50 in Stage 2. If a slope is required, a further Stage 3 is run using 10 more birds at a combination of doses selected on the basis of a provisional estimate of the slope. The USEPA guideline is a single-stage design preceded by a range-finding test (used only to set the doses for the main test). The LD50 test uses 60 birds (10 at each of five test doses and 10 birds in the control group). Despite the higher number of birds used, the ability to estimate a slope is poor compared to OECD 223 (the ability to calculate the LD50 is similar).
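Whichever guideline is followed, the LD50 and slope are ultimately obtained by fitting a dose-response model to the pooled dose-mortality data. The following minimal sketch (in Python) illustrates the principle with a two-parameter log-logistic fit; all doses, group sizes and mortality counts are invented for illustration, and real guideline studies use dedicated statistical procedures rather than this simplified least-squares fit.

```python
# Minimal sketch: estimating an LD50 and slope from acute oral dose-mortality
# data with a two-parameter log-logistic model. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ld50, slope):
    """Expected mortality fraction at a given dose (mg/kg body weight)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

doses = np.array([50.0, 100.0, 200.0, 400.0, 800.0])  # mg/kg bw (hypothetical)
n_birds = np.array([5, 5, 5, 5, 5])                   # birds dosed per level
n_dead = np.array([0, 1, 2, 4, 5])                    # observed mortality

mortality = n_dead / n_birds
(ld50, slope), _ = curve_fit(log_logistic, doses, mortality, p0=[200.0, 2.0])
print(f"Estimated LD50: {ld50:.0f} mg/kg bw, slope: {slope:.1f}")
```

In the sequential OECD 223 design, a fit of this kind is refined stage by stage, which is why testing can stop as soon as the LD50 estimate is sufficiently precise.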
Dietary toxicity testing
For the medium-term risk assessment, an avian dietary toxicity test was regularly performed in the past, exposing juveniles (chicks) of bobwhite quail, Japanese quail or mallard to a treated diet. This test determines the median lethal concentration (LC50) of a chemical following a 5-day dietary exposure. Given the scientific limitations and animal welfare concerns related to this test (EFSA, 2009), current European regulations recommend performing it only when the LD50 derived from the medium-term study is expected to be lower than the acute LD50, i.e. if the chemical is cumulative in its effect.

Reproduction testing
One-generation reproduction tests in bobwhite quail and/or mallard are requested for the registration of all pesticides to which birds are likely to be exposed during the breeding season. Table 1 presents the two standard studies: OECD Test 206 and the USEPA OCSPP 850.2300 study. The substance to be tested is mixed into the diet from the start of the test. The birds are fed ad libitum for a recommended period of 10 weeks before they begin laying eggs in response to a change in photoperiod. The egg-laying period should last at least ten weeks. Endpoints include adult body weight, food consumption, macroscopic findings at necropsy and reproductive endpoints, with the number of 14-day-old surviving chicks/ducklings as the overall endpoint. The OECD guideline states that the Japanese quail (Coturnix coturnix japonica) is also acceptable.

Avoidance (or repellency) testing
Avoidance behaviour by birds in the field could be seen as reducing the risk of exposure to a pesticide and could therefore be considered in the risk assessment. However, the occurrence of avoidance in the laboratory has a confounding effect on estimates of toxicity in dietary studies (LC50). Avoidance tests have thus far been most relevant in the risk assessment of seed treatments. A number of factors need to be taken into account, including the feeding rate and dietary concentration, which may determine whether avoidance or mortality is the outcome. A comprehensive OECD report provides an overview of the guideline development and research activities that have taken place to date under the OECD flag. Sometimes these studies are performed as semi-field (or pen) studies.

Endocrine disruptor testing
Endocrine-disrupting substances can be defined as materials that cause effects on reproduction through the disruption of endocrine-mediated processes. If there is reason to suspect that a substance might have an endocrine effect in birds, a two-generation avian test designed specifically for the evaluation of endocrine effects could be performed. This test has been developed by the USEPA (OCSPP 890.2100) but has not, to date, been accepted as an OECD test. It uses the Japanese quail as the preferred species. The main reasons that the Japanese quail was selected for this test are:
1) the Japanese quail is a precocial species, as mentioned earlier. This means that at hatch Japanese quail chicks are much further along in their sexual differentiation and development than chicks of altricial species would be. Hormonal processes occurring in Japanese quail at these early stages of development can be disturbed by chemicals maternally deposited in the egg (Ottinger and Dean, 2011). Altricial species, conversely, undergo these same sexual development stages post-hatch and can then be exposed to chemicals in food that might affect the same hormonal processes;
2) as mentioned above, the young of the year mature and breed within 12 months, which makes the test more efficient than if one used bobwhite quail or mallard.
It is argued among avian toxicologists that a zebra finch endocrine assay system should be developed alongside the Japanese quail system, as this would allow a more systematic determination of the differences between responses to EDCs in altricial and precocial species, thereby allowing a better evaluation, and subsequent risk assessment, of potential endocrine effects in birds. Differences in parental care, nesting behaviour and territoriality are examples of aspects that could be incorporated in such an approach (Jones et al., 2013).

Field studies
Field studies can be used to test for adverse effects on a range of species simultaneously, under conditions of actual exposure in the environment (Hart et al., 2001). The numbers of sites and control fields, and the methods used (corpse searches, censusing and radiotracking), need careful consideration for optimal use of field studies in avian toxicology. The field site will define the species studied, and it is important to consider the relevance of those species in other locations. For further reading about techniques and methods used in avian field research, Sutherland et al. (2004) and Bibby et al. (2000) are recommended.

References
Bibby, C., Jones, M., Marsden, S. (2000). Expedition Field Techniques: Bird Surveys. BirdLife International.
Birdlife International (2018). The Status of the World's Birds. https://www.birdlife.org/sites/default/files/attachments/BL_ReportENG_V11_spreads.pdf
Brooks, A.C., Fryer, M., Lawrence, A., Pascual, J., Sharp, R. (2017).
Reflections on bird and mammal risk assessment for plant protection products in the European Union: Past, present and future. Environmental Toxicology and Chemistry 36, 565-575.
Edwards, P.J., Leopold, A., Beavers, J.B., Springer, T.A., Chapman, P., Maynard, S.K., Hubbard, P. (2017). More or less: Analysis of the performance of avian acute oral guideline OECD 223 from empirical data. Integrated Environmental Assessment and Management 13, 906-914.
Hart, A., Balluff, D., Barfknecht, R., Chapman, P.F., Hawkes, T., Joermann, G., Leopold, A., Luttik, R. (Eds.) (2001). Avian Effects Assessment: A Framework for Contaminants Studies. A report of a SETAC workshop on 'Harmonised Approaches to Avian Effects Assessment', held with the support of the OECD in Woudschoten, The Netherlands, September 1999. A SETAC Book.
Jones, P.D., Hecker, M., Wiseman, S., Giesy, J.P. (2013). Birds. Chapter 10 in: Matthiessen, P. (Ed.) Endocrine Disrupters: Hazard Testing and Assessment Methods. Wiley & Sons.
Kendall, R.J., Lacher Jr, T.E. (Eds.) (1994). Wildlife Toxicology and Population Modelling: Integrated Studies of Agroecosystems. Special Publication of SETAC.
Murk, A.J., Boudewijn, T.J., Meininger, P.L., Bosveld, A.T.C., Rossaert, G., Ysebaert, T., Meire, P., Dirksen, S. (1996). Effects of polyhalogenated aromatic hydrocarbons and related contaminants on common tern reproduction: Integration of biological, biochemical, and chemical data. Archives of Environmental Contamination and Toxicology 31, 128-140.
Ottinger, M.A., Dean, K. (2011). Neuroendocrine impacts of endocrine-disrupting chemicals in birds: Life stage and species sensitivities. Journal of Toxicology and Environmental Health, Part B: Critical Reviews, 26 July 2011.
Sutherland, W.J., Newton, I., Green, R.E. (Eds.) (2004). Bird Ecology and Conservation: A Handbook of Techniques. Oxford University Press.

4.3.7. Question 1
Give three reasons why birds are an important model in ecotoxicology.

4.3.7. Question 2
What is the objective of avian toxicity testing?

4.3.7. Question 3
Which avian species are most commonly used, and why?

4.3.7. Question 4
What is the difference between precocial and altricial species?

4.3.7. Question 5
Which endpoints are typically used in standardised avian toxicity tests?

4.3.7. Question 6
Name two examples of how one might reduce uncertainty when assessing the risks of chemicals to birds.

4.3.8. In vitro toxicity testing
Author: Timo Hamers
Reviewer: Arno Gutleb
Learning goals
You should be able to:
• explain the difference between in vitro and in vivo bioassays
• describe the principle of a ligand binding assay, an enzyme inhibition assay, and a reporter gene bioassay
• explain the difference between primary cell cultures, finite cell lines, and continuous cell lines
• describe different levels of cell differentiation potency, from totipotent to unipotent
• indicate how in vitro cell cultures of differentiated cells can be obtained from embryonic stem cells and from induced pluripotent stem cells
• give examples of endpoints that can be measured in cell-based bioassays
• discuss in your own words a future perspective on in vitro toxicity testing
Keywords: ligand binding assay; enzyme inhibition assay; primary cell culture; cell line; stem cell; organ on a chip

Introduction
In vitro bioassays refer to testing methods making use of tissues, cells, or proteins. The term "in vitro" (meaning "in glass") refers to the glass test tubes or petri dishes that were traditionally used to perform these types of toxicity tests.
Nowadays, in vitro bioassays are more often performed in plastic microtiter well plates containing multiple (6, 12, 24, 48, 96, 384, or 1536) test containers (called "wells") per plate (Figure 1). In vitro bioassays are usually performed to screen individual substances or samples for specific bioactive properties. As such, in vitro toxicology refers to the science of testing substances or samples for specific toxic properties using tissues, cells, or proteins. Most in vitro bioassays show a mechanism-specific response, which is for instance indicative of the inhibition of a specific enzyme or the activation of a specific molecular receptor. Moreover, in vitro bioassays are usually performed in small test volumes and have short test durations (incubation periods usually range from 15 minutes to 48 hours). As a consequence, multiple samples can be tested simultaneously in a single experiment, and multiple experiments can be performed in a relatively short period. This "medium-throughput" characteristic of in vitro bioassays can even be increased to "high-throughput" if the time-limiting steps in the test procedure (e.g. sample preparation, cell culturing, pipetting, read-out) are further automated.

Toxicity tests making use of bacteria are also often performed in small volumes, allowing short test durations and high throughput. Still, such tests make use of intact organisms and should therefore strictly be considered in vivo bioassays. This holds especially true if bacteria are used to study endpoints like survival or population growth. However, bacterial test systems studying specific toxic mechanisms, such as the Ames test used to screen substances for mutagenic properties (see section on Carcinogenicity and Genotoxicity), are often considered in vitro bioassays, because of the similarity in test characteristics when compared to in vitro toxicity tests with cells derived from higher organisms.

Protein-based assays
The simplest form of an in vitro binding assay consists of a purified protein that is incubated with a potentially toxic substance or sample. Purified proteins are usually obtained by isolation from an intact organism or from cultures of recombinant bacteria, which are genetically modified to express the protein of interest. Ligand binding assays are used to determine whether the test substance is capable of binding to the protein, thereby inhibiting the binding of the natural (endogenous) ligand to that protein (see section on Protein Inactivation). Proteins of interest are, for instance, receptor proteins or transporter proteins. Ligand binding assays often make use of a natural ligand that has been labelled with a radioactive isotope. The protein is incubated with the labelled ligand in the presence of different concentrations of the test substance. If binding of the test substance to the protein prevents ligand binding, the free ligand shows a concentration-dependent increase in radioactivity (see Figure 2). Consequently, the ligand-protein complex shows a concentration-dependent decrease in radioactivity. Alternatively, the natural ligand may be labelled with a fluorescent group. Binding of such a labelled ligand to the protein often causes an increase in fluorescence. Consequently, a decrease in fluorescence is observed if a test substance prevents ligand binding to the protein.
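To make the displacement principle concrete, the sketch below computes, for a hypothetical one-site competitive binding experiment, how much of the labelled ligand remains protein-bound at increasing test-substance concentrations, and converts the resulting IC50 into an inhibition constant (Ki) with the Cheng-Prusoff equation. All affinities and concentrations are assumed values, not data from a real assay.

```python
# Minimal sketch of a competitive ligand-binding read-out: as the test
# substance displaces the labelled ligand from the protein, label in the
# bound fraction falls and label in the free fraction rises.
import numpy as np

KD_LIGAND = 1.0    # nM, affinity of the labelled ligand (assumed)
LIGAND_CONC = 2.0  # nM, fixed labelled-ligand concentration (assumed)
IC50_TEST = 50.0   # nM, half-maximal displacement by the test substance

test_conc = np.logspace(-1, 4, 6)  # nM, serial dilutions of the test substance

# One-site competition: fraction of the label still bound to the protein.
bound = 1.0 / (1.0 + test_conc / IC50_TEST)

for conc, frac in zip(test_conc, bound):
    print(f"[test] = {conc:9.1f} nM -> {100 * frac:5.1f}% of label protein-bound")

# Cheng-Prusoff conversion of the IC50 into an inhibition constant Ki:
ki = IC50_TEST / (1.0 + LIGAND_CONC / KD_LIGAND)
print(f"Ki = {ki:.1f} nM")
```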
Enzyme inhibition assays are used to determine whether a test substance is capable of inhibiting the enzymatic activity of a protein. Enzymatic activity is usually determined as the conversion rate of a substrate into a product. Enzyme inhibition is observed as a decrease in conversion rate, corresponding to lower concentrations of product and higher concentrations of substrate after different periods of incubation. Quantitative measures of substrate disappearance or product formation can be obtained by chemical analysis of the substrate or the product. Preferably, however, the reaction rate is measured by spectrophotometry or fluorescence. This is achieved by performing the reaction with a substrate that has a specific colour or fluorescence itself, or that yields a product with a specific colour or fluorescence, in some cases after reaction with an additional indicator compound. A well-known example of an enzyme inhibition assay is the acetylcholinesterase inhibition assay (see section on Diagnosis - In vitro bioassays).
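As an illustration of how enzyme inhibition translates into a measurable decrease in conversion rate, the sketch below assumes simple Michaelis-Menten kinetics with a competitive inhibitor; all kinetic constants and concentrations are hypothetical.

```python
# Minimal sketch of an enzyme-inhibition read-out: the substrate conversion
# rate (e.g. colour or fluorescence change per minute) declines as inhibitor
# concentration increases. Competitive Michaelis-Menten kinetics are assumed.
VMAX = 100.0  # product formed per minute at saturating substrate (arbitrary)
KM = 10.0     # uM, substrate concentration at half-maximal rate (assumed)
KI = 5.0      # uM, inhibition constant of the test substance (assumed)
S = 20.0      # uM, substrate concentration in the assay (assumed)

def rate(inhibitor_um):
    """Conversion rate in the presence of a competitive inhibitor."""
    return VMAX * S / (KM * (1.0 + inhibitor_um / KI) + S)

v0 = rate(0.0)  # uninhibited reference rate
for i in (0.0, 1.0, 5.0, 25.0, 125.0):
    v = rate(i)
    print(f"[I] = {i:6.1f} uM -> rate {v:5.1f}, inhibition {100 * (1 - v / v0):5.1f}%")
```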
Cell cultures
Cell-based bioassays make use of cell cultures that are maintained in the laboratory. Cell culturing starts with the mechanical or enzymatic isolation of single cells from a tissue (obtained from an animal or a plant). Subsequently, the cells are grown in cell culture medium, i.e. a liquid that contains all essential nutrients required for optimal cell growth (e.g. growth factors, vitamins, amino acids) and regulates the physicochemical environment of the cells (e.g. pH buffer, salinity). Several types of cell cultures can be distinguished (Figure 3).

Primary cell cultures consist of cells that are directly isolated from a donor organism and maintained in vitro. Typically, such cultures consist of either a suspension of non-adherent cells or a monolayer of adherent cells attached to a substrate (often the bottom of the culture vessel). The cells may undergo several cell divisions until the cell suspension becomes too dense or the adherent cells grow on top of each other. The cells can then be further subcultured by transferring part of the cells from the primary culture to a new culture vessel containing fresh medium. This progeny of the primary cell culture is called a cell line, and the event of subculturing is called a passage. Typically, cell lines derived from primary cells undergo senescence and stop proliferating after a limited number (20-60) of cell divisions. Consequently, such a finite cell line can undergo only a limited number of passages. Primary cell cultures and the finite cell lines derived from them have the advantage that they closely resemble the physiology of the cells in vivo. The disadvantage of such cell cultures for toxicity testing is that they divide relatively slowly, require specific culturing conditions, and are finite. New cultures can only be obtained from new donor organisms, which is time-consuming and expensive, and may introduce genetic variation.

Alternatively, continuous cell lines have been established, which have an indefinite life span because the cells are immortal. Due to genetic mutations, cells from a continuous cell line can undergo an indefinite number of cell divisions and behave like cancer cells. The immortalizing mutations may have been present in the original primary cell culture, if the cells were isolated from malignant tumour tissue. Alternatively, the original finite cell line may have been transformed into a continuous cell line by introducing a virally or chemically induced mutation. The advantage of continuous cell lines is that the cells proliferate quickly and are easy to culture and manipulate (e.g. by genetic modification). The disadvantage is that continuous cell lines have a different genotype and phenotype than the original healthy cells in vivo (e.g. they may have lost enzymatic capacity) and behave like cancer cells (e.g. they may have lost their differentiating capacities and the ability to form tight junctions).

Differentiation models
To study the toxic effects of compounds in vitro, toxicologists prefer to use cell cultures that resemble differentiated, healthy cells rather than undifferentiated cancer cells. Differentiation models have therefore gained increasing attention in in vitro toxicology in recent years. Such models are based on stem cells, i.e. cells that possess the potency to differentiate into somatic cells. Stem cells can be obtained from embryonic tissues at different stages of normal development, each with their own potency to differentiate into somatic cells (see Figure 1 in Berdasco and Esteller, 2011). In the very early embryonic stage, cells from the morula stage (i.e. after a few cell divisions of the zygote) are totipotent, meaning that they can differentiate into all cell types of an organism. Later in development, cells from the inner cell mass of the trophoblast are pluripotent, meaning that they can differentiate into all cell types except extra-embryonic cells. During gastrulation, cells from the different germ layers (i.e. ectoderm, mesoderm, and endoderm) are multipotent, meaning that they can differentiate into a restricted number of cell types. Further differentiation results in precursor cells that are unipotent, meaning that they are committed to differentiate into a single, ultimate, differentiated cell type.

While remaining undifferentiated, in vitro embryonic stem cell (ESC) cultures can divide indefinitely, because they do not suffer from senescence. An ESC line cannot, however, be considered a continuous (or immortalized) cell line, because the cells contain no genetic mutations. ESCs can be differentiated into the cell type of interest by manipulating the culture conditions in such a way that specific signalling pathways are stimulated or inhibited in the same sequence as occurs during in vivo cell-type differentiation. Manipulation may consist of the addition of growth factors, transcription factors, cytokines, hormones, stress factors, etc. This approach requires a good understanding of which factors affect the decision steps in the cell lineage of the cell type of interest. Differentiation of ESCs into differentiated cells is applicable not only in in vitro toxicity testing, but also in drug discovery, regenerative medicine, and disease modelling. Still, the destruction of a human embryo for the isolation of - mainly pluripotent - human ESCs (hESCs) raises ethical issues. Therefore, alternative sources of hESCs have been explored. The isolation and subsequent in vitro differentiation of multipotent stem cells from amniotic fluid (collected during caesarean sections), umbilical cord blood, and adult bone marrow is a very topical field of research.

A revolutionary development in the field of non-embryonic stem cell differentiation models was the discovery that differentiated cells can be reprogrammed into undifferentiated cells with pluripotent capacities, called induced pluripotent stem cells (iPSCs) (Figure 4). In 2012, the Nobel Prize in Physiology or Medicine was awarded to John B. Gurdon and Shinya Yamanaka for this ground-breaking discovery.
Reprogramming of differentiated cells isolated from an adult donor is achieved by exposing the cells to a mixture of reprogramming factors, consisting of transcription factors typical of pluripotent stem cells. The obtained iPSCs can be differentiated again (similarly to ESCs) into any type of differentiated cell for which the required cell-lineage conditions are known and can be simulated in vitro. Whereas iPSC-based differentiation models require the complete reprogramming of a differentiated somatic cell back to the stem cell level, transdifferentiation (or lineage reprogramming) is an alternative technique by which differentiated somatic cells can be transformed into another type of differentiated somatic cell without passing through an intermediate pluripotent stage. Fibroblast cell lines in particular are known for their capacity to be transdifferentiated into different cell types, such as neurons or adipocytes (Figure 5).

Cell-based bioassays
In cell-based in vitro bioassays, the cell cultures are exposed to test compounds or samples and their response is measured. In principle, all types of cell culture models discussed above can be used for in vitro toxicity testing. For reasons of time, money, and convenience, continuous cell lines are commonly used, but primary cell lines and iPSC-derived cell lines are used more and more often, for reasons of higher biological relevance. Endpoints measured in in vitro cell cultures exposed to toxic compounds typically range from effects on cell viability (measured as decreased mitochondrial functioning, increased membrane damage, or changes in cell metabolism; see section on Cytotoxicity) and cell growth, to effects on cell kinetics (absorption, elimination and biotransformation of cell substrates), changes in the cell transcriptome, proteome or metabolome, and effects on cell-type-dependent functioning. In addition, cell differentiation models can be used not only to study the effects of compounds on differentiated cells, but also to study effects on the process of cell differentiation itself, by exposing the cells during differentiation.

A specific type of cell-based bioassay is the reporter gene bioassay, which is often used to screen individual compounds or complex mixtures extracted from environmental samples for their potency to activate or inactivate receptors that regulate the expression of genes in a specific pathway. Reporter gene bioassays make use of genetically modified cell lines or bacteria that contain an incorporated gene construct encoding an easily measurable protein (the reporter protein). This gene construct is developed in such a way that its expression is triggered by a specific interaction between the toxic compound and a cellular receptor. If the receptor is activated by the toxic compound, transcription and translation of the reporter protein take place, which can easily be measured as a change in colour, fluorescence, or luminescence (see section on Diagnosis - In vitro bioassays).
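The read-out of a reporter gene bioassay is typically summarized as an EC50, the concentration causing half-maximal reporter activity. The sketch below fits a four-parameter logistic (Hill) curve to invented luminescence readings to illustrate how such an EC50 is derived; the data points and starting values are hypothetical, not from a real assay.

```python
# Minimal sketch of a reporter gene read-out: luminescence increases with the
# concentration of a receptor-activating compound, and an EC50 is estimated
# with a four-parameter logistic (Hill) fit.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic curve for reporter activity vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])      # nM (hypothetical)
lum = np.array([102.0, 110.0, 180.0, 520.0, 890.0, 950.0])  # relative light units

params, _ = curve_fit(hill, conc, lum, p0=[100.0, 950.0, 10.0, 1.0])
print(f"Estimated EC50: {params[2]:.1f} nM")
```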
Future developments
Although there is a societal need for a non-toxic environment, there is also a societal demand to Reduce, Refine and Replace animal studies (the three Rs principle). Replacement of animal studies by in vitro tests requires that the in vitro results obtained are indicative and predictive of what happens in the in vivo situation. It is obvious that a cell culture consisting of a single cell type is not comparable to a complex organism. For instance, toxicokinetic aspects are hardly taken into account in cell-based bioassays. Although some cells may have metabolic capacities, processes like absorption, distribution, and elimination are not represented, as exposure usually takes place directly on the cells. Moreover, cell cultures often lack the repair mechanisms, feedback loops, and other interactions with other cell types, tissues and organs found in intact organisms. To expand the scope of in vitro - in vivo extrapolation (IVIVE), more complex in vitro models are nowadays being developed that more closely resemble the in vivo situation. For instance, whereas cell culturing was traditionally done in 2D monolayers (i.e. in layers one cell thick), 3D cell culturing is gaining ground. The advantage of 3D culturing is that it represents a more realistic type of cell growth, including cell-cell interactions, polarization, differentiation, extracellular matrix, diffusion gradients, etc. For epithelial cells (e.g. lung cells), such 3D cultures can even be grown at the air-liquid interface, reflecting the in vivo situation. Another development is cell co-culturing, where different cell types are cultured together. For instance, two cell types that interact in an organ can be co-cultured. Alternatively, a differentiated cell type that has poor metabolic capacity can be co-cultured with a liver cell in order to take possible detoxification, or bioactivation after biotransformation, into account. The latest development in increasing the complexity of in vitro test systems is the so-called organ-on-a-chip device, in which different cell types are co-cultured in miniaturized small channels. The cells can be exposed to different flows representing, for instance, the blood stream, which may contain toxic compounds (see for instance the video clips at https://wyss.harvard.edu/technology/human-organs-on-chips/). Based on similar techniques, even human body-on-a-chip devices can be constructed. Such chips contain different miniaturized compartments containing cell co-cultures representing different organs, all interconnected by channels representing a microfluidic circulatory system (Figure 6). Although such devices are in their infancy and regularly run into practical difficulties, it is to be expected that these innovative developments will play their part in the near future of toxicity testing.

4.3.8. Question 1
What is the principle of a ligand binding assay?

4.3.8. Question 2
What is the principle of an enzyme inhibition assay?

4.3.8. Question 3
What is the principle of a reporter gene bioassay?

4.3.8. Question 4
What are the advantages and disadvantages of primary cell cultures versus continuous cell lines?

4.3.8. Question 5
What is the difference between embryonic stem cells and induced pluripotent stem cells?

In preparation

4.3.9. Human toxicity testing - II. In vitro tests (Draft)
Author: Nelly Saenen
Reviewers: Karen Smeets, Frank Van Belleghem
Learning objectives:
You should be able to
• argue the need for alternative test methods for toxicity
• list commonly used in vitro cytotoxicity assays and explain how they work
• describe different types of toxicity to skin and in vitro test methods to assess this type of toxicity
Keywords: In vitro, toxicity, cytotoxicity, skin

Introduction
Toxicity tests are required to assess the potential hazards of new compounds to humans.
These tests reveal the species-, organ- and dose-specific toxic effects of the compound under investigation. Toxicity can be observed either in in vitro studies using cells or cell lines (see section on In vitro bioassays) or by in vivo exposure of laboratory animals, and involves different durations of exposure (acute, subchronic, and chronic). In line with Directive 2010/63/EU on the protection of animals used for scientific purposes, the use of alternatives to animal testing is encouraged (OECD: alternative methods for toxicity testing). The first step towards replacing animals is to use in vitro methods that can predict acute toxicity. In this chapter, we present acute in vitro cytotoxicity tests (cytotoxicity = the quality of being toxic to cells) and tests for skin corrosion, irritation, phototoxicity and sensitisation, as skin is the largest organ of the body.

1. Cytotoxicity tests
Cytotoxicity tests are in vitro biological evaluation and screening tests used to observe cell viability. Viability levels of cells are good indicators of cell health. Conventionally used cytotoxicity tests include dye exclusion or uptake assays such as the Trypan Blue Exclusion (TBE) and Neutral Red Uptake (NRU) assays.

The TBE test is used to determine the number of viable cells present in a cell suspension. Live cells possess intact cell membranes that exclude certain dyes, such as trypan blue, whereas dead cells do not. In this assay, a cell suspension incubated with serial dilutions of the test compound is mixed with the dye and then visually examined. A viable cell has a clear cytoplasm, whereas a nonviable cell has a blue cytoplasm. The number of viable and/or dead cells per unit volume is determined by light microscopy using a haemocytometer counting chamber (Figure 1). This method is simple and inexpensive and a good indicator of membrane integrity, but counting errors (~10%) can occur due to poor dispersion of the cells or improper filling of the counting chamber.

The NRU assay assesses the cellular uptake of a dye (neutral red) in the presence of the substance under study (see e.g. Figure 1 in Repetto et al., 2008). The test is based on the ability of viable cells to incorporate and bind neutral red in the lysosomes, a process that depends on universal structures and functions of cells (e.g. cell membrane integrity, energy production and metabolism, transport and secretion of molecules). Viable cells take up neutral red via active transport and incorporate the dye into their lysosomes, while non-viable cells cannot. After washing, the dye incorporated by viable cells is released under acidified extraction conditions, and the amount of released dye can be measured by spectrophotometry.

Nowadays, colorimetric assays for cell viability have become popular. For example, the MTT assay (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) tests cell viability by assessing the activity of mitochondrial enzymes. NAD(P)H-dependent oxidoreductase enzymes, which under defined conditions reflect the number of viable cells, are capable of reducing the yellow tetrazolium salt to an insoluble purple crystalline formazan product. After solubilizing the end product in dimethyl sulfoxide (DMSO), it can be quantified by light absorbance at a specific wavelength. This method is easy to use, safe and highly reproducible. One disadvantage is that MTT formazan is insoluble, so DMSO is required to solubilize the crystals.
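To illustrate how a colorimetric read-out such as MTT absorbance is converted into percentage cell viability, the sketch below blank-corrects the readings and normalises them to the untreated control; the absorbance values and dose labels are invented for illustration.

```python
# Minimal sketch of an MTT viability calculation: absorbance readings are
# blank-corrected and expressed as a percentage of the untreated control.
blank = 0.05    # absorbance of medium + MTT, no cells (hypothetical)
control = 1.25  # absorbance of untreated (solvent control) cells (hypothetical)
treated = {"1 uM": 1.20, "10 uM": 0.95, "100 uM": 0.40}  # hypothetical doses

for dose, absorbance in treated.items():
    viability = 100.0 * (absorbance - blank) / (control - blank)
    print(f"{dose}: {viability:.0f}% viable")
```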
2. Skin corrosion and irritation
Skin corrosion refers to the production of irreversible damage to the skin, namely visible necrosis (localized death of living cells; see section on Cell death) through the epidermis and into the dermis, occurring after exposure to a substance or mixture. Skin irritation is a less severe effect, in which a local inflammatory reaction of the skin is observed after exposure to a substance or mixture. Examples of such substances are detergents and alkalis, which commonly affect the hands. The identification and classification of irritant substances has conventionally been achieved by means of skin or eye observation in vivo. Traditional animal testing used rabbits because of their thin skin. In the Draize test, for example, the test substance is applied to the eye or shaved skin of a rabbit and covered for 24 h. After 24 and 72 h, the eye or skin is visually examined and graded subjectively based on the appearance of erythema and oedema. As these in vivo tests have been heavily criticized, they are now being phased out in favour of in vitro alternatives.

The Skin Corrosion Test (SCT) and Skin Irritation Test (SIT) are in vitro assays that can be used to identify whether a chemical has the potential to corrode or irritate skin. The method uses a three-dimensional (3D) human skin model (the Episkin model), which comprises the main basal, suprabasal, spinous and granular layers and a functional stratum corneum (the outer barrier layer of the skin). It involves topical application of a test substance and subsequent assessment of cell viability (MTT assay). Test compounds considered corrosive or irritant are identified by their ability to decrease cell viability below the defined threshold level (the lethal dose at which 50% of the cells are still viable - LD50).

3. Skin phototoxicity
Phototoxicity (photoirritation) is defined as a toxic response elicited after the initial exposure of skin to certain chemicals and subsequent exposure to light (e.g. chemicals that absorb visible or ultraviolet (UV) light energy, which induces toxic molecular changes). The 3T3 NRU PT assay is based on an immortalised mouse fibroblast cell line called Balb/c 3T3. It compares the cytotoxicity of a chemical in the presence and absence of a non-cytotoxic dose of simulated solar light. The test expresses the concentration-dependent reduction in the uptake of the vital dye neutral red, measured 24 hours after treatment with the chemical and light irradiation. The exposure to irradiation may alter the cell surface and may thus result in decreased uptake and binding of neutral red. These differences can be measured with a spectrophotometer.

4. Skin sensitisation
Skin sensitisation is the regulatory endpoint aiming at the identification of chemicals able to elicit an allergic response in susceptible individuals. In the past, skin sensitisation was detected by means of guinea pig tests (e.g. the guinea pig maximisation test and the Buehler occluded patch test) or murine tests (e.g. the murine local lymph node assay). The latter is based on quantification of T-cell proliferation in the draining (auricular) lymph nodes behind the ears of mice after repeated topical application of the test compound. The key biological events (Figure 2) underpinning the skin sensitisation process are well established and include:
1. haptenation, the covalent binding of the chemical compound (hapten) to skin proteins (key event 1);
2. signalling, the release of pro-inflammatory cytokines and the induction of cytoprotective pathways in keratinocytes (key event 2);
3. the maturation and mobilisation of dendritic cells, the immunocompetent cells of the skin (key event 3);
4. the migration of dendritic cells bearing hapten-protein complexes from the skin to the draining local lymph node;
5. the presentation of the antigen to naïve T-cells and the proliferation (clonal expansion) of hapten-peptide-specific T-cells (key event 4).

Figure in preparation
Figure 2: Key biological events in skin sensitisation. Figure adapted from

Today, a number of non-animal methods are available, each addressing a specific key mechanism of the induction phase of skin sensitisation. These include the Direct Peptide Reactivity Assay (DPRA), the ARE-Nrf2 Luciferase Test Method (KeratinoSens), the Human Cell Line Activation Test (h-CLAT), the U937 cell line activation test (U-SENS), and the Interleukin-8 Reporter Gene assay (IL-8 Luc assay). Detailed information on these methods can be found on the OECD site: skin sensitisation.

References
OECD site. Alternative methods for toxicity testing. https://ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing
Episkin model. http://www.episkin.com/Episkin
OECD site. Skin sensitization. https://ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing/validated-test-methods/skin-sensitisation
Repetto, G., Del Peso, A., Zurita, J.L. (2008). Neutral red uptake assay for the estimation of cell viability/cytotoxicity. Nature Protocols 3(7), 1125.

4.3.9. Human toxicity testing - III. Carcinogenicity assays (Draft)
Author: Jan-Pieter Ploem
Reviewers: Frank van Belleghem
Learning objectives:
You should be able to
• explain the different approaches used for carcinogen testing
• list some advantages and disadvantages of the different methods
• understand the difference between GTX and NGTX compounds, and its consequences for toxicity testing

Introduction
The term "carcinogenicity" refers to the property of a substance to induce or increase the incidence of cancer after inhalation, ingestion, injection or dermal application. Traditionally, carcinogens have been classified according to their mode of action (MoA). Compounds that interact directly with DNA, resulting in DNA damage or chromosomal aberrations, are classified as genotoxic (GTX) carcinogens. Non-genotoxic (NGTX) compounds do not directly affect DNA; they are believed to affect gene expression and signal transduction, disrupt cellular structures and/or alter cell cycle regulation. The difference in mechanism of action between GTX and NGTX compounds requires a different approach in many cases.

Genotoxic carcinogens
Genotoxicity itself is considered an endpoint in its own right. The occurrence of DNA damage can be determined quite easily by a variety of methods based on both bacterial and mammalian cells. Often a tiered testing strategy is used to evaluate both heritable germ-line damage and carcinogenicity. Currently, eight in vitro assays have been granted OECD guidelines, four of which are commonly used.

• The Ames test
The gold standard for genotoxicity testing is the Ames test, developed in the early seventies. The test evaluates the potential of a chemical to induce mutations (base pair substitutions, frame shift induction, oxidative stress, etc.) in Salmonella typhimurium. During the safety assessment process, it is the first test performed, unless it is deemed unsuitable for specific reasons (e.g. during testing of antibacterial substances).
With a sensitivity of 70-90%, it is a relatively good predictor of genotoxicity. The principle of the test is fairly simple: a bacterial strain carrying a genetic defect is placed on minimal medium containing the chemical in question. If mutations are induced, the genetic defect will be restored in some cells, rendering those cells able to synthesize the amino acid whose production was deficient in the original cells.

• Escherichia coli reverse mutation assay
The Ames assay is basically a bacterial reverse mutation assay. In this case, different strains of E. coli, deficient in both DNA repair and an amino acid, are used to identify genotoxic chemicals. Often a combination of different bacterial strains is used to make the sensitivity as high as possible.

• In vitro mammalian chromosome aberration assay
Chromosomal mutations can occur in both somatic cells and germ cells, leading to neoplasia or to birth and developmental abnormalities, respectively. There are two types of chromosomal mutations:
Structural changes: stable aberrations such as translocations and inversions, and unstable aberrations such as gaps and breaks.
Numerical changes: aneuploidy (loss or gain of chromosomes) and polyploidy (multiples of the diploid chromosome complement).
To perform the assay, mammalian cells are exposed in vitro to the potential carcinogen and then harvested. The frequency of aberrations is determined by microscopy. The chromosome aberration assay can be, and is, performed with both rodent and human cells, which is an interesting feature with regard to the translational power of the assay.

• In vitro mammalian cell gene mutation test
This mammalian genotoxicity assay utilizes the HPRT gene, an X-chromosome-located reporter gene. The test relies on the fact that cells with an intact HPRT gene are susceptible to the toxic effects of 6-thioguanine, while mutants are resistant to this purine analogue. Wild-type cells are sensitive to the cytostatic effect of the compound, while mutants are able to proliferate in the presence of 6-thioguanine.

• Micronucleus test
Next to the four assays mentioned above, there is a more recently developed test that has already proven to be a valuable resource for genotoxicity testing. It provides an alternative to the chromosome aberration assay but can be evaluated faster, and it allows for automated measurement, as the analysis of the damage is less subjective. Micronuclei are "secondary" nuclei formed as a result of aneugenic or clastogenic damage.

It is important to note that these assays are all described here from an in vitro perspective. However, an in vivo approach can always be used (see the two-year rodent assay below). In that case, live animals are exposed to the compound, after which specific cells are harvested. The advantage of this approach is the presence of the natural niche in which the susceptible cells normally grow, resulting in a more relevant range of effects. The downside of in vivo assays is the current ethical pressure on this kind of method; several bodies actively promote the development and use of in vitro or other non-animal alternatives.

• Two-year rodent carcinogenicity assay
For over 50 years, the two-year rodent carcinogenicity assay has been the gold standard for carcinogenicity testing. The assay relies on exposure to a compound during a major part of an organism's lifespan.
During the further development of the assay, a 2-species/2-sex setup became the preferred method, as some compounds showed different results in e.g. rats and mice, and even between male and female individuals. In this approach, model organisms are exposed to a compound for two years. Depending on the possible route of exposure (i.e. inhalation, ingestion, skin/eye/... contact) once the compound enters the relevant industry, a specific route of exposure to the model is chosen. During this period, the health of the model organisms is documented through different parameters, and on this basis a conclusion about the compound is drawn.

Non-genotoxic carcinogens
Carcinogens that do not cause direct DNA damage are classified as NGTX compounds. Because of the large number of potentially malign pathways or effects that could be induced, the identification of NGTX carcinogens is significantly more difficult than that of GTX compounds. The two-year rodent carcinogenicity assay is one of the assays capable of accurately identifying NGTX compounds. The use of transgenic models has greatly increased the sensitivity and specificity of this assay towards both groups of carcinogens, while also improving the refinement of the assay by shortening the time required to reach a conclusion about the compound. In vitro methods to identify NGTX compounds are rare: few alternative assays are able to cope with the vast variety of possible effects caused by these compounds, resulting in many false negatives. However, cell-morphology-based methods, such as the cell transformation assay, can be a good starting point for developing methods for this type of carcinogen.

4.3.10. Environmental epidemiology - I. Basic principles and study designs
Authors: Eva Sugeng and Lily Fredrix
Reviewers: Ľubica Murínová and Raymond Niesink
Learning objectives:
You should be able to
• describe and apply definitions of epidemiologic research
• name and identify study designs in epidemiology, describe each design, and list its advantages and disadvantages

1. Definitions of epidemiology
Epidemiology (originating from Ancient Greek: epi - upon, demos - people, logos - the study of) is the study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the prevention and control of health problems (Last, 2001). Epidemiologists study human populations with measurements at one or more points in time. When a group of people is followed over time, we call this a cohort (from the Latin cohors, a group of Roman soldiers). In epidemiology, the relationship between a determinant (or risk factor) and a health outcome is investigated. The outcome variable mostly concerns morbidity - a disease, e.g. lung cancer, or a health parameter, e.g. blood pressure - or mortality (death). A determinant is defined as a collective or individual risk factor (or set of factors) that is (causally) related to a health condition, outcome, or other defined characteristic. In human health - and specifically in diseases of complex etiology - sets of determinants often act jointly in relatively complex and long-term processes (International Epidemiological Association, 2014). The people that are the subject of interest are the target population.
In most cases it is impossible, and unnecessary, to include all people from the target population; therefore a sample is taken from the target population, which is called the study population. The sample is ideally representative of the target population (Figure 1). To obtain a representative sample, subjects can be recruited at random.

2. Study designs
Epidemiologic research can be either observational or experimental (Figure 2). Observational studies do not include interference (e.g. allocation of subjects to exposed and non-exposed groups), while experimental studies do. Among observational studies, analytical and descriptive studies can be distinguished. Descriptive studies describe the determinant(s) and outcome without making comparisons, while analytical studies compare groups and derive inferences.

2.1.1. Cross-sectional study
In a cross-sectional study, determinant and outcome are measured at the same time. For example, pesticide levels in urine (determinant) and hormone levels in serum (outcome) are collected at one point in time. The design is quick and cheap, because all measurements take place at the same time. The drawback is that the design does not allow conclusions about causality, that is, whether the determinant precedes the outcome: it might be the other way around, or both might be caused by another factor (the design lacks Hill's criterion of temporality; see Box 1). This study design is therefore mostly hypothesis-generating.

2.1.2 Case-control study
In a case-control study, the sample is selected based on the outcome, while the determinant is measured in the past. In contrast to a cross-sectional study, this design can include measurements at several time points; hence it is a longitudinal study. First, people with the disease (cases) are recruited, and then matched controls (people not affected by the disease), comparable with regard to e.g. age, gender and geographical region, are enrolled in the study. It is important that the controls have the same risk of developing the disease as the cases. The determinant is collected retrospectively, meaning that participants are asked about exposure in the past. The retrospective character of the design poses a risk of recall bias: when people are asked about events that happened in the past, they might not remember them correctly. Recall bias is a form of information bias, in which a measurement error results in misclassification. Bias is defined as a systematic deviation of results or inferences from the truth (International Epidemiological Association, 2014). One should be cautious about drawing conclusions about causality from a case-control study. According to Hill's criterion of temporality (see Box 1), the exposure should precede the outcome, but because the exposure is assessed retrospectively, the evidence may be too weak to draw conclusions about a causal relationship. The benefits are that the design is suitable for research on diseases with a low incidence (a prospective cohort study would yield a low number of cases) and on diseases with a long latency period, i.e. a long time between exposure to the determinant and development of the disease (in a prospective cohort study, it would take many years of follow-up until the disease develops).

An example of a case-control study in environmental epidemiology
Hoffman et al. (2017) investigated papillary thyroid cancer (PTC) and exposure to flame retardant chemicals (FRs) in the indoor environment.
FRs are chemicals added to household products to limit the spread of fire, but they can leach into house dust, through which residents can be exposed. FRs are associated with thyroid disease and thyroid cancer. In this case-control study, PTC cases and matched controls were recruited (outcome), and FR exposure (determinant) was assessed by measuring FRs in the house dust of the participants. The study showed that participants with higher exposure to FRs (bromodiphenyl ether-209 concentrations above the median level) had 2.3 times higher odds (see section Quantifying disease and associations) of having PTC than participants with lower exposure to FRs (bromodiphenyl ether-209 concentrations below the median level).

2.1.3 Cohort study
A cohort study, another type of longitudinal study, includes a group of individuals who are followed over time into the future (prospective) or asked about the past (retrospective). In a prospective cohort study, the determinant is measured at the start of the study and the incidence of the disease is calculated after a certain time period, the follow-up. The study must start with people who are at risk of the disease but not yet affected by it. The prospective design therefore allows the conclusion that there may be a causal relationship, since the health outcome follows the determinant in time (Hill's criterion of temporality). However, interference by other factors is still possible; see paragraph 3 on confounding and effect modification. It is possible to look at more than one health outcome, but the design is less suitable for diseases with a low incidence or a long latency period, because one then needs either a large study population to obtain enough cases, or a long follow-up period to capture the disease. A major issue with this study design is attrition (loss to follow-up): the extent to which participants drop out during the course of the study. Selection bias can occur when a certain type of participant drops out more often, so that the research is conducted on a selective subset of the target population. Selection bias can also occur at the start of a study, when some members of the target population are less likely than others to be included in the study population, so that the sample is not representative of the target population.

An example of a prospective cohort study
De Cock et al. (2016) present a prospective cohort study investigating early-life exposure to chemicals and health effects in later life, the LInking EDCs in maternal Nutrition to Child health (LINC) study. Over 300 pregnant women were recruited during pregnancy. Prenatal exposure to chemicals was measured in, amongst others, cord blood and breast milk, and the children were followed over time, measuring, amongst others, height and weight status. For example, prenatal exposure to dichlorodiphenyl-dichloroethylene (DDE), a metabolite of the pesticide dichlorodiphenyl-trichloroethane (DDT), was assessed by measuring DDE in umbilical cord blood collected at delivery. During the first year, the body mass index (BMI), based on weight and height, was monitored. DDE levels in umbilical cord blood were divided into four equal groups, called quartiles. Boys with the lowest DDE concentrations (the first quartile) had a higher BMI growth curve in the first year than boys with the highest DDE concentrations (the fourth quartile) (De Cock et al., 2016).
2.1.4 Nested case-control study
When a case-control study is carried out within a cohort study, it is called a nested case-control study. Cases in the cohort are selected, and matching non-cases are selected as controls. This design is useful when there is a small number of cases in a prospective cohort study.

An example of a nested case-control study
Engel et al. (2018) investigated attention-deficit hyperactivity disorder (ADHD) in children in relation to prenatal phthalate exposure. Phthalates are added to various consumer products to soften plastics. Exposure occurs through ingestion, inhalation or dermal absorption, and sources include plastic food packaging, volatile household products and personal care products (Benjamin et al., 2017). Engel et al. (2018) carried out a nested case-control study within the Norwegian Mother and Child Cohort (MoBa). The cohort included 112,762 mother-child pairs, of which only a small number were cases with a clinical ADHD diagnosis. A total of 297 cases were randomly sampled from registrations of clinical ADHD diagnoses. In addition, 553 controls without ADHD were randomly sampled from the cohort. Phthalate metabolites were measured in maternal urine collected at mid-pregnancy, and concentrations were divided into five equal groups, called quintiles. Children of mothers in the highest quintile of the summed metabolites of the phthalate bis(2-ethylhexyl) phthalate (DEHP) had 2.99 times higher odds (95% CI: 1.47-5.49) of an ADHD diagnosis (see chapter Quantifying disease and associations) than children of mothers in the lowest quintile.

2.2.1 (Non-)randomized controlled trials
A randomized controlled trial (RCT) is an experimental study in which participants are randomly assigned to an intervention group or a control group. The intervention group receives an intervention or treatment; the control group receives nothing, usual care, or a placebo. Clinical trials that test the effectiveness of medication are an example of an RCT. If the assignment of participants to groups is not randomized, the design is called a non-randomized controlled trial; this design provides weaker evidence. When groups of people, instead of individuals, are randomized, the design is called a cluster-randomized controlled trial. This is, for example, the case when school classes are randomly assigned to the intervention and control groups. Variations are used to switch groups between intervention and control. For example, a crossover design makes it possible for participants to be in both the intervention group and the control group in different phases of the study. In order not to withhold the benefits of the intervention from the control group, a waiting-list design makes the intervention available to the control group after the research period.

An example of an experimental study
An example of an experimental study design within environmental research is the study by Bae and Hong (2015). In a randomized crossover trial, participants had to drink beverages either from a BPA-containing can or from a BPA-free glass bottle. Besides BPA levels in urine, blood pressure was measured after exposure. The crossover design included three periods, with participants drinking only canned beverages, both canned and glass-bottled beverages, or only glass-bottled beverages. BPA concentrations were increased by 1600% after drinking canned beverages compared with drinking from glass bottles.
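To illustrate what "randomized" means in such a crossover design, the sketch below randomly assigns each participant an order in which to receive the exposure conditions, loosely modelled on the beverage study above; the participant IDs and condition labels are hypothetical.

```python
# Minimal sketch of random sequence allocation in a three-period crossover
# design: every participant receives all three conditions, in a random order.
import itertools
import random

random.seed(1)  # fixed seed so the allocation can be reproduced

conditions = ["canned only", "canned + glass bottle", "glass bottle only"]
orders = list(itertools.permutations(conditions))  # the 6 possible sequences

participants = [f"P{i:02d}" for i in range(1, 7)]
for participant in participants:
    sequence = random.choice(orders)  # independent random draw per person
    print(participant, "->", " | ".join(sequence))
```

Real crossover trials usually balance the sequences across participants (e.g. with a Latin square) rather than drawing them independently, so that period effects cancel out.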
3. Confounding and effect modification
Confounding occurs when a third factor influences both the outcome and the determinant (see Figure 3). For example, the number of cigarettes smoked is positively associated with the prevalence of oesophageal cancer. However, the number of cigarettes smoked is also positively associated with the number of standard glasses of alcohol consumed, and alcohol consumption is itself a risk factor for oesophageal cancer. Alcohol consumption is therefore a confounder of the relationship between smoking and oesophageal cancer. One can correct for confounders in the statistical analysis, e.g. using stratification (results are presented for the different groups separately). Effect modification occurs when the association between a determinant and an outcome differs between certain groups (Figure 3). For example, the risk of lung cancer due to asbestos exposure is about ten times higher for smokers than for non-smokers. Stratification is a solution for dealing with effect modification as well; a small worked example follows Box 1.

Box 1: Hill's criteria for causation
In epidemiological studies it is often not possible to determine a causal relationship directly. That is why epidemiological studies often employ a set of criteria, Hill's criteria for causation, named after Sir Austin Bradford Hill, that need to be considered before conclusions about causality are justified (Hill, 1965).
1. Strength: stronger associations give more reason to suspect causation.
2. Consistency: causation is more likely when observations from different persons, in different populations and circumstances, are consistent.
3. Specificity: specificity of the association is a reason to suspect causation.
4. Temporality: for causation, the determinant must precede the disease.
5. Biological gradient: is there a biological gradient between the determinant and the disease, for example a dose-response curve?
6. Plausibility: is it biologically plausible that the determinant causes the disease?
7. Coherence: coherence between findings from laboratory analysis and epidemiology.
8. Experiment: changes in the determinant, as if it were an experimental intervention, might provide evidence for a causal relationship.
9. Analogy: consider previous results from similar associations.
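As a worked illustration of stratification, the sketch below computes the exposure-outcome risk ratio separately within two strata of a third factor (smoking); all counts are hypothetical. Stratum-specific estimates that differ from each other indicate effect modification, whereas similar stratum-specific estimates that both differ from the crude (unstratified) estimate point to confounding.

```python
# Minimal sketch of stratification: the risk of disease in exposed vs.
# unexposed groups is computed separately per stratum of a third factor.
strata = {
    # stratum: (exposed cases, exposed total, unexposed cases, unexposed total)
    "smokers":     (100, 1000, 10, 1000),
    "non-smokers": (2,   1000, 1,  1000),
}

for name, (a, n_exposed, c, n_unexposed) in strata.items():
    risk_ratio = (a / n_exposed) / (c / n_unexposed)
    print(f"{name}: risk ratio = {risk_ratio:.1f}")
# Different stratum-specific risk ratios (here 10.0 vs. 2.0) indicate
# effect modification by smoking.
```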
Prenatal phthalates, maternal thyroid function, and risk of attention-deficit hyperactivity disorder in the Norwegian mother and child cohort. Environmental Health Perspectives. https://doi.org/10.1289/EHP2358
Hill, A.B. (1965). The Environment and Disease: Association or Causation? Journal of the Royal Society of Medicine 58, 295-300. https://doi.org/10.1177/003591576505800503
Hoffman, K., Lorenzo, A., Butt, C.M., Hammel, S.C., Henderson, B.B., Roman, S.A., … Sosa, J.A. (2017). Exposure to flame retardant chemicals and occurrence and severity of papillary thyroid cancer: A case-control study. Environment International 107, 235-242. https://doi.org/10.1016/j.envint.2017.06.021
International Epidemiological Association. (2014). Dictionary of Epidemiology. Oxford University Press. https://doi.org/10.1093/ije/15.2.277
Last, J.M. (2001). A Dictionary of Epidemiology. 4th edition, Oxford, Oxford University Press.
4.3.10.1. Question 1
Researcher A investigates the relation between parabens and breast cancer. She includes 200 women with breast cancer and 200 women without, and asks all women about their use of personal care products that contain parabens in the past 10 years using a questionnaire. What is the study design that is used?
Case-control study
Cross-sectional study
Prospective cohort study
Randomized controlled trial
4.3.10.1. Question 2
Researcher B has an opinion about the advantages and disadvantages of the study design that was chosen. Which of the following advantages of this design is correct?
There is a low chance of recall bias.
The design is able to prove a causal relationship between the exposure and the outcome.
The design takes into account the long latency period for breast cancer.
The Relative Risk (RR) can be used in this design.
4.3.10.1. Question 3
Researcher B is involved in a prospective cohort study investigating lead exposure and ADHD in children. Prenatal exposure to lead was measured in umbilical cord blood collected at delivery. At age 7, ADHD symptoms were counted. The researchers found that higher lead concentrations were associated with more ADHD symptoms for both boys and girls. Boys had more ADHD symptoms than girls, as well as higher lead levels. What kind of factor is gender?
Determinant
Confounder
Effect modifier
Outcome
4.3.10. Environmental epidemiology - II. Quantifying disease and associations
Authors: Eva Sugeng and Lily Fredrix
Reviewers: Ľubica Murínová and Raymond Niesink
Learning objectives
You should be able to
• describe measures of disease.
• calculate and interpret effect sizes appropriate to the epidemiologic study design.
• describe and interpret significance level.
• describe stratification and interpret stratified data.
1. Measures of disease
Prevalence is the proportion of a population with an outcome at a certain time point (e.g. currently, 40% of the population is affected by disease Y) and can be calculated in cross-sectional studies. Incidence concerns only new cases, and the cumulative incidence is the proportion of new cases in the population over a certain time span (e.g. 60% new cases of influenza per year). The (cumulative) incidence can only be calculated in prospective study designs, because the population needs to be at risk of developing the disease, and therefore participants should not be affected by the disease at the start of the study. Population Attributable Risk (PAR) is a measure that expresses the increase in disease in a population due to the exposure.
It is calculated as the incidence in the total population minus the incidence in the unexposed group:

PAR = I(population) - I(unexposed)

2.1 In case of dichotomous outcomes (disease, yes versus no)
Risk ratio or relative risk (RR) is the ratio of the incidence in the exposed group to the incidence in the unexposed group (Table 1):

RR = [A / (A + B)] / [C / (C + D)]

The RR can only be used in prospective designs, because it consists of probabilities of an outcome in a population at risk. The RR is 1 if there is no risk, <1 if there is a decreased risk, and >1 if there is an increased risk. For example, researchers find an RR of 0.8 in a hypothetical prospective cohort study on the region children live in (rural vs. urban) and the development of asthma (outcome). This means that children living in rural areas have 0.8 times the risk of developing asthma compared to children living in urban areas.
Risk difference (RD) is the difference between the risks in the two groups (Table 1):

RD = A / (A + B) - C / (C + D)

Odds ratio (OR) is the ratio of the odds of exposure in the diseased group to the odds of exposure in the non-diseased group (Table 1). The OR can be used in any study design, but is most frequently used in case-control studies:

OR = (A / C) / (B / D)

The OR is 1 if there is no difference in odds, >1 if there is a higher odds, and <1 if there is a lower odds. For example, researchers find an OR of 2.5 in a hypothetical case-control study on mesothelioma and occupational exposure to asbestos in the past. Patients with mesothelioma had 2.5 times the odds of having been occupationally exposed to asbestos in the past compared to the healthy controls. The OR can also be expressed in terms of the odds of the disease instead of the exposure; the formula is then (Table 1):

OR = (A / B) / (C / D)

For example, researchers find an odds ratio of 0.9 in a cross-sectional study investigating mesothelioma in builders working with asbestos, comparing the use of protective clothing and masks. The builders who used protective clothing and masks had 0.9 times the odds of having mesothelioma in comparison to builders who did not use protective clothing and masks.
Table 1: Concept table used for the calculation of the RR, RD and OR.

                          Disease/outcome +    Disease/outcome -
Exposure/determinant +    A                    B
Exposure/determinant -    C                    D

2.2 In case of continuous outcomes (when there is a scale on which a disease can be measured, e.g. blood pressure)
Mean difference is the difference between the mean in the exposed group and the mean in the unexposed group. This is also applicable to experimental designs with a follow-up to assess increase or decrease of the outcome after an intervention: the mean at baseline versus the mean after the intervention. The mean difference can be standardized by dividing it by the standard deviation:

SMD = (mean of exposed group - mean of unexposed group) / SD

The standard deviation (SD) is a measure of the spread of a set of values. In practice, the SD must be estimated either from the SD of the control group, or from an 'overall' (pooled) value from both groups. The best-known index for effect size is Cohen's d. The standardized mean difference can have both negative and positive values (typically between -2.0 and +2.0). A positive value indicates a beneficial effect of the intervention; a negative value indicates that the effect is counterproductive. By convention, an effect size of around 0.8 is considered a large effect.
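The effect measures defined in sections 2.1 and 2.2 can be computed directly from summary data. The Python sketch below uses hypothetical numbers (the 2x2 counts, group means, SDs and group sizes are invented for illustration) and implements the RR, RD, OR and standardized mean difference formulas given above.

```python
import math

# --- Dichotomous outcome: hypothetical 2x2 table (layout of Table 1) ---
A, B = 30, 70    # exposed:   with outcome, without outcome
C, D = 10, 90    # unexposed: with outcome, without outcome

risk_exposed   = A / (A + B)              # incidence in the exposed group
risk_unexposed = C / (C + D)              # incidence in the unexposed group

RR = risk_exposed / risk_unexposed        # relative risk
RD = risk_exposed - risk_unexposed        # risk difference
OR = (A * D) / (B * C)                    # odds ratio; note that the exposure
                                          # and disease odds ratios both equal AD/BC

# --- Continuous outcome: hypothetical group means and SDs ---
mean_exposed, mean_control = 132.0, 124.0   # e.g. systolic blood pressure (mmHg)
sd_exposed, sd_control = 10.0, 9.0
n_exposed, n_control = 50, 50

# Pooled SD, then Cohen's d as the standardized mean difference
sd_pooled = math.sqrt(((n_exposed - 1) * sd_exposed**2 +
                       (n_control - 1) * sd_control**2) /
                      (n_exposed + n_control - 2))
cohens_d = (mean_exposed - mean_control) / sd_pooled

print(f"RR = {RR:.2f}, RD = {RD:.2f}, OR = {OR:.2f}, Cohen's d = {cohens_d:.2f}")
```

With these invented numbers the sketch prints RR = 3.00, RD = 0.20, OR = 3.86 and Cohen's d = 0.84, the latter illustrating what the text calls a large effect.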
3. Statistical significance and confidence interval
Effect measures such as the relative risk, the odds ratio and the mean difference are reported together with statistical significance and/or a confidence interval. Statistical significance is used to retain or reject the null hypothesis. The study starts with a null hypothesis: we assume that there is no difference between variables or groups, e.g. RR = 1, or the difference in means is 0. The statistical test then gives us the probability of obtaining the observed outcome (e.g. OR = 2.3, or mean difference = 1.5) if the null hypothesis is in fact true. If this probability is smaller than 5%, we may reject the null hypothesis. The 5% probability corresponds to a p-value of 0.05. A cut-off of p < 0.05 is generally used, which means that p-values smaller than 0.05 are considered statistically significant.
The 95% confidence interval. A 95% confidence interval is a range of values within which you can be 95% certain that the true mean of the population or the true measure of association lies. For example, in a hypothetical cross-sectional study on smoking (yes or no) and lung cancer, an OR of 2.5 was found, with a 95% CI of 1.1 to 3.5. That means we can say with 95% certainty that the true OR lies between 1.1 and 3.5. This is regarded as statistically significant, since 1, which means no difference in odds, does not lie within the 95% CI. If researchers also studied esophageal cancer in relation to smoking and found an OR of 1.9 with a 95% CI of 0.6-2.6, this is not regarded as statistically significant, since the 95% CI includes 1.
4. Stratification
When the two populations investigated have a different distribution of, for example, age and gender, it is often hard to compare disease frequencies between them. One way to deal with this is to analyse associations between exposure and outcome within strata (groups). This is called stratification. Example: a hypothetical study investigates differences in health (outcome, measured as the number of symptoms, such as shortness of breath while walking) between two groups of elderly people: urban elderly (n=682) and rural elderly (n=143) (determinant). No difference between urban and rural elderly was found; however, there was a difference in the number of women and men in the two groups. The results for symptoms among urban and rural elderly were therefore stratified by gender (Table 2). It then appeared that male urban elderly have more symptoms than male rural elderly (p=0.01). The difference is not significant for women (p=0.07). The differences in health of elderly people living in an urban region thus differ between men and women; hence, gender is an effect modifier of the association of interest.
Table 2. Number of symptoms (expressed as a percentage) for urban and rural elderly, stratified by gender.

                        Women              Men
Number of symptoms      Urban    Rural     Urban    Rural
None                    16.0     30.4      16.2     43.5
One                     26.4     30.4      45.2     47.8
Two or more             57.6     39.1      37.8     8.7
N                       125      23        74       23
p-value                 0.07               0.01
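A stratified analysis like the one in Table 2 can be sketched as follows. The counts below were reconstructed approximately from the percentages and group sizes in Table 2, with the outcome dichotomized as "any symptoms" versus "no symptoms"; because Table 2 uses three symptom categories, the p-values here differ somewhat from those reported in the table. The stratum-specific odds ratios differ between men and women, which is the signature of effect modification.

```python
from scipy.stats import chi2_contingency

# Approximate counts derived from Table 2 (outcome dichotomized).
# Layout per stratum: [[urban any symptoms, urban none], [rural any, rural none]]
strata = {
    "men":   [[62, 12], [13, 10]],
    "women": [[105, 20], [16, 7]],
}

for stratum, ((a, b), (c, d)) in strata.items():
    odds_ratio = (a * d) / (b * c)                     # odds of any symptoms, urban vs rural
    chi2, p, dof, expected = chi2_contingency([[a, b], [c, d]])
    print(f"{stratum}: OR = {odds_ratio:.2f}, p = {p:.3f}")
```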
4.3.10.2. Question 1
Researcher B investigates the relation between parabens and breast cancer using a cross-sectional design. She includes 500 women between 55-65 years and asks them about their use of personal care products that contain parabens using a questionnaire, which divides the group into frequent users (N=267) and infrequent users (N=233). In addition, she asks whether the women have a breast cancer diagnosis and finds that 41 women have breast cancer; of these women, 30 frequently use PCP. What is a correct outcome of this study?
The Odds Ratio is 3.6.
The Population Attributable Risk is 0.4.
The incidence of breast cancer in this population is 8.2%.
The prevalence of breast cancer in this population is 8.2%.
4.3.10.2. Question 2
With the statistical analysis to obtain the OR, Researcher A finds the following numbers: OR=2.6, 95% CI: 0.9-3.5, p=0.07. What is the interpretation?
Frequent PCP users have significantly higher odds of having breast cancer.
Frequent PCP users have higher odds of breast cancer, but not significantly.
Frequent PCP users have a higher risk of breast cancer, but not significantly.
Frequent PCP users have a significantly higher risk of breast cancer.
4.3.10.2. Question 3
Researchers investigate an intervention to reduce phthalate exposure. The intervention group gets advice to reduce phthalates in food, personal care products and volatile household products. The control group does not get the intervention. Before and after the intervention period, urine samples from the participants are collected and analyzed for phthalate metabolites. What is incorrect?
The researchers can conclude about causality, because the exposure precedes the outcome.
The researchers can quantify the size of the difference between the intervention and control group using stratification.
The researchers can show if the intervention reduces (or increases) the exposure with a risk difference.
That there is no difference between the intervention and control group is called the null hypothesis.
4.3.11. Molecular epidemiology - I. Human biomonitoring
Author: Marja Lamoree
Reviewers: Michelle Plusquin and Adrian Covaci
Learning objectives:
You should be able to
• explain the purpose of human biomonitoring
• understand that the internal dose may come from different exposure routes
• describe the different steps in analytical methods and clarify the specific requirements with regard to sampling, storage, sensitivity, throughput and accuracy
• clarify the role of metabolism in the distribution of chemicals in the human body and specify some sample matrices
• explain the role of ethics in human biomonitoring studies
Keywords: chemical analysis, human samples, exposure, ethics, cohort
Human biomonitoring
Human biomonitoring (HBM) involves the assessment of human exposure to natural and synthetic chemicals by the quantitative analysis of these compounds, their metabolites or reaction products in samples of human origin. Samples used in HBM can include blood, urine, faeces, saliva, breast milk and sweat, or other tissues, such as hair, nails and teeth. The concentrations determined in human samples are a reflection of the exposure of an individual to the compounds analysed, also referred to as the internal dose. HBM data are collected to obtain insight into the population's exposure to chemicals, often with the objective to integrate them with health data for health impact assessment in epidemiological studies. Often, specific age groups are addressed, such as neonates, toddlers, children, adolescents, adults and the elderly. Human biomonitoring is an established method in occupational and environmental exposure assessment. In several countries, HBM studies have been conducted for decades already, such as the German Environment Survey (GerES) and the National Health and Nutrition Examination Survey (NHANES) program in the United States. HBM programs may sometimes be conducted under the umbrella of the World Health Organization (WHO). Other examples are the Canadian Health Measures Survey, the Flemish Environment and Health Study and the Japan Environment and Children's Study; the latter focuses specifically on young children.
Children are considered to be more at risk of the adverse health effects of early exposure to chemical pollutants, because of their rapid growth and development and their limited metabolic capacity to detoxify harmful chemicals.
Table 1. Information sources for Human Biomonitoring (HBM) programmes

German Environment Survey (GerES): www.umweltbundesamt.de/en/topics/health/assessing-environmentally-related-health-risks/german-environmental-survey-geres
National Health and Nutrition Examination Survey (NHANES): https://www.cdc.gov/nchs/nhanes/index.htm
WHO: www.euro.who.int/en/data-and-evidence/environment-and-health-information-system-enhis/activities/human-biomonitoring-survey
Canadian Health Measures Survey (CHMS): http://www23.statcan.gc.ca/imdb/p2SV...rvey&Id=148760
Japan Environment and Children's Study (JECS): https://www.env.go.jp/en/chemi/hs/jecs/

Studies focusing on the impact of exposure to chemicals on health are conducted with the use of cohorts: groups of people that are enrolled in a certain study and volunteer to take part in the research program. Usually, apart from donating e.g. blood or urine samples, health measures such as blood pressure, body weight and hormone levels are collected, as well as data on diet, education, social background, economic status and lifestyle, the latter through the use of questionnaires. A cross-sectional study aims at the acquisition of exposure and health data of the whole (volunteer) group at a defined moment, whereas in a longitudinal study follow-up studies are conducted with a certain frequency (e.g. every few years) in order to follow and evaluate changes in exposure, describe time trends, and study health and lifestyle in the longer term (see section on Environmental Epidemiology). To obtain sufficient statistical power to derive meaningful relationships between exposure and eventual (health) effects, the number of participants in HBM studies is often very large, ranging up to 100,000 participants. Because a lot of (sometimes sensitive) data is gathered from many individuals, ethics is an important aspect of any HBM study. Before a study involving HBM can start, a Medical Ethical Approval Committee needs to approve it. Applications to obtain approval require comprehensive documentation of i) the study protocol (what exactly is being investigated), ii) a statement regarding the safeguarding of the privacy and collected data of the individuals, the access of researchers to the data and the safe storage of all information, and iii) an information letter for the volunteers explaining the aim of and procedures used in the study and their rights (e.g. to withdraw), so that they can give informed consent to be included in the study.
Chemical absorption, distribution, metabolism and excretion
Because chemicals often undergo metabolic transformation (see section on Xenobiotic metabolism and defence) after entering the body via ingestion, dermal absorption or inhalation, it is important not to focus only on the parent compound (the compound to which the individual was exposed), but also to include metabolites. Diet, socio-economic status, occupation, lifestyle and the environment all contribute to the exposure of humans, while age, gender, health status and weight of an individual influence the effect of the exposure. HBM data provide an aggregation of all the different routes through which the individual was exposed. For an in-depth investigation of exposure sources, however, chemical analysis of e.g.
diet (including drinking water) and the indoor and outdoor environment is still necessary. Another important source of chemicals to which people are exposed in their day-to-day life is consumer products, such as electronics, furniture and textiles, which may contain flame retardants, stain repellents, colorants and dyes, and preservatives, among others. The distribution of a chemical in the body is highly dependent on its physico-chemical properties, such as lipophilicity/hydrophilicity and persistence, while Phase I and Phase II transformation (see section on Xenobiotic metabolism and defence) also play a determining role (see Figure 1). Lipophilic compounds (see section on POPs) are stored in fat tissue, while moderately lipophilic to hydrophilic compounds are excreted after metabolic transformation, or in unchanged form. Based on these considerations, a proper choice of the appropriate sampling matrix can be made, i.e. some chemicals are best measured in urine, while for others blood may be more suitable. In the design of the sampling campaign, the properties of the compounds to be analyzed should be taken into account. In case of volatility, airtight sampling containers should be used, while for light-sensitive compounds amber-coloured glassware is the optimal choice. Ideally, after collection, the samples are stored under the correct conditions as quickly as possible, in order to avoid degradation caused by thermal instability or biodegradation caused by remaining enzyme activity in the sample (e.g. in blood or breast milk samples). Labeling and storage of the large quantities of samples generally included in HBM studies are important parts of the sampling campaign (see for video: https://www.youtube.com/watch?v=FQjKKvAhhjM).
Chemical analysis of human samples for exposure assessment
Typically, for the determination of the concentrations of compounds to which people are exposed and of the corresponding metabolites formed in the human body, analytical techniques such as liquid and gas chromatography (LC and GC, respectively) coupled to mass spectrometry (MS) are applied. Chromatography is used to separate the compounds, while MS is used to detect them. Prior to the analysis using LC- or GC-MS, the sample is pretreated (e.g. particles are removed) and extracted, i.e. the compounds to be analysed are concentrated in a small volume while sample matrix constituents that may interfere with the analysis (e.g. lipids, proteins) are removed, resulting in an extract that is ready to be injected onto the chromatographic system. In Figure 2 a schematic representation is given of all steps in the analytical procedure. The analytical methods used to quantify concentrations of chemicals in order to assess human exposure need to be of high quality due to the specific nature of HBM studies. The compounds to be analysed are usually present in very low concentrations (i.e. in the order of pg/L for cord blood), and the sample volumes are small. For some matrices, such as blood, the small sample volume is dictated by the fact that sample availability is not unlimited. Another factor that limits the available sample volume is the cost of the dedicated long-term storage space at -20 °C or even -80 °C that is required to ensure sample integrity and stability.
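Because HBM concentrations are often close to the detection limits, data sets typically contain values below the limit of quantification (LOQ). The sketch below illustrates one simple and commonly used convention, substituting non-detects by LOQ/2 before computing summary statistics; the LOQ, the concentrations and the substitution rule are illustrative assumptions, not a prescribed HBM procedure.

```python
import statistics

LOQ = 0.05  # hypothetical limit of quantification (e.g. in ug/L)

# Hypothetical measured concentrations; None marks a value below the LOQ
raw = [0.12, None, 0.08, 0.31, None, 0.06, 0.44, None, 0.09, 0.15]

# Substitute non-detects by LOQ/2 (one of several common conventions)
values = [x if x is not None else LOQ / 2 for x in raw]

detection_frequency = sum(x is not None for x in raw) / len(raw)
median_exposure = statistics.median(values)

print(f"Detection frequency: {detection_frequency:.0%}")
print(f"Median concentration: {median_exposure:.3f} ug/L")
```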
The compounds on which HBM studies often focus are those to which we are exposed in daily life. This implies that the analytical procedure should be able to deal with contamination of the sample with the compounds to be analysed, due to the presence of these compounds in our surroundings. Higher background contamination leads to a decreased capacity to detect low concentrations, thus negatively impacting the quality of the studies. Examples of compounds that have been monitored frequently in human urine are phthalates, such as bis(2-ethylhexyl) phthalate, or DEHP for short. DEHP is used in many consumer products, and contamination of the samples with DEHP from the surroundings therefore severely influences the analytical measurements. One way around this is to focus on the metabolites of DEHP formed by Phase I or II metabolism: this guarantees that the chemical has passed through the human body and has undergone a metabolic transformation, so that its detection is not due to contamination from the background, which results in a more reliable exposure metric. When the analytical method is designed for the quantitative analysis of metabolites, an enzymatic step for the deconjugation of the Phase II metabolites should be included (see section on Xenobiotic metabolism and defence). Because the generated data, i.e. the concentrations of the compounds in the human samples, are used to determine parameters like average/median exposure levels, the detection frequency of specific compounds and highest/lowest exposure levels, the accuracy of the measurements should be high. In addition, analytical methods used for HBM should be capable of high throughput, i.e. the time needed per analysis should be low, because of the large numbers of samples that are typically analysed, in the order of a hundred to a few thousand samples, depending on the study. Summarizing, HBM data support the assessment of temporal trends and spatial patterns in human exposure, shed light on subpopulations that are at risk, and provide insight into the effectiveness of measures to reduce or even prevent adverse health effects due to chemical exposure.
Background information: HBM4EU project info: www.hbm4eu.eu; video: https://www.youtube.com/watch?v=DmC1v6EAeAM&feature=youtu.be.
4.3.11. Question 1
Draw a scheme of a typical analytical procedure for the quantitative determination of the concentration of a compound in a certain sample and name the different steps.
4.3.11. Question 2
What are the aims of human biomonitoring?
4.3.11. Question 3
Why is it useful to focus HBM on metabolites of chemicals and not on the original chemical, as is the case for e.g. phthalates?
4.3.11. Question 4
Mention 5 factors related to chemical analysis in HBM and explain their importance.
4.3.11. Question 5
Ethics approval from a Medical Ethical Approval Committee is mandatory to carry out an HBM study. Specify what type of information should be included in a dossier in order to obtain approval.
4.3.11. Molecular epidemiology - II. The exposome and internal molecular markers (draft)
Authors: Karen Vrijens and Michelle Plusquin
Reviewers: Frank Van Belleghem
Learning objectives
You should be able to
• explain the concept of the exposome, including its different exposures
• understand the application of the meet-in-the-middle model in molecular epidemiological studies
• describe how different molecular markers such as gene expression, epigenetics and metabolomics can represent sensitivity to certain environmental exposures
Exposome
The exposome idea was described by Christopher Wild in 2005 as a measure of all human life-long exposures, including the process of how these exposures relate to health. An important aim of the exposome is to explain how non-genetic exposures contribute to the onset or development of important chronic diseases. This concept represents the totality of exposures from three broad domains, i.e. internal, specific external and general external (Figure 1) (Wild, 2012). The internal exposome includes processes such as metabolism, endogenous circulating hormones, body morphology, physical activity, gut microbiota, inflammation, and aging. The specific external exposures include diverse agents, for example radiation, infections, chemical contaminants and pollutants, diet, lifestyle factors (e.g. tobacco, alcohol) and medical interventions. The wider social, economic and psychological influences on the individual make up the general external exposome, including, but not limited to, factors such as social capital, education, financial status, psychological stress, urban-rural environment and climate [1]. The exposome concept clearly illustrates the complexity of the environment humans are exposed to nowadays, and how this can impact human health. There is a need for internal biomarkers of exposure (see section on Human biomonitoring) as well as biomarkers of effect, to disentangle the complex interplay between several exposures occurring potentially simultaneously and at different concentrations throughout life. Advances in biomedical sciences and molecular biology, which collect holistic information on epigenetics, the transcriptome (see section on Gene expression), the metabolome (see section on Metabolomics), etc., are at the forefront of identifying biomarkers of exposure as well as of effect.
Meet-in-the-middle model
To determine the health effect of environmental exposure, markers that can detect early changes before disease arises are essential and can be implemented in preventative medicine. These types of markers can be seen as intermediate biomarkers of effect, and their discovery relies on large-scale studies at different levels of biology (transcriptomics, genomics, metabolomics). The term "omics" refers to the quantitative measurement of global sets of molecules in biological samples using high-throughput techniques (i.e. automated experiments that enable large-scale repetition) [7], in combination with advanced biostatistics and bioinformatics tools [8]. Given the availability of data from high-throughput omics platforms, together with reliable measurements of external exposures, the use of omics enhances the search for markers playing a role in the biological pathway linking exposure to disease risk. The meet-in-the-middle (MITM) concept was suggested as a way to address the challenge of identifying causal relationships linking exposures and disease outcome (Figure 2). The first step of this approach consists of investigating the association between exposure and biomarkers of exposure. The next step consists of studying the relationship between (biomarkers of) exposure and intermediate omics biomarkers of early effects; third, the relation between the disease outcome and the intermediate omics biomarkers is assessed. The MITM approach stipulates that the causal nature of an association is reinforced if it is found in all three steps.
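The three steps of the MITM approach can be illustrated with a small simulation. In the hypothetical Python sketch below, an exposure influences an intermediate biomarker, which in turn influences a disease-related outcome; all data and effect sizes are invented, and simple Pearson correlations stand in for the regression models used in real studies.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200

# Synthetic data: exposure drives a biomarker, which drives a disease-related outcome
exposure  = rng.normal(size=n)                               # measured external exposure
biomarker = 0.6 * exposure + rng.normal(scale=0.8, size=n)   # intermediate omics marker
outcome   = 0.5 * biomarker + rng.normal(scale=0.9, size=n)  # disease-related outcome

# MITM: a causal interpretation is reinforced if all three associations are found
for label, x, y in [("exposure vs biomarker", exposure, biomarker),
                    ("biomarker vs outcome", biomarker, outcome),
                    ("exposure vs outcome", exposure, outcome)]:
    r, p = pearsonr(x, y)
    print(f"{label}: r = {r:.2f}, p = {p:.1e}")
```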
Molecular markers that indicate susceptibility to certain environmental exposures are beginning to be uncovered and can aid in targeted prevention strategies. This approach is therefore heavily dependent on new developments in molecular epidemiology, in which molecular biology is merged into epidemiological studies. Below, the different levels of molecular biology currently studied to identify markers of exposure and effect are discussed in detail.
Levels
Intermediate biomarkers can be identified as measurable indicators of certain biological states at different levels of the cellular machinery, and vary in their response time, duration, site and mechanism of action. Different molecular markers might be preferred depending on the exposure(s) under study.
Gene expression
Changes at the mRNA level can be studied following a candidate approach, in which mRNAs with a biological role suspected to be involved in the molecular response to a certain type of exposure (e.g. inflammatory mRNAs in the case of exposure to tobacco smoke) are selected a priori and measured using quantitative PCR technology, or alternatively at the level of the whole genome by means of microarray analyses or Next Generation Sequencing technology [10]. Changes at the transcriptome level are studied by analysing the totality of RNA molecules present in a cell type or sample. Both types of studies have proven their utility in molecular epidemiology. About a decade ago the first study was published reporting on candidate gene expression profiles that were associated with exposure to diverse carcinogens [11]. Around the same time, the first studies on transcriptomics were published, including transcriptomic profiles for a dioxin-exposed population [12], in association with diesel-exhaust exposure [13], and comparing smokers versus non-smokers, both in blood [14] and in airway epithelium cells [15]. More recently, attention has been focused on prenatal exposures in association with transcriptomic signatures, as this fits within the scope of the exposome concept. As such, transcriptomic profiles have been described in association with maternal smoking, assessed in placental tissue [16], as well as with particulate matter exposure in cord blood samples [17].
Epigenetics
Epigenetics relates to all heritable changes that do not directly affect the DNA sequence itself. The most widely studied epigenetic mechanism in the field of environmental epidemiology to date is DNA methylation. DNA methylation refers to the process in which methyl groups are added to a DNA sequence. As such, these methylation changes can alter the expression of a DNA segment without altering its sequence. DNA methylation can be studied by a candidate gene approach using a digestion-based design or, more commonly, a bisulfite conversion followed by pyrosequencing, methylation-specific PCR or a bead array. The bisulfite treatment of DNA mediates the deamination of cytosine into uracil, and these converted residues will be read as thymine, as determined by PCR amplification and sequencing. However, 5-mC (5-methylcytosine) residues are resistant to this conversion and will remain read as cytosine (Figure 3).
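The degree of methylation at a single CpG site is commonly summarized as a "beta value": the methylated signal divided by the total signal. The sketch below shows this calculation; the probe intensities are hypothetical, and the offset of 100 is a stabilizing convention used in bead-array processing.

```python
def beta_value(methylated: float, unmethylated: float, offset: float = 100.0) -> float:
    """Fraction of methylated signal at a CpG site.

    The offset stabilizes the ratio when both intensities are low,
    a convention used in bead-array processing.
    """
    return methylated / (methylated + unmethylated + offset)

# Hypothetical probe intensities for one CpG site
print(f"beta = {beta_value(3500.0, 1200.0):.2f}")  # ~0.73, i.e. mostly methylated
```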
If an untargeted approach is desirable, several strategies can be followed to obtain whole-genome methylation data, including sequencing. Epigenotyping technologies such as the human methylation BeadChips [18] generate a methylation-state-specific 'pseudo-SNP' through bisulfite conversion, thereby translating differences in DNA methylation patterns into sequence differences that can be analyzed using quantitative genotyping methods [19]. An interesting characteristic of DNA methylation is that it can have transgenerational effects (i.e. effects that act across multiple generations). This was first shown in a study on a population that was prenatally exposed to famine during the Dutch Hunger Winter in 1944-1945. These individuals had less DNA methylation of the imprinted gene coding for insulin-like growth factor 2 (IGF2), measured 6 decades later, compared with their unexposed, same-sex siblings. The association was specific for peri-conceptional exposure (i.e. exposure during the period from before conception to early pregnancy), reinforcing that very early mammalian development is a crucial period for establishing and maintaining epigenetic marks [20]. Post-translational modifications (i.e. biochemical modifications of proteins following protein biosynthesis) have recently gained more attention, as they are known to be induced by oxidative stress [21] (see section on Oxidative stress) and specific inflammatory mediators [22]. Besides their function in the structure of chromatin in eukaryotic cells, histones have been shown to have toxic and pro-inflammatory activities when they are released into the extracellular space [23]. Much attention has gone to the associations between metal exposures and histone modifications [24], although recently a first human study on the association between particulate matter exposure and histone H3 modification was published [25]. Expression of microRNAs (miRNAs: small noncoding RNAs of ~22 nt in length, which are involved in the regulation of gene expression at the post-transcriptional level by degrading their target mRNAs and/or inhibiting their translation (Ambros, 2004)) has also been shown to serve as a valuable marker of exposure; both candidate and untargeted approaches have resulted in the identification of miRNA expression patterns that are associated with exposure to smoking [26], particulate matter [27], and chemicals such as polychlorinated biphenyls (PCBs) [28].
Metabolomics
Metabolomics has been proposed as a valuable approach to address the challenges of the exposome. Metabolomics, the study of metabolism at the whole-body level, involves assessment of the entire repertoire of small-molecule metabolic products present in a biological sample. Unlike genes, transcripts and proteins, metabolites are not encoded in the genome. They are also chemically diverse, consisting of carbohydrates, amino acids, lipids, nucleotides and more. Humans are expected to contain a few thousand metabolites, including those they make themselves as well as nutrients and pollutants from their environment and substances produced by microbes in the gut. The study of metabolomics increases knowledge on the interactions between gene and protein expression and the environment [29]. Metabolomics can provide biomarkers of effect of environmental exposure, as it allows for the full characterization of biochemical changes that occur during xenobiotic metabolism (see section on Xenobiotic metabolism and defence).
Recent technological developments have allowed downscaling of the sample volume necessary for analysis of the full metabolome, allowing for the assessment of system-wide metabolic changes that occur as a result of an exposure or in conjunction with a health outcome [30]. As for all the biomarkers discussed, both targeted metabolomics, in which specific metabolites are measured in order to characterize a pathway of interest, and untargeted metabolomic approaches are available. Among "omics" methodologies, metabolomics interrogates a relatively low number of features, as there are about 2,900 known human metabolites versus ~30,000 genes. It therefore has strong statistical power compared to transcriptome-wide and genome-wide studies [31]. Metabolomics is, therefore, a potentially sensitive method for identifying biochemical effects of external stressors. Even though the developing field of "environmental metabolomics" seeks to employ metabolomic methodologies to characterize the effects of environmental exposures on organism function and health, the relationship between most chemicals and their effects on the human metabolome has not yet been studied.
Challenges
Limitations of molecular epidemiological studies include the difficulty of obtaining samples to study, the need for large study populations to identify significant relations between exposure and the biomarker, and the need for complex statistical methods to analyse the data. To circumvent the issue of sample collection, much effort has been focused on eliminating the need for blood or serum samples by utilizing saliva samples, buccal cells or nail clippings to read out molecular markers. Although these samples can be collected easily and non-invasively, care must be taken to prove that they indeed accurately reflect the body's response to exposure rather than a local effect. For DNA methylation, it has been shown that this is heavily dependent on the locus under study: for certain CpG sites the correlation in methylation levels across tissues is much higher than for other sites [32]. For those sites that do not correlate well across tissues, it has furthermore been demonstrated that DNA methylation levels can differ in their associations with clinical outcomes [33], so care must be taken in epidemiological study design to overcome these issues.
4.3.11. Question 6
Describe the 3 components of the exposome.
4.3.11. Question 7
Explain how biomarkers can be identified using the meet-in-the-middle model.
4.3.11. Question 8
Give 3 examples of molecular (bio)markers of environmental exposure.
4.3.12. Gene expression
Author: Nico M. van Straalen
Reviewers: Dick Roelofs, Dave Spurgeon
Learning objectives:
You should be able to
• provide an overview of the various "omics" approaches (genomics, transcriptomics, proteomics and metabolomics) deployed in environmental toxicology.
• describe the practicalities of transcriptomics, how a transcription profile is generated and analysed.
• indicate the advantages and disadvantages of the use of genome-wide gene expression in environmental toxicology.
• develop an idea of how transcriptomics might be integrated into risk assessment of chemicals.
Keywords: genomics, transcriptomics, proteomics, metabolomics, risk assessment
Synopsis
Low-dose exposure to toxicants induces biochemical changes in an organism, which aim to maintain homeostasis of the internal environment and to prevent damage.
One aspect of these changes is a high abundance of transcripts of biotransformation enzymes, oxidative stress defence enzymes, heat shock proteins and many other proteins related to the cellular stress response. Such defence mechanisms are often highly inducible, that is, their activity is greatly upregulated in response to a toxicant. It is also known that most of the stress responses are specific to the type of toxicant. This principle may be reversed: if an upregulated stress response is observed, this implies that the organism is exposed to a certain stress factor; the nature of the stress factor may even be derived from the transcription profile. For this reason microarrays, RNA sequencing and other techniques of transcriptome analysis have been applied in a large variety of contexts, both in laboratory experiments and in field surveys. These studies suggest that transcriptomics scores high on (in decreasing order) (1) rapidity, (2) specificity, and (3) sensitivity. While the promises of genomics applications in environmental toxicology are high, most of the applications are in mode-of-action studies rather than in risk assessment.
Introduction
No organism is defenceless against environmental toxicants. Even at exposures below phenotypically visible no-effect levels, a host of physiological and biochemical defence mechanisms are already active and contribute to the organism's homeostasis. These regulatory mechanisms often involve upregulation of defence mechanisms such as oxidative stress defence, biotransformation (xenobiotic metabolism), heat shock responses, induction of metal-binding proteins, hypoxia responses, repair of DNA damage, etc. At the same time, downregulation is observed for energy metabolism and functions related to growth and reproduction. In addition to these targeted regulatory mechanisms, there are usually many secondary effects and dysfunctional changes arising from damage. A comprehensive overview of all these adjustments can be obtained from analysis of the transcriptome. In this module we will review the various approaches adopted in "omics", with an emphasis on transcriptomics. "Omics" is an umbrella term comprising five different activities. Table 1 provides a list of these approaches and their possible contribution to environmental toxicology. Genomics and transcriptomics deal with DNA and mRNA sequencing, proteomics relies on mass spectrometry, while metabolomics involves a variety of separation and detection techniques, depending on the class of compounds analysed. The various approaches gain strength when applied jointly. For example, proteomics analysis is much more insightful if it can be linked to an annotated genome sequence, and metabolism studies can profit greatly from transcription profiles that include the enzymes responsible for metabolic reactions. Systems biology aims to integrate the different approaches using mathematical models. However, it is fair to say that the correlation between responses at the different levels is often rather poor. Upregulation of a transcript does not always imply more protein, more protein can be generated without transcriptional upregulation, and the concentration of a metabolite is not always correlated with upregulation of the enzymes supposed to produce it. In this module we will focus on transcriptomics only. Metabolomics is dealt with in a separate section.
Table 1. Overview of the various "omics" approaches.

Genomics
Description: genome sequencing and assembly, comparison of genomes, phylogenetics, evolutionary analysis.
Relevance for environmental toxicology: explanation of species and lineage differences in susceptibility from the structure of targets and metabolic potential; relationship between toxicology, evolution and ecology.

Transcriptomics
Description: genome-wide transcriptome (mRNA) analysis, gene expression profiling.
Relevance: target and metabolism expression indicating activity, analysis of modes of action, diagnosis of substance-specific effects, early-warning instrument for risk assessment.

Proteomics
Description: analysis of the protein complement of the cell or tissue.
Relevance: systemic metabolism and detoxification, diagnosis of physiological status, long-term or permanent effects.

Metabolomics
Description: analysis of all metabolites from a certain class, pathway analysis.
Relevance: functional read-out of the physiological state of a cell or tissue.

Systems biology
Description: integration of the various "omics" approaches, network analysis, modelling.
Relevance: understanding of coherent responses, extrapolation to whole-body phenotypic responses.

Transcriptomics analysis
The aim of transcriptomics in environmental toxicology is to gain a complete overview of all changes in mRNA abundance in a cell or tissue as a function of exposure to environmental chemicals. This is usually done in the following sequence of steps:
1. Exposure of organisms to an environmental toxicant, including a range of concentrations, time points, etc., depending on the objectives of the experiment.
2. Isolation of total RNA from individuals or a sample of pooled individuals. The number of biological replicates is determined at this stage, by the number of independent RNA isolations, not by technical replication further on in the procedure.
3. Reverse transcription. mRNAs are transcribed to cDNA using the enzyme reverse transcriptase, which initiates at the poly(A) tail of mRNAs. Because ribosomal RNA lacks a poly(A) tail, it is (in principle) not transcribed to cDNA. This is followed by size selection and sometimes labelling of cDNAs with barcodes to facilitate sequencing.
4. Sequencing of the cDNA pool and transcriptome assembly. The assembly preferably makes use of a reference genome for the species. If no reference genome is available, the transcriptome is assembled de novo, which requires a greater sequencing depth and usually ends in many incomplete transcripts. A variety of corrections are applied to equalize effects of total RNA yield, library size, sequencing depth, gene length, etc.
5. Gene expression analysis and estimation of fold regulation. This is done, in principle, by counting the normalized number of transcripts per gene for every gene in the genome, for each of the different conditions to which the organism was exposed. The response per gene is expressed as fold regulation, by expressing the transcripts relative to a standard or control condition. Tests are conducted to separate significant changes from noise.
6. Annotation and assessment of pathways and functions as influenced by exposure. An integrative picture is developed, taking all evidence together, of the functional changes in the organism.
In the recent past, step 4 was done by microarray hybridization rather than by direct sequencing. In this technique two pools of cDNA (e.g. a control and a treatment) are hybridized to a large number of probes fixed onto a small glass plate. The probes are designed to represent the complete gene complement of the organism.
Positive hybridization signals are taken as evidence of upregulated gene expression. Microarray hybridization arose in the years 1995-2005 and has now been largely overtaken by ultrafast, high-throughput next-generation sequencing methods; however, due to its cost-efficiency, the relative simplicity of the bioinformatics analysis, and the standardization of the assessed genes, it is still often used. We illustrate the principles of transcriptomics analysis, and the kind of data analysis that follows it, with an example from the work of Bundy et al. (2008). These authors exposed earthworms (Lumbricus rubellus) to soils experimentally amended with copper, quite a toxic element for earthworms. The copper-induced transcriptome was surveyed using a custom-made microarray, and metabolic profiles were established using NMR (nuclear magnetic resonance) spectroscopy. Of the 8,209 probes on the microarray, 329 showed a significant alteration of expression under the influence of copper. The data were plotted in a "heat map" diagram (Figures 1A and 1B), providing a quick overview of upregulated and downregulated genes. The expression profiles were also analysed in reduced dimensionality using principal component analysis (PCA). This showed that the profiles varied considerably with treatment. Especially the highest and the penultimate highest exposures generated a profile very different from the control (see Figure 1C). The genes could be allocated to four clusters: (1) genes upregulated by copper over all exposures (Figure 1D), (2) genes downregulated by copper (see Figure 1E), (3) genes upregulated by low exposures but unaffected at higher exposures (see Figure 1F), and (4) genes upregulated by low exposures but downregulated by higher concentrations (see Figure 1G). Analysis of gene identity combined with metabolite analysis suggested that the changes were due to an effect of copper on mitochondrial respiration, reducing the amount of energy generated by oxidative phosphorylation. This mechanism underlay the reduction of body growth observed at the phenotypic level.
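Step 5 of the workflow above, estimating fold regulation from normalized transcript counts, can be illustrated with a minimal sketch. The gene names and counts are hypothetical, counts-per-million is used as a simple library-size normalization, and a pseudocount of 1 avoids division by zero; dedicated statistical packages are used for this in practice.

```python
import math

# Hypothetical read counts per gene for one control and one treated sample
control = {"cyp1a": 50, "hsp70": 20, "mt1": 10, "actb": 1000}
treated = {"cyp1a": 400, "hsp70": 90, "mt1": 8, "actb": 1100}

def cpm(counts):
    """Normalize raw counts to counts per million (library-size correction)."""
    total = sum(counts.values())
    return {gene: c / total * 1e6 for gene, c in counts.items()}

ctrl_cpm, trt_cpm = cpm(control), cpm(treated)

for gene in control:
    # Pseudocount of 1 avoids division by zero for unexpressed genes
    log2fc = math.log2((trt_cpm[gene] + 1) / (ctrl_cpm[gene] + 1))
    print(f"{gene}: log2 fold change = {log2fc:+.2f}")
```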
Omics in risk assessment
How could omics technology, especially transcriptomics, contribute to risk assessment of chemicals? Three possible advantages have been put forward:
1. Gene expression analysis is rapid. Gene regulation takes place on a time scale of hours and results can be obtained within a few days. This compares very favourably with traditional toxicity testing (Daphnia, 48 hours; Folsomia, 28 days).
2. Gene expression is specific. Because a transcription profile involves hundreds to thousands of endpoints (genes), the information content is potentially very large. By comparing a new profile, generated by an unknown compound, to a trained data set, the compound can usually be identified quite precisely.
3. Gene expression is sensitive. Because gene regulation is among the very first biochemical responses in an organism, it is expected to respond at lower dosages, at which whole-body parameters such as survival, growth and reproduction are not yet responding.
Among these advantages, the second one (specificity) has proven to be the most consistent and possibly brings the largest advantage. This can be illustrated by a study by Dom et al. (2012) in which gene expression profiles were generated for Daphnia magna exposed to different alcohols and chlorinated anilines (Figure 2). The profiles of replicates exposed to the same compound always clustered together, except in one case (ethanol), showing that gene expression is quite specific to the compound. This argument can be reversed: from a gene expression profile, the compound causing it can be deduced. In addition, the cited example showed that the first separation in the cluster analysis was between exposures that did and did not affect energy reserves and growth. So gene expression profiles are not only indicative of the compound, but also of the type of effects expected. The claim of rapidity is also valid in principle; however, this advantage is not always borne out in practice. It may matter when quick decisions are crucial (evaluating a truckload of suspect contaminated soil, or deciding whether or not to discharge a certain waste stream into a lake), but for regular risk assessment procedures it has proven to be less of an advantage than sometimes expected. Finally, greater sensitivity of gene expression, in the sense of lower no-observed-effect concentrations than classical endpoints, is a potential advantage, but has proven less spectacular in practice. However, there are clear examples in which exposures below phenotypic effect levels were shown to induce gene expression responses, indicating that the organism was able to compensate any negative effects by adjusting its biochemistry. Another strategy regarding the use of gene expression in risk assessment is not to focus on genome-wide transcriptomes but on selected biomarker genes. In this strategy, gene expression markers are sought that show (1) consistent dose-dependency, (2) responses over a wide range of contaminants, and (3) correlations with biological damage. For example, De Boer et al. (2015) analysed a composite data set including experiments with six heavy metals, six chlorinated anilines, tetrachlorobenzene, phenanthrene, diclofenac and isothiocyanate, all previously used in standardized experiments with the soil-living collembolan Folsomia candida. Across all treatments a selection of 61 genes was made that were responsive in all cases and fulfilled the three criteria listed above. Some of these marker genes showed a very good and reproducible dose-related response to soil contamination. Two biomarkers are shown in Figure 3. This experiment, designed to diagnose a field soil with complex unknown contamination, clearly demonstrated the presence of Cyp-inducing organic toxicants. Of course there are also disadvantages associated with transcriptomics in environmental toxicology, for example:
1. Gene expression analysis requires a knowledge-intensive infrastructure, including a high level of expertise for some of the bioinformatics analyses. Adequate molecular laboratory facilities are also needed, and some techniques are quite expensive.
2. Gene expression analysis is most fruitful when species are used that are backed up by adequate genomic resources, especially a well-annotated genome assembly, although this is becoming less of a problem with the increasing availability of genomic resources.
3. The relationship between gene expression and ecologically relevant variables such as growth and reproduction of the animal is not always clear.
Conclusions
Gene expression analysis has come to occupy a designated niche in environmental toxicology since about 2005. It is a field highly driven by technology and has shown continuous change in recent years.
It may significantly contribute to risk assessment in the context of mode-of-action studies and as a source of designated biomarker techniques. Finally, transcriptomics data are very suitable for providing information on key events: important biochemical alterations that are causally linked up to the level of the phenotype to form an adverse outcome pathway. We refer to the section on Adverse outcome pathways for further reading.
References
Bundy, J.G., Sidhu, J.K., Rana, F., Spurgeon, D.J., Svendsen, C., Wren, J.F., Stürzenbaum, S.R., Morgan, A.J., Kille, P. (2008). "Systems toxicology" approach identifies coordinated metabolic responses to copper in a terrestrial non-model invertebrate, the earthworm Lumbricus rubellus. BMC Biology 6, 25.
De Boer, T.E., Janssens, T.K.S., Legler, J., Van Straalen, N.M., Roelofs, D. (2015). Combined transcriptomics analysis for classification of adverse effects as a potential end point in effect based screening. Environmental Science and Technology 49, 14274-14281.
Dom, N., Vergauwen, L., Vandenbrouck, T., Jansen, M., Blust, R., Knapen, D. (2012). Physiological and molecular effect assessment versus physico-chemistry based mode of action schemes: Daphnia magna exposed to narcotics and polar narcotics. Environmental Science and Technology 46, 10-18.
Gibson, G., Muse, S.V. (2002). A Primer of Genome Science. Sinauer Associates Inc., Sunderland.
Gibson, G. (2008). The environmental contribution to gene expression profiles. Nature Reviews Genetics 9, 575-581.
Roelofs, D., De Boer, M., Agamennone, V., Bouchier, P., Legler, J., Van Straalen, N. (2012). Functional environmental genomics of a municipal landfill soil. Frontiers in Genetics 3, 85.
Van Straalen, N.M., Feder, M.E. (2012). Ecological and evolutionary functional genomics - how can it contribute to the risk assessment of chemicals? Environmental Science & Technology 46, 3-9.
Van Straalen, N.M., Roelofs, D. (2008). Genomics technology for assessing soil pollution. Journal of Biology 7, 19.
4.3.12. Question 1
Define: (1) genomics, (2) transcriptomics, (3) proteomics, and (4) metabolomics, and indicate which of these approaches is most useful in the risk assessment of chemicals.
4.3.12. Question 2
Specify three advantages and at least one disadvantage of the use of genome-wide gene expression as a tool in environmental risk assessment.
4.3.13. Metabolomics
Author: Pim E.G. Leonards
Reviewers: Nico van Straalen, Drew Ekman
Learning objectives:
You should be able to:
• understand the basics of metabolomics and how metabolomics can be used.
• describe the basic principles of metabolomics analysis, and how a metabolic profile is generated and analysed.
• describe the differences between targeted and untargeted metabolomics and how each is used in environmental toxicology.
• develop an idea of how metabolomics might be integrated into hazard and risk assessments of chemicals.
Keywords: Metabolomics, metabolome, environmental metabolomics, application areas of metabolomics, targeted and untargeted metabolomics, metabolomics analysis and workflow
Introduction
Metabolomics is the systematic study of small organic molecules (<1000 Da) that are intermediates and products formed in cells and biofluids due to metabolic processes. A great variety of small molecules result from the interaction between genes, proteins and metabolites.
The primary types of small organic molecules studied are endogenous metabolites (i.e. those that occur naturally in the cell) such as sugars, amino acids, neurotransmitters, hormones, vitamins, and fatty acids. The total number of endogenous metabolites in an organism is still under study but is estimated to be in the thousands. However, this number varies considerably between species and cell types. For instance, brain cells contain relatively high levels of neurotransmitters and lipids; nevertheless, levels can vary substantially between different types of brain tissue. Metabolites function in networks (e.g. the citric acid cycle) in which molecules are converted by enzymes. The turnover time of a metabolite is regulated by the enzymes present and the amounts of metabolites present. The field of metabolomics is relatively new compared to genomics, with the first draft of the human metabolome becoming available in 2007. However, the field has grown rapidly since that time due to its recognized ability to reflect the molecular changes most closely associated with an organism's phenotype. Indeed, in comparison to other 'omics approaches (e.g. transcriptomics), metabolites are the downstream results of the action of genes and proteins and, as such, provide a direct link with the phenotype (Figure 1). The metabolic status of an organism is directly related to its function (e.g. energetic, oxidative, endocrine, and reproductive status) and phenotype, and is therefore uniquely suitable for relating chemical stress to the health status of organisms. Moreover, unlike transcriptomics and proteomics, the identification of metabolites does not require the existence of gene sequences, making it particularly useful for those species which lack a sequenced genome.
Definitions
The complete set of small molecules in a biological system (e.g. cells, body fluids, tissues, organism) is called the metabolome (Table 1). The term metabolomics was introduced by Oliver et al. (1998), who described it as "the complete set of metabolites/low molecular weight compounds which is context dependent, varying according to the physiology, development or pathological state of the cell, tissue, organ or organism". This quote highlights the observation that the levels of metabolites can vary due to internal as well as external factors, including stress resulting from exposure to environmental contaminants. This has resulted in the emergence and growth of the field of environmental metabolomics, which is based on the application of metabolomics to biological systems that are exposed to environmental contaminants and other relevant stressors (e.g. temperature). In addition to endogenous metabolites, some metabolomic studies also measure changes in the biotransformation of environmental contaminants, food additives, or drugs in cells, the collection of which has been termed the xenometabolome.
Table 1: Definitions used in metabolomics.

Metabolomics
Definition: analysis of small organic molecules (<1000 Da) in biological systems (e.g. cell, tissue, organism).
Relevance for environmental toxicology: functional read-out of the physiological state of a cell or tissue, directly related to the phenotype.

Metabolome
Definition: the complete set of small molecules in a biological system.
Relevance: discovery of affected metabolic pathways due to contaminant exposure.

Environmental metabolomics
Definition: metabolomics analysis in biological systems that are exposed to environmental stress, such as exposure to environmental contaminants.
Relevance: metabolomics focused on environmental contaminant exposure, used for instance to study the mechanism of toxicity or to find a biomarker of exposure or effect.

Xenometabolome
Definition: metabolites formed from the biotransformation of environmental contaminants, food additives, or drugs.
Relevance: understanding the metabolism of the target contaminant.

Targeted metabolomics
Definition: analysis of a pre-selected set of metabolites in a biological system.
Relevance: focus on the effects of environmental contaminants on specific metabolic pathways.

Untargeted metabolomics
Definition: analysis of all detectable (i.e. not preselected) metabolites in a biological system.
Relevance: discovery-based analysis of the metabolic pathways affected by environmental contaminant exposure.
• Metabolome: the complete set of small molecules in a biological system. Relevance: discovery of affected metabolic pathways due to contaminant exposure.
• Environmental metabolomics: metabolomics analysis in biological systems that are exposed to environmental stress, such as exposure to environmental contaminants. Relevance: metabolomics focused on environmental contaminant exposure, to study for instance the mechanism of toxicity or to find a biomarker of exposure or effect.
• Xenometabolome: metabolites formed from the biotransformation of environmental contaminants, food additives, or drugs. Relevance: understanding the metabolism of the target contaminant.
• Targeted metabolomics: analysis of a pre-selected set of metabolites in a biological system. Relevance: focus on the effects of environmental contaminants on specific metabolic pathways.
• Untargeted metabolomics: analysis of all detectable (i.e., not preselected) metabolites in a biological system. Relevance: discovery-based analysis of the metabolic pathways affected by environmental contaminant exposure.
Environmental Metabolomics Analysis
The development and successful application of metabolomics relies heavily on i) currently available analytical techniques that measure metabolites in cells, tissues, and organisms, ii) the identification of the chemical structures of the metabolites, and iii) characterisation of the metabolic variability within cells, tissues, and organisms. The aim of metabolomics analysis in environmental toxicology can be:
• to focus on changes in the abundances of specific metabolites in a biological system after environmental contaminant exposure: targeted metabolomics
• to provide a "complete" overview of changes in abundances of all detectable metabolites in a biological system after environmental contaminant exposure: untargeted metabolomics
In targeted metabolomics a limited number of pre-selected metabolites (typically 1-100) are quantitatively analysed (e.g. nmol dopamine/g tissue). For example, metabolites in the neurotransmitter biosynthetic pathway could be targeted to assess exposures to pesticides. Targeting specific metabolites in this way typically allows for their detection at low concentrations with high accuracy. Conversely, in untargeted metabolomics the aim is to detect as many metabolites as possible, regardless of their identities, so as to assess as much of the metabolome as possible. The largest challenge for untargeted metabolomics is the identification (annotation) of the chemical structures of the detected metabolites. There is currently no single analytical method able to detect all metabolites in a sample, and therefore a combination of different analytical techniques is used to detect the metabolome. Different techniques are required due to the wide range of physical-chemical properties of the metabolites. The variety of chemical structures of metabolites is shown in Figure 2. Metabolites can be grouped into classes such as fatty acids (the classes are given in brackets in Figure 2), and within a class different metabolites can be found.
A general workflow of environmental metabolomics analysis uses the following steps:
1. Exposure of the organism or cells to an environmental contaminant. An unexposed control group must also be included. The exposures often include the use of various concentrations, time-points, etc., depending on the objectives of the study.
2. Sample collection of the relevant biological material (e.g. cell, tissue, organism). It is important that the collection be done as quickly as possible so as to quench any further metabolism. Typically ice-cold solvents are used.
3. Extraction of the metabolites from the cell, tissue or organism, by a two-step extraction using a combination of polar (e.g. water/methanol) and apolar (e.g. chloroform) extraction solvents.
4. Analysis of the polar and apolar fractions using liquid chromatography (LC) or gas chromatography (GC) combined with mass spectrometry (MS), or by nuclear magnetic resonance (NMR) spectroscopy. The analytical tool(s) used will depend on the metabolites under consideration and whether a targeted or untargeted approach is required.
5. Metabolite detection (targeted or untargeted analysis).
• Targeted metabolomics - a specific set of pre-selected metabolites are detected and their concentrations are determined using authentic standards.
• Untargeted metabolomics - a list of all detectable metabolites, measured as MS or NMR responses, and their intensities is collected. Various techniques are then used to determine the identities of those metabolites that change due to the exposure (see step 7 below).
6. Statistical analysis using univariate and multivariate statistics to calculate the difference between the exposure and the control groups. The fold change (fold increase or decrease of the metabolite levels) between an exposure and control group is determined (a minimal computational sketch is given after this list).
7. For untargeted metabolomics, only the chemical structures of the statistically significantly changed metabolites are identified. The identification of the chemical structure of a metabolite can be based on the molecular weight, isotope patterns, elemental compositions, mass spectrometry fragmentation patterns, etc. Mass spectral libraries are used for the identification, by matching the above parameters measured in the samples with the data in the libraries.
8. Data processing: identification of the metabolic pathways influenced by the chemical exposure, giving an integrative picture of the molecular and functional state of the organism. The aim is to understand the relationship between the chemical exposure, molecular pathway changes and the observed toxicity, or to identify potential biomarkers of exposure or effect.
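The sketch below illustrates workflow step 6 in Python. It is a minimal, hypothetical example (simulated intensities, simple Welch t-tests with a hand-coded Benjamini-Hochberg correction), not a validated metabolomics pipeline; real studies use dedicated software and more elaborate normalisation.

```python
# Sketch of workflow step 6: fold changes and univariate statistics on a
# metabolite intensity matrix (rows = replicates, columns = metabolites).
# All data are simulated; in a real study the matrix comes from step 5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m = 200                                             # number of metabolites
control = rng.lognormal(2.0, 0.3, size=(8, m))      # 8 control replicates
exposed = rng.lognormal(2.0, 0.3, size=(8, m))      # 8 exposed replicates
exposed[:, :10] *= 2.5                              # simulate 10 responding metabolites

# Fold change per metabolite: mean exposed intensity over mean control intensity
fold_change = exposed.mean(axis=0) / control.mean(axis=0)

# Welch's t-test per metabolite on log-transformed intensities
_, p = stats.ttest_ind(np.log(exposed), np.log(control), axis=0, equal_var=False)

# Benjamini-Hochberg correction: 200 tests would otherwise inflate false positives
order = np.argsort(p)
adj = p[order] * m / np.arange(1, m + 1)
adj = np.minimum.accumulate(adj[::-1])[::-1]        # enforce monotonicity
q = np.empty(m)
q[order] = adj

hits = np.flatnonzero(q < 0.05)
print(len(hits), "significant metabolites; fold changes:", fold_change[hits].round(2))
```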
Box: Analytical tools for metabolomics analysis
The most frequently used analytical tools for measuring metabolites are mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy. MS is an analytical tool that generates ions of molecules and then measures their mass-to-charge ratios. This information can be used to generate a "molecular fingerprint" for each molecule, and based on this fingerprint metabolites can be identified. Chromatography is typically used to separate the different metabolites of a mixture found in a sample before it enters the mass spectrometer. Two main chromatography techniques are used in metabolomics: liquid chromatography and gas chromatography. Due to its high sensitivity, MS is able to measure a large number of different metabolites simultaneously, across a large range of physical-chemical properties. Moreover, when coupled with a separation method such as chromatography, MS can detect and identify thousands of metabolites. NMR is less sensitive than MS and can therefore detect a lower number of metabolites (typically 50-200). The advantages of NMR are the minimal amount of sample handling, the high reproducibility of the measurements, and the easier quantification of metabolite levels. In addition, NMR is a non-destructive technique, such that a sample can often be used for further analyses after the data have been acquired.
Application of environmental metabolomics
Metabolomics has been widely used in drug discovery and medical sciences. More recently, metabolomics is being incorporated into environmental studies, an emerging field of research called environmental metabolomics. Environmental metabolomics is used mainly in five application domains (Table 2). Arguably the most common application is the study of the mechanism of toxicity/mode of action (MoA) of contaminants. However, many studies have identified select metabolites that show promise for use as biomarkers of exposure or effect. As a result of its strength in identifying response fingerprints, metabolomics is also finding use in the regulatory toxicology field, particularly for read-across studies. This application is particularly useful for rapidly screening contaminants for toxicity. Metabolomics can also be used in dose-response studies (benchmark dosing) to derive a point of departure (POD), which is especially interesting in regulatory chemical risk assessment. Currently, the field of systems toxicology is explored by combining data from different omics fields (e.g. transcriptomics, proteomics, metabolomics) to improve our understanding of the relationship between the different omics, chemical exposure, and toxicity, and to better understand the mechanism of toxicity/MoA.
Table 2: Application areas of metabolomics in environmental toxicology (application area; description).
• Mechanism of toxicity/mode of action (MoA): using metabolomics to understand at the molecular level the pathways that are affected by exposure to environmental contaminants, and to discover the mode of action of chemicals. In an adverse outcome pathway (AOP) context, metabolomics is used to identify the key events (KE), by linking chemical exposure at the molecular level to functional endpoints (e.g. reproduction, behaviour).
• Biomarker discovery: identification of metabolites that can be used as convenient (i.e., easy and inexpensive to measure) indicators of exposure or effect.
• Read-across: in regulatory toxicology, metabolomics is used in read-across studies to provide information on the similarity of the responses between chemicals. This approach is useful to identify the more environmentally toxic chemicals.
• Point of departure: metabolomics can be used in dose-response studies (benchmark dosing) to derive a point of departure (POD). This is especially interesting in regulatory chemical risk assessment, although this application is not in regular use yet.
• Systems toxicology: combining different omics approaches (e.g. transcriptomics, proteomics, metabolomics) to improve our understanding of the relationship between the different omics and chemical exposure, and to better understand the mechanism of toxicity/MoA.
As an illustration of the mechanism of toxicity/mode of action application, Bundy et al. (2008) used NMR-based metabolomics to study earthworms (Lumbricus rubellus) exposed to various concentrations of copper in soil (0, 10, 40, 160, 480 mg copper/kg soil). They performed both transcriptomics and metabolomics studies. Both polar (sugars, amino acids, etc.) and apolar (lipids) metabolites were analysed, and fold changes relative to the control group were determined.
For example, differences in the fold changes of lipid metabolites (e.g. fatty acids, triacylglycerol) as a function of copper concentration are shown as a "heatmap" in Figure 3A. Clearly, the highest dose group (480 mg/kg) has a very different lipid metabolite pattern from the other groups. The polar metabolite data were analysed using principal component analysis (PCA), a multivariate statistical tool that reduces the number of dimensions of the data. The PCA score plot shown in Figure 3B reveals that the largest differences in metabolite profiles exist between the control and low dose (10 mg Cu/kg) groups, the 40 mg Cu/kg and 160 mg Cu/kg groups, and the highest dose (480 mg Cu/kg) group. These separations indicate that the metabolite patterns in these groups were different as a result of the different copper exposures. Some of the metabolites were up- and some were down-regulated due to the copper exposure (two examples are given in Figures 3C and 3D). The metabolite data were also combined with gene expression data in a systems toxicology application. This combined analysis showed that the copper exposures led to disruption of energy metabolism, particularly with regard to effects on the mitochondria and oxidative phosphorylation. Bundy et al. associated this effect on energy metabolism with a reduced growth rate of the earthworms. This study effectively showed that metabolomics can be used to understand the metabolite pathways that are affected by copper exposure and are closely linked to phenotypic changes (i.e., reduced growth rate). The transcriptome data collected simultaneously were in good accordance with the metabolome patterns, supporting Bundy et al.'s hypothesis that simultaneous measurement of the transcriptome and metabolome can be used to validate the findings of both approaches, and in turn the value of "systems toxicology".
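A PCA of this kind is straightforward to sketch. The example below is hypothetical: the log-intensities and dose groups are simulated (loosely mirroring the design of Bundy et al.) and scikit-learn is assumed to be available. It only illustrates the dimension-reduction step, not the actual data behind Figure 3B.

```python
# Sketch of a PCA on metabolite profiles: rows are replicate worms, columns
# are metabolite log-intensities. Data and dose groups are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
groups = ["control", "10 mg/kg", "40 mg/kg", "160 mg/kg", "480 mg/kg"]
n_rep, n_met = 6, 150

blocks, labels = [], []
for dose, name in enumerate(groups):
    block = rng.normal(size=(n_rep, n_met))
    block[:, :20] += 0.8 * dose        # dose-dependent shift in 20 metabolites
    blocks.append(block)
    labels += [name] * n_rep
X = np.vstack(blocks)

# Autoscale each metabolite, then project the profiles onto two components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name in groups:                    # group centroids separate along PC1
    centre = scores[np.array(labels) == name].mean(axis=0)
    print(f"{name:>9}: PC1={centre[0]:6.2f}  PC2={centre[1]:6.2f}")
```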
Challenges in metabolomics
Several challenges currently exist in the field of metabolomics. From a biological perspective, metabolism is a dynamic process and therefore very time-sensitive. Taking samples at different time-points during the development of an organism, or throughout a chemical exposure, can result in quite different metabolite patterns. Sample handling and storage can also be challenging, as some metabolites are very unstable during sample collection and sample treatment. From an analytical perspective, metabolites possess a wide range of physico-chemical properties and occur in highly varying concentrations, such that capturing the widest portion of the metabolome requires analysis with more than one analytical technique. The largest challenge, however, is arguably the identification of the chemical structure of unknown metabolites. Even with state-of-the-art analytical techniques only a fraction of the unknown metabolites can be confidently identified.
Conclusions
Metabolomics is a relatively new field in toxicology, but it is rapidly increasing our understanding of the biochemical pathways affected by exposure to environmental contaminants, and in turn their mechanisms of action. Linking contaminant-induced changes in molecular pathways to phenotypic changes in organisms is an area of great interest. Continual advances in state-of-the-art analytical tools for metabolite detection and identification will continue this trend and expand the utility of environmental metabolomics for prioritizing contaminants. However, a number of challenges remain for the widespread use of metabolomics in regulatory toxicology. Fortunately, international interest in addressing these challenges is growing, and great strides are being made in a variety of applications.
References
Bundy, J.G., Sidhu, J.K., Rana, F., Spurgeon, D.J., Svendsen, C., Wren, J.F., Stürzenbaum, S.R., Morgan, A.J., Kille, P. (2008). 'Systems toxicology' approach identifies coordinated metabolic responses to copper in a terrestrial non-model invertebrate, the earthworm Lumbricus rubellus. BMC Biology 6, 25.
Bundy, J.G., Davey, M.P., Viant, M.R. (2009). Environmental metabolomics: a critical review and future perspectives. Metabolomics 5, 3-21.
Johnson, C.H., Ivanisevic, J., Siuzdak, G. (2016). Metabolomics: beyond biomarkers and towards mechanisms. Nature Reviews Molecular Cell Biology 17, 451-459.
4.3.13. Question 1
What type of molecules are measured with metabolomics?
• Proteins
• Small molecules
• Genes
• Polymers
4.3.13. Question 2
What are typical application areas of environmental metabolomics?
4.3.13. Question 3
Give four main elements of the workflow of metabolomics, and describe two in more detail.
4.4. Increasing ecological realism in toxicity testing
Author: Michiel Kraak
Reviewer: Kees van Gestel
Learning objectives:
You should be able to
• argue the need to increase ecological realism in single-species toxicity tests.
• list the consecutive steps to increase ecological realism in single-species toxicity tests.
Keywords: single-species toxicity tests, mixture toxicity, multistress, chronic toxicity, multigeneration effects, ecological realism.
Introduction
The vast majority of single-species toxicity tests reported in the literature concerns acute or short-term exposures to individual chemicals, in which mortality is often the only endpoint. This is in sharp contrast with the actual situation at contaminated sites, where organisms may be exposed to relatively low levels of mixtures of contaminants under suboptimal conditions for their entire life span. Hence there is an urgent need to increase ecological realism in single-species toxicity tests by addressing sublethal endpoints, mixture toxicity, multistress effects, chronic toxicity and multigeneration effects.
Increasing ecological realism in single-species toxicity tests
Mortality is a crude parameter representing the response of organisms to relatively high and therefore often environmentally irrelevant toxicant concentrations. At much lower and environmentally more relevant toxicant concentrations, organisms may suffer from a wide variety of sublethal effects. Hence, the first step to gain ecological realism in single-species toxicity tests is to address sublethal endpoints instead of, or in addition to, mortality (Figure 1). Yet, given the short exposure time in acute toxicity tests, it is difficult to assess endpoints other than mortality. Photosynthesis of plants and behaviour of animals are elegant, sensitive and rapidly responding endpoints that can be incorporated into short-term toxicity tests to enhance their ecological realism (see section on Endpoints).
Since organisms are often exposed to relatively low levels of contaminants for their entire life span, the next step to increase ecological realism in single-species toxicity tests is to increase exposure time by performing chronic experiments (Figure 1) (see section on Chronic toxicity). Moreover, in chronic toxicity tests a wide variety of sublethal endpoints can be assessed in addition to mortality, the most common ones being growth and reproduction (see section on Endpoints). Given the relatively short duration of the life cycle of many invertebrates and unicellular organisms like bacteria and algae, it would be relevant to prolong the exposure time even further, by exposing the test organisms for their entire life span, so from the egg or juvenile phase until adulthood including their reproductive performance, or for several generations, assessing multigeneration effects (Figure 1) (see section on Multigeneration effects).
In contaminated environments, organisms are generally exposed to a wide variety of toxicants under variable and sub-optimal conditions. To further gain ecological realism, mixture toxicity and multistress scenarios should thus be considered (Figure 1) (see sections on Mixture toxicity and Multistress). The highest ecological relevance of laboratory toxicity tests may be achieved by addressing the above-mentioned issues all together in one type of experiment: chronic mixture toxicity tests assessing sublethal endpoints. Yet, even nowadays such studies remain scarce.
Another way of increasing ecological realism of toxicity testing is by moving towards multispecies test systems that allow for assessing the impacts of chemicals and other stressors on species interactions within communities (see chapter 5 on Population, community and ecosystem ecotoxicology).
4.4. Question 1
Argue the need to increase ecological realism in single-species toxicity tests.
4.4. Question 2
List the consecutive steps to increase ecological realism in single-species toxicity tests.
4.4.1. Mixture toxicity
Authors: Michiel Kraak & Kees van Gestel
Reviewer: Thomas Backhaus
Learning objectives:
You should be able to
• explain the concepts involved in mixture toxicity testing, including Concentration Addition and Response Addition.
• design mixture toxicity experiments and to understand how the toxicity of (equitoxic) toxicant mixtures is assessed.
• interpret the results of mixture toxicity experiments and to understand the meaning of Concentration Addition, Response Addition, as well as antagonism and synergism as deviations from Concentration Addition and Response Addition.
Key words: Mixture toxicity, TU summation, equitoxicity, Concentration Addition, Response Addition, Independent Action
Introduction
In contaminated environments, organisms are generally exposed to complex mixtures of toxicants. Hence, there is an urgent need for assessing their joint toxic effects. In theory, there are four classes of joint effects of compounds in a mixture, as depicted in Figure 1:
• similar action without interaction: simple similar action (Concentration Addition)
• similar action with interaction: complex similar action
• dissimilar action without interaction: independent action (Response Addition)
• dissimilar action with interaction: dependent action
Figure 1. The four classes of joint effects of compounds in a mixture, as proposed by Hewlett and Plackett (1959).
Simple similar action & Concentration Addition
The most simple case concerns compounds that share the same mode of action and do not interact (Figure 1, upper left panel: simple similar action). This holds for compounds acting on the same biological pathway, affecting strictly the same molecular target. Hence, the only difference is the relative potency of the compounds. In this case Concentration Addition is taken as the starting point, following the Toxic Unit (TU) approach. This approach expresses the toxic potency of a chemical as TU, which is calculated for each compound in the mixture as:
TU = c / ECx
with c = the concentration of the compound in the mixture, and ECx = the concentration of the compound where the measured endpoint is affected by X% compared to the non-exposed control. Next, the toxic potency of the mixture is calculated as the sum of the TUs of the individual compounds:
TUmixture = Σ TUi = TU(A) + TU(B) + ...
Imagine that the EC50 of compound A is 300 μg.L-1 and that of compound B 60 μg.L-1. In a mixture of A+B, 30 μg.L-1 of A and 30 μg.L-1 of B are added. These concentrations represent 30/300 = 0.1 TU of A and 30/60 = 0.5 TU of B. Hence, the mixture consists of 0.1 + 0.5 = 0.6 TU. Yet, the two compounds in this mixture are not represented at equal toxic strength, since this specific mixture is dominated by compound B.
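As a minimal sketch of this TU bookkeeping (Python; the numbers are simply the worked example above):

```python
# Toxic Unit (TU) summation under Concentration Addition, using the example
# above: EC50(A) = 300 ug/L, EC50(B) = 60 ug/L.
def total_toxic_units(conc, ec50):
    """Sum TU_i = c_i / EC50_i over all compounds in the mixture."""
    return sum(conc[k] / ec50[k] for k in conc)

ec50 = {"A": 300.0, "B": 60.0}   # EC50 of each compound, ug/L
mix = {"A": 30.0, "B": 30.0}     # concentrations added to the mixture, ug/L
print(total_toxic_units(mix, ec50))   # 30/300 + 30/60 = 0.6 TU
```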
To compose mixtures in which the compounds are represented at equal toxic strength, the equitoxicity concept is applied:
1 equitoxic TU A+B = 0.5 TU A + 0.5 TU B
1 equitoxic TU A+B = 150 μg.L-1 A + 30 μg.L-1 B
As in traditional concentration-response relationships, survival or a sublethal endpoint is plotted against the mixture concentration, from which the EC50 value and the corresponding 95% confidence limits can be derived (see section on Concentration-response relationships). If the upper and lower 95% confidence limits of the EC50 value of the mixture include 1 TU, the EC50 of the mixture does not differ from 1 TU and the toxicity of the compounds in the mixture is indeed concentration additive (Figure 2).
A striking experiment was performed by Deneer et al. (1988), who tested a mixture of 50 narcotic compounds (see section on Toxicodynamics and Molecular interactions) and observed perfect concentration addition, even when the individual compounds were present at only 0.25% (0.0025 TU) of their EC50. This showed in particular that narcotic compounds present at concentrations well below their no-effect level still contribute to the joint toxicity of a mixture (Deneer et al., 1988). This was also shown for metals (Kraak et al., 1999). This is alarming, since even nowadays environmental legislation is still based on a compound-by-compound approach. The study by Deneer et al. (1988) also clearly demonstrated the logistical challenges of mixture toxicity testing. Since for composing equitoxic mixtures the EC50 values of the individual compounds need to be known, testing an equitoxic mixture of 50 compounds requires 51 toxicity tests: 50 individual compounds and 1 mixture.
Independent Action & Response Addition
When chemicals have a different mode of action, act on different targets, but still contribute to the same biological endpoint, the mixture is expected to behave according to Response Addition (also termed Independent Action; Figure 1, lower left panel). Such a situation would occur, for example, if one compound inhibits photosynthesis and a second one inhibits DNA replication, but both inhibit the growth of an exposed algal population. To calculate the effect of a mixture of compounds with different modes of action, Response Addition is applied as follows. The probability that a compound, at the concentration at which it is present in the mixture, exerts a toxic effect (scaled from 0 to 1), differs per compound, and the cumulative effect of the mixture is the result of combining these probabilities, according to:
E(mix) = E(A) + E(B) - E(A)E(B)
where E(mix) is the fraction affected by the mixture, and E(A) and E(B) are the fractions affected by the individual compounds A and B at the concentrations at which they occur in the mixture. In fact, this equation sums the fraction affected by compound A and the fraction affected by compound B at the concentrations at which they are present in the mixture, and then corrects for the fact that the fraction already affected by chemical A cannot be affected again by chemical B (or vice versa). The latter part of the equation is needed to account for the fact that the chemicals act independently of each other. This is visualised in Figure 3.
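A minimal numeric sketch of Response Addition (Python; the effect fractions are hypothetical), computed via the unaffected fractions, which is the equivalent form derived next:

```python
# Response Addition for independently acting compounds: the fraction left
# unaffected by the mixture is the product of the unaffected fractions.
def response_addition(effects):
    """E(mix) = 1 - product of (1 - E_i), with effects scaled 0-1."""
    unaffected = 1.0
    for e in effects:
        unaffected *= 1.0 - e
    return 1.0 - unaffected

print(response_addition([0.29, 0.29]))  # ~0.50: two EC29 doses give the mixture EC50
```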
The equation E(mix) = E(A) + E(B) - E(A)E(B) can be rewritten as:
1 - E(mix) = (1 - E(A)) * (1 - E(B))
This means that the probability of not being affected by the mixture (1 - E(mix)) is the product of the probabilities of not being affected by (the specific concentrations of) compound A and compound B. At the EC50, both the affected and the unaffected fraction are 50%, hence (1 - E(A)) * (1 - E(B)) = 0.5. If both compounds contribute equally to the effect of the mixture, (1 - E(A)) = (1 - E(B)) and thus (1 - E(A))² = 0.5, so both (1 - E(A)) and (1 - E(B)) equal √0.5 ≈ 0.71. Since the probability of not being affected is 0.71 for compound A and compound B, the probability of being affected is 0.29. Thus at the EC50 of a mixture of two compounds acting according to Independent Action, both compounds should be present at a concentration equalling their EC29.
Interactions between the compounds in a mixture
Concentration Addition as well as Response Addition assume that the compounds in a mixture do not interact (see Figure 1). In reality, however, such interactions can occur in all four steps of the toxic action of a mixture. The first step concerns chemical and physicochemical interactions: compounds in the environment may interact, affecting each other's bioavailability. For instance, an excess of Zn causes Cd to be more available in the soil solution as a result of competition for the same binding sites. The second step involves physiological interactions during uptake by an organism, influencing the toxicokinetics of the compounds, for example by competition for uptake sites at the cell membrane. The third step refers to the internal processing of the compounds, e.g. involving effects on each other's biotransformation or detoxification (toxicokinetics). The fourth step concerns interactions at the target site(s), i.e. the toxicodynamics during the actual intoxication process. The typical whole-organism responses that are recorded in many ecotoxicity tests integrate the last three types of interactions, resulting in deviations from the toxicity predictions from Concentration Addition and Response Addition.
Deviations from Concentration Addition
If the EC50 of the mixture is higher than 1 TU and the lower 95% confidence limit is also above 1 TU, the toxicity of the compounds in the mixture is less than concentration additive, as more of the mixture is needed than anticipated to cause 50% effect (Figure 4, blue line; antagonism). Correspondingly, if the EC50 of the mixture is lower than 1 TU and the upper 95% confidence limit is also below 1 TU, the toxicity of the compounds in the mixture is more than concentration additive (Figure 4, red line; synergism). When the toxicity of a mixture is more than concentration additive, the compounds enhance each other's toxicity. When the toxicity of a mixture is less than concentration additive, the compounds reduce each other's toxicity. Both types of deviation from additivity can have two different reasons:
1. The compounds have the same mode of action, but do interact (Figure 1, upper right panel: complex similar action).
2. The compounds have different modes of action (independent action/Response Addition; Figure 1, lower left panel).
Concentration-response surfaces and isoboles
Elaborating on Figure 4, concentration-response relationships for mixtures can also be presented as multi-dimensional figures, with different axes for the concentration of each of the chemicals included in the mixture (Figure 5A).
In case of a mixture of two chemicals, such a dose-response surface can be shown in a two-dimensional plane using isoboles. Figure 5B shows isoboles for a mixture of two chemicals, under different assumptions for interactions according to Concentration Addition. If the interaction between the two compounds decreases the toxicity of the mixture, this is referred to as antagonism (Figure 5B, blue line). If the interaction between the two compounds increases the toxicity of the mixture, this is referred to as synergism (Figure 5B, red line). Thus both antagonism and synergism are terms to describe deviations from Concentration Addition due to interaction between the compounds.
Synergism and antagonism evaluated by both concepts
The use of the terms synergism and antagonism may be problematic, because antagonism in relation to Concentration Addition (less than concentration additive; Figure 5B, blue line) can simply be caused by the compounds behaving according to Response Addition, and not behaving antagonistically. Similarly, deviations from Response Addition could also mean that chemicals in the mixture do have the same mode of action, so act additively according to Concentration Addition. One can therefore only conclude on synergism/antagonism if the experimental observations are higher/lower than the predictions by both concepts.
Suggested further reading
Rider, C.V., Simmons, J.E. (2018). Chemical Mixtures and Combined Chemical and Nonchemical Stressors: Exposure, Toxicity, Analysis, and Risk. Springer International Publishing AG. ISBN-13: 978-3319562322.
Bopp, S.K., Kienzler, A., Van der Linden, S., Lamon, L., Paini, A., Parissis, N., Richarz, A.N., Triebe, J., Worth, A. (2016). Review of case studies on the human and environmental risk assessment of chemical mixtures. JRC Technical Reports EUR 27968 EN, European Union, doi:10.2788/272583.
References
Berenbaum, M.C. (1981). Criteria for analysing interactions between biologically active agents. Advances in Cancer Research 35, 269-335.
Deneer, J.W., Sinnige, T.L., Seinen, W., Hermens, J.L.M. (1988). The joint acute toxicity to Daphnia magna of industrial organic chemicals at low concentrations. Aquatic Toxicology 12, 33-38.
Hewlett, P.S., Plackett, R.L. (1959). A unified theory for quantal responses to mixtures of drugs: non-interactive action. Biometrics 15, 591-610.
Kraak, M.H.S., Stuijfzand, S.C., Admiraal, W. (1999). Short-term ecotoxicity of a mixture of five metals to the zebra mussel Dreissena polymorpha. Bulletin of Environmental Contamination and Toxicology 63, 805-812.
Van Gestel, C.A.M., Jonker, M.J., Kammenga, J.E., Laskowski, R., Svendsen, C. (Eds.) (2011). Mixture toxicity. Linking approaches from ecological and human toxicology. SETAC Press, Society of Environmental Toxicology and Chemistry, Pensacola.
4.4.1. Question 1
What is the motivation to perform mixture toxicity experiments?
4.4.1. Question 2
When do you expect concentration addition and when not?
4.4.1. Question 3
What are the three possible outcomes of a mixture toxicity experiment applying concentration addition?
4.4.1. Question 4
Calculate the effect concentration at which two compounds with different modes of action showing no interaction equally contribute to a mixture causing 60% effect.
4.4.1. Question 5
One can only conclude on synergism/antagonism if the experimental observations are higher/lower than the predictions by both concepts (Concentration Addition and Response Addition). Why?
4.4.2. Multistress - Introduction
Author: Michiel Kraak
Reviewer: Kees van Gestel
Learning objectives:
You should be able to
• define stress and multistress.
• explain the ecological relevance of multistress scenarios.
Keywords: Stress, multistress, chemical-abiotic interactions, chemical-biotic interactions
Introduction
In contaminated environments, organisms are generally exposed to a wide variety of toxicants under variable and sub-optimal conditions. To gain ecological realism, multistress scenarios should thus be considered; these are, however, understudied.
Definitions
Stress is defined as an environmental change that affects the fitness and ecological functioning of species (i.e., growth, reproduction, behaviour), ultimately leading to changes in community structure and ecosystem functioning. Multistress is subsequently defined as a situation in which an organism is exposed both to a toxicant and to stressful environmental conditions. This includes chemical-abiotic interactions, chemical-biotic interactions, as well as combinations of these. Common abiotic stressors are for instance pH, drought, salinity and above all temperature, while common biotic stressors include predation, competition, population density and food shortage. Experiments on such stressors typically study, for instance, the effect of increasing temperature or the influence of food availability on the toxicity of compounds. The present definition of multistress thus excludes mixture toxicity (see section on Mixture toxicity), as well as situations in which organisms are confronted with several suboptimal (a)biotic environmental variables jointly without being exposed to toxicants. The next sections deal with chemical-biotic and chemical-abiotic interactions and with practical issues related to the performance of multistress experiments, respectively.
4.4.2. Question 1
Give the definitions of stress and multistress.
4.4.2. Question 2
What is the ecological relevance of testing multistress scenarios?
4.4.3. Multistress - biotic
Authors: Marjolein Van Ginneken and Lieven Bervoets
Reviewers: Michiel Kraak and Martin Holmstrup
Learning objectives:
You should be able to
• define biotic stress and to give three examples.
• explain how biotic stressors can change the toxicity of chemicals.
• explain how chemicals can change the way organisms react to biotic stressors.
Keywords: Multistress, chemical-biotic interactions, stressor interactions, bioavailability, behavior, energy trade-off
Introduction
Generally, organisms have to cope with the joint presence of chemical and natural stressors. Both biotic and abiotic stressors can affect the chemicals' bioavailability and toxicokinetics. Additionally, they can influence the behavior and physiology of organisms, which could result in higher or lower toxic effects. Vice versa, chemicals can alter the way organisms react to natural stressors. By studying the effects of multiple stressors, we can identify potential synergistic, additive or antagonistic interactions, which is essential to adequately assess the risk of chemicals in nature. Relyea (2003), for instance, found that apparently safe concentrations of carbaryl can become deadly to some amphibian species when combined with predator cues.
This section focuses on biotic stress, which can be defined as stress caused by living organisms; it includes predation, competition, population density, food availability, pathogens and parasitism. It will describe how biotic stressors and chemicals act and interact.
Types of biotic stressors
Biotic stressors can have direct and indirect effects on organisms. For example, predators can change food web structures by consuming their prey and thus altering prey abundance, and can indirectly affect prey growth and development as well, by inducing energetically costly defense mechanisms. Behaviors like (foraging) activity can also be decreased, and even morphological changes can be induced. For example, Daphnia pulex can develop neck spines when subject to predation. Similarly, parasites can alter host behavior or induce morphological changes, e.g. in coloration, but they usually do not kill their host. Yet, parasitism can compromise the immune system and alter the energy budget of the host.
High population density is a stressor that can affect energy budgets and increase intraspecific and interspecific competition for space, status or resources. By altering resource availability, it can change growth and size at maturity. Additionally, these competition-related stressors can affect behavior, for example by limiting the number of suitable mating partners. Pathogens (e.g., viruses, bacteria and fungi) can also lower fitness and fecundity. It should be realized that the effects of different biotic stressors cannot be strictly separated from each other. For example, pathogens can spread more rapidly when population densities are high, while predation, on the other hand, can limit competition.
Effects of biotic stressors on bioavailability and toxicokinetics
Biotic stressors can alter the bioavailability of chemicals. For example, in the aquatic environment food level may determine the availability of chemicals to filter feeders, as chemicals may adsorb to particulate organic matter such as algae. As the exposure route (waterborne or via food) can influence the subsequent toxicokinetic processes, this may also change the chemicals' toxic effects.
Effects of biotic stressors on behavior
Biotic stressors have been reported to cause behavioral effects in organisms that could change the toxic effects of chemicals. These effects include altered feeding rates and reduced activities. The presence of a predator, for example, reduces prey (foraging) activity to avoid being detected by the perceived predator, and so decreases chemical uptake via food. On the other hand, the condition of the prey organisms will decrease due to the lower food consumption, which means less energy is available for other physiological processes (see below). In addition to biotic stressors, chemicals can also disrupt essential behaviors by reduction of olfactory receptor sensitivity, cholinesterase inhibition, alterations in brain neurotransmitter levels, and impaired gonadal or thyroid hormone levels. This could lead to disruptive effects on communication, feeding rates and reproduction. An inability to find mating partners, for example, could then be worsened by a low population density. Furthermore, chemicals can alter predator-prey relationships, which might result in trophic cascades. Strong top-down effects will be observed when a predator or grazer is more sensitive to the contaminant than its prey. Alternatively, bottom-up effects are observed when the susceptibility of a prey species to predation is increased.
For example, Cu exposure of fish and crustaceans can decrease their response to olfactory cues, making them unresponsive to predator stress and increasing the risk of being detected and consumed (Van Ginneken et al., 2018). Effects on the competition between species may also occur when one species is more sensitive than the other. Thus, both chemical and biotic stressors can alter behavior and result in interactive effects that could change the entire ecosystem structure and function (Fleeger et al., 2003).
Physiology
Biotic stressors can cause elevated respiration rates, which in aquatic organisms leads to a higher toxicant uptake through diffusion. On the other hand, they can also decrease respiration. For example, low food levels decrease metabolic activity and thus respiration. Additionally, a reduced metabolic rate could decrease the toxicity of chemicals that are metabolically activated. Certain chemicals, such as metals, can also cause a higher or lower oxygen consumption, which might counteract or reinforce the effects of biotic stressors.
Besides affecting respiration, both biotic and chemical stressors can induce physiological damage in organisms. For instance, predator stress and pesticides cause oxidative stress, leading to synergistic effects on the induction of antioxidant enzymes such as catalase and superoxide dismutase (Janssens and Stoks, 2013). Furthermore, the organism can eliminate or detoxify internal toxicant concentrations, e.g. by transformation via Mixed Function Oxidase (MFO) enzymes or by sequestration, i.e. binding to metallothioneins or storage in inert tissues such as granules. These defensive mechanisms for detoxification and damage control are energetically costly, leading to energy trade-offs. This means less energy can be used for other processes such as growth, locomotion or reproduction. Food availability and lipid reserves can then play an important role, as well-fed organisms that are exposed to toxicants can more easily pay the energy costs than food-deprived organisms.
Interactive effects
The possible interactions, i.e. antagonism, synergism or additivity, between effects of stressors are difficult to predict and can differ depending on the stressor combination, the chemical concentration, the endpoint and the species. For Ceriodaphnia dubia, Qin et al. (2011) demonstrated that predator stress influenced the toxic effects of several pesticides differently. While predator cues interacted antagonistically with bifenthrin and thiacloprid, they acted synergistically with fipronil. It should also be noted that interactive effects in nature might be weaker than those observed in the laboratory, as stress levels fluctuate more rapidly or animals can move away from areas with high predation risk or high chemical exposure levels. On the other hand, because ecosystems generally contain more than two stressors, which could interact in an additive or synergistic way as well, interactive effects might be even more important in nature. Understanding interactions among multiple stressors is thus essential to estimate the actual impact of chemicals in nature.
References
Fleeger, J.W., Carman, K.R., Nisbet, R.M. (2003). Indirect effects of contaminants in aquatic ecosystems. Science of the Total Environment 317, 207-233.
Janssens, L., Stoks, R. (2013). Synergistic effects between pesticide stress and predator cues: conflicting results from life history and physiology in the damselfly Enallagma cyathigerum. Aquatic Toxicology 132, 92-99.
Qin, G., Presley, S.M., Anderson, T.A., Gao, W., Maul, J.D. (2011). Effects of predator cues on pesticide toxicity: toward an understanding of the mechanism of the interaction. Environmental Toxicology and Chemistry 30, 1926-1934.
Relyea, R.A. (2003). Predator cues and pesticides: a double dose of danger for amphibians. Ecological Applications 13, 1515-1521.
Van Ginneken, M., Blust, R., Bervoets, L. (2018). Combined effects of metal mixtures and predator stress on the freshwater isopod Asellus aquaticus. Aquatic Toxicology 200, 148-157.
4.4.3. Question 1
Give the definition of biotic stress and give three examples.
4.4.3. Question 2
How can biotic stressors change the toxic effects of chemicals?
4.4.3. Question 3
Give an example of how chemicals can change the way organisms react to biotic stress.
4.4.4. Multistress - abiotic
Author: Martina Vijver
Reviewers: Kees van Gestel, Michiel Kraak, Martin Holmstrup
Learning objectives:
You should be able to
• relate stress to the ecological niche concept.
• list abiotic factors that may alter the toxic effects of chemicals on organisms, and indicate if these abiotic factors decrease or increase the toxic effects of chemicals on organisms.
Keywords: Stress, ecological niche concept, multistress, chemical-abiotic interactions
Introduction: stress related to the ecological niche concept
The concept of stress can be defined at various levels of biological organization, from biochemistry to species fitness, ultimately leading to changes in community structure and ecosystem functioning. Yet, stress is most often studied in the context of individual organisms. The concept of stress is not absolute and can only be defined with reference to the normal range of ecological functioning. This is the case when organisms are within their range of tolerance (the so-called ecological amplitude) or within their ecological niche, which describes the match of a species to specific environmental conditions. Applying this concept to stress allows it to be defined as a condition evoked in an organism by one or more environmental factors that bring the organism near or over the edges of its ecological niche (Van Straalen, 2003); see Figure 1. Multistress is subsequently defined as a situation in which an organism is exposed both to a toxicant and to stressful environmental conditions (see section Multistress - Introduction and definitions). This includes chemical-abiotic interactions, chemical-biotic interactions (see section Multistress - chemical-biotic interactions), as well as combinations of these. In general, organisms living under conditions close to their environmental tolerance limits appear to be more vulnerable to additional chemical stress. The opposite also holds: if organisms are stressed due to exposure to elevated levels of contaminants, their ability to cope with sub-optimal environmental conditions is reduced.
Chemical-abiotic interactions
Temperature. One of the predominant environmental factors altering toxic effects obviously is temperature. For poikilothermic (cold-blooded) organisms, increases in temperature lead to an increase in activity, which may affect both the uptake and the effects of chemicals. In a review by Heugens et al. (2001), studies reporting the effects of chemicals on aquatic organisms in combination with abiotic factors like temperature, nutritional state and salinity were discussed. Generally, toxic effects increased with increasing temperature.
Dependent on the effect parameter studied, the differences in toxic effects between laboratory and relevant field temperatures ranged from a factor of 2 to 130. Freezing temperatures may also interfere with chemical effects, as was shown in another influential review by Holmstrup et al. (2010). Membrane damage is mentioned as an explanation for the synergistic interaction between combinations of metals and temperatures below zero.
Food. Food availability may have a strong effect on the sensitivity of organisms to chemicals (see section Multistress - chemical-biotic interactions). In general, decreasing food or nutrient levels increased toxicity, resulting in differences in toxicity between laboratory and relevant field situations ranging from a factor of 1.2 to 10 (Heugens et al., 2001). Yet, far larger differences in toxic effects related to food levels have been reported as well: experiments performed with daphnids in cages that were placed in outdoor mesocosm ditches (see sections on Cosm studies and In situ bioassays) showed stunning differences in sensitivity to the insecticide thiacloprid. Under conditions of low to ambient nutrient concentrations, the observed toxicity, expressed as the lowest observed effect concentration (LOEC) for growth and reproduction, occurred at thiacloprid concentrations that were 2500-fold lower than laboratory-derived LOEC values. Contrary to the low-nutrient treatment, such altered toxicity was often not observed under nutrient-enriched conditions (Barmentlo et al., submitted). The difference was likely attributable to the increased primary production that allowed for compensatory feeding and perhaps also reduced the bioavailability of the insecticide. Similar results were observed for sub-lethal endpoints measured on the damselfly species Ischnura elegans, for which the response to thiacloprid exposure strongly depended on food availability and quality. Damselflies that were feeding on natural resources were significantly more affected than those that were offered high-quality artificial food (Barmentlo et al., submitted).
Salinity. The influence of salinity on toxicity is less clear (Heugens et al., 2001). If salinity pushes the organism towards its niche boundaries, it will worsen the toxic effects that the organism is experiencing. If a specific salinity fits within the ecological niche of the organism, processes affecting exposure will predominantly determine the stress it will experience. This for instance means that metal toxicity decreases with increasing salinity, as it is strongly affected by the competition of ions (see section on Metal speciation). The toxic effects induced by organophosphate insecticides, however, increase with increasing salinity. For other chemicals, no clear relationship between toxicity and salinity was observed. A salinity increase from freshwater to marine water decreased toxicity by a factor of 2.1 (Heugens et al., 2001). However, as less extreme salinity changes are more relevant under field conditions, the change in toxicity is probably much smaller.
pH. Many organisms have a species-specific range of pH levels at which they function optimally. At pH values outside the optimal range, organisms may show reduced reproduction and growth, and in extreme cases even reduced survival. In some cases, the effects of pH may be indirect, as pH may also have an important impact on the exposure of organisms to toxicants.
This is especially the case for metals and ionizable chemicals: metal speciation, but also the form in which ionizable chemicals occur in the environment and therefore their bioavailability, is highly dependent on pH (see sections on Metal speciation and Ionogenic organic chemicals). An example of the interaction between pH and metal effects was shown by Crommentuijn et al. (1997), who observed a reduced control reproduction of the springtail Folsomia candida, but also the lowest cadmium toxicity, at a soil pH(KCl) of 7.0 compared to pH(KCl) 3.1-5.7.
Drought. In soil, the moisture content (see section on Soil) is an important factor, since drought often limits the suitability of the soil as a habitat for organisms. Holmstrup et al. (2010), reviewing the literature, concluded that chemicals interfering with the drought tolerance of soil organisms, e.g. by affecting the functioning of membranes or the accumulation of sugars, may exacerbate the effects of drought. Earthworms breathe through the skin and can only survive in moist soils, and the eggs of springtails can only survive at a relative air humidity close to 100%. This makes these organisms especially sensitive to drought, which may be enhanced by exposure to chemicals like metals, polycyclic aromatic hydrocarbons or surfactants (Holmstrup et al., 2010).
Many other abiotic conditions, such as oxygen levels, light, turbidity, and organic matter content, can also push organisms towards the boundaries of their niche, but we will not discuss all stressors in this book.
Multistress in environmental risk assessment
In environmental risk assessment, differences between stress-induced effects as determined in the laboratory under standardized optimal conditions with a single toxicant and the effects induced by multiple stressors are taken into account by applying an uncertainty factor. Yet, the choice of uncertainty factors is based on little ecological evidence. Heugens et al. (2001) already argued for obtaining uncertainty factors that sufficiently protect natural systems without being overprotective. Van Straalen (2003) echoed this, and in current research the question is still raised whether enough understanding has been gained to make accurate laboratory-to-field extrapolations. It remains a challenge to predict toxicant-induced effects on species and even on communities while accounting for variable and suboptimal environmental conditions, even though these conditions are common aspects of natural ecosystems (see for instance the section on Eco-epidemiology).
References
Barmentlo, S.H., Vriend, L.M., van Grunsven, R.H.A., Vijver, M.G. (submitted). Evidence that neonicotinoids contribute to damselfly decline.
Crommentuijn, T., Doornekamp, A., Van Gestel, C.A.M. (1997). Bioavailability and ecological effects of cadmium on Folsomia candida (Willem) in an artificial soil substrate as influenced by pH and organic matter. Applied Soil Ecology 5, 261-271.
Heugens, E.H., Hendriks, A.J., Dekker, T., Van Straalen, N.M., Admiraal, W. (2001). A review of the effects of multiple stressors on aquatic organisms and analysis of uncertainty factors for use in risk assessment. Critical Reviews in Toxicology 31, 247-284.
Holmstrup, M., Bindesbøl, A.M., Oostingh, G.J., Duschl, A., Scheil, V., Köhler, H.R., Loureiro, S., Soares, A.M.V.M., Ferreira, A.L.G., Kienle, C., Gerhardt, A., Laskowski, R., Kramarz, P.E., Bayley, M., Svendsen, C., Spurgeon, D.J. (2010). Interactions between effects of environmental chemicals and natural stressors: A review.
Science of the Total Environment 408, 3746-3762.
Van Straalen, N.M. (2003). Ecotoxicology becomes stress ecology. Environmental Science and Technology 37, 324A-330A.
4.4.4. Question 1
1. Describe the niche-based definition of stress using Figure 1.
a. Explain what happens to a species when it has to deal with a temporary stress.
b. Explain what happens to a species if the stress is long term and the species is able to adapt to it.
4.4.4. Question 2
Mention different abiotic factors and indicate how they may affect the sensitivity of organisms to chemicals.
4.4.5. Chronic toxicity - Eco
Author: Michiel Kraak
Reviewers: Kees van Gestel and Lieven Bervoets
Learning objectives:
You should be able to
• explain the concepts involved in chronic toxicity testing, including the Acute to Chronic Ratio (ACR).
• design chronic toxicity experiments and to solve the challenges involved in chronic toxicity testing.
• interpret the results of chronic toxicity experiments and to mention the types of effects of toxicants that cannot be determined in acute toxicity experiments.
Key words: Chronic toxicity, chronic sublethal endpoints, Acute to Chronic Ratio, mode of action.
Introduction
Most toxicity tests performed are short-term high-dose experiments, acute tests in which mortality is often the only endpoint. This is in sharp contrast with the field situation, where organisms are often exposed to relatively low levels of contaminants for their entire life span. The shorter the life cycle of the organism, the more realistic this scenario becomes. Hence, there is an urgent need for chronic toxicity testing. It should be realized, though, that the terms acute and chronic have to be considered in relation to the length of the life cycle of the organism. A short-term exposure of four days is acute for fish, but chronic for algae, already comprising some four generations.
From acute to chronic toxicity testing
The reason for the bias towards acute toxicity testing is obviously the higher costs involved in chronic toxicity testing, simply caused by the much longer duration of the test. Yet, chronic toxicity testing is challenging for several other reasons as well. First of all, during prolonged exposure organisms have to be fed. Although unavoidable, especially in aquatic toxicity testing, this will definitely influence the partitioning and the bioavailability of the test compound. Especially lipophilic compounds will strongly bind to the food, making toxicant uptake via the food more important than for hydrophilic compounds, thus causing compound-specific changes in exposure routes. For chronic aquatic toxicity tests, especially for sediment testing, it may also be challenging to maintain sufficiently high oxygen concentrations throughout the entire experiment (Figure 1).
Obvious choices to be made include the duration of the exposure and the endpoints of the test. Generally, the aim is to include at least one reproductive event or the completion of an entire life cycle of the organism within the test duration. To ensure this, validity criteria are set in the different test guidelines, such as:
- the mean number of living offspring produced per control parent daphnid surviving till the end of the test should be above 60 (OECD, 2012).
- 85% of the adult midges from the control treatment should emerge between 12 and 23 days after the start of the experiment (OECD, 2010).
- the mean number of juveniles produced by 10 control collembolans should be at least 100 (OECD, 2016a).
Chronic toxicity
Generally, toxicity increases with increasing exposure time. This is often expressed as the acute-to-chronic ratio (ACR), which is defined as the LC50 from an acute test divided by the NOEC or EC10 from the chronic test. Alternatively, as shown in Figure 2, the acute LC50 can be divided by the chronic LC50. If compounds exhibit a strong direct lethal effect, the ACR will be low, but for compounds that slowly build up lethal body burdens (see section on Critical body concentrations) it can be very high. Hence, there is a relationship between the mode of action of a compound and the ACR. Yet, if chronic toxicity has to be extrapolated from acute toxicity data and the mode of action of the compound is unknown, an ACR of 10 is generally assumed. It should be realized, though, that this number is chosen quite arbitrarily, potentially leading to under- as well as overestimation of the actual ACR.
Since reproductive events and the completion of life cycles are involved, chronic toxicity tests allow an array of sublethal endpoints to be assessed, including growth and reproduction, as well as species-specific endpoints like the emergence (time) of chironomids. Consequently, compounds with different modes of action may cause very diverse sublethal effects on the test organisms during chronic exposure (Figure 3). The polycyclic aromatic compound (PAC) phenanthrene did not affect the completion of the life cycle of the midges, but above a certain exposure concentration the larvae died and no emergence was observed at all, suggesting a non-specific mode of action (narcosis). In contrast, the PAC acridone caused no mortality but delayed adult emergence significantly over a wide range of test concentrations, suggesting a specific mode of action affecting life cycle parameters of the midges (Leon Paumen et al., 2008). This clearly demonstrates that specific effects of compounds with different modes of action on life cycle parameters need time to become expressed.
Chronic toxicity tests are single-species tests, but if the effects of toxicants are assessed on all relevant life-cycle parameters, these can be integrated into effects on the population growth rate (r). For the 21-day daphnid test this is achieved by the integration of age-specific data on the probability of survival and fecundity. The population growth rates calculated from chronic toxicity data are obviously not related to natural population growth rates in the field, but they do allow the construction of dose-response relationships for the effects of toxicants on r, the ultimate endpoint in chronic toxicity testing (Figure 4; Waaijers et al., 2013).
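Integrating age-specific survival (lx) and fecundity (mx) into r is commonly done with the Euler-Lotka equation, Σ e^(-rx) lx mx = 1. The sketch below solves this equation numerically for a hypothetical daphnid life table; the numbers are illustrative and not taken from the studies cited above.

```python
# Sketch: intrinsic rate of population increase r from a life table, by
# numerically solving the Euler-Lotka equation. Life-table values are
# hypothetical (21-day daphnid-style test).
import numpy as np
from scipy.optimize import brentq

age = np.arange(22)                    # days 0-21
lx = 0.98 ** age                       # probability of surviving to age x
mx = np.where(age >= 9, 10.0, 0.0)     # neonates per surviving female per day

def euler_lotka(r):
    # Root of sum(exp(-r*x) * l_x * m_x) - 1 = 0 gives the growth rate r
    return np.sum(np.exp(-r * age) * lx * mx) - 1.0

r = brentq(euler_lotka, 1e-9, 2.0)     # bracket chosen to contain the root
print(f"r = {r:.3f} per day")          # toxicant effects shift lx/mx, and thus r
```

Toxicant effects on survival or reproduction lower lx or mx, and the resulting r values per concentration can then be used to fit the dose-response relationships mentioned above.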
References
Leon Paumen, M., Borgman, E., Kraak, M.H.S., Van Gestel, C.A.M., Admiraal, W. (2008). Life cycle responses of the midge Chironomus riparius to polycyclic aromatic compound exposure. Environmental Pollution 152, 225-232.
OECD (2007). OECD Guideline for Testing of Chemicals. Test No. 225: Sediment-Water Lumbriculus Toxicity Test Using Spiked Sediment. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2010). OECD Guideline for Testing of Chemicals. Test No. 233: Sediment-Water Chironomid Life-Cycle Toxicity Test Using Spiked Water or Spiked Sediment. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2012). OECD Guideline for Testing of Chemicals. Test No. 211: Daphnia magna Reproduction Test. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016a). OECD Guideline for Testing of Chemicals. Test No. 232: Collembolan Reproduction Test in Soil. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016b). OECD Guideline for Testing of Chemicals. Test No. 222: Earthworm Reproduction Test (Eisenia fetida/Eisenia andrei). Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016c). OECD Guideline for Testing of Chemicals. Test No. 220: Enchytraeid Reproduction Test. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
Waaijers, S.L., Bleyenberg, T.E., Dits, A., Schoorl, M., Schütt, J., Kools, S.A.E., De Voogt, P., Admiraal, W., Parsons, J.R., Kraak, M.H.S. (2013). Daphnid life cycle responses to new generation flame retardants. Environmental Science and Technology 47, 13798-13803.

4.4.5. Question 1
In acute toxicity tests the LC50 is derived after a short exposure time. Mention two outcomes of chronic toxicity tests that cannot be determined in acute toxicity tests.

4.4.5. Question 2
For which mode of toxic action of compounds do you expect low Acute-to-Chronic Ratios (ACR) and for which mode of action do you expect the ACR to be high?

4.4.6. Multigeneration toxicity testing - Eco
Author: Michiel Kraak
Reviewers: Kees van Gestel, Miriam Leon Paumen

Learning objectives:
You should be able to
· explain how effects of toxicants may propagate during multigeneration exposure.
· describe the experimental challenges and limitations of multigeneration toxicity testing and design multigeneration tests.
· explain the implications of multigeneration testing for ecological risk assessment.

Key words: Multigeneration exposure, extinction, adaptation, test design

Introduction
It is generally assumed that chronic life-cycle toxicity tests are indicative of the actual risk that populations suffer from long-term exposure. Yet, at contaminated sites organisms may be exposed during multiple generations, and the shorter the life cycle of the organism, the more realistic this scenario becomes. Only a few multigeneration studies have been performed, however, due to the obvious time and cost constraints. Since both aquatic and terrestrial life-cycle toxicity tests generally last for 28 days (see section on Chronic toxicity), multigeneration testing will take approximately one month per generation. Moreover, the test compound often affects the life cycle of the test species in a dose-dependent manner.
Consequently, the control population, for example, could already be in the 9th generation, while an exposed population could still be in the 8th generation due to a chemical exposure-related delay in growth and/or development. On top of these experimental challenges, multigeneration experiments are extremely error-prone, simply because the chance that an experiment fails increases with increasing exposure time.

Experimental considerations
Designing a multigeneration toxicity experiment is challenging. First of all, there is the choice of how many generations the experiment should last, which is most frequently, but completely arbitrarily, set at approximately 10. Test concentrations have to be chosen as well, mostly based on chronic life-cycle EC50 and EC10 values (Leon Paumen et al., 2008). Yet, it cannot be anticipated if, and to what extent, toxicity increases (or decreases) during multigeneration exposure. Hence, testing only one or two exposure concentrations increases the risk that the observed effects are not dose related, but are simply due to stochasticity. If the test concentrations chosen are too high, many treatments may go extinct after a few generations. In contrast, too low test concentrations may show no effect at all. The latter was observed by Marinkovic et al. (2012), who had to increase the exposure concentrations during the experiment (see Figure in graphical abstract of Marinkovic et al., 2012). Finally, since a single experimental treatment often consists of an entire population, treatment replication is also challenging.
Once the experiment is running, choices have to be made on the transition from generation to generation. If a replicate is maintained in a single jar, vessel or aquarium, generations may overlap and exposure concentrations may decrease with time. Therefore, most often a new generation is started by exposing offspring from the previously exposed parental generation in a freshly spiked experimental unit. If the aim is to determine how a population recovers when the concentration of the toxicant decreases with time, exposure to a single spiked medium is also an option, which seems most applicable to soils (Ernst et al., 2016; Van Gestel et al., 2017). To assess recovery after several generations of (continuous) exposure to contaminated media, offspring from previously exposed generations may be maintained under control conditions.
A wide variety of endpoints can be selected in multigeneration experiments. In the case of aquatic insects like the non-biting midge Chironomus riparius these include survival, larval development time, emergence, emergence time, adult life span and reproduction. For terrestrial invertebrates survival, growth and reproduction can be selected. Only a very limited number of studies evaluated actual population endpoints like population growth rate (Postma and Davids, 1995).

To persist or to perish
If organisms are exposed for multiple generations the effects tend to worsen, ultimately leading to extinction, first of the population exposed to the highest concentration, followed by populations exposed to lower concentrations in later generations (Leon Paumen et al., 2008). Yet, it cannot be excluded that extinction occurs due to the relatively small population sizes in multigeneration experiments, while larger populations may pass a bottleneck and recover during later generations. Thresholds have also been reported, as shown in Figure 1 (Leon Paumen et al., 2008).
Below certain exposure concentrations the exposed populations perform equally well as the controls, generation after generation. Hence, these concentrations may be considered as the 'infinite no-effect concentration'. A mechanistic explanation may be that the metabolic machinery of the organism is capable of detoxifying or excreting the toxicants and that this takes so little energy that there is no trade-off regarding growth and reproduction. It is concluded that the frequently reported worsening of effects during multigeneration toxicant exposure raises concerns about the use of single-generation studies in risk assessment to tackle long-term population effects of environmental toxicants.
If populations exposed for multiple generations do not go extinct but persist, they may have developed resistance or adaptation (Figure 2). Regular sensitivity testing can therefore be included in multigeneration experiments, as depicted in Figure 1. Yet, it is still under debate whether this lower sensitivity is due to genetic adaptation, epigenetics or phenotypic plasticity (Marinkovic et al., 2012).

References
Ernst, G., Kabouw, P., Barth, M., Marx, M.T., Frommholz, U., Royer, S., Friedrich, S. (2016). Assessing the potential for intrinsic recovery in a Collembola two-generation study: possible implementation in a tiered soil risk assessment approach for plant protection products. Ecotoxicology 25, 1-14.
Leon Paumen, M., Steenbergen, E., Kraak, M.H.S., Van Straalen, N.M., Van Gestel, C.A.M. (2008). Multigeneration exposure of the springtail Folsomia candida to phenanthrene: from dose-response relationships to threshold concentrations. Environmental Science and Technology 42, 6985-6990.
Marinkovic, M., De Bruijn, K., Asselman, M., Bogaert, M., Jonker, M.J., Kraak, M.H.S., Admiraal, W. (2012). Response of the nonbiting midge Chironomus riparius to multigeneration toxicant exposure. Environmental Science and Technology 46, 12105-12111.
Postma, J.F., Davids, C. (1995). Tolerance induction and life-cycle changes in cadmium-exposed Chironomus riparius (Diptera) during consecutive generations. Ecotoxicology and Environmental Safety 30, 195-202.
Van Gestel, C.A.M., De Lima e Silva, C., Lam, T., Koekkoek, J.C., Lamoree, M.H., Verweij, R.A. (2017). Multigeneration toxicity of imidacloprid and thiacloprid to Folsomia candida. Ecotoxicology 26, 320-328.

4.4.6. Question 1
What is the motivation to perform multigeneration experiments?

4.4.6. Question 2
What are the two alternative outcomes of multigeneration toxicity experiments?

4.4.6. Question 3
What are the implications of multigeneration testing for ecological risk assessment?

4.4.7. Tropical Ecotoxicology
Authors: Michiel Daam, Jörg Römbke
Reviewers: Kees van Gestel, Michiel Kraak

Learning objectives:
You should be able
· to name the distinctive features of tropical and temperate ecosystems
· to explain their implications for environmental risk assessment in these regions
· to mention some of the main research needs in tropical ecotoxicology

Key words: Environmental risk assessment; pesticides; temperature; contaminant fate; test methods

Introduction
The tropics cover the area of the world (approx. 40%) that lies between the Tropic of Cancer, 23½° north of the equator, and the Tropic of Capricorn, 23½° south of the equator. This region is characterized by, on average, higher temperatures and sunlight levels than temperate regions. Based on precipitation patterns, three main tropical climates may be distinguished: tropical rainforest, monsoon and savanna climates.
Due to the intrinsic differences between tropical and temperate regions, differences in the risks of chemicals are also likely to occur. These differences are briefly exemplified by taking pesticides as an example, addressing the following subjects: 1) climate-related factors; 2) species sensitivities; 3) testing methods; 4) agricultural practices and legislation.

1. Climate-related factors
Three basic climate factors are essential for pesticide risks when comparing temperate and tropical aquatic agroecosystems: rainfall, temperature and sunlight. For example, high tropical temperatures have been associated with higher microbial activities and hence enhanced microbial pesticide degradation, resulting in lower exposure levels. On the other hand, the toxicity of pesticides to aquatic biota may be higher with increasing temperature. Regarding terrestrial ecosystems, other important abiotic factors to be considered are soil humidity, pH, clay and organic carbon content and ion exchange capacity (i.e. the capacity of a soil to adsorb certain compounds) (Daam et al., 2019). Although several differences in climatic factors may be distinguished between tropical and temperate areas, these do not lead to consistently greater or lesser pesticide risk (e.g. Figure 1).

2. Species sensitivities
Tropical areas harbour the highest biodiversity in the world and generate nearly 60% of the primary production. This higher species richness, as compared to their temperate counterparts, dictates that the possible occurrence of more sensitive species cannot be ignored. However, studies comparing the sensitivity of species from the same taxonomic group did not demonstrate a consistently higher or lower sensitivity of tropical organisms compared to temperate organisms (e.g. Figure 2).

3. Testing methods
Given the vast differences in environmental conditions between tropical and temperate regions, the use of test procedures developed under temperate conditions to assess pesticide risks in tropical areas has often been disputed. Consequently, methods developed under temperate conditions need to be adapted to tropical environmental conditions, e.g. by using tropical test substrates and by testing at higher temperatures (Niva et al., 2016). As discussed above, tropical and temperate species from the same taxonomic group are not expected to demonstrate consistent differences in sensitivity. However, certain taxonomic groups may be more represented and/or ecologically or economically more important in tropical areas, such as freshwater shrimps (Daam and Rico, 2016) and (terrestrial) termites (Daam et al., 2019). Consequently, the development of test procedures for such species and their incorporation in risk assessment procedures seems imperative.

4. Agricultural practices and legislation
Agricultural practices in tropical countries are likely to lead to a higher pesticide exposure and hence higher risks to aquatic and terrestrial ecosystems under tropical conditions. Some of the main reasons for this include i) unnecessary applications and overuse; ii) use of cheaper but more hazardous pesticides; and iii) dangerous transportation and storage conditions, all often a result of a lack of training of pesticide applicators in the tropics (Daam and Van den Brink, 2010; Daam et al., 2019).
Finally, countries in tropical regions usually do not have strict laws and risk assessment regulations in place regarding the registration and use of pesticides, meaning that pesticides banned in temperate regions for environmental reasons are often readily available and used in tropical countries such as Brazil (e.g. Waichman et al., 2002).

References and recommended further reading
Daam, M.A., Van den Brink, P.J. (2010). Implications of differences between temperate and tropical freshwater ecosystems for the ecological risk assessment of pesticides. Ecotoxicology 19, 24-37.
Daam, M.A., Chelinho, S., Niemeyer, J., Owojori, O., de Silva, M., Sousa, J.P., van Gestel, C.A.M., Römbke, J. (2019). Environmental risk assessment of pesticides in tropical terrestrial ecosystems: current status and future perspectives. Ecotoxicology and Environmental Safety 181, 534-547.
Daam, M.A., Rico, A. (2016). Freshwater shrimps as sensitive test species for the risk assessment of pesticides in the tropics. Environmental Science and Pollution Research 25, 13235-13243.
Niemeyer, J.C., Moreira-Santos, M., Nogueira, M.A., Carvalho, G.M., Ribeiro, R., Da Silva, E.M., Sousa, J.P. (2010). Environmental risk assessment of a metal contaminated area in the Tropics. Tier I: screening phase. Journal of Soils and Sediments 10, 1557-1571.
Niva, C.C., Niemeyer, J.C., Rodrigues da Silva Júnior, F.M., Tenório Nunes, M.E., de Sousa, D.L., Silva Aragão, C.W., Sautter, K.D., Gaeta Espindola, E., Sousa, J.P., Römbke, J. (2016). Soil ecotoxicology in Brazil is taking its course. Environmental Science and Pollution Research 23, 363-378.
Waichman, A.V., Römbke, J., Ribeiro, M.O.A., Nina, N.C.S. (2002). Use and fate of pesticides in the Amazon State, Brazil. Risk to human health and the environment. Environmental Science and Pollution Research 9, 423-428.

4.4.7. Question 1
What are the most important climatic factors affecting the fate and effects of chemicals when comparing temperate and tropical regions?

4.4.7. Question 2
Can tropical organisms be expected to be more sensitive to chemicals than temperate organisms? Please justify your answer.

4.4.7. Question 3
Should ecotoxicological test methods be adapted for their use in tropical regions? If yes, please provide two examples of adaptations that should be made.

4.4.7. Question 4
If a chemical is allowed for use in Europe, would you recommend its use in a tropical country without additional testing? Please justify your answer.
3.6. Availability and bioavailability

3.6.1. Definitions
Author: Martina Vijver
Reviewers: Kees van Gestel, Ravi Naidu

Learning objectives:
You should be able to:
• understand that bioavailability consists of three principal processes.
• understand that bioavailability is a dynamic concept.
• understand why bioavailability is important to explain uptake and effects, essential for a proper risk assessment of chemicals.

Keywords: Chemical availability, actual and potential uptake, toxico-kinetics, toxico-dynamics.

Introduction
Although many environmental chemists, toxicologists, and engineers claim to know what bioavailability means, the term eludes a consensus definition. Bioavailability may be defined as that fraction of a chemical present in the environment that is or may become available for biological uptake by passage across cell membranes. Bioavailability generally is approached from a process-oriented point of view within a toxicological framework, which is applicable to all types of chemicals (Figure 1).
The first process is chemical availability, which can be defined as the fraction of the total concentration of a chemical present in an environmental compartment that contributes to the exposure of an organism. The total concentration in an environmental compartment is not necessarily involved in the exposure, as a smaller or larger fraction of the chemical may be bound to organic or inorganic components of the environment. Organic matter and clay particles, for instance, are important in binding chemicals (see section on Soil), while the presence of cations and the pH are also important factors modifying the partitioning of chemicals between different environmental phases (see section on Metal speciation).
The second process is the actual or potential uptake, described as the toxicokinetics of a substance, which reflects the development with time of its concentration on, and in, the organism (see section on Bioconcentration and kinetics modelling).
The third process describes the internal distribution of the substance leading to its interaction(s) at the cellular site of toxicity activation. This is sometimes referred to as toxico-availability and also includes the biochemical and physiological processes resulting from the effects of the chemical at the site of action.
Details on the bioavailability concept described above, as well as the physico-chemical interactions influencing each process, are described in the sections on Metal speciation and Bioconcentration and kinetics modelling. Kinetics are involved in all three basic processes. The timeframe can vary from very brief (less than seconds) to very long, in the order of hundreds of years. Figure 2 shows that some fractions of pollutants are present in soil or sediment but may never contribute to the transport of chemicals that could reach the internal site during an organism's lifespan. The fractions with different desorption kinetics may relate to different experimental techniques to determine the relevant bioavailability metric.

Box 1: Illustration of how bioavailability influences our human fitness
Iron deficiency occurs when the body does not have enough iron to supply its needs. Iron is present in all cells of the human body and has several vital functions. It is a key component of the hemoglobin protein, carrying oxygen to the tissues from the lungs.
Iron also plays an important role in oxidation/reduction reactions, which are crucial for the functioning of the cytochrome P450 enzymes that are responsible for the biotransformation of endogenic as well as xenobiotic chemicals. Iron deficiency therefore can interfere with these vital functions, leading to a lack of energy (feeling tired) and eventually to malfunctioning of muscles and the brain. In case of iron deficiency, the medical doctor will prescribe Fe-supplements and iron-rich food such as red meat and green leafy vegetables like spinach. Although this will lead to a higher intake of iron (after all, exposure is higher), it does not necessarily lead to a higher uptake, as here bioavailability becomes important. It is advised to avoid drinking milk or caffeinated drinks together with eating iron-rich products or supplements, because both drinks will hamper the absorption of iron in the intestinal tract. Calcium ions abundant in milk will compete with iron ions for the same uptake sites, so excess calcium will reduce iron uptake. Carbonates and caffeine molecules, but also phytate (inositol polyphosphate) present in vegetables, will strongly bind the iron, also reducing its availability for uptake.

Bioavailability used in Risk Assessment
For regulatory purposes, it is necessary to use a straightforward approach to assess and prioritize contaminated sites based on their risk to human and environmental health. The bioavailability concept offers a scientifically underpinned approach that can be used in risk assessment. Examples for inorganic contaminants are derived 2nd-tier models such as the Biotic Ligand Models (BLMs), while for organic chemicals the Equilibrium Partitioning (EqP) concept (see Box 2 in the section on Sorption) is applied. A quantitative example is given for copper in different water types in Figure 3 and Table 1, in which water chemistry is explicitly accounted for to enable estimating the available copper concentration. The current Dutch generic quality target for surface waters is 1.5 µg/L total dissolved copper. The bioavailability-corrected risk limits (HC5) for the different water types in most cases exceeded this generic quality target.

Table 1. Bioavailability-adjusted copper 5% Hazardous Concentration (HC5, potentially affecting <5% of relevant species) for different water types.

Water type description | No. | DOC (mg/L) | pH | Average HC5 (µg/L)
Large rivers | 1 | 3.1 ± 0.9 | 7.7 ± 0.2 | 9.6 ± 2.9
Canals, lakes | 2 | 8.4 ± 4.4 | 8.1 ± 0.4 | 35.0 ± 17.9
Streams, brooks | 3 | 18.2 ± 4.3 | 7.4 ± 0.1 | 73.6 ± 18.9
Ditches | 4 | 27.5 ± 12.2 | 6.9 ± 0.8 | 64.1 ± 34.5
Sandy springs | 5 | 2.2 ± 1.0 | 6.7 ± 0.1 | 7.2 ± 3.1

When the calculated HC5 value is lower, this means that the bioavailability of copper is higher and hence, at the same total copper concentration in water, the risk is higher. The bioavailability-corrected HC5s for Cu differ significantly among water types. The lowest HC5 values were found for sandy springs (water type 5) and large rivers (water type 1), which appear to be sensitive water bodies. These differences can be explained from partitioning processes (chemical availability) and competition processes (the toxicokinetics step), on which the BLMs are based. Streams and brooks (water type 3) have rather high total copper concentrations without any adverse effects, which can be attributed to the protective effect of relatively high dissolved organic carbon (DOC) concentrations and the neutral to basic pH causing strong binding of Cu to the DOC.
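The protective effect of DOC can be illustrated with a deliberately oversimplified speciation sketch. This is not a Biotic Ligand Model: it assumes a single ligand pool and a hypothetical conditional binding constant, so only the qualitative trend should be taken from it.

```python
# Toy illustration (not a BLM): fraction of free Cu2+ as a function of DOC,
# assuming one ligand pool with a hypothetical conditional binding constant K.
# Cu_free / Cu_total = 1 / (1 + K * DOC)

K = 0.8  # L/mg, hypothetical conditional constant for Cu-DOC binding

def free_cu_fraction(doc_mg_per_l):
    return 1.0 / (1.0 + K * doc_mg_per_l)

for doc in (3.1, 8.4, 18.2, 27.5):  # DOC levels of water types 1-4 in Table 1
    print(f"DOC = {doc:5.1f} mg/L -> free Cu fraction = {free_cu_fraction(doc):.2f}")

# The free (most bioavailable) fraction drops steeply with increasing DOC,
# mirroring the higher HC5 values of the DOC-rich water types in Table 1.
```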
For risk managers, this water-type-specific risk approach can help to identify the priority in clean-up activities among sites having elevated copper concentrations. It remains possible that, for extreme environmental situations (e.g., extreme droughts and low water discharges, or extreme rainfall and high runoff), combinations of the water chemistry parameters may result in calculated HC5 values that are even lower than the calculated average values. For the latter (important) reason, the generic quality target is stricter.

References
Hamelink, J., Landrum, P.F., Bergman, H., Benson, W.H. (1994). Bioavailability: Physical, Chemical, and Biological Interactions. CRC Press.
Ortega-Calvo, J.J., Harmsen, J., Parsons, J.R., Semple, K.T., Aitken, M.D., Ajao, C., Eadsforth, C., Galay-Burgos, M., Naidu, R., Oliver, R., Peijnenburg, W.J.G.M., Römbke, J., Streck, G., Versonnen, B. (2015). From bioavailability science to regulation of organic chemicals. Environmental Science and Technology 49, 10255-10264.
Vijver, M.G., de Koning, A., Peijnenburg, W.J.G.M. (2008). Uncertainty of water type-specific hazardous copper concentrations derived with biotic ligand models. Environmental Toxicology and Chemistry 27, 2311-2319.

3.6.1. Question 1
What are the three processes that define bioavailability?

3.6.1. Question 2
Explain how metal uptake changes when high concentrations of dissolved organic matter are in the exposure medium.

3.6.1. Question 3
Explain why bioavailability is a dynamic concept.

3.6.2. Assessing available concentrations of organic chemicals
Author: Jose Julio Ortega-Calvo
Reviewers: John Parsons, Gerard Cornelissen

Learning objectives:
You should be able to:
• define the concept of freely dissolved concentrations and fast-desorbing fractions of organic chemicals in soil and sediment, as indicators of their bioavailability
• understand how to determine bioavailable concentrations with the use of passive sampling
• understand how to determine fast-desorbing fractions with desorption extraction methods.

Keywords: Bioavailability, Freely-dissolved concentration, Desorption, Passive sampling, Infinite sink

Introduction: Bioavailability through the water phase
In many exposure scenarios involving organic chemicals, ranging from a bacterial cell to a fish, or from a sediment bed to a soil profile, the organisms experience the pollution through the water phase. Even when this is not the case, for example when uptake is from sediment consumed as food, the aqueous concentration may be a good indicator of the bioavailable concentration, since ultimately a chemical equilibrium will be established between the solid phase, the aqueous phase (possibly in the intestine), and the organism. Thus, taking an aqueous sample from a given environment and determining the concentration of a certain chemical with the appropriate analytical equipment seems a straightforward approach to assess bioavailability. However, especially for hydrophobic chemicals, which tend to remain sorbed to solid surfaces (see sections on Relevant chemical properties and Sorption of organic chemicals), the determination of the chemicals present in the aqueous phase as a way to assess bioavailability has represented a significant challenge to environmental organic chemistry. The phase exchange among different compartments often leads to equilibrium aqueous concentrations that are very low, because most of the chemicals remain associated with the solids and, after sustained exposure, with the biota.
These freely dissolved concentrations (Cfree) are very useful to determine bioavailability, as they represent the "tip of the iceberg" under equilibrium exposure, and are what organisms "see" (Figure 1, left). Similarly to the balance between gravity and buoyancy forces leading to iceberg flotation up to a certain level, Cfree is determined by the equilibrium between sorption and desorption, and connected to the concentration of the sorbed chemical (Csorbed) through a partitioning coefficient. Biological uptake may also result in the fast removal of the chemical from the aqueous phase, and thus in further desorption from the solids, so equilibrium is never achieved, and actual aqueous concentrations are much lower than the equilibrium Cfree (or even close to zero). In these situations, bioavailability is driven by the desorption kinetics of the chemical. Usually, desorption occurs as a biphasic process, where a fast desorption phase, occurring during a few hours or days, is followed by a much slower phase, taking months or even years. Therefore, for scenarios involving rapid exposures, or for studies on coupled desorption/biodegradation, the fast-desorbing fraction of the chemical (Ffast) can be used to determine bioavailability. This fraction is often referred to as the bioaccessible fraction. Following the iceberg analogy (Figure 1, right), Ffast would constitute the upper iceberg fraction rapidly melting by sun irradiation, with a very minimal "visible" surface (representing the desorbed chemical in the aqueous solution, which is quickly removed by biological uptake). The slowly desorbing (or melting) fraction, Fslow, would remain in the sorbed state within a given time span, having little interaction with the biota.

Determining bioavailability with passive sampling methods
Cfree can be determined with a passive sampler, in the form of polymer-coated fibers or sheets (membranes) made of a variety of polymers, which establish an additional sorption equilibrium with the aqueous phase in contact with the soil or sediment (Jonker et al., 2018). Depending on the analytes of interest, different polymers, such as polydimethylsiloxane (PDMS) or polyethylene (PE), are used in passive samplers. The passive sampler, enriched in the analyte (similarly to the floating iceberg in Figure 1, left, where Csorbed in this case is the concentration in the passive sampler), can be used in this way to determine indirectly the pollutant concentration present in the aqueous phase, even at very low concentrations, through the appropriate distribution ratio between sampler and water. In bioavailability estimations, passive sampling is designed for equilibrium and non-depletive conditions. This means that the amount of chemical sampled does not alter the solid-water equilibrium, i.e., it is essential that Cfree is not affected significantly by the sampler. Equilibrium achievement is critical, and it may take days or weeks. Cfree can be calculated from the concentration of the pollutant in the passive sampler polymer at equilibrium (Cp) and the polymer-to-water partitioning coefficient (Kpw):

Cfree = Cp / Kpw

Cfree values can be the basis of predictions for bioaccumulation that use the equilibrium partitioning approach, either directly or through a bioconcentration factor, and for sediment toxicity in conjunction with actual toxicity tests.
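A hypothetical worked example of the relation Cfree = Cp/Kpw given above; both input values are invented, merely to show the order of magnitude involved.

```python
# Hypothetical worked example of estimating Cfree from a passive sampler.
# Cp is measured in the polymer at equilibrium; Kpw must be known for the
# polymer-chemical pair. Both values below are invented for illustration.

Cp = 500.0    # µg/L in the polymer phase
Kpw = 1.0e5   # L/L, polymer-to-water partitioning coefficient

Cfree = Cp / Kpw
print(f"Cfree = {Cfree * 1000:.1f} ng/L")  # Cfree = 5.0 ng/L

# The sampler thus pre-concentrates the analyte ~100,000-fold, which is what
# makes such very low freely dissolved concentrations measurable at all.
```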
Passive sampling methods are well suited for contaminated sediments, and they have already been implemented in regulatory environmental assessments based on bioavailability (Burkhard et al., 2017).

Determining bioavailability with desorption extraction methods
The determination of Ffast can be achieved with the use of methods that trap the desorbed chemical once it appears in the aqueous phase. Far from equilibrium conditions, desorption is driven to its maximum rate by placing a material in the aqueous phase that acts as an infinite sink (comparable to the sun irradiation of a melting iceberg in Figure 1, right). The most accepted materials for these desorption extraction methods are Tenax, a sorptive resin, and cyclodextrin, a solubilizing agent (ISO, 2018). These methods allow a permanent aqueous chemical concentration of almost zero, and therefore sorption of the chemical back to the soil or sediment can be neglected. Several extraction steps can be used, covering a variable time span, which depends on the environmental sample. The following first-order, two-compartment kinetic model can be used to analyze desorption extraction data:

St/S0 = Ffast · e^(-kfast·t) + Fslow · e^(-kslow·t)

In this equation, St and S0 (mg) are the soil-sorbed amounts of the chemical at time t (h) and at the start of the experiment, respectively. Ffast and Fslow are the fast- and slow-desorbing fractions, and kfast and kslow (h-1) are the rate constants of fast and slow desorption, respectively. To calculate the values of the different constants and fractions (Ffast, Fslow, kfast and kslow), exponential curve fitting can be used. The ln form of the equation can be used to simplify curve fitting. Once the desorption kinetics are known, the method can be simplified for a series of samples by using single time-point extractions. A time period of 20 h has been suggested as sufficient to approximate Ffast. This is highly convenient for operational reasons (ISO, 2018), but indicative at best, since the time needed to extract Ffast tends to vary between chemicals and soils/sediments.

References
Burkhard, L.P., Mount, D.R., Burgess, R.M. (2017). Developing Sediment Remediation Goals at Superfund Sites Based on Pore Water for the Protection of Benthic Organisms from Direct Toxicity to Nonionic Organic Contaminants. EPA/600/R-15/289. U.S. Environmental Protection Agency Office of Research and Development, Washington, DC.
ISO (2018). Technical Committee ISO/TC 190 Soil quality - Environmental availability of non-polar organic compounds - Determination of the potentially bioavailable fraction and the non-bioavailable fraction using a strong adsorbent or complexing agent. International Organization for Standardization, Geneva, Switzerland.
Jonker, M.T.O., van der Heijden, S.A., Adelman, D., Apell, J.N., Burgess, R.M., Choi, Y., Fernandez, L.A., Flavetta, G.M., Ghosh, U., Gschwend, P.M., Hale, S.E., Jalalizadeh, M., Khairy, M., Lampi, M.A., Lao, W., Lohmann, R., Lydy, M.J., Maruya, K.A., Nutile, S.A., Oen, A.M.P., Rakowska, M.I., Reible, D., Rusina, T.P., Smedes, F., Wu, Y. (2018). Advancing the use of passive sampling in risk assessment and management of sediments contaminated with hydrophobic organic chemicals: results of an international ex situ passive sampling interlaboratory comparison. Environmental Science & Technology 52, 3574-3582.

3.6.2. Question 1
Describe why the freely dissolved concentration is often used as an indicator of the bioavailability of organic chemicals in soil and sediment.
3.6.2. Question 2
What is passive sampling and how can this be used to determine the bioavailable concentrations of chemicals in soil or sediment?

3.6.2. Question 3
What is meant by the fast-desorbing fraction of a chemical in soil or sediment and how can this be determined with desorption extraction methods?

3.6.3. Assessing available metal concentrations
Author: Kees van Gestel
Reviewers: Martina Vijver, Steve Lofts

Learning objectives:
You should be able to:
• mention different methods for assessing chemically available metal fractions in soils and sediments.
• indicate the relative binding strengths of metals extracted with the different methods or in different steps of a sequential extraction procedure.
• explain the pros and cons of chemical extraction methods for assessing metal (bio)availability in soils and sediments.

Keywords: Chemical availability, actual and potential uptake, toxicokinetics, toxicodynamics.

Introduction
Total concentrations are not very informative about the availability of metals in soils or sediments. The fate and behaviour of metals - in general terms their mobility - as well as their biological uptake and toxicity are highly determined by their speciation. Speciation describes the partitioning of a metal among the various forms in which it may exist (see section on Metal speciation). For assessing the risk of metals to man and the environment, speciation therefore is highly relevant, as it may determine their availability for uptake and effects in organisms. Several tools have been developed to determine available metal concentrations or their speciation in soils and sediments. As indicated in the section on Availability and bioavailability, such chemical methods are just indicative, and to a large extent ignore the dynamics of availability. Moreover, availability is also influenced by biological processes, with the abiotic-biotic interactions influencing the bioavailability process being species- and often even life-stage specific. Nevertheless, chemical extractions may provide useful information to predict or estimate the potential risks of metals and therefore are preferred over the determination of total metal concentrations.

The available methods include:
1. Porewater extraction
2. Extractions with water
3. Extractions with diluted salts
4. Extractions with chelating agents
5. Extractions with diluted acids
6. Sequential extractions using a series of different extraction solutions
7. Passive sampling methods

Porewater extraction probably best approaches the readily available fraction of metals in soil, which drives mobility and is the fraction of metals experienced directly by many exposed organisms. In general, pore water is extracted from soil or sediment by centrifugation, followed by filtration over a 0.45 µm (or 0.22 µm) filter to remove larger particles and perhaps some of the dissolved organic matter. Filtration, however, will not remove all complexes, making it impossible to determine solely the dissolved metal fraction in the pore water. Nevertheless, porewater metal concentrations have been shown to correlate significantly with metal uptake (e.g. for copper uptake by barley and tomato; Zhao et al., 2006) and to be useful for predicting toxic threshold concentrations of metals, with correction for pH (e.g. for nickel toxicity to tomato and barley; Rooney et al., 2007).

Extraction with water simulates the immediately available fraction, so the fraction present in the soil solution or pore water.
When extracting soil with water, however, the pore water is diluted, which on the one hand may facilitate metal analysis by creating larger volumes of solution, but on the other hand may lead to differences between measured and actual metal concentrations in the pore water, as it may shift chemical equilibria.

Extraction with diluted salts aims to determine the fraction of metal that is easily available or may become available as it is in the exchangeable form. This refers to cationic metals that may be bound to the negatively charged soil particles (see section on Soil). Buffered salt solutions, for instance 1 M NH4-acetate at pH 4.8 (with acetic acid) or at pH 7, may under- or overestimate available metal concentrations because of their interference with soil pH. Unbuffered salt solutions therefore are more widely used and include for instance 0.001 or 0.01 M CaCl2, 0.1 M NaNO3 or 1 M NH4NO3 (Gupta and Aten, 1993; Novozamsky et al., 1993). Gupta and Aten (1993) showed good correlations between the uptake of some metals in plants and 0.1 M NaNO3-extractable concentrations in soil, while Novozamsky et al. (1993) found similar well-fitting correlations using 0.01 M CaCl2. The latter method also seemed well capable of predicting metal uptake in soil invertebrates, and therefore has been more widely accepted for predicting metal availability in soil ecotoxicology. Figure 1 (Zhang et al., 2019) provides an example, showing the correlation between Pb toxicity to enchytraeid worms in different soils and 0.01 M CaCl2-extractable concentrations.

Extractions with water (including porewater) and dilute salts are most accurately described as measures of the chemical solubility of the metal in the soil. The values obtained can be useful indicators of the relative metal reactivity across soils, but tend to be less useful for bioavailability assessment, unless the soils under consideration have a narrow range of soil properties. This is because the solutions obtained from such soils themselves have varying chemical properties (e.g. pH, DOC concentration), which are likely to affect the availability of the measured metal to organisms.

Extraction with chelating agents, such as EDTA (0.01-0.05 M) or DTPA (0.005 M) (as their sodium or ammonium salts), aims at assessing the availability of metals for plants. Many plants have the ability to actively affect metal speciation in the soil by producing root exudates. These extractants may form very stable water-soluble complexes with many different polyvalent cationic metals. It should be noted that the large variation in plant species and corresponding physiologies, as well as their interactions with symbiotic microorganisms (e.g. mycorrhizal fungi), means that no single extraction method is capable of properly predicting metal availability to all plant species.

Extraction with diluted acids has been advocated for predicting the potentially available fraction of metals in soils, so the fraction that may become available in the long run. It is a quite rigorous extraction method that can be executed in a robust way. Metal concentrations determined by extracting soils with 0.43 M HNO3 showed very good correlation with oral bioaccessible concentrations (Rodrigues et al., 2013), probably because this extraction to some degree simulates metal release under acidic stomach conditions. Both the extraction methods with chelating agents and those with diluted acid may also dissolve solids, such as carbonates and Fe- and Al-oxides.
This raises concerns as to the interpretation of results of these extraction systems, and especially as to their generalization to different soil-plant systems (Novozamsky et al., 1993). The extractions with chelating agents and dilute acids are considered methods to estimate the 'geochemically active' metal in soil: the pool of adsorbed metal that can participate in solid-solution adsorption/desorption and exchange equilibria on timescales of hours to days. This pool, along with basic soil properties such as pH, also controls the readily available concentrations obtained with water/weak salt/porewater extraction. From the bioavailability point of view, these extractions tend to be most useful as inputs to bioavailability/toxicity models such as that of Lofts et al. (2004), which take further account of the effects of metal speciation and soil chemistry on metal bioavailability to environmental organisms.

Sequential extraction brings together different extraction methods, and aims to determine either how strongly metals are retained or to which components of the solid phase they are bound in soils or sediments. This makes it possible to determine how metals are bound to different fractions within the same soil or sediment, and allows interpretation of the bioavailability dynamics. By far the most widely used method of sequential extraction is the one proposed by Tessier et al. (1979). Five fractions are distinguished, indicating how metals are interacting with soil or sediment components: see Figure 2. Where the Tessier method aims at assessing the environmental availability of metals in soils and sediments, similar sequential extraction methods have also been developed for assessing the potential availability of metals for humans (bioaccessibility) following gut passage of soil particles (see e.g. Basta and Gradwohl, 2000).

Passive sampling may also be applied to assess available metal concentrations. The best-known method is that of Diffusive Gradients in Thin films (DGT), developed by Zhang et al. (1998). In this method, a resin (Chelex) with high affinity for metals is placed in a device and covered with a diffusive gel and a 0.45 µm cellulose nitrate membrane (Figure 3). The membrane is brought into contact with the soil. Metals dissolved in the soil solution will diffuse through the membrane and diffusive gel and bind to the resin. Based on the thickness of the membrane and gel and the contact time with the soil, the metal concentration in the pore water can be calculated from the amount of metal accumulated in the resin. The method may be indicative of available metal concentrations in soils and sediments, but can only work effectively when the soil is sufficiently moist to guarantee optimal diffusion of metals to the resin. For the same reason, the method probably is better suited for assessing the availability of metals to plants than to invertebrates, especially for animals that are not in continuous contact with the soil solution.

Several of the above-described methods have been adopted by the International Organization for Standardization (ISO) in (draft) standardized test guidelines for assessing available metal fractions in soils, sediments and waste materials, e.g. to assess the potential for leaching to groundwater or their potential bioaccessibility. This includes e.g.
ISO/TS 21268-1 (2007) "Soil quality - Leaching procedures for subsequent chemical and ecotoxicological testing of soil and soil materials - Part 1: Batch test using a liquid to solid ratio of 2 l/kg dry matter", ISO 19730 (2008) "Soil quality - Extraction of trace elements from soil using ammonium nitrate solution" and ISO 17586 (2016) "Soil quality - Extraction of trace elements using dilute nitric acid".

References
Basta, N., Gradwohl, R. (2000). Estimation of Cd, Pb, and Zn bioavailability in smelter-contaminated soils by a sequential extraction procedure. Journal of Soil Contamination 9, 149-164.
Gupta, S.K., Aten, C. (1993). Comparison and evaluation of extraction media and their suitability in a simple model to predict the biological relevance of heavy metal concentrations in contaminated soils. International Journal of Environmental Analytical Chemistry 51, 25-46.
Lofts, S., Spurgeon, D.J., Svendsen, C., Tipping, E. (2004). Deriving soil critical limits for Cu, Zn, Cd, and Pb: A method based on free ion concentrations. Environmental Science and Technology 38, 3623-3631.
Novozamsky, I., Lexmond, Th.M., Houba, V.J.G. (1993). A single extraction procedure of soil for evaluation of uptake of some heavy metals by plants. International Journal of Environmental Analytical Chemistry 51, 47-58.
Rodrigues, S.M., Cruz, N., Coelho, C., Henriques, B., Carvalho, L., Duarte, A.C., Pereira, E., Römkens, P.F. (2013). Risk assessment for Cd, Cu, Pb and Zn in urban soils: chemical availability as the central concept. Environmental Pollution 183, 234-242.
Rooney, C.P., Zhao, F.-J., McGrath, S.P. (2007). Phytotoxicity of nickel in a range of European soils: Influence of soil properties, Ni solubility and speciation. Environmental Pollution 145, 596-605.
Tessier, A., Campbell, P.G.C., Bisson, M. (1979). Sequential extraction procedure for the speciation of particulate trace metals. Analytical Chemistry 51, 844-851.
Zhang, H., Davison, W., Knight, B., McGrath, S. (1998). In situ measurements of solution concentrations and fluxes of trace metals in soils using DGT. Environmental Science and Technology 32, 704-710.
Zhang, L., Verweij, R.A., Van Gestel, C.A.M. (2019). Effect of soil properties on Pb bioavailability and toxicity to the soil invertebrate Enchytraeus crypticus. Chemosphere 217, 9-17.
Zhao, F.J., Rooney, C.P., Zhang, H., McGrath, S.P. (2006). Comparison of soil solution speciation and diffusive gradients in thin-films measurement as an indicator of copper bioavailability to plants. Environmental Toxicology and Chemistry 25, 733-742.

3.6.3. Question 1
Which metal fractions are extracted with the Tessier method, and what does that say about metal availability?

3.6.3. Question 2
Why is porewater extraction to be preferred above water extraction?

3.6.3. Question 3
What is the principle of the DGT passive sampling method, and why does it not work for fairly dry soils?

3.6.3. Question 4
Which method may give the best estimate of potentially available metals for oral uptake by mammals?

3.6.3. Question 5
What are the pros and cons of chemical extraction methods for assessing the availability of metals in soils or sediments?
3.7. Degradation

3.7.1. Chemical and photochemical degradation processes
Author: John Parsons
Reviewers: Steven Droge, Kristopher McNeill

Learning objectives:
You should be able to:
• understand the role of chemical and photochemical reactions in the removal of organic chemicals from the environment
• understand the most important chemical and photochemical reactions in the environment
• understand the role of direct and indirect photodegradation

Keywords: Environmental degradation reactions, Hydrolysis, Reduction, Dehalogenation, Oxidation, Photodegradation

Introduction
Transformation of organic chemicals in the environment can occur by a variety of reactions. These may be purely chemical reactions, such as hydrolyses or redox reactions, photochemical reactions with the direct or indirect involvement of light, or biochemical reactions. Such transformations can change the biological activity (toxicity) of a molecule; they can change its physico-chemical properties and thus its environmental partitioning; they can change its bioavailability, for example facilitating biodegradation; or they may contribute to the complete removal (mineralization) of the chemical from the environment. In many cases, chemicals may be removed by combinations of these different processes and it is sometimes difficult to unequivocally identify the contributions of the different mechanisms. Indeed, combinations of different mechanisms are sometimes important, for example in cases where microbial activity is responsible for creating conditions that favour chemical reactions. Here we will focus on two types of reactions: abiotic (dark) reactions and photochemical reactions. Biodegradation reactions are covered elsewhere (see section on Biodegradation).

Chemical degradation
Hydrolytic reactions are important chemical reactions removing organic contaminants and are particularly important for chemicals containing acid derivatives as functional groups. Common examples of such chemicals are pesticides of the organophosphate and carbamate classes, such as parathion, diazinon, aldicarb and carbaryl. Organophosphate chemicals are also used as flame retardants and are widely distributed in the environment. Some examples of hydrolysis reactions are shown in Figure 1. As the name suggests, hydrolysis reactions involve using water (hydro-) to break (-lysis) a bond. Hydrolyses are reactions with water that produce an acid and either an alcohol or an amine as products. Hydrolyses can be catalysed by either OH- or H+ ions, and their rates are therefore pH-dependent. Some examples of pH-dependent ester hydrolysis reactions are shown in Figure 2.
Halogenated organic molecules may also be hydrolysed to form alcohols (releasing the halogen as a halide ion). The rates of these reactions vary strongly depending on the structure of the organohalogen molecule and the halogen substituent (with Br and I being substituted more rapidly than Cl, and much more rapidly than F), and in general the rates of these reactions are too slow to be of more than minor importance, except for tertiary organohalogens and secondary organohalogens with Br and I (Schwarzenbach et al., 2017). In some cases, other substitution reactions not involving water as reactant may be important. Some examples include Cl- in seawater converting CH3I to CH3Cl and the reaction of thiols with alkyl bromines in anaerobic groundwater and sediment porewater under sulfate-reducing conditions (Schwarzenbach et al., 2017).
Redox (reduction and oxidation) reactions are another important reaction class involved in the degradation of organic chemicals. In the presence of oxygen, the oxidation of organic chemicals is thermodynamically favourable, but it occurs at insignificant rates unless oxygen is activated in the form of oxygen radicals or peroxides (following light absorption for example, see below) or the reaction is catalysed by transition metals or transition metal-containing enzymes (see the sections on Biodegradation and Xenobiotic metabolism and defence). Reduction reactions are important redox reactions for environmental contaminants in anaerobic environments such as sediments and groundwater aquifers. Under these conditions, organic chemicals containing reducible functional groups such as carboxylic acids and nitro groups undergo reduction reactions (Table 1).

Table 1: Examples of chemical redox reactions that may occur in the environment (adapted from Schwarzenbach et al., 2017).

Photodegradation
Sunlight is an important source of energy to initiate chemical reactions, and photochemical reactions are particularly important in the atmosphere. Aromatic compounds and other chemicals containing unsaturated bonds that are able to absorb light in the frequency range available in sunlight become excited (energized), and this can lead to chemical reactions. These reactions lead to cleavage of bonds between carbon atoms and other atoms, such as halogens, to produce radical species. These radicals are highly reactive and react further to remove hydrogen or OH radicals from water to produce C-H or C-OH bonds, or may react with themselves to produce larger molecules. Well-known examples are the atmospheric photochemical reactions of CFCs in the stratosphere, which have had a negative impact on the so-called ozone layer, and the photochemical oxidation of hydrocarbons involved in the generation of smog.
In the aquatic environment, light penetration is sufficient to lead to photochemical reactions of organic chemicals at the water surface or in the top layer of clear water. The presence of particles in a waterbody reduces light intensity through light scattering, as does dissolved organic matter through light absorption. Photodegradation contributes significantly to removing oil spills and appears to favour the degradation of longer-chain alkanes, in contrast to the preferential attack of linear and small alkanes by biodegradation (Garrett et al., 1998). Cycloalkanes and aromatic hydrocarbons are also removed by photodegradation (D'Auria et al., 2009). There is comparatively little known about the role of photodegradation of other organic pollutants in the marine environment, although there is, for example, evidence that triclosan is removed by photolysis in the German Bight area of the North Sea (Xie et al., 2008).
In the soil environment, there is some evidence that photodegradation may contribute to the removal of a variety of organic chemicals, such as pesticides and chemicals present in sewage sludge that is used as a soil amendment, but the significance of this process is unclear. Similarly, chemicals that have accumulated in ice, for example as a result of long-range transport to polar regions, also seem to be susceptible to photodegradation. Some examples of photodegradation reactions are shown in Figure 4.
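Photodegradation rates in sunlit surface waters are commonly summarized as (pseudo-)first-order rate constants. The sketch below converts such a rate constant into a half-life and a remaining fraction; the rate constant used is a hypothetical value, not a measured one.

```python
import math

# Hypothetical (pseudo-)first-order photodegradation rate constant for a
# chemical in sunlit surface water. Real values depend on light intensity,
# depth, season and water composition.
k = 0.05  # per hour

half_life = math.log(2) / k           # t1/2 = ln(2)/k
print(f"t1/2 = {half_life:.0f} h")    # ~14 h

# Remaining fraction after 2 days of continuous irradiation:
t = 48.0  # h
print(f"fraction left = {math.exp(-k * t):.2f}")  # ~0.09
```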
An important category of photochemical reactions are indirect reactions, in which organic chemicals react with photochemically produced radicals, in particular with reactive oxygen species such as OH radicals, ozone and singlet oxygen. These reactive species are present at very low concentrations, but are so reactive that under certain conditions they can contribute significantly to the removal of organic chemicals. The products of these reactions are a variety of oxidized derivatives, which are themselves radicals and therefore react further. OH radicals are the most important of these photochemically produced species and can react with organic chemicals by removing hydrogen radicals or by reacting with unsaturated bonds in alkenes, aromatics etc. to produce hydroxylated products. In water, natural organic matter absorbs light and can participate in indirect photodegradation reactions. Other constituents in surface water, such as nitrogen oxides and iron complexes, may also be involved in indirect photodegradation reactions.

References
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. (2017). Environmental Organic Chemistry, Third Edition. Wiley, ISBN 978-1-118-76723-8.
van Leeuwen, C.J., Vermeire, T.G. (2007). Risk Assessment of Chemicals: An Introduction (2nd ed.). Springer, ISBN 978-1-4020-6101-1.

3.7.1. Question 1
Which organic chemicals would you expect to undergo hydrolytic degradation in the environment? Explain why these reactions depend on the pH.

3.7.1. Question 2
Reductive dehalogenation reactions are often observed to occur for organochlorine compounds in anaerobic environments. Why are these reactions called reductive dehalogenation? What products would you expect to be formed by the reductive dehalogenation of tetrachloroethene (Cl2C=CCl2)? Describe two ways in which bacteria are involved in these reactions.

3.7.1. Question 3
In which environmental compartments is photochemical transformation or photodegradation a potentially important degradation mechanism for organic chemicals, and why is this the case? Explain with examples the differences between direct and indirect photodegradation in the environment.

3.7.2. Biodegradation
Author: John Parsons
Reviewers: Steven Droge, Russell Davenport

Learning objectives:
You should be able to:
• describe the contribution of biochemical reactions to removing chemicals from the environment
• explain the differences between biotransformation, primary biodegradation and mineralization
• describe the most important biodegradation reactions under aerobic and anaerobic conditions

Keywords: Primary biodegradation, mineralisation, readily biodegradable chemicals, persistent chemicals, oxygenation reactions, reduction reactions

Introduction
Biodegradation and biotransformation both refer to degradation reactions that are catalyzed by enzymes. In general, biodegradation is used to describe the degradation carried out by microorganisms, and biotransformation often refers to reactions that follow the uptake of chemicals by higher organisms. This distinction is important and arises from the role that bacteria and other microorganisms play in natural biogeochemical cycles. As a result, microorganisms have the capacity to degrade most (perhaps all) naturally occurring organic chemicals in organic matter and convert them to inorganic end products. These reactions supply the microorganisms with the nutrients and energy they need to grow.
This broad degradative capacity means that they are able to degrade many anthropogenic chemicals and potentially convert them to inorganic end products, a process referred to as mineralisation. Although higher organisms are also able to degrade (metabolise) many anthropogenic chemicals, these chemicals are not taken up as a source of nutrients and energy. Many anthropogenic chemicals can disturb cell functioning, and biotransformation has been proposed as a detoxification mechanism: undesirable chemicals that may accumulate to potentially harmful levels are converted to products that are more rapidly excreted. In most cases, a polar and/or ionizable unit is attached to the chemical in one or two steps, making the compound more soluble in blood and more readily removed via the kidneys to the urine. This also renders most hazardous chemicals less toxic than the original chemical. Such biotransformation steps always cost the organism energy (ATP, or the use of e.g. NADH or NADPH in the enzymatic reactions). Biotransformation is sometimes also used to describe degradation by microorganisms when this is limited to the conversion of a chemical into a new product.

Biodegradation is for many organic contaminants the major process that removes them from the environment. Measuring rates of biodegradation is therefore a prominent aspect of chemical risk assessment. Internationally recognized standardised protocols have been developed to measure biodegradation rates of chemicals; well-known examples are the OECD Guidelines. These guidelines include screening tests designed to identify chemicals that can be regarded as readily (i.e. rapidly) biodegradable, as well as more complex tests to measure biodegradation rates of chemicals that degrade slowly in a variety of simulated environments. For more complex mechanistic studies, microorganisms able to degrade specific chemicals are isolated from environmental samples and cultivated in laboratory systems.

In principle, biodegradation of a chemical can be determined either by following the concentration of the chemical during the test or by following the conversion to end products (in most cases by measuring either oxygen consumption or CO2 production). Although measuring the concentration gives the most directly relevant information on a chemical, it requires the availability or development of analytical methods, which is not always within the capability of routine testing laboratories. Measuring the conversion to CO2 is comparatively straightforward, but the production of CO2 from other chemicals present in the test system (such as soil or dissolved organic matter) must be accounted for. This can be done by using 14C-labelled chemicals in the tests, but not all laboratories have facilities for this. The main advantage of this approach is that demonstrating quantitative conversion of a chemical to CO2 removes any concern about the accumulation of potentially toxic metabolites.

Since biodegradation is an enzymatically catalysed process, its rates should be modelled using Michaelis-Menten kinetics, or Monod kinetics if growth of the microorganisms is taken into account. In practice, however, first-order kinetics are often used to model biodegradation in the absence of significant growth of the degrading microorganisms. This is more convenient than using Michaelis-Menten kinetics, and there is some justification for this simplification since the concentrations of chemicals in the environment are in general much lower than the half-saturation concentrations of the degrading enzymes, as the sketch below illustrates.
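The following minimal sketch (in Python, with purely hypothetical parameter values) compares the Michaelis-Menten rate with its first-order approximation: at concentrations well below the half-saturation concentration the two are nearly identical, while at high concentrations the first-order form greatly overestimates the rate.

```python
# Sketch: Michaelis-Menten degradation rate versus its first-order
# approximation at low substrate concentration.
# All parameter values are hypothetical, chosen only for illustration.

v_max = 10.0   # maximum degradation rate (e.g. ug/L/day)
K_m = 100.0    # half-saturation concentration (e.g. ug/L)

def michaelis_menten_rate(c):
    """Degradation rate according to Michaelis-Menten kinetics."""
    return v_max * c / (K_m + c)

def first_order_rate(c):
    """First-order approximation, valid for c << K_m."""
    k = v_max / K_m   # pseudo first-order rate constant (1/day)
    return k * c

for c in [1.0, 10.0, 100.0, 1000.0]:  # ug/L
    mm, fo = michaelis_menten_rate(c), first_order_rate(c)
    print(f"c = {c:7.1f}: MM rate = {mm:6.2f}, first-order = {fo:7.2f}, "
          f"ratio = {mm/fo:.2f}")
```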
Table 1. Influence of molecular structure on the biodegradability of chemicals in the aerobic environment.

Type of compound or substituent | More biodegradable | Less biodegradable
hydrocarbons | linear alkanes < C12 | linear alkanes > C12
 | alkanes of moderate molecular weight | high molecular weight alkanes
 | linear chain | branched chain
 | -C-C-C- | -C-O-C-
 | aliphatic | aromatic
aliphatic chlorine | Cl more than 6 carbons from terminal C | Cl less than 6 carbons from terminal C
substituents on an aromatic ring | -OH, -CO2H, -NH2, -OCH3 | -F, -Cl, -NO2, -CF3

Whether expressed in terms of first-order kinetics or Michaelis-Menten parameters, rates of biodegradation vary widely between chemicals, showing that chemical structure has a large impact on biodegradation. Large variations in biodegradation rates are, however, often observed for the same chemical in different experimental systems. This shows that environmental properties and conditions also play a key role in determining removal by biodegradation, and it is often almost impossible to distinguish the effects of chemical properties from those of environmental properties. In other words, there is no such thing as an intrinsic biodegradation rate of a chemical. Nevertheless, some generic relationships between the structure and biodegradability of chemicals can be derived, as listed in Table 1. For example, branched hydrocarbon structures are degraded more slowly than linear ones, and cyclic and in particular aromatic chemicals are degraded more slowly than aliphatic (non-aromatic) chemicals. Substituents and functional groups also have a major impact on biodegradability, with halogens and other electron-withdrawing substituents having strongly negative effects. It is therefore no surprise that the list of persistent organic pollutants is dominated by organohalogen compounds, in particular those with aromatic or alicyclic structures.

It should be recognized that biodegradation rates have often been observed to change over time. Long-term exposure of microbial communities to new chemicals often leads to increasing biodegradation rates. This phenomenon is called adaptation or acclimation and is often observed following repeated application of a pesticide at the same location. An example is shown for atrazine in Figure 2, where degradation rates increase following longer exposure to the pesticide. Another recent example is the differences in biodegradation rates of the detergent builder L-GLDA (tetrasodium glutamate diacetate) by activated sludge from different wastewater treatment plants in the USA. Sludge from regions where L-GLDA was not or only recently on the market required a long lag time before degradation started, whereas sludge from regions where L-GLDA-containing products had been available for several months showed shorter lag phases. Adaptation can result from i) shifts in composition or abundances of species in a bacterial community, ii) mutations within single populations, iii) horizontal transfer of DNA, or iv) genetic recombination events, or combinations of these.

Biodegradation reactions and pathways
Biodegradation of chemicals that we regard as pollutants takes place when these chemicals are incorporated into the metabolism of microorganisms.
The reactions involved in biodegradation are therefore similar to those involved in common metabolic reactions, such as hydrolyses, oxidations and reductions. Since the conversion of an organic chemical to CO2 is an overall oxidation, oxidation reactions involving molecular oxygen are probably the most important reactions. These reactions with oxygen are often the first but essential step in degradation and can be regarded as an activation step, converting relatively stable molecules to more reactive intermediates. This is particularly important for aromatic chemicals, since oxygenation is required to make aromatic rings susceptible to ring cleavage and further degradation. These reactions are catalysed by enzymes called oxygenases, of which there are broadly speaking two classes. Monooxygenases catalyse reactions in which one oxygen atom of O2 reacts with an organic molecule to produce a hydroxylated product. Examples of such enzymes are the cytochrome P450 family, which are present in all organisms. These enzymes are, for example, involved in the oxidation of alkanes to carboxylic acids feeding into the "beta-oxidation" pathway, which shortens linear alkanoic acids in steps of C2-units, as shown in Figure 4. Dioxygenases catalyse reactions in which both oxygen atoms of O2 react with organic chemicals; they appear to be unique to microorganisms such as bacteria. Examples of these reactions are shown for benzene in Figure 5. Similar reactions are involved in the degradation of more complex aromatic chemicals such as PAHs and halogenated aromatics.

The absence of oxygen in anaerobic environments (sediments and groundwater) does not preclude oxidation of organic chemicals. Other oxidants (nitrate, sulphate, Fe(III), etc.) may be present in sufficiently high concentrations to act as oxidants and terminal electron acceptors supporting microbial growth. In the absence of oxygen, activation relies on other reactions, of which the most important seem to be carboxylation or addition of fumarate. Figure 6 shows an example of the degradation of naphthalene to CO2 in sediment microcosms under sulphate-reducing conditions. Other important reactions in anaerobic ecosystems (sediments and groundwater plumes) are reductions. These affect functional groups, for example the reduction of acids to aldehydes and further to alcohols, of nitro groups to amino groups and, particularly important, the substitution of halogens by hydrogen. The latter reactions can contribute to the conversion of highly chlorinated chemicals, which are resistant to oxidative biodegradation, to less chlorinated products that are more amenable to aerobic biodegradation. Many examples of these reductive dehalogenation reactions have been shown to occur in, for example, tetrachloroethene-contaminated groundwater (e.g. from dry-cleaning processes) and PCB-contaminated sediment. These reactions are energetically favourable under anaerobic conditions, and some microorganisms are able to harvest this energy to support their growth. This can be considered a form of respiration based on dechlorination and is sometimes referred to as chlororespiration.

As is the case for abiotic degradation, hydrolyses are also important reactions in biodegradation pathways, particularly for chemicals that are derivatives of organic acids, such as carbamate, ester and organophosphate pesticides, where hydrolyses are often the first step in their biodegradation. These reactions are similar to those described in the section on Chemical degradation.
References
Itrich, N.R., McDonough, K.M., van Ginkel, C.G., Bisinger, E.C., LePage, J.N., Schaefer, E.C., Menzies, J.Z., Casteel, K.D., Federle, T.W. (2015). Widespread microbial adaptation to L-glutamate-N,N-diacetate (L-GLDA) following its market introduction in a consumer cleaning product. Environmental Science & Technology 49, 13314-13321.
Janssen, D.B., Dinkla, I.J.T., Poelarends, G.J., Terpstra, P. (2005). Bacterial degradation of xenobiotic compounds: evolution and distribution of novel enzyme activities. Environmental Microbiology 7, 1868-1882.
Kleemann, R., Meckenstock, R.U. (2011). Anaerobic naphthalene degradation by Gram-positive, iron-reducing bacteria. FEMS Microbiology Ecology 78, 488-496.
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. (2017). Environmental Organic Chemistry, Third Edition, Wiley, ISBN 978-1-118-76723-8.
van Leeuwen, C.J., Vermeire, T.G. (2007). Risk Assessment of Chemicals: An Introduction (2nd ed.), Springer, ISBN 978-1-4020-6101-1.
Zhou, Q., Chen, L.C., Wang, Z., Wang, J., Ni, S., Qiu, J., Liu, X., Zhang, X., Chen, X. (2017). Fast atrazine degradation by the mixed cultures enriched from activated sludge and analysis of their microbial community succession. Environmental Science & Pollution Research 24, 22152-22157.

3.7.2. Question 1
Degradation by microorganisms plays an important role in the environmental fate of industrial organic chemicals. Explain briefly why the role of microorganisms is so important. Biodegradation rates depend, among other factors, on the structure of the chemicals. Mention three structural factors responsible for slow biodegradation.

3.7.2. Question 2
DDT is one of the original "dirty dozen" Persistent Organic Pollutants (POPs). Explain what these POPs are and why they are labelled as persistent. What structural features are responsible for them being labelled as POPs?

3.7.2. Question 3
Groundwater used to prepare drinking water is discovered to be contaminated with toluene and tetrachloroethene. Although nothing is known about the geochemical conditions in the groundwater aquifer, you are asked to investigate whether there is any evidence for biodegradation of these compounds occurring in the aquifer. Suggest which compounds could be analysed as evidence for biodegradation.

3.7.3. Degradation test methods
Author: John Parsons
Reviewers: Steven Droge, Russell Davenport
Learning objectives:
You should be able to:
• explain the strategy used in standardised biodegradability testing
• describe the most important aspects of standard biodegradability testing protocols
• interpret the results of standardised biodegradability tests
Keywords: Environmental fate, chemical degradation, photochemical degradation, biodegradation, mineralisation, degradation rate

Introduction
Many experimental approaches are possible to measure the environmental degradation of chemicals, ranging from highly controlled laboratory experiments to environmental monitoring studies. While each of these approaches has its advantages and disadvantages, a standardised and relatively straightforward set of protocols has clear advantages, such as suitability for a wide range of laboratories, broad scientific and regulatory acceptance and comparability for different chemicals. The system of OECD test guidelines (see links in the reference list of this chapter) is the most important set of standardised protocols, although other test systems may be used in other regulatory contexts.
As well as tests covering environmental fate processes, the guidelines also cover physical-chemical properties, bioaccumulation, toxicity, etc. They have been developed in an international context and are adopted officially after extensive validation and testing in different laboratories. This ensures their wide acceptance and application in different regulatory contexts for chemical hazard and risk assessment.

Chemical degradation tests
The OECD Guidelines include only two tests specific to chemical degradation. This might seem surprising, but it should not be forgotten that chemical degradation may also contribute to the removal observed in biodegradability tests. The OECD Guidelines for chemical degradation are OECD Test 111: Hydrolysis as a Function of pH (OECD 2004a) and OECD Test 316: Phototransformation of Chemicals in Water - Direct Photolysis (OECD 2008b). If desired, sterilised controls may also be used to determine the contribution of chemical degradation in biodegradability tests.

OECD Test 111 measures hydrolytic transformations of chemicals in aquatic systems at pH values normally found in the environment (pH 4-9). Sterile aqueous buffer solutions of different pH values (pH 4, 7 and 9) containing radio-labelled or unlabelled test substance (below saturation) are incubated in the dark at constant temperature and analysed after appropriate time intervals for the test substance and for hydrolysis products. A preliminary (first tier) test is carried out for 5 days at 50°C and pH 4.0, 7.0 and 9.0. Further, second tier, tests study the hydrolysis of unstable substances and the identification of hydrolysis products, and may extend to 30 days. OECD Test 316 measures direct photolysis rate constants using either a xenon arc lamp capable of simulating natural sunlight in the 290 to 800 nm range or natural sunlight itself, and the results are extrapolated to natural water. If estimated losses are greater than or equal to 20%, the transformation pathway and the identities, concentrations, and rates of formation and decline of major transformation products are identified.
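As an illustration of how such hydrolysis data are commonly evaluated, the short sketch below (with hypothetical concentrations) derives a pseudo first-order rate constant and half-life from the decline of the test substance, the usual kinetic assumption for hydrolysis at constant pH and temperature:

```python
import math

# Sketch: estimating a pseudo first-order hydrolysis rate constant and
# half-life from the decline of the test substance concentration, as
# measured in an OECD 111-type experiment. The numbers are hypothetical.

c0, ct = 100.0, 85.0   # concentration at start and after t days (e.g. ug/L)
t = 5.0                # incubation time (days)

k = math.log(c0 / ct) / t      # pseudo first-order rate constant (1/day)
half_life = math.log(2) / k    # hydrolysis half-life (days)
print(f"k = {k:.4f} per day, half-life = {half_life:.1f} days")
```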
Biodegradability tests
Biodegradation is in general considered to be the most important removal process for organic chemicals in the environment, and it is therefore no surprise that biodegradability testing plays a key role in assessing the environmental fate and subsequent exposure risks of chemicals. Biodegradation is an extensively researched area, but data from standardised tests are favoured for regulatory purposes as they are assumed to yield reproducible and comparable data. Standardised tests have been developed internationally, most importantly under the auspices of the OECD, and are part of the wider range of tests to measure physical-chemical, environmental and toxicological properties of chemicals. An overview of these biodegradability tests is given in Table 1.

The way biodegradability testing is implemented can vary in detail depending on the regulatory context, but in general it is based on a tiered approach, with all chemicals being subjected to screening tests to identify those that can be considered readily biodegradable and are therefore removed rapidly from wastewater treatment plants (WWTPs) and the environment in general. These tests were originally developed for surfactants and often use activated sludge from WWTPs as a source of microorganisms, since biodegradation during wastewater treatment is a major conduit of chemical emissions to the environment. The so-called ready biodegradability tests are designed to be stringent, with low bacterial concentrations and the test chemical, at a relatively high concentration, as the only potential source of carbon and energy. The assumption is that chemicals showing rapid biodegradation under these unfavourable conditions will always be degraded rapidly under environmental conditions. Biodegradation is determined as conversion to CO2 (mineralisation), either by directly measuring the CO2 produced, the consumption of oxygen, or the removal of dissolved organic carbon, as mineralisation is the most desirable outcome of biodegradation. The results that have to be achieved for a chemical to be considered readily biodegradable vary slightly depending on the test but, as an example, in the OECD 301D test (OECD 1992a) the consumption of oxygen should reach 60% of that theoretically required for complete mineralisation within 28 days.

Table 1. The OECD biodegradability tests

OECD test guideline | Parameter measured | Reference
Ready biodegradability tests
301A: DOC die-away test | DOC | OECD 1992a
301B: CO2 evolution test | CO2 | OECD 1992a
301C: Modified MITI(I) test | O2 | OECD 1992a
301D: Closed bottle test | O2 | OECD 1992a
301E: Modified OECD screening test | DOC | OECD 1992a
301F: Manometric respirometry test | O2 | OECD 1992a
306: Biodegradability in seawater | DOC | OECD 1992c
310: Ready biodegradability - CO2 in sealed vessels (headspace test) | CO2 | OECD 2014
Inherent biodegradability tests
302A: Modified semi-continuous activated sludge (SCAS) test | DOC | OECD 1981b
302B: Zahn-Wellens test | DOC | OECD 1992b
302C: Modified MITI(II) test | O2 | OECD 2009
Simulation tests
303A: Activated sludge units | DOC | OECD 2001
303B: Biofilms | DOC | OECD 2001
304A: Inherent biodegradability in soil | 14CO2 | OECD 1981a
307: Aerobic and anaerobic transformation in soil | 14CO2/CO2 | OECD 2002a
308: Aerobic and anaerobic transformation in aquatic sediment systems | 14CO2/CO2 | OECD 2002b
309: Aerobic mineralisation in surface water | 14CO2/CO2 | OECD 2004b
311: Anaerobic biodegradability of organic compounds in digested sludge: by measurement of gas production | CO2 and CH4 | OECD 2006
314: Simulation tests to assess the biodegradability of chemicals discharged in wastewater | Concentration of chemical, 14CO2/CO2 | OECD 2008a

These test systems are widely applied for regulatory purposes, but they do have a number of issues. These include practical difficulties with volatile or poorly soluble chemicals, but probably the most important is that for some chemicals the results can be highly variable. This is usually attributed to the source of the microorganisms used to inoculate the system. For many chemicals, there is wide variability in how quickly they are degraded by activated sludge from different WWTPs. This is probably the result of different exposure concentrations and exposure periods to the chemicals, and may also reflect a dependence on small populations of degrading microorganisms, which may not always be included in the sludge samples used in the tests. These issues are not dealt with in any systematic way in biodegradability testing. It has been suggested that a preliminary period of exposure to the chemicals to be tested would allow sludge to adapt to the chemicals and might yield more reproducible test results. A further suggestion is to use a higher, more environmentally relevant, concentration of activated sludge as the inoculum.
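The sketch below illustrates how a respirometric result can be evaluated: it computes the theoretical oxygen demand (ThOD) of a test substance from its elemental composition (assuming nitrogen ends up as ammonia, i.e. no nitrification) and expresses a measured oxygen consumption as a percentage of ThOD. The aniline formula follows standard chemistry; the measured BOD value is hypothetical.

```python
# Sketch: theoretical oxygen demand (ThOD) of a test substance CcHhNnOo,
# assuming nitrogen ends up as ammonia (no nitrification), and the percent
# biodegradation from a measured oxygen consumption (BOD).
# The BOD value below is hypothetical.

def thod_nh3(c, h, n, o, mol_weight):
    """ThOD in mg O2 per mg substance for a molecule CcHhNnOo."""
    return 16.0 * (2 * c + (h - 3 * n) / 2.0 - o) / mol_weight

# Aniline (C6H7N, 93.13 g/mol), a commonly used reference substance:
thod = thod_nh3(c=6, h=7, n=1, o=0, mol_weight=93.13)   # ~2.4 mg O2/mg

bod_28d = 1.7  # hypothetical measured O2 consumption after 28 d (mg O2/mg)
percent = 100.0 * bod_28d / thod
print(f"ThOD = {thod:.2f} mg O2/mg; biodegradation = {percent:.0f}% of ThOD")
# Reaching more than 60% of ThOD within 28 days would meet the
# ready-biodegradability criterion mentioned above.
```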
Failure to meet the pass criteria in ready biodegradability tests does not necessarily mean that the chemical is persistent in the environment, since slow biodegradation may still occur. These chemicals may therefore be tested further in higher tier tests, either for what is referred to as inherent biodegradability, in tests performed under more favourable conditions, or in simulation tests representing specific compartments, to determine whether biodegradation may contribute significantly to their removal. These tests are also standardised (see Table 1). Simulation tests are designed to represent environmental conditions in specific compartments, such as redox potential, pH, temperature, microbial community, concentration of test substance, and the occurrence and concentration of other substrates.

The criteria used in classifying the biodegradability of chemicals depend on the regulatory context. Biodegradability tests can be used for different purposes: in the EU this includes three distinct purposes: classification and labelling, hazard/persistence assessment, and environmental risk assessment. Recently, regulatory emphasis has shifted to identifying hazardous chemicals, and therefore those chemicals that are less biodegradable and likely to persist in the environment. Examples of the criteria for classification as PBT (persistent, bioaccumulative and toxic) or vPvB (very persistent and very bioaccumulative) chemicals are shown in Table 2. As well as the results of standardised tests, other data, such as environmental monitoring data or studies on the microbiology of biodegradation, can also be taken into account in evaluations of environmental degradation in a so-called weight of evidence approach.

Table 2. Criteria used to classify chemicals as PBT or vPvB (van Leeuwen & Vermeire 2007)

Property | PBT criteria | vPvB criteria
Persistence | T1/2 > 60 days in marine water, or T1/2 > 40 days in fresh/estuarine water, or T1/2 > 180 days in marine sediment, or T1/2 > 120 days in fresh/estuarine sediment, or T1/2 > 120 days in soil | T1/2 > 60 days in marine, fresh or estuarine water, or T1/2 > 180 days in marine, fresh or estuarine sediment, or T1/2 > 180 days in soil
Bioaccumulation | BCF > 2000 L/kg | BCF > 5000 L/kg
Toxicity | NOEC < 0.01 mg/L for marine or freshwater organisms, or substance is classified as carcinogenic, mutagenic, or toxic for reproduction, or there is other evidence of chronic toxicity, according to Directive 67/548/EEC | -

The results of biodegradability tests are sometimes also used to derive input data for environmental fate models (see section on Multicompartment modeling). It is, however, not always straightforward to translate data measured in what is sometimes a multi-compartment test system into degradation rates in individual compartments, as other processes (e.g. partitioning) need to be taken into account.

References
OECD, 1981a. OECD Guidelines for the Testing of Chemicals. Test No. 304A: Inherent Biodegradability in Soil.
OECD, 1981b. OECD Guidelines for the Testing of Chemicals. Test No. 302A: Inherent Biodegradability: Modified SCAS Test.
OECD, 1992a. OECD Guidelines for the Testing of Chemicals. Test No. 301: Ready Biodegradability.
OECD, 1992b. OECD Guidelines for the Testing of Chemicals. Test No. 302B: Inherent Biodegradability: Zahn-Wellens/EVPA Test.
OECD, 1992c. OECD Guidelines for the Testing of Chemicals. Test No. 306: Biodegradability in Seawater.
OECD, 2001. OECD Guidelines for the Testing of Chemicals. Test No. 303: Simulation Test - Aerobic Sewage Treatment - A: Activated Sludge Units; B: Biofilms.
OECD, 2002a. OECD Guidelines for the Testing of Chemicals. Test No. 307: Aerobic and Anaerobic Transformation in Soil.
OECD, 2002b. OECD Guidelines for the Testing of Chemicals. Test No. 308: Aerobic and Anaerobic Transformation in Aquatic Sediment Systems.
OECD, 2004a. OECD Guidelines for the Testing of Chemicals. Test No. 111: Hydrolysis as a Function of pH.
OECD, 2004b. OECD Guidelines for the Testing of Chemicals. Test No. 309: Aerobic Mineralisation in Surface Water - Simulation Biodegradation Test.
OECD, 2006. OECD Guidelines for the Testing of Chemicals. Test No. 311: Anaerobic Biodegradability of Organic Compounds in Digested Sludge: by Measurement of Gas Production.
OECD, 2008a. OECD Guidelines for the Testing of Chemicals. Test No. 314: Simulation Tests to Assess the Biodegradability of Chemicals Discharged in Wastewater.
OECD, 2008b. OECD Guidelines for the Testing of Chemicals. Test No. 316: Phototransformation of Chemicals in Water - Direct Photolysis.
OECD, 2009. OECD Guidelines for the Testing of Chemicals. Test No. 302C: Inherent Biodegradability: Modified MITI Test (II).
OECD, 2014. OECD Guidelines for the Testing of Chemicals. Test No. 310: Ready Biodegradability - CO2 in Sealed Vessels (Headspace Test).
Van Leeuwen, C.J., Vermeire, T.G. (2007). Risk Assessment of Chemicals: An Introduction (2nd ed.), Springer, ISBN 978-1-4020-6101-1.

3.7.3. Question 1
The OECD has published extensive protocols that can be used to evaluate the degradability of chemicals in the environment. Which forms of abiotic (chemical) degradation do these include?

3.7.3. Question 2
Biodegradation tests used to evaluate the environmental effects of chemicals often measure the mineralisation of the chemicals, for example by measuring the amount of carbon dioxide produced. Explain why this is preferred to measuring the removal of the original chemical (primary biodegradation).

3.7.3. Question 3
Assessment of the biodegradability of chemicals often follows a tiered system in which they are first screened for their ready biodegradability before undergoing more extensive testing to assess their inherent biodegradability or biodegradation in specific environmental compartments using simulation tests. Explain why this approach is applied.
3.8. Modelling exposure
In preparation

3.8.2. Multicompartment modeling
Authors: Dik van de Meent and Michael Matthies
Reviewer: John Parsons
Learning objectives:
You should be able to:
• explain what a mass balance equation is
• describe how mass balance equations are used in multimedia fate modeling
• explain the concepts of thermodynamic equilibrium and steady state
• give some examples of the use of multimedia mass balance modeling
Keywords: mass balance equation, environmental fate model

The mass balance equation
Multicompartment (or multimedia) mass balance modeling starts from the universal conservation principle, formulated as a balance equation. The governing principle is that the rate of change (of any entity, in any system) equals the difference between the sum of all inputs (of that entity) to the system and the sum of all outputs from it. Environmental modelers use the balance equation to predict exposure concentrations of chemicals in the environment by deduction from knowledge of the rates of input and output processes, which is most easily understood by considering the mass balance equation for one single environmental compartment (Figure 1):

dmi,j/dt = inputi,j - outputi,j (eq. 1)

where dmi,j/dt represents the change of the mass of chemical i in compartment j (kg) over time (s), and inputi,j and outputi,j denote the rates of input and output of chemical to and from compartment j, respectively.

One compartment model
In multimedia mass balance modeling, mass balance equations (of the type shown in equation 1) are formulated for each environmental compartment. Outflows of chemical from the compartments are often proportional to the amounts of chemical present in the compartments, while external inputs (emissions) may often be assumed constant. In such cases, i.e. when first-order kinetics apply (see section 3.3 on Environmental fate of chemicals), mass balance equations take the form of equation 1 in section 3.3. For one compartment (e.g. a lake, as in Figure 1) only:

dm/dt = I - k·m (eq. 2)

in which dm/dt (kg.s-1) is the rate of change of the mass m (kg) of chemical in the lake, I (kg.s-1) is the (constant) emission rate, and the product k·m (kg.s-1) denotes the first-order loss rate of the chemical from the lake. Eventually a steady state must develop, in which the mass of chemical in the lake reaches a predictable maximum:

m(∞) = I/k (eq. 3)

When the input rate (emission) is constant, i.e. does not vary with time, and is independent of the mass of chemical present, the mass of chemical in the system is expected to increase exponentially, from its initial value m = 0 at t = 0 towards a steady level as t → ∞. According to equation 3, a final mass level equal to I/k is to be expected.
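A minimal numerical sketch of equations 2 and 3 (with a hypothetical emission rate and rate constant) shows the exponential approach to the steady-state mass I/k:

```python
import math

# Sketch of the one-compartment mass balance (eq. 2): dm/dt = I - k*m,
# with the analytical solution m(t) = (I/k)*(1 - exp(-k*t)) for m(0) = 0.
# The emission rate and rate constant are hypothetical.

I = 10.0    # constant emission rate (kg per day)
k = 0.05    # first-order loss rate constant (per day)

m_steady = I / k   # steady-state mass (eq. 3)

for t in [0, 10, 30, 60, 100, 365]:   # days
    m = m_steady * (1.0 - math.exp(-k * t))
    print(f"t = {t:4d} d: m = {m:7.1f} kg "
          f"({100 * m / m_steady:5.1f}% of steady state)")
```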
Multi-compartment model
The prefix 'multi' indicates that generally (many) more than one environmental compartment is considered. The Unit World (see below) contains air, water, biota, sediment and soil; more advanced global modeling systems may use hundreds of compartments. The case of three compartments (typically one air, one water, one soil) is schematically worked out in Figure 3. Each compartment can receive constant inputs (emissions, imports), and chemical can be exported from each compartment by degradation or advective outflow, as in the one-compartment model. In addition, chemical can be transported between compartments (simultaneous import-export). All mass flows are characterized by (pseudo) first-order rate constants (see section 3.3 on Environmental fate processes). The three mass balance equations eventually balance to zero at infinite time:

0 = Ii + Σj≠i kj→i·mj* - (ki,loss + Σj≠i ki→j)·mi* for i = 1, 2, 3 (eq. 4)

where the symbols mi* denote the masses in compartments i at steady state. Sets of n linear equations with n unknowns can be solved algebraically, by manually manipulating equations 4 until clean expressions for each of the three mi* values are obtained. This inevitably becomes tedious as soon as more than two mass balance equations are to be solved - which did not stop one of Prof. Mackay's most famous PhD students from successfully solving a set of 14 equations! An easier way of solving sets of n linear equations with n unknowns is by means of linear algebra. Using linear algebraic vector-matrix calculus, equations 4 can be rewritten as one linear-algebraic equation:

A·m + e = 0 (eq. 5)

in which m is the vector of masses in the three compartments, A is the model matrix of known rate constants, and e is the vector of known emission rates. The solution of equation 5 is

m* = -A^-1·e

in which m* is the vector of masses at steady state and A^-1 is the inverse of the model matrix A. The linear algebraic method of solving linear mass balance equations is easily carried out with spreadsheet software (such as MS Excel, LibreOffice Calc or Google Sheets), which contains built-in array functions for inverting matrices and multiplying them by vectors.
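The same computation is equally easy in a scripting language. The sketch below solves equation 5 for a hypothetical three-compartment (air, water, soil) system; all rate constants and emission rates are invented for illustration only:

```python
import numpy as np

# Sketch of the linear-algebraic steady-state solution m* = -A^-1 · e
# (eq. 5). A[i][j] (i != j) is the rate constant for transport from
# compartment j into compartment i; each diagonal element holds, with a
# minus sign, the total loss rate constant of that compartment
# (degradation/outflow plus transfers to the other compartments).
# All values (per day, kg per day) are hypothetical.

k_loss = np.array([0.5, 0.05, 0.01])        # degradation/outflow per compartment
k_transfer = np.array([[0.0,  0.01, 0.002], # into air from water, soil
                       [0.02, 0.0,  0.005], # into water from air, soil
                       [0.10, 0.0,  0.0]])  # into soil from air, water

A = k_transfer.copy()
np.fill_diagonal(A, -(k_loss + k_transfer.sum(axis=0)))  # mass balance closes

e = np.array([100.0, 10.0, 0.0])            # emissions to air, water, soil

m_steady = -np.linalg.inv(A) @ e            # m* = -A^-1 · e
for name, mass in zip(["air", "water", "soil"], m_steady):
    print(f"{name}: {mass:.1f} kg")
```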
Unit World modeling
In the late 1970s, pioneering environmental scientists at the USEPA Environmental Research Laboratory in Athens, GA, recognized that the universal (mass) balance equation, applied to compartments of environmental media (air, water, biota, sediment, soil), could serve as a means to analyze and understand differences in the environmental behavior and fate of chemicals. Their 'evaluative Unit World Modeling' (Baughman and Lassiter, 1978; Neely and Blau, 1985) was the start of what is now known as multimedia mass balance modeling. The Unit World concept was further developed and polished by Mackay and co-workers (Neely and Mackay, 1982; Mackay and Paterson, 1982; Mackay et al., 1985; Paterson and Mackay, 1985, 1989). In Unit World modeling, the environment is viewed as a set of well-mixed chemical reactors, each representing one environmental medium (compartment), to and from which chemical flows, driven by 'departure from equilibrium' - chemical technology jargon for expressing the degree to which thermodynamic equilibrium properties such as 'chemical potential' or 'fugacity' differ (Figure 4). Mackay and co-workers used fugacity as the central state variable in mass balance modeling. Soon after publication of this 'fugacity approach' (Mackay, 1991), the term 'fugacity model' became widely used to name all models of the 'Mackay type' that applied Unit World mass balance modeling, even though most of these models kept using the more traditional chemical mass as a state variable.

Complexity levels
While conceptually simple (environmental fate is like a leaking bucket, in the sense that its steady-state water height is predictable from first-order kinetics), the dynamic character of mass balance modeling is often not so intuitive. The abstract mathematical perspective may explain mass balance modeling best, but this may not be practical for all students. In his book about multimedia mass balance modeling, Mackay chose to teach his students the intuitive approach, by means of his famous water tank analogy (Figure 4B). According to this intuitive approach, mass balance modeling can be done at levels of increasing complexity, where the lowest, simplest level that serves the purpose should be regarded as the most suitable.

The least complex is level I, which assumes no input and output. A chemical can freely (i.e. without restriction) flow from one environmental compartment to another, until it reaches its state of lowest energy: the state of thermodynamic equilibrium. In this state, the chemical has equal chemical potential and fugacity in all environmental media. The system is at rest; in the hydraulic analogy, water has equal levels in all tanks. This is the lowest level of model complexity, because this model only requires knowledge of a few thermodynamic equilibrium constants, which can be reasoned from basic physical substance properties.

The more complex modeling level III describes an environment in which the flow of chemical between compartments experiences flow resistance, so that a steady state of balance between outputs and inputs is reached only at the cost of permanent 'departure from equilibrium'. Degradation in all compartments and advective flows, e.g. rainfall or wind and water currents, are also considered. The steady state of level III is one in which the fugacities of chemical in the compartments are unequal (no thermodynamic equilibrium); in the hydraulic analogy, the water in the tanks rests at different heights. Naturally, solving modeling level III requires detailed knowledge of the inputs (into which compartment(s) is the chemical emitted?), the outputs (at what rates is the chemical degraded in the various compartments?) and the transfer resistances (how rapid or slow is the mass transfer between the various compartments?). Level III modelers are rewarded for this with more realistic model results.

The fourth, most complex level of multimedia mass balance modeling (level IV, not shown in Figure 4B) produces transient (time dependent) solutions. Model simulations start (t = 0) with zero chemical (m = 0; empty water tanks). Compartments (tanks) fill up gradually until the system comes to a steady state, in which generally one or more compartments depart from equilibrium, as in level III modeling. Level IV is the most realistic representation of the environmental fate of chemicals, but requires the most detailed knowledge of mass flows and mass transfer resistances. Moreover, time-varying states are the least easy to interpret and not always the most informative of chemical fate. The most important piece of information to be gained from level IV modeling is the indication of time to steady state: how long does it take to clear the environment of persistent chemicals that are no longer used?

Mackay describes an intermediate level of complexity (level II), in which outputs (degradation, advective outflows) balance inputs (as in level III), and chemical is allowed to flow freely between compartments (as in level I). A steady state develops in level II and there is thermodynamic equilibrium at all times. Modeling at level II does not require knowledge of mass transfer resistances (other than that resistances are negligible!), but degradation and outflow rates increase the model complexity compared to that of level I. In many situations, level II modeling yields surprisingly realistic results.
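For level IV, the time-dependent solution of the linear system dm/dt = A·m + e with empty compartments at t = 0 can be written as m(t) = (exp(A·t) - I)·A^-1·e, which approaches the level III steady state -A^-1·e as time grows. A minimal sketch, reusing the hypothetical matrix and emission vector from the example above:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of a level IV (transient) solution of dm/dt = A·m + e with
# m(0) = 0:  m(t) = (exp(A·t) - I) · A^-1 · e.
# The matrix and emissions are the same hypothetical values used above.

A = np.array([[-0.62,  0.01,  0.002],
              [ 0.02, -0.06,  0.005],
              [ 0.10,  0.00, -0.017]])
e = np.array([100.0, 10.0, 0.0])

for t in [10.0, 100.0, 1000.0]:  # days
    m_t = (expm(A * t) - np.eye(3)) @ np.linalg.inv(A) @ e
    print(f"t = {t:6.0f} d:", m_t.round(1))
```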
Use of multimedia mass balance models
Soon after publication of the first use of 'evaluative Unit World modeling' (Mackay and Paterson, 1982), specific applications of the 'Mackay approach' to multimedia mass balance modeling started to appear. The Mackay group published several models for the evaluation of chemicals in Canada, of which ChemCAN (Mackay et al., 1995) is best known. Even before ChemCAN, the Californian model CalTOX (McKone, 1993) and the Dutch model SimpleBox (Van de Meent, 1993) came out, followed by publication of the model HAZCHEM by the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC, 1994) and the German Umweltbundesamt's model ELPOS (Beyer and Matthies, 2002). Essentially, all these models serve the very same purpose as the original Unit World model, namely providing standardized modeling platforms for evaluating the possible environmental risks of the societal use of chemical substances. Multimedia mass balance models became essential tools in regulatory environmental decision making about chemical substances. In Europe, chemical substances can be registered for marketing under the REACH regulation only when it is demonstrated that the chemical can be used safely. Multimedia mass balance modeling with SimpleBox (Hollander et al., 2016) and SimpleTreat (Struijs et al., 2016) plays an important role in registration.

While early multimedia mass balance models all followed in the footsteps of Mackay's Unit World concept (taking the steady-state approach and using one compartment per environmental medium), later models became larger and spatially and temporally explicit, and were used for in-depth analysis of chemical fate. In the late 1990s, Wania and co-workers developed a Global Distribution Model for Persistent Organic Pollutants (GloboPOP). They used their global multimedia mass balance model to explore the so-called cold condensation effect, by which they explained the occurrence of relatively large amounts of persistent organic chemicals in the Arctic, where no one had ever used them (Wania, 1999). Scheringer and co-workers used their CliMoChem model to investigate long-range transport of persistent chemicals into Alpine regions (Scheringer, 1996; Wegmann et al., 2005). MacLeod and co-workers (Toose et al., 2004) constructed a global multimedia mass balance model (BETR-World) to study long-range, global transport of pollutants.

References
Baughman, G.L., Lassiter, R. (1978). Predictions of environmental pollutant concentrations. In: Estimating the Hazard of Chemical Substances to Aquatic Life. ASTM STP 657, pp. 35-54.
Beyer, A., Matthies, M. (2002). Criteria for Atmospheric Long-range Transport Potential and Persistence of Pesticides and Industrial Chemicals. Umweltbundesamt Berichte 7/2002, E. Schmidt-Verlag, Berlin. ISBN 3-503-06685-3.
ECETOC (1994). HAZCHEM, A Mathematical Model for Use in Risk Assessment of Substances. European Centre for Ecotoxicology and Toxicology of Chemicals, Brussels.
Hollander, A., Schoorl, M., Van de Meent, D. (2016). SimpleBox 4.0: Improving the model, while keeping it simple. Chemosphere 148, 99-107.
Mackay, D. (1991). Multimedia Environmental Fate Models: The Fugacity Approach. Lewis Publishers, Chelsea, MI.
Mackay, D., Paterson, S. (1982). Calculating fugacity. Environmental Science and Technology 16, 274-278.
Mackay, D., Paterson, S., Cheung, B., Neely, W.B. (1985). Evaluating the environmental behaviour of chemicals with a level III fugacity model. Chemosphere 14, 335-374.
Mackay, D., Paterson, S., Tam, D.D., Di Guardo, A., Kane, D. (1995). ChemCAN: A regional level III fugacity model for assessing chemical fate in Canada. Environmental Toxicology and Chemistry 15, 1638-1648.
McKone, T.E. (1993). CALTOX, A Multi-media Total-Exposure Model for Hazardous Waste Sites. Lawrence Livermore National Laboratory, Livermore, CA.
Neely, W.B., Blau, G.E. (1985). Introduction to exposure from chemicals. In: Neely, W.B., Blau, G.E. (Eds.), Environmental Exposure from Chemicals Volume I, CRC Press, Boca Raton, FL, pp. 1-10.
Neely, W.B., Mackay, D. (1982). Evaluative model for estimating environmental fate. In: Modeling the Fate of Chemicals in the Aquatic Environment. Ann Arbor Science, Ann Arbor, MI, pp. 127-144.
Paterson, S., Mackay, D. (1985). Equilibrium models for the initial integration of physical and chemical properties. In: Neely, W.B., Blau, G.E. (Eds.), Environmental Exposure from Chemicals Volume I, CRC Press, Boca Raton, FL, pp. 218-231.
Paterson, S., Mackay, D. (1989). A model illustrating the environmental fate, exposure and human uptake of persistent organic chemicals. Ecological Modelling 47, 85-114.
Scheringer, M. (1996). Persistence and spatial range as endpoints of an exposure-based assessment of organic chemicals. Environmental Science and Technology 30, 1652-1659.
Struijs, J., Van de Meent, D., Schowanek, D., Buchholz, H., Patoux, R., Wolf, T., Austin, T., Tolls, J., Van Leeuwen, K., Galay-Burgos, M. (2016). Adapting SimpleTreat for simulating behaviour of chemical substances during industrial sewage treatment. Chemosphere 159, 619-627.
Toose, L., Woodfine, D.G., MacLeod, M., Mackay, D., Gouin, J. (2004). BETR-World: a geographically explicit model of chemical fate: application to transport of alpha-HCH to the Arctic. Environmental Pollution 128, 223-240.
Van de Meent, D. (1993). SimpleBox: A Generic Multi-media Fate Evaluation Model. National Institute for Public Health and the Environment. RIVM Report 672720 001, Bilthoven, NL.
Van de Meent, D., McKone, T.E., Parkerton, T., Matthies, M., Scheringer, M., Wania, F., Purdy, R., Bennett, D. (2000). Persistence and transport potential of chemicals in a multimedia environment. In: Klecka, G. et al. (Eds.), Evaluation of Persistence and Long-Range Transport Potential of Organic Chemicals in the Environment. SETAC Press, Pensacola FL, Chapter 5, pp. 169-204.
Van de Meent, D., Hollander, A., Peijnenburg, W., Breure, T. (2011). Fate and transport of contaminants. In: Sánchez-Bayo, F., Van den Brink, P.J., Mann, R.M. (Eds.), Ecological Impacts of Toxic Chemicals, Bentham Science Publishers, pp. 13-42.
Wania, F. (1999). On the origin of elevated levels of persistent chemicals in the environment. Environmental Science and Pollution Research 6, 11-19.
Wegmann, F., Scheringer, M., Hungerbühler, K. (2005). First investigations of mountainous cold condensation effects with the CliMoChem model. Ecotoxicology and Environmental Safety 63, 42-51.

3.8.2. Question 1
Describe, using your own words, the essential characteristics of mass balance equations.

3.8.2. Question 2
What is "steady state"? What is "equilibrium"? Use a few lines of text to describe the essentials, indicating differences and commonalities.

3.8.2. Question 3
Mass balance equations are used in models to calculate concentrations of substances in the environment, given knowledge of the rates of emission. Give a worked-out example for a one-compartment situation, e.g. a fresh-water lake.
3.8.2. Question 4
Name and describe one (or more) example(s) of multimedia mass balance modeling.

3.8.3. Metal speciation models
Author: Wilko Verweij
Reviewers: John Parsons, Stephen Lofts
Learning objectives:
You should be able to:
• understand the basics of speciation modeling
• understand the factors determining speciation and how to calculate them
• understand in which types of situations speciation modeling can be helpful
Keywords: speciation modeling, solubility, organic complexation

Introduction
Speciation models allow users to calculate the speciation of a solution rather than measure it chemically or assess it indirectly using bioassays (see section 3.5). As a rule, speciation models take total concentrations as input and calculate species concentrations. Speciation models use thermodynamic data about chemical equilibria to calculate the speciation. These data, expressed as free energies or as equilibrium constants, can be found in the literature. The term 'constant' is slightly misleading, as equilibrium constants depend on the temperature and ionic strength of the solution. The ionic strength I is calculated from the concentrations (C) and charges (Z) of all ions in solution:

I = ½ Σ Ci·Zi²

For many equilibria, no information is available to correct for temperature. To correct for ionic strength, many semi-empirical methods are available, none of which is perfect.

How these models work
For each equilibrium reaction, an equilibrium constant can be defined. For example, for the reaction

Cu2+ + 4 Cl- ⇌ CuCl42-

the equilibrium constant can be defined as

β = [CuCl42-] / ([Cu2+]·[Cl-]^4)

Consequently, when the concentrations of free Cu2+ and free Cl- are known, the concentration of CuCl42- can easily be calculated as:

[CuCl42-] = β · [Cu2+] · [Cl-]^4

In fact, the concentrations of free Cu2+ and free Cl- are often NOT known; what is known are the total concentrations of Cu and Cl in the system. In order to find the speciation, a set of mass balance equations needs to be set up, for example:

[total Cu] = [free Cu2+] + [CuOH+] + [Cu(OH)2] + [Cu(OH)3-] (..) + [CuCl+] + [CuCl2] (..) etc.
[total Cl] = (..)

The concentration of each complex is a function of the free concentrations of the ions that make it up. So if we know the concentrations of all the free ions, we can calculate the concentrations of all the complexes, and from those the total concentrations. A solution to the problem cannot be found by simply rearranging the mass balance equations, because they are non-linear. What a speciation model does is repeatedly estimate the free ion concentrations, on each loop adjusting them so that the calculated total concentrations more closely match the known totals. When the calculated and known total concentrations all agree to within a defined precision, the speciation has been calculated. The critical part of the calculation is adjusting the free ion concentrations in a sensible and efficient way, to find the solution as quickly as possible. Several more or less sophisticated methods are available, but usually a Newton-Raphson method is applied, as sketched below.
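A minimal sketch of such an iterative solution, for a toy system of one metal M and one ligand L forming a single complex ML, is given below. The stability constant and total concentrations are hypothetical, and activity corrections are ignored; real speciation programs apply the same Newton-Raphson idea to hundreds of species at once.

```python
import numpy as np

# Minimal sketch of an iterative speciation calculation for a toy system:
# one metal M and one ligand L forming a single complex ML, with
#     [ML] = beta * [M] * [L]
# and mass balances
#     M_total = [M] + [ML],   L_total = [L] + [ML].
# Free concentrations are found by Newton-Raphson iteration.
# beta and the totals are hypothetical; activities are ignored.

beta = 1.0e6                   # stability constant of ML (L/mol)
M_tot, L_tot = 1.0e-4, 2.0e-4  # total concentrations (mol/L)

m, l = M_tot, L_tot            # initial guess: all metal and ligand free
for _ in range(100):
    ml = beta * m * l
    F = np.array([m + ml - M_tot,          # mass balance residuals:
                  l + ml - L_tot])         # calculated minus known totals
    if np.all(np.abs(F) < 1e-18):
        break
    J = np.array([[1 + beta * l, beta * m],   # Jacobian of the residuals
                  [beta * l, 1 + beta * m]])
    dm, dl = np.linalg.solve(J, F)
    m = max(m - dm, 0.1 * m)   # damped update, keeping concentrations positive
    l = max(l - dl, 0.1 * l)

print(f"free M = {m:.3e} M, free L = {l:.3e} M, ML = {beta*m*l:.3e} M")
```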
Influence of temperature and ionic strength
In fact, the explanation above is too simple. Equilibrium constants are valid under specific conditions of temperature and ionic strength (for example the standard conditions of 25°C and zero ionic strength) and need to be converted to the temperature and ionic strength of the system for which the speciation is being calculated. It is possible to adapt the equilibrium constants for non-standard temperatures, but this requires knowledge of the reaction enthalpy (ΔH) of each equilibrium. That knowledge is often not available. Constants can be converted from 25°C to other temperatures using the Van 't Hoff equation:

ln(K2/K1) = -(ΔH/R)·(1/T2 - 1/T1)

where K1 and K2 are the constants, T1 and T2 the temperatures, ΔH is the enthalpy of the reaction and R is the gas constant.

Equilibrium constants are also valid for one specific value of ionic strength. For conversion from one value of ionic strength to another, many different approaches may be used. This conversion is quite important, because already at relatively low ionic strengths deviations from ideality become significant, and the activity of a species starts to deviate from its concentration. Hence, the intrinsic, or thermodynamic, equilibrium constants (i.e. constants at a hypothetical ionic strength of zero) are no longer valid, and the activity a of ions at non-zero ionic strength needs to be calculated from the concentration and the activity coefficient:

a = γ · c

where γ is the activity coefficient (dimensionless; sometimes also called f) and c is the concentration; a and c are in mol/liter. The first method to calculate activity coefficients at non-zero ionic strength was proposed by Debye and Hückel in 1923. The Debye-Hückel theory assumes ions are point charges, so it takes into account neither the volume that these ions occupy nor the volume of the shell of ligands and/or water molecules around them. The Debye-Hückel equation gives good approximations up to circa 0.01 M for a 1:1-electrolyte, but only up to circa 0.001 M for a 2:2-electrolyte. When the ionic strength exceeds these values, the activity coefficients predicted by the Debye-Hückel approximation deviate significantly from experimental values. Many environmental applications require conversions for higher ionic strengths, making the Debye-Hückel equation insufficient. To overcome this problem, many researchers have suggested other methods, like the extended Debye-Hückel equation, the Güntelberg equation and the Davies equation, but also the Bromley equation, the Pitzer equation and the Specific Ion Interaction Theory (SIT). Many programs use the Davies equation, which calculates activity coefficients γ as follows:

log γ = -A·z²·(√I/(1 + √I) - 0.3·I)

where z is the charge of the species, I the ionic strength and A a constant (approximately 0.51 for water at 25°C). Sometimes 0.2 instead of 0.3 is used. Basically, all these approaches take the Debye-Hückel equation as a starting point and add one or more terms to correct for deviations at higher ionic strengths. Although many of these methods are able to predict the activity of ions fairly well, they are in fact mainly empirical extensions without a solid theoretical basis.
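A short sketch of the Davies equation as given above, with the constant A taken as 0.51 (its approximate value for water at 25°C):

```python
import math

# Sketch of the Davies equation:
#     log10(gamma) = -A * z^2 * (sqrt(I)/(1 + sqrt(I)) - 0.3*I)
# with A ~ 0.51 for water at 25 degrees C.

def davies_gamma(z, ionic_strength, A=0.51, b=0.3):
    """Activity coefficient of an ion of charge z at ionic strength I (M)."""
    sqrt_i = math.sqrt(ionic_strength)
    log_gamma = -A * z**2 * (sqrt_i / (1 + sqrt_i) - b * ionic_strength)
    return 10 ** log_gamma

for i in [0.001, 0.01, 0.1, 0.5]:
    print(f"I = {i:5.3f} M: gamma(z=1) = {davies_gamma(1, i):.3f}, "
          f"gamma(z=2) = {davies_gamma(2, i):.3f}")
```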
Solubility
Most salts have a limited solubility; in several cases the solubility limit is reached under conditions that occur in the environment. For instance, for CaCO3 the solubility product is 10^-8.48, which means that when [Ca2+]·[CO32-] > 10^-8.48, CaCO3 will precipitate until [Ca2+]·[CO32-] = 10^-8.48. It also works the other way around: if solid CaCO3 is present in a solution where [Ca2+]·[CO32-] < 10^-8.48 (note the '<' sign), solid CaCO3 will dissolve until [Ca2+]·[CO32-] = 10^-8.48. Note that Ca and CO3 in these formulas refer to the free ions.

For example, a 10^-13 M solution of Ag2S will lead to precipitation of Ag2S. The free concentrations of Ag and S are 6.5*10^-15 M and 1.8*10^-22 M respectively (which corresponds to the solubility product of 10^-50.12), but the dissolved concentrations of Ag and S are 7.1*10^-15 M and 3.6*10^-15 M respectively, so for S seven orders of magnitude higher than the free concentration. This is caused by the formation of S-complexes with protons (HS- and H2S (aq)) and, to a lesser extent, with Ag.
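The saturation test that speciation programs perform can be sketched as a comparison of the ion activity product (IAP) of the free ions with the solubility product; the CaCO3 value follows the text, while the free concentrations are hypothetical:

```python
import math

# Sketch of the saturation check used by speciation programs: compare the
# ion activity product (IAP) of the free ions with the solubility product.
# Ksp of CaCO3 follows the text; the free concentrations are hypothetical.

log_Ksp = -8.48                       # CaCO3, as given above
ca_free, co3_free = 1.0e-3, 1.0e-5    # free ion concentrations (mol/L)

log_IAP = math.log10(ca_free) + math.log10(co3_free)

if log_IAP > log_Ksp:
    print(f"log IAP = {log_IAP:.2f} > log Ksp: CaCO3 precipitates until IAP = Ksp")
elif log_IAP < log_Ksp:
    print(f"log IAP = {log_IAP:.2f} < log Ksp: solid CaCO3, if present, dissolves")
else:
    print("the solution is exactly saturated")
```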
Complexation by organic matter
Complexation with Dissolved Organic Carbon (DOC) is different from inorganic complexation or complexation with well-defined compounds such as acetate or NTA, for the following reasons:
• DOC is very heterogeneous; DOC isolated at two sites may be very different (not to mention the difficulty of selecting isolation procedures).
• Complexation with DOC generally shows a continuous range of equilibrium constants, due to chemical and steric differences in neighbouring groups.
• Increased cation binding and/or the ionic strength of the solution change electrostatic interactions among the functional groups in DOC molecules, which influences the equilibrium constants.
• In addition, changing electrostatic interactions may cause conformational changes of the molecules.

Among the most popular models to assess organic complexation are Model V (1992), VI (1998) and VII (2011), also known as WHAM, written by Tipping and co-authors (Tipping & Hurley, 1992; Tipping, 1994, 1998; Tipping, Lofts & Sonke, 2011). All these models assume that two types of binding occur: specific binding and accumulation in the diffuse double layer. Specific binding is the formation of a chemical bond between an ion and a functional group (or groups) on the organic molecule. Diffuse double layer accumulation is the accumulation of ions of opposite electrical charge adjacent to the molecule, without formation of a chemical bond (the electrical charge is usually negative, so the ions that accumulate are cations).

For specific binding, all these models distinguish fulvic acids (FA) and humic acids (HA), which are treated separately. These two classes of DOC are typically the most abundant components of natural organic matter in the environment - in surface freshwaters, the fulvic acids are typically the most abundant. For each class, eight different discrete binding sites are used in the model. The sites have a range of acid-base properties. Metals bind to these sites, either to one site alone (monodentate), to two sites (bidentate) or, starting with Model VI, to three (tridentate). A fraction of the sites is allowed to form bidentate complexes. Starting with Model VI, for each bidentate and tridentate group three sub-groups are assumed to be present; this further increases the range of metal binding strengths. Binding constants depend on ionic strength and electrostatic interactions. Conditional constants are calculated in the same way in Models V, VI and VII:

K(cond) = K·exp(-2·w·z·Z)

where z is the charge of the binding ion and:
• Z is the charge of the organic acid (in moles per gram organic matter);
• w is calculated by:

w = P·log10(I)

where:
• P is a constant term (different for FA and HA, and different for each model);
• I is the ionic strength.

Therefore, the conditional constant depends on the charge on the organic acids as well as on the ionic strength. For the binding of metals, the calculation of the conditional constant occurs in a similar way.

The diffuse double layer is usually negatively charged, so it is usually populated by cations, in order to maintain electric neutrality. Calculations for the diffuse double layer are the same in Model V, Model VI and Model VII. The volume of the diffuse double layer is calculated separately for each type of acid, as follows:

V_DDL = (NAv/M)·(4π/3)·((r + 1/κ)³ - r³)

where:
• NAv is Avogadro's number;
• M is the molecular weight of the acid;
• r is the radius of the molecule (0.8 nm for fulvic acids, 1.72 nm for humic acids);
• κ is the Debye-Hückel parameter, which depends on ionic strength.

Simply applying this formula in situations of low ionic strength and high content of organic acid would lead to artifacts (the volume of the diffuse layer can be calculated to be more than 1 liter per liter). Therefore, some "tricks" are implemented to limit the volume of the diffuse double layer to 25% of the total. In case the acid has a negative charge (as it has in most cases), positive and neutral species are allowed to enter the diffuse double layer, just enough to make the diffuse double layer electrically neutral. When the acid has a positive charge, negative and neutral species are present. The concentration of a species in the diffuse double layer is assumed to depend on its concentration in the bulk solution and on its charge:

c_DDL = R^|z| · c_bulk

where R is a ratio calculated iteratively, to ensure that the diffuse double layer is electrically neutral.
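To illustrate the diffuse double layer volume formula, the sketch below evaluates it for fulvic and humic acid at a few ionic strengths, using κ ≈ 3.29·10^9·√I per metre (water, 25°C) and assumed, purely illustrative molecular weights; the radii follow the text. It reproduces the artifact mentioned above: at low ionic strength the computed volume becomes unrealistically large, which is why the models cap it at 25% of the total volume.

```python
import math

# Sketch of the diffuse double layer volume per gram of humic substance:
#     V_DDL = (N_Av / M) * (4*pi/3) * ((r + 1/kappa)^3 - r^3)
# kappa = 3.29e9 * sqrt(I) per metre holds for water at 25 degrees C.
# The molecular weights are assumed round values for illustration only;
# the radii (0.8 nm for FA, 1.72 nm for HA) follow the text.

N_AV = 6.022e23  # Avogadro's number

def v_ddl(mol_weight_g, radius_m, ionic_strength_M):
    """Diffuse double layer volume in litres per gram of acid."""
    kappa = 3.29e9 * math.sqrt(ionic_strength_M)  # Debye-Hückel parameter (1/m)
    shell = (4.0 * math.pi / 3.0) * ((radius_m + 1.0 / kappa) ** 3
                                     - radius_m ** 3)
    return (N_AV / mol_weight_g) * shell * 1000.0  # m3 -> L

for i in [0.001, 0.01, 0.1]:
    fa = v_ddl(1500.0, 0.8e-9, i)     # fulvic acid, assumed MW 1500 g/mol
    ha = v_ddl(15000.0, 1.72e-9, i)   # humic acid, assumed MW 15000 g/mol
    print(f"I = {i:5.3f} M: V_DDL(FA) = {fa:.3f} L/g, V_DDL(HA) = {ha:.3f} L/g")
```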
Speciation models can also help to understand differences in the growth of organisms, or adverse effects on organisms, in different chemical solutions. For example, Figure 3 shows that changes in the speciation of boron can be expected only between roughly pH 8 and 10.5, so when you observe a biological difference between pH 7 and 8, it is not likely that boron is the cause. Copper, on the other hand (see Figure 4), does display differences in speciation between pH 7 and 8 and is therefore a more likely cause of different biological behaviour.

Chemical behaviour: field situations

In field situations, the chemistry is usually much more complex than under laboratory conditions. Decomposition of organisms (including plants) results in a huge variety of organic compounds like fulvic acids, humic acids, proteins, amino acids, carbohydrates, etc. Many of these compounds interact strongly with cations, some also with anions or uncharged molecules. In addition, metals easily adsorb to the clay and sand particles that are found everywhere in nature. To make it more complex, suspended matter can contain a high content of organic material, which is also capable of binding cations. For complexation by fulvic and humic acids, Tipping and co-workers have developed a unifying model (Tipping & Hurley, 1992; Tipping, 1994, 1998; Tipping, Lofts & Sonke, 2011). The most recent version, WHAM 7 (Tipping, Lofts & Sonke, 2011), is able to predict cation complexation by fulvic acids and humic acids over a wide range of chemical circumstances, despite the large differences in composition of these acids. This model is now incorporated in several speciation programs. Suspended matter may be of organic or of inorganic character. Inorganic matter usually consists of (hydr)oxides of metals, such as Mn, Fe, Al, Si or Ti, and clay minerals. In practice, the (hydr)oxides and clays occur together, but their mutual proportions may differ dramatically depending on the source. Since the chemical properties of these metal (hydr)oxides and clays are quite different, there is a huge variation in the chemical properties of inorganic suspended matter at different places and different times. As a consequence, modelling interactions between dissolved constituents and suspended inorganic matter is challenging. Only by measuring some properties of the suspended inorganic matter can modelling be applied successfully. For suspended organic matter, the variation in properties is also large and modelling is equally challenging.

Bioavailability

Speciation models are useful in understanding and assessing the bioavailability of metals and other elements in test media. Test media often contain substances like EDTA to keep metals in solution. EDTA complexes are in general not bioavailable, so in addition to keeping metals in solution they also change their bioavailability. Models can calculate the speciation and help you to assess what is actually happening in a test medium. An often forgotten aspect is the influence of CO2. CO2 from the ambient atmosphere can enter a solution, or carbonate in solution (if in excess over the equilibrium concentration) can escape to the atmosphere. The degree to which this exchange takes place influences the pH of the solution as well as the amount of carbonate that stays in solution (carbonates are often poorly soluble). Similarly, in field situations models can help to understand the bioavailability of elements.
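The influence of atmospheric CO2 can be made concrete with a minimal calculation. The sketch below computes the pH of pure water equilibrated with atmospheric CO2 by finding the root of the charge balance, which is, in miniature, the iterative mass-balance solving that speciation programs perform on a much larger scale. The equilibrium constants are rounded textbook values for 25°C.

import math
from scipy.optimize import brentq

# Rounded equilibrium constants (25 degrees C):
KH  = 10 ** -1.47    # Henry's law: [CO2(aq)] = KH * pCO2 (M/atm)
Ka1 = 10 ** -6.35    # CO2(aq) + H2O <-> H+ + HCO3-
Ka2 = 10 ** -10.33   # HCO3- <-> H+ + CO3^2-
Kw  = 10 ** -14.0    # water autoionisation
pCO2 = 10 ** -3.4    # approximate atmospheric CO2 partial pressure (atm)

def charge_balance(h):
    # Net charge of the solution; zero at the equilibrium pH.
    co2 = KH * pCO2              # fixed by equilibrium with the atmosphere
    hco3 = Ka1 * co2 / h
    co3 = Ka2 * hco3 / h
    oh = Kw / h
    return h - hco3 - 2 * co3 - oh

h_eq = brentq(charge_balance, 1e-12, 1e-2)   # numerical root-finding
print(f"pH of pure water in equilibrium with the atmosphere: {-math.log10(h_eq):.2f}")

The result, a pH of about 5.6, illustrates why even "pure" test media shift in pH when they exchange CO2 with the air.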
As stated above, the influence of DOC can nowadays be assessed properly in many situations; the influence of suspended matter remains more difficult to assess. Nevertheless, models can deliver insights in seconds that could otherwise be obtained only with great difficulty.

Models

There are many speciation programs available and several of them are freely available. Usually they take a set of total concentrations as input, plus information about parameters such as pH, redox potential, concentration of organic carbon, etc. The programs then calculate the speciation and present it to the user. The equations cannot be solved analytically, so an iterative procedure is required. Although different numerical approaches are used, most programs construct a set of non-linear mass balance equations and solve them by simple or advanced mathematics. A complication in this procedure is that the equilibrium constants depend on the ionic strength of the solution, while this ionic strength can only be calculated once the speciation is known. The same holds for the precipitation of solids. The procedure is shown in Figure 5.

Limitations

For modelling speciation, thermodynamic data are needed for all relevant equilibrium reactions. For many equilibria this information is available, but not for all, which hampers the usefulness of speciation modelling. In addition, there can be large variations in the thermodynamic values found in the literature, resulting in uncertainty about the correct value. A factor of 10 between the highest and lowest values found is not an exception. This, of course, influences the reliability of speciation calculations. For many equilibria, the thermodynamic data are only available for the standard temperature of 25°C and no information is available to adjust the data to other temperatures, although the effect of temperature can be quite strong. Ionic strength also has a strong impact on equilibrium 'constants'; there are many methods available to correct for the effect of ionic strength, but most of them are at best semi-empirical. Simonin (2017) recently proposed a method with a solid theoretical basis; however, the data required for this method are so far available for only a few complexes. More fundamentally, you should realize that speciation programs typically calculate the equilibrium situation, while some reactions are very slow and, more importantly, nature is in fact a very dynamic system and therefore never at equilibrium. If a system is close to equilibrium, speciation programs can often make a good assessment of the actual situation, but the more dynamic a system is, the more care you should take in trusting the programs' results. Nevertheless, it is good to realise that a chemical system will always move towards the equilibrium situation, while organisms may move it away from equilibrium. Phototrophs are able to move a system away from its equilibrium state, whereas decomposers and heterotrophs generally help to move a system towards its equilibrium state.

Further reading

Stumm, W., Morgan, J.J. (1981). Aquatic Chemistry. John Wiley & Sons, New York.
Morel, F.M.M., Hering, J.G. (1993). Principles and Applications of Aquatic Chemistry. John Wiley & Sons, New York.

3.8.3. Question 1
Briefly describe what speciation models are.

3.8.3. Question 2
What are the factors determining speciation and how can these be accounted for?

3.8.3. Question 3
Give two examples of situations where speciation modelling can be useful.
5.2. Population ecotoxicology in laboratory settings

Author: Michiel Kraak
Reviewers: Nico van den Brink and Matthias Liess

Learning objectives:
You should be able to
• motivate the importance of studying ecotoxicology at the population level.
• name the properties of populations that are unique to this level of biological organisation.
• explain the implications of age- and developmental-stage-specific sensitivities for population responses to toxicant exposure.

Key words: Population ecotoxicology, density, age structure, population growth rate

Introduction

The motivation to study ecotoxicological effects at the population level is that the targets of environmental protection generally are indeed populations, communities and ecosystems. Additionally, several phenomena are unique to this level, including age-specific sensitivity and interactions between individuals. The population level differs from the individual level and below in a less direct link between chemical exposure and the observed effects, due to individual variability and several feedback loops, which loosen the dose-response relationships. Research at the population level is thus characterized by an increasing level of uncertainty if these processes are not properly addressed, and by increasing time and effort. Hence, it is not surprising that effects at the population level are understudied. This is even more the case for investigations at higher levels like meta-populations, communities and ecosystems (see sections on meta-populations, communities and ecosystems). It is thus highly important to obtain data and insights into the mechanisms leading to effects at the population level, keeping in mind the relevant interactions with lower and higher levels of organisation.

Properties of populations are unique to this level of biological organization and include social structure (see section on invertebrate community ecotoxicology), genetic composition (see section on genetic variation), density and age structure. This gives room to age- and developmental-stage-specific sensitivities to chemicals. For almost all species, young individuals like neonates or first instars are markedly more sensitive than adults or late-instar larvae. This difference may run up to three orders of magnitude, and consequently instar-specific sensitivities may vary as much as species-specific sensitivities (Figure 1). Developmental-stage-specific sensitivities of populations have also been reported: exponentially growing daphnid populations exposed to the insecticide fenvalerate recovered much faster than populations that had reached carrying capacity (Pieters and Liess, 2006). Given these age- and developmental-stage-specific sensitivities, the timing of exposure to toxicants in relation to the critical life stage of the organism may seriously affect the extent of the adverse effects, especially in seasonally synchronised populations.

A challenging question in population ecotoxicology is when a population can be considered stable or in steady state. In spite of the various types of oscillation, all populations depicted in Figure 2 can be considered stable. One could even argue that any population that does not go extinct can be considered stable. Hence, a single population could vary considerably in density over time, potentially strongly affecting the impact of exposure to toxicants.
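How differently "stable" populations can behave is easy to demonstrate with a simple discrete-time population model. The sketch below uses the Ricker form of the logistic model with purely illustrative parameter values: depending on the growth rate, the same model settles to a constant density, shows damped oscillations or cycles indefinitely, yet none of these populations goes extinct.

import math

def ricker(n, r, K=100.0):
    # Discrete-time logistic (Ricker) model: N(t+1) = N(t) * exp(r * (1 - N(t)/K)).
    return n * math.exp(r * (1.0 - n / K))

for r in (0.5, 1.9, 2.3):   # illustrative growth rates
    n = 10.0
    densities = []
    for _ in range(30):
        n = ricker(n, r)
        densities.append(round(n))
    print(f"r = {r}: densities in the last generations: {densities[-6:]}")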
When populations suffer from starvation and crowding due to high densities and intraspecific competition, they are markedly more sensitive to toxicants, sometimes even up to a factor of 100 (Liess et al., 2016). This may even lead to unforeseen, indirect effects. The relative population growth rate (individual/individual/day) of high-density populations of chironomids actually increased upon exposure to Cd, because Cd-induced mortality diminished the food shortage for the surviving larvae (Figure 3). Only at the highest Cd exposure did the population growth rate decrease again. For populations at low densities, the anticipated decrease in population growth rate with increasing Cd concentrations was observed. Yet, at all Cd exposure levels the growth rate of low-density populations was markedly higher than that of high-density populations.

Population ecotoxicity tests

In chronic ecotoxicity studies, preferably cohorts of individuals of the same size and age are selected to minimize variation in the outcome of the test, whereas in population ecotoxicology the naturally heterogeneous population composition is taken into account. This does, however, make it harder to interpret the obtained experimental data. Especially when studying populations of higher organisms in the wild, the long time needed to complete the research, due to the long life span of these organisms, imposes practical limitations (see section on wildlife population ecotoxicology). In the laboratory, this can be circumvented by selecting test species with relatively short life cycles, like algae, bacteria and zooplankton. For algae, a three- or four-day test can be considered a multigeneration experiment, and during a 21-day test female daphnids may release up to three clutches of neonates. These population ecotoxicity tests offer the unique possibility to calculate the ultimate population parameter, the population growth rate (r). This is a demographic population parameter, integrating survival, maturity time and reproduction (see section on population modelling). Yet, such chronic experiments are typically performed with cohorts and not with natural populations, making them rather an extension of chronic toxicity tests than true population ecotoxicity tests.

References
Knillmann, S., Stampfli, N.C., Beketov, M.A., Liess, M. (2012). Intraspecific competition increases toxicant effects in outdoor microcosms. Ecotoxicology 21, 1857-1866.
Liess, M., Foit, K., Knillmann, S., Schäfer, R.B., Liess, H.-D. (2016). Predicting the synergy of multiple stress effects. Scientific Reports 6, 32965.
Pieters, B.J., Liess, M. (2006). Population developmental stage determines the recovery potential of Daphnia magna populations after fenvalerate application. Environmental Science and Technology 40, 6157-6162.

5.2. Question 1
Motivate the importance of studying ecotoxicology at the population level and higher.

5.2. Question 2
Name the properties of populations that are unique to this level of biological organisation.

5.2. Question 3
Why is it important to understand the implications of age and developmental stage specific sensitivities for population responses to toxicant exposure?

5.2. Question 4
Explain the results observed in Figure 3.
5.3. Wildlife population ecotoxicology

5.3.1. Forensic investigation into the crash of Asian vulture populations

Author: Nico van den Brink
Reviewers: Ansje Löhr, John Elliott

Learning objectives:
You should be able to
• describe how forensic approaches are used in ecotoxicology
• critically reflect on the uncertainty of prospective risk assessment of new chemicals

Keywords: Pharmaceuticals, uncertainty, population decline, retrospective monitoring

Introduction

Historically, vulture populations in India, Pakistan and Nepal were too numerous to be effectively counted. In the mid-1990s, numbers in northern India started to decline catastrophically, as evidenced in the Keoladeo National Park (Figure 1, Prakash 1999). Further monitoring of population numbers indicated unprecedented declines of 90-99% from the mid-1990s to the early 2000s for Oriental white-backed vultures (Gyps bengalensis), long-billed vultures (Gyps indicus) and also slender-billed vultures (Gyps tenuirostris) (Prakash 1999). In the following years, similar declines were observed in Pakistan and Nepal, indicating that the causative factor was not restricted to a specific country or area. Total losses of vultures were estimated to be in the order of tens of millions. The first ideas about potential causes of these declines focussed on known infectious diseases or the possibility of a new disease to which the vulture populations had not been previously exposed. However, no diseases were identified that had shown similar rates of mortality in other bird species. Vultures are also considered to have a highly developed immune response, given their diet of scavenging dead and often decaying animals. To obtain insights, initial interdisciplinary ecological studies were performed to provide a basic understanding of background mortality in the species affected. These studies started in large colonies in Pakistan, but were literally races against time, as some populations had already decreased by 50%, while others had already been extirpated (Gilbert et al., 2006). Despite these difficulties, it was determined that mortalities were occurring principally in adult birds and not at the nestling phase. More in-depth studies were performed to discriminate abnormal mortality from natural mortality of, for instance, juvenile fledglings, which may be high in summer, just after fledging. After scrutinising the data, no seasonality was observed in the abnormally high mortality, indicating that it was not related to breeding activities. The investigations also revealed another important factor: the vultures were predominantly feeding on domestic livestock, while telemetric observations, using transmitters to assess flight and activity patterns of the birds, showed that individual birds could range over very long distances to reach carcasses of livestock (up to over 100 km). Since no apparent causes of mortality were obtained in the ecological studies, more diagnostic investigations were started, focussing on infectious diseases and carried out in Pakistan (Oaks & Watson, 2011). However, that was easier said than done. Since large numbers of birds died, it was deemed essential to establish the logistics necessary to perform the diagnostics, including post-mortems, on all birds found dead. Although high numbers of birds died, hardly any fresh carcasses were available, due to the remoteness of some areas, the presence of other scavengers and the often hot conditions which fostered rapid decay of carcasses.
Post-mortems on a selection of birds revealed that the birds suspected of abnormal mortality all suffered from visceral gout, a white pasty smear covering tissues in the body, including the liver and heart. In birds, this is indicative of kidney failure. Birds metabolise nitrogen into uric acid (mammals into urea), which is normally excreted with the faeces. In case of kidney failure, however, the uric acid is not excreted but deposited in the body. Further inspections of more birds confirmed this, and the working hypothesis became that the increased mortality was caused by a factor inducing kidney failure in the birds. Building on the establishment of kidney failure as the causative factor, histological and pathological studies were performed on several birds found dead. These revealed that in birds with visceral gout the kidney lesions were severe, with acute renal tubular necrosis (Oaks et al., 2004), confirming the kidney-failure hypothesis. However, no indications of inflammatory cell infiltration were apparent, ruling out the possibility of infectious diseases. Those observations shifted the focus to potential toxic effects, although no previous case was known of a chemical causing such severe and extremely acute effects. First the usual suspects for kidney failure were addressed, like trace metals (cadmium, lead), but also other acutely toxic chemicals like organophosphorus and carbamate pesticides and organochlorine chemicals. None of these chemicals occurred at levels of concern, and they were ruled out. That left the researchers without leads to any clear causative factor, even after years of study! Some essential pieces of information were available, however: 1) acute renal failure seemed associated with the mortality; 2) no infectious agent was likely to be causative, pointing to chemical toxicity; 3) since exposure was likely to be via the diet, the chemical exposure needed to be related to livestock (the predominant diet of the vultures), pointing to compounds present in livestock such as veterinary products; 4) the widespread use of veterinary chemicals had started relatively recently. After a survey of veterinarians in the affected areas of Pakistan, a single veterinary pharmaceutical matched the criteria: diclofenac. This is a non-steroidal anti-inflammatory drug (NSAID), long used in human medicine but only introduced in the 1990s as a veterinary pharmaceutical in India, Pakistan and surrounding countries. NSAIDs are known nephrotoxic compounds, although no cases were known with such acute and severe impacts. Chemical analyses of the kidneys of vultures confirmed that kidneys of birds with visceral gout contained diclofenac, while birds without signs of visceral gout did not. Kidneys from birds that showed visceral gout and that died in captivity while being studied were also positive for diclofenac, as was the meat they had been fed. This all indicated diclofenac toxicity as the cause of the mortality, which was validated in exposure studies dosing captive vultures with diclofenac. The Gyps vulture species appeared extremely sensitive to diclofenac, showing toxic effects at 1% of the therapeutic dose for mammalian livestock species. The mechanism underlying this sensitivity has yet to be explained, but initially it was also unclear why the populations were impacted to such a severe extent. That was found to be related to the feeding ecology of the vultures. They were shown to fly long distances to search for carcasses, and as a result they show very aggregated feeding, i.e.
a lot of birds on a single carcass (Green et al., 2004). Hence, a single contaminated carcass may expose an unexpectedly large part of the population to diclofenac. A combination of extreme sensitivity, foraging ecology and human chemical use thus caused the onset of extreme population declines of some Asian vulture species of the Gyps genus, the so-called "Old World vultures". This case demonstrates the challenges involved in attempting to disentangle the stressors causing very apparent population effects, even in iconic species like vultures. It took several years of work by different groups of excellent researchers to perform the necessary research and forensic studies (under sometimes difficult conditions). A lesson learned is that even for compounds that have been used for a long time and are thought to be well understood, unexpected effects may become evident. There is consensus that such effects may not be covered in current risk assessments of chemicals prior to their use and application, but this also draws attention to the need for continued post-market monitoring of organisms for potential exposure and effects. It should be noted that even nowadays, although the use of diclofenac is prohibited in large parts of Asia, continued use still occurs due to its effectiveness in treating livestock and its low costs, making it available to farmers. Nevertheless, populations of Gyps vultures have been shown to recover slowly.

References
Green, R.E., Newton, I.A., Shultz, S., Cunningham, A.A., Gilbert, M., Pain, D.J., Prakash, V. (2004). Diclofenac poisoning as a cause of vulture population declines across the Indian subcontinent. Journal of Applied Ecology 41, 793-800.
Gilbert, M., Watson, R.T., Virani, M.Z., Oaks, J.L., Ahmed, S., Chaudhry, M.J.I., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2006). Rapid population declines and mortality clusters in three Oriental white-backed vulture Gyps bengalensis colonies in Pakistan due to diclofenac poisoning. Oryx 40, 388-399.
Oaks, J.L., Gilbert, M., Virani, M.Z., Watson, R.T., Meteyer, C.U., Rideout, B.A., Shivaprasad, H.L., Ahmed, S., Chaudhry, M.J.I., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2004). Diclofenac residues as the cause of vulture population decline in Pakistan. Nature 427, 630-633.
Oaks, J.L., Watson, R.T. (2011). South Asian vultures in crisis: Environmental contamination with a pharmaceutical. In: Elliott, J.E., Bishop, C.A., Morrissey, C.A. (Eds.) Wildlife Ecotoxicology. Springer, New York, NY. pp. 413-441.
Prakash, V. (1999). Status of vultures in Keoladeo National Park, Bharatpur, Rajasthan, with special reference to population crash in Gyps species. Journal of the Bombay Natural History Society 96, 365-378.

5.3.1. Question 1
Which ecological traits make vulture populations extremely vulnerable to exposure to chemicals like diclofenac?

5.3.1. Question 2
What indirect effects do you expect from the chemically induced crashes of the Asian vulture populations?

5.3.2. Otters, to PCB or not to PCB?

Author: Nico van den Brink
Reviewers: Ansje Löhr, Michiel Kraak, Pim Leonards, John Elliott

Learning objectives:
You should be able to
• explain the derivation of toxic threshold levels by extrapolating between species
• critically analyse the implications of risk assessment for the conservation of species

Keywords: Threshold levels, read across, species-specific sensitivity

The European otter (Lutra lutra) is a lively species which historically ranged all over Europe.
In the second half of the last century, populations declined in North-West Europe, and at the end of the 1980s the species was declared extinct in the Netherlands. Several factors contributed to these declines; exposure to polychlorinated biphenyls (PCBs) and other contaminants was considered a prominent cause. PCBs can have different effects on organisms, primarily Ah-receptor mediated (see section on Receptor interactions). In order to assess the actual contribution of chemical exposure to the extinction of the otters, and the potential for population recovery, it is essential to gain insight into the ratios between exposure levels and risk thresholds. However, since otters are rare and endangered, limited toxicological data are available on such thresholds. Most toxicological data are therefore inferred from research on another mustelid species, the mink (Mustela vison) (Basu et al., 2007), a high-trophic-level, piscivorous species often used in toxicological studies. Several studies show that mink are quite sensitive to PCBs, showing e.g. effects on the length of the baculum of juveniles (Harding et al., 1999) and induction of hepatic enzyme systems and jaw lesions (Folland et al., 2016). Based on such studies, several threshold levels for otters were derived, depending on the toxic endpoints addressed. Based on the number of offspring and kit survival, EC50s of approximately 1.2 to 2.4 mg/kg wet weight were derived (Leonards et al., 1995), while for decreases in vitamin A levels due to PCB exposure, a safety threshold of 4 mg/kg in blood was derived (Murk et al., 1998). To re-establish a viable population of otters in the Netherlands, a programme was established in the mid-1990s to re-introduce otters into the Netherlands, including the monitoring of PCBs and other organic contaminants in the otters. Otters were captured in e.g. Belarus, Sweden and the Czech Republic. Initial results showed that these otters already contained <1 mg/kg PCBs on a wet weight basis (van den Brink & Jansman, 2006), which was considered to be below the threshold levels mentioned before. Individual otters were radio-tagged, and most were recovered later as victims of traffic incidents. Over time, PCB concentrations had changed, although not in the same direction for all specimens. Females with high initial concentrations showed declining concentrations, due to lactation, while in male specimens most concentrations increased over time, as would be expected. Nevertheless, concentrations were in the range of the threshold levels, hence risks of effects could not be excluded. Since the re-introduction programme was established in a relatively low-contaminated area of the Netherlands, questions were raised about re-introduction plans for more contaminated areas, like the Biesbosch, where contaminants may still affect otters. To assess the potential risks of PCB contamination in e.g. the Biesbosch for otter populations, a modelling study was performed in which concentrations in fish from the Biesbosch were translated into concentrations in otters. Concentrations of PCBs in the fish differed between species (lipid-rich fish such as eel showing greater concentrations than lean white fish), with the size of the fish (larger fish showing greater concentrations than smaller fish) and between locations within the Biesbosch. Using biomagnification factors (BMFs) specific for each PCB congener (see section on Complex mixtures), total PCB concentrations in the lipids of otters were calculated based on the fish concentrations and different compositions of the fish diets of the otters (e.g.
white fish versus eel, larger fish versus smaller fish, different locations). Different diets resulted in different modelled PCB concentrations in the otters; however, all modelled concentrations were above the threshold levels mentioned earlier (van den Brink and Sluiter, 2015). This indicated that risks of effects on otters could not be ruled out, and led to the notion that release of otters in the Biesbosch would not be the best option. However, a major issue in such a risk assessment is whether the threshold levels derived from mink are applicable to otters. The resulting threshold levels for otters are rather low, and exceedance of these concentrations has been noticed in several studies. For instance, in well-thriving Scottish otter populations, PCB levels greater than 50 mg/kg lipid weight have been recorded in livers (Kruuk & Conroy, 1996). This is an order of magnitude higher than the threshold levels, which would indicate that even at higher concentrations, at which effects are to be expected based on mink studies, populations of free-ranging otters do not seem to be adversely affected. Based on this, the applicability of mink-derived threshold levels to otters is open to discussion. The otter case shows that the derivation of ecologically relevant toxicological threshold levels may be difficult when a species, like the otter, is not regularly used in toxicity tests. The application of data from a related species, in this case the American mink, may however be limited by differences in sensitivity. In this case, this could result in too conservative assessments of the risks, although this may work out differently for other combinations of species. The read-across of information from closely related species should therefore be performed with great care.

References
Basu, N., Scheuhammer, A.M., Bursian, S.J., Elliott, J., Rouvinen-Watt, K., Chan, H.M. (2007). Mink as a sentinel species in environmental health. Environmental Research 103, 130-144.
Harding, L.E., Harris, M.L., Stephen, C.R., Elliott, J.E. (1999). Reproductive and morphological condition of wild mink (Mustela vison) and river otters (Lutra canadensis) in relation to chlorinated hydrocarbon contamination. Environmental Health Perspectives 107, 141-147.
Folland, W.R., Newsted, J.L., Fitzgerald, S.D., Fuchsman, P.C., Bradley, P.W., Kern, J., Kannan, K., Zwiernik, M.J. (2016). Enzyme induction and histopathology elucidate aryl hydrocarbon receptor-mediated versus non-aryl hydrocarbon receptor-mediated effects of Aroclor 1268 in American mink (Neovison vison). Environmental Toxicology and Chemistry 35, 619-634.
Kruuk, H., Conroy, J.W.H. (1996). Concentrations of some organochlorines in otters (Lutra lutra L.) in Scotland: Implications for populations. Environmental Pollution 92, 165-171.
Leonards, P.E.G., De Vries, T.H., Minnaard, W., Stuijfzand, S., Voogt, P.D., Cofino, W.P., Van Straalen, N.M., Van Hattum, B. (1995). Assessment of experimental data on PCB-induced reproduction inhibition in mink, based on an isomer- and congener-specific approach using 2,3,7,8-tetrachlorodibenzo-p-dioxin toxic equivalency. Environmental Toxicology and Chemistry 14, 639-652.
Murk, A.J., Leonards, P.E.G., Van Hattum, B., Luit, R., Van der Weiden, M.E.J., Smit, M. (1998). Application of biomarkers for exposure and effect of polyhalogenated aromatic hydrocarbons in naturally exposed European otters (Lutra lutra). Environmental Toxicology and Pharmacology 6, 91-102.
Van den Brink, N.W., Jansman, H.A.H. (2006).
Applicability of spraints for monitoring organic contaminants in free-ranging otters (Lutra lutra). Environmental Toxicology & Chemistry 25, 2821-2826.

5.3.2. Question 1
Name three reasons why the assessment of the risks of PCBs to otters is relatively complicated.

5.3.2. Question 2
How is it possible that, although toxicity threshold levels are exceeded in some otter populations, for example in Scotland, these populations seem to thrive really well?
5.4. Trait-based approaches

Author: Paul J. Van den Brink
Reviewers: Nico van den Brink, Michiel Kraak, Alexa Alexander-Trusiak

Learning objectives:
You should be able to
• describe how the characteristics (traits) of species determine their sensitivity, recovery and the propagation of effects to higher levels of biological organisation.
• explain the concept of response and effect traits.
• explain how traits-based approaches can be implemented in environmental risk assessment.

Keywords: Sensitivity, levels of biological organisation, species traits, recovery, indirect effects

Introduction

It is impossible to assess the sensitivity of all species to all chemicals. Risk assessment therefore needs methods to extrapolate the sensitivity of a limited number of tested species to all species present in the environment. Statistical approaches, like the species sensitivity distribution concept, perform this extrapolation by fitting a statistical distribution (e.g. a log-normal distribution) to a selected set of sensitivity data (e.g. 96h-EC50 data) in order to obtain a distribution of the sensitivity of all species. From this distribution, a threshold value associated with the lower end of the distribution can be chosen and used as a protective threshold value (Figure 1). The disadvantage of this approach is that it does not include mechanistic knowledge of what determines species' sensitivity, and that it uses the taxonomy of species rather than their characteristics. To overcome these and other problems associated with a taxonomy-based approach (see Van den Brink et al., 2011 for a review), traits-based bioassessment approaches have been developed for assessing the effects of chemicals on aquatic ecosystems. In traits-based bioassessment approaches, species are not represented by their taxonomy but by their traits. A trait is a phenotypic or ecological characteristic of an organism, usually measured at the individual level but often applied as the average state/condition of a species. Examples of traits are body size, feeding habits, food preference, mode of respiration and lipid content. Traits describe the physical characteristics, ecological niche and functional role of a species within the ecosystem. The recognized strengths of traits-based bioassessment approaches include: (1) traits add mechanistic and diagnostic knowledge, (2) traits are transferable across geographies, (3) traits require no new sampling methodology, as data that are currently collected can be used, and (4) the use of traits has a long-standing tradition in ecology and can supplement taxonomic analysis. When traits are used to study the effects of chemical stressors on ecosystem structure (community composition) and function (e.g. nutrient cycling), it is important to make a distinction between response and effect traits (Figure 2). Response traits are traits that determine the response of the species to exposure to a chemical. An example of a response trait is the size-related surface area of an organism. Smaller organisms have relatively large surface areas, because their surface-to-volume ratio is higher than that of larger animals. The uptake rate of the chemical stressor is therefore generally higher in smaller animals than in larger ones (Rubach et al., 2012). Through effect traits, organisms influence their surrounding environment, altering the structure and functioning of the ecosystem. An example of an effect trait is the food preference of an organism.
For instance, if the small (response trait) and therefore sensitive organisms happen to be herbivorous (effect trait), an increase in algal biomass may be expected when these organisms are affected (Van den Brink, 2008). So, to be able to predict ecosystem-level responses, it is important to know the (cor)relations between response and effect traits, as traits are not independent of each other but can be linked phylogenetically or mechanistically, thus forming trait syndromes (Van den Brink et al., 2011).

Predictive models for sensitivity using response traits

One of the holy grails of ecotoxicology is to find out which species traits make one species more sensitive to a chemical stressor than another. In the past, two approaches have been used to assess the (cor)relationships between species traits and their sensitivity: one based on empirical correlations between species' traits and their sensitivity as represented by EC50s (Rico and Van den Brink, 2015), and one based on a more mechanistic approach using toxicokinetic-toxicodynamic experiments and models (Rubach et al., 2012). Toxicokinetic-toxicodynamic models (TKTD models) simulate the time course of the processes leading to toxic effects on organisms (Jager et al., 2011). Toxicokinetics describe what an individual does with the chemical and, in their simplest form, include the processes of uptake and elimination, thereby translating an external concentration of a toxicant to an internal body concentration over time (see section on Toxicokinetics and Bioaccumulation); a minimal numerical sketch of such a model is given at the end of this subsection. Toxicodynamics describe what the chemical does to the organism, linking the internal concentration to the effect at the level of the individual organism over time (e.g. mortality) (Jager et al., 2011) (see sections on Toxicokinetics and Bioaccumulation and on Toxicodynamics and Molecular Interactions). Rubach et al. (2012) showed that almost 90% of the variation in uptake rates and 80% of the variation in elimination rates of an insecticide in a range of 15 freshwater arthropod species could be explained by four species traits. These traits were: i) surface area (without gills), ii) detritivorous feeding, iii) using atmospheric oxygen and iv) phylogeny in the case of uptake, and i) thickness of the exoskeleton, ii) complete sclerotization, iii) using dissolved oxygen and iv) % lipid of dry weight in the case of elimination. For most of these traits, a mechanistic hypothesis of how they influence uptake and elimination can be formulated (Rubach et al., 2012). For instance, a higher surface-area-to-volume ratio increases the uptake of the chemical, so uptake is expected to be higher in small animals than in larger animals. This shows that it is possible to construct mechanistic models that can predict the toxicokinetics of chemicals in species, and thus the sensitivity of species to chemicals, based on their traits.

The use of effect traits to model recovery and indirect effects

Traits determining the way organisms within an ecosystem react to chemical stress are related to the intrinsic sensitivity of the organisms (response traits) on the one hand, and to their recovery potential and food-web relations (effect traits) on the other (Van den Brink, 2008). Recovery of aquatic invertebrates is, for instance, determined by traits like the number of life cycles per year, the presence of insensitive life stages like resting eggs, dispersal ability and having an aerial life stage (Gergs et al., 2011) (Figure 3).
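As announced above, here is a minimal sketch of the one-compartment toxicokinetic model that underlies this reasoning. The rate constants are hypothetical; the point is that a higher uptake rate constant, as expected for a small animal with a large relative surface area, translates into a higher internal concentration at the same exposure.

import math

def internal_concentration(t, c_water, ku, ke):
    # One-compartment toxicokinetics: dC_int/dt = ku * C_water - ke * C_int.
    # Analytical solution for a constant exposure concentration, C_int(0) = 0.
    return c_water * ku / ke * (1.0 - math.exp(-ke * t))

c_water = 1.0   # exposure concentration (ug/L), illustrative
# Hypothetical rate constants: the 'small' species has a higher uptake rate
# constant (ku, L/kg/d) than the 'large' one; same elimination rate (ke, 1/d).
for label, ku, ke in (("small species", 200.0, 0.5), ("large species", 50.0, 0.5)):
    c7 = internal_concentration(7.0, c_water, ku, ke)
    print(f"{label}: internal concentration after 7 d = {c7:.0f} ug/kg "
          f"(steady state {c_water * ku / ke:.0f} ug/kg)")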
Besides recovery, effect traits also determine how individual-level effects propagate to higher levels of biological organisation, like the community or ecosystem level. For instance, when Daphnia are affected by a chemical, their trait related to food preference (algae) will ensure that, under nutrient-rich conditions, the algae are no longer subject to top-down control and will increase in abundance. The latter effects are called indirect effects: they are not a direct result of the exposure to the toxicant, but an indirect one mediated through competition, food-web relationships, etc.

References
Gergs, A., Classen, S., Hommen, U. (2011). Identification of realistic worst case aquatic macroinvertebrate species for prospective risk assessment using the trait concept. Environmental Science and Pollution Research 18, 1316-1323.
Jager, T., Albert, C., Preuss, T.G., Ashauer, R. (2011). General unified threshold model of survival - a toxicokinetic-toxicodynamic framework for ecotoxicology. Environmental Science and Technology 45, 2529-2540.
Rico, A., Van den Brink, P.J. (2015). Evaluating aquatic invertebrate vulnerability to insecticides based on intrinsic sensitivity, biological traits and toxic mode-of-action. Environmental Toxicology and Chemistry 34, 1907-1917.
Rubach, M.N., Baird, D.J., Boerwinkel, M.-C., Maund, S.J., Roessink, I., Van den Brink, P.J. (2012). Species traits as predictors for intrinsic sensitivity of aquatic invertebrates to the insecticide chlorpyrifos. Ecotoxicology 21, 2088-2101.
Van den Brink, P.J. (2008). Ecological risk assessment: from book-keeping to chemical stress ecology. Environmental Science and Technology 42, 8999-9004.
Van den Brink, P.J., Alexander, A., Desrosiers, M., Goedkoop, W., Goethals, P., Liess, M., Dyer, S. (2011). Traits-based approaches in bioassessment and ecological risk assessment: strengths, weaknesses, opportunities and threats. Integrated Environmental Assessment and Management 7, 198-208.

5.4. Question 1
Which two traits may determine the sensitivity of invertebrates to insecticides?

5.4. Question 2
Will two daphnids which are equal except in their size be equally sensitive to a chemical?

5.4. Question 3
What is the difference between a response and an effect trait?

5.4. Question 4
Which traits are of importance for the recovery of impacted populations?

5.4. Question 5
How will the insecticide-induced death of water fleas propagate to the ecosystem level under eutrophic circumstances, and what are these types of effects called?
5.5. Population models

Authors: A. Jan Hendriks and Nico van Straalen
Reviewers: Aafke Schipper, John D. Stark and Thomas G. Preuss

Learning objectives:
You should be able to
• explain the assumptions underlying exponential and logistic population modelling
• calculate the intrinsic population growth rate from a given set of demographic data
• outline the conclusions that may be drawn from population modelling in ecotoxicology
• indicate the possible contribution of population models to chemical risk assessment

Keywords: intrinsic rate of increase, carrying capacity, exponential growth

Introduction

Ecological risk assessment of toxicants usually focuses on the risks run by individuals, by comparing exposures with no-effect levels. However, in many cases it is not the protection of individual plants or animals that is of interest, but the protection of a viable population of a species in an ecological context. Risk assessment generally does not take into account the quantitative dynamics of populations and communities. Yet, understanding and predicting the effects of chemicals at levels beyond that of individuals is urgently needed, for several reasons. First, we need to know whether quality standards extrapolated from toxicity tests are sufficiently, but not overly, protective at the population level. Second, responses of isolated, homogeneous cohorts in the laboratory may differ from those of interacting, heterogeneous populations in the field. Third, to set the right priorities in management, we need to know the relative and cumulative effects of chemicals in relation to other environmental pressures. Ecological population models for algae, macrophytes, aquatic invertebrates, insects, birds and mammals have been widely used to address the risks of potentially toxic chemicals; until recently, however, these models were only rarely used in the regulatory risk assessment process, due to a lack of connection between model output and risk assessment needs (Schmolke et al., 2010). Here, we will sketch the basic principles of population dynamics for environmental toxicology applications.

Exponential growth

Ecological textbooks usually start their chapter on population ecology by introducing exponential and logistic growth. Consider a population of size N. If resources are unlimited, and the per capita birth rate (b) and death rate (d) are constant in a population closed to migration, the number of individuals added to the population per time unit (dN/dt) can be written as:

dN/dt = (b - d) N

or

dN/dt = r N

where r = b - d is called the intrinsic rate of increase. This differential equation can be solved with boundary condition N(0) = N0 to yield:

N(t) = N0 e^(rt)

Since toxicants will affect either reproduction or survival, or both, they will also affect the exponential growth rate (Figure 1a). This suggests that r can be considered a measure of population performance under toxic stress. But rather than from observed population trajectories, r is usually estimated from life-history data. We know from basic demographic theory that any organism with "time-invariant" vital rates (that is, fertility and survival may depend on age, but not on time) will be growing exponentially at rate r. The intrinsic rate of increase can be derived from age-specific survival and fertility rates using the so-called Euler-Lotka equation, which reads:

∫ e^(-rx) l(x) m(x) dx = 1  (integrated over ages x from 0 to xm)

in which x is age, xm the maximal age, l(x) survivorship from age zero to age x, and m(x) the number of offspring produced per time unit at age x.
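The Euler-Lotka equation is easily solved numerically. The sketch below uses a discrete-age version (a sum over age classes) and bisection to find the value of r that makes the sum equal to 1; the life-table values are hypothetical.

import math

def euler_lotka_sum(r, l, m):
    # Discrete-age Euler-Lotka sum: sum over x of e^(-r*x) * l(x) * m(x).
    return sum(math.exp(-r * x) * lx * mx
               for x, (lx, mx) in enumerate(zip(l, m)))

def intrinsic_rate(l, m, lo=-2.0, hi=2.0, tol=1e-9):
    # Bisection: the sum decreases monotonically with r, so if it is
    # still above 1 the trial value of r must be raised, and vice versa.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if euler_lotka_sum(mid, l, m) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical life table per age class x = 0, 1, 2, ...
l = [1.0, 0.8, 0.6, 0.4, 0.2]   # survivorship l(x)
m = [0.0, 2.0, 3.0, 3.0, 1.0]   # fecundity m(x)
r = intrinsic_rate(l, m)
print(f"intrinsic rate of increase r = {r:.4f} (lambda = {math.exp(r):.4f})")

In a life-table response experiment, the same calculation would be repeated for the l(x) and m(x) observed at each exposure concentration, yielding r as a function of exposure.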
Unfortunately, this equation does not allow a simple direct derivation of r: as the sketch above illustrates, r must be obtained by iteration, and the correct value is the one that, when combined with the l(x) and m(x) data, makes the integral equal to 1. Due to this complication, approximate approaches are often applied. For example, in many cases a reasonably good estimate of r can be obtained from the age at first reproduction α, the survival to first reproduction S, and the reproductive output m, according to the following formula:

r ≈ ln(S m) / α

This is due to the fact that for many animals in the environment, especially those with high reproductive output and low juvenile survivorship, age at first reproduction is the dominant variable determining population growth (Forbes and Calow, 1999). The classical demographic modelling approach, including the Euler-Lotka equation, considers time as a continuous variable and solves the equations by calculus. However, there is an equivalent formalism based on discrete time, in which population events are assumed to take place only at equidistant moments. The vital rates are then summarized in a so-called Leslie matrix, a table of survival and fertility scores for each age class, organized in such a way that, when multiplied by the age distribution at any moment, the age distribution at the following time point is obtained. This type of modelling lends itself more easily to computer simulation. The outcome is much the same: if the Leslie matrix is time-invariant, the population will grow each time step by a factor λ, which is related to r as ln λ = r (λ = 1 corresponds to r = 0). Mathematically speaking, λ is the dominant eigenvalue of the Leslie matrix. The advantage of the discrete-time version is that λ can be more easily decomposed into its component parts, that is, the life-history traits that are affected by toxicants (Caswell, 1996). The demographic approach to exponential growth has been applied numerous times in environmental toxicology, most often in studies of water fleas (Suhett et al., 2015) and insects (Stark and Banks, 2003). The tests are called "life-table response experiments" (see section on Population ecotoxicology in a laboratory setting). The investigator observes the effects of toxicants on age-specific survival and fertility, and calculates r as a measure of population performance for each exposure concentration. An example is given in Figure 2, derived from a study by Barata et al. (2000). Forbes and Calow (1999) concluded that the use of r in ecotoxicology adds ecological relevance to the analysis, but that it does not necessarily provide a more or less sensitive endpoint: r is as sensitive as the vital rates underlying its estimation. Hendriks et al. (2005) postulated that r should show a near-linear decrease with the concentration of a chemical, scaled to the LC50 (Figure 3). This relationship was confirmed in a meta-analysis of 200 laboratory experiments, mostly concerning invertebrate species (Figure 3). Anecdotal underpinning for large vertebrates comes from field cases where pollution limits population development.

Logistic growth

As exponentially growing populations are obviously rare, models that include some form of density dependence are more realistic. One common approach is to assume that the birth rate b decreases with density due to the increasing scarcity of resources.
The simplest assumption is a linear decrease of the birth rate with N, which leads to the well-known logistic growth equation:

dN/dt = r N (1 - N/K)

where K is the carrying capacity, the density at which births and deaths balance. The question is whether the parameters of the logistic growth equation can be used to measure population performance, like in the case of exponential growth. Practical application is limited, because the carrying capacity is difficult to measure under natural and contaminated conditions. Many field populations of arthropods, for example, fluctuate widely due to predator-prey dynamics, and hardly ever reach their carrying capacity within a growing season. An experimental study on the springtail Folsomia candida (Noël et al., 2006) showed that zinc in the diet did not affect the carrying capacity of contained laboratory populations, although there were several interactions below K that were influenced by zinc, including hormesis (growth stimulation by low doses of a toxicant) and Allee effects (loss of growth potential at low density due to a lower encounter rate). Density dependence is expected to act as a buffering mechanism at the population level, because toxicity-induced population decline diminishes competition; however, the effects very much depend on the details of population regulation. This was demonstrated in a model for peregrine falcons exposed to DDE and PBDEs (Schipper et al., 2013). While the equilibrium size of the population declined under toxic exposure, the probability of individual birds finding a suitable territory increased. At the same time, however, the number of non-breeding birds shifting to the breeding stage became limiting, and this resulted in a strong decrease in the equilibrium number of breeders.

Mechanistic effect models

To enhance the potential for application of population models in risk assessment, more ecological details of the species under consideration must be included, e.g. the effects of dispersal, abiotic factors, predators and parasites, landscape structure and many more. A further step is to track the physiology and ecology of each individual in the population. This is done in the dynamic energy budget (DEB) modelling approach developed by Kooijman et al. (2009). By including such details, a model becomes more realistic, and more precise predictions can be made of the effects of toxic exposures. These types of models are generally called "mechanistic effect models" (MEMs). They allow a causal link between the protection goal, a scenario of exposure to toxicants and the adverse population effects generated as model output (Hommen et al., 2015). The European Food Safety Authority (EFSA) in 2014 issued an opinion paper containing detailed guidelines on the development of such models and on how to adjust them to be useful in the risk assessment of plant protection products.
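To round off this section, the discrete-time (Leslie matrix) formalism described above can be explored in a few lines of code. The sketch below builds a Leslie matrix for a hypothetical life table with three age classes and obtains λ as its dominant eigenvalue; halving fertility, as a toxicant might do, brings λ down towards 1.

import numpy as np

def dominant_lambda(fertility, survival):
    # Build the Leslie matrix: fertilities on the top row,
    # survival probabilities on the sub-diagonal.
    n = len(fertility)
    L = np.zeros((n, n))
    L[0, :] = fertility
    for i, s in enumerate(survival):
        L[i + 1, i] = s
    # For a Leslie matrix the dominant eigenvalue is real and positive.
    return max(np.linalg.eigvals(L).real)

fert = [0.0, 2.0, 4.0]   # hypothetical fertility per age class
surv = [0.6, 0.4]        # hypothetical survival to the next age class
print(f"control:          lambda = {dominant_lambda(fert, surv):.3f}")
print(f"fertility halved: lambda = {dominant_lambda([f / 2 for f in fert], surv):.3f}")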
5.6. Metapopulations

Author: Nico van den Brink
Reviewers: Michiel Kraak, Heikki Setälä

Learning objectives:
You should be able to
• explain the relevance of meta-population dynamics for the environmental risks of chemicals
• name the important mechanisms linking meta-populations to chemical risks

Implications of meta-population dynamics for the risks of environmental chemicals

Populations can be defined as groups of organisms of the same species which live in a specific geographical area and interact and breed with each other. At a higher level one can define meta-populations, which can be described as sets of spatially separated populations that interact to a certain extent. The populations may function separately, but organisms can migrate between them. Generally, the individual populations occur in more or less favourable habitat patches, which may be separated by less favourable areas. However, in between populations good habitats may also occur, where populations have not yet established or where the local populations have gone extinct. Exchange between populations within a meta-population depends on i) the distances between the individual populations, ii) the quality of the habitat between the populations, e.g. the availability of so-called stepping stones: areas where organisms may survive for a while, but which are too small or of too low habitat quality to support a local population, and iii) the dispersal potential of the species. Due to the interactions between the different populations within a meta-population, chemicals may affect species at levels higher than the (local) population, including at non-contaminated sites. An important effect of chemicals at the meta-population scale is that local populations may act as a source or sink for other populations within the meta-population. When a chemical affects the survival of organisms in a local population, the local population densities decline. This may increase the immigration of organisms from neighbouring populations within the meta-population, while the decreased local densities will reduce emigration, resulting in a net influx of organisms into the contaminated site. This is the case when organisms do not sense the contaminants, or when the contaminants do not alter the habitat quality for the organisms. When the loss of organisms from the neighbouring source populations is too high to be compensated by their own reproduction, population densities in those populations may decline, even at the non-contaminated source sites. Consequently, local populations at contaminated sites may act as a sink for other populations within the meta-population, and chemicals may thus have a much broader impact than just a local one. Conversely, when the impact on the local population is relatively small, or the chemical stress is not chronic, meta-population dynamics may also mitigate local chemical stress. Population-level impacts of chemicals may be minimised by the influx of organisms from neighbouring populations, potentially restoring the population densities to the levels from before the chemical stress. Such recovery depends on the extent and duration of the chemical impact on the local population and on the capacity of the other populations to replenish the loss of organisms in the affected population. Through migration between populations, meta-population dynamics may thus alter the extent to which contaminants affect local populations. However, chemicals may also affect the total carrying capacity of the meta-population as a whole.
This can be illustrated by the modelling approach developed by Levins in the late 1960s (Levins, 1969). A first assumption in this model is that not all patches that could potentially carry a local population are actually occupied. Let F be the fraction of occupied patches (1-F being the fraction not occupied). Local populations have an average chance of extinction e (day^-1, when calculating on a daily basis), while non-occupied patches have a chance c (day^-1) of being colonized from the occupied patches. The daily change in the fraction of occupied patches is therefore:

dF/dt = c F (1 - F) - e F

In this formula, c F (1-F) equals the fraction of non-occupied patches that are being occupied from the occupied patches, while e F equals the fraction of occupied patches that goes extinct during the day. From this, the carrying capacity (CC) of the meta-population, i.e. the equilibrium fraction of occupied patches, can be calculated as:

CC = 1 - e/c

while the growth rate (GR) of the meta-population (at low patch occupancy) can be calculated as:

GR = c - e

In case chemicals increase the extinction risk (e) or decrease the chance of establishment in a new patch (c), this will affect the CC (which will decrease, because e/c will increase) as well as the GR (which will decrease and may even drop below 0). However, this model uses average coefficients, which may not be directly applicable to individual contaminated sites within a meta-population. More (complex) recent approaches include the possibility to use parameters specific to each local population; moreover, such models can include stochasticity, increasing their environmental relevance. Besides affecting populations directly in their habitats, chemicals may also affect the areas between habitat patches. This may affect the potential of organisms to migrate between patches, decreasing the chance that organisms repopulate non-occupied patches, i.e. decreasing c, and as such both the CC and the GR. Hence, in a meta-population setting, chemicals may affect long-term meta-population dynamics even from non-preferred habitat.

References
Levins, R. (1969). Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America 15, 237-240.

5.6. Question 1
What are the drivers of recovery of local populations that are affected by a chemical stressor in a meta-population setting?

5.6. Question 2
In what type of meta-population would a local population be less affected: one with a small number of local populations which are relatively large, or one with a lot of small local populations?
5.7. Community ecotoxicology
5.7.1. Community Ecotoxicology: theory and concepts
Authors: Michiel Kraak and Ivo Roessink
Reviewers: Kees van Gestel, Nico van den Brink, Ralf B. Schäfer
Learning objectives:
You should be able to
• motivate the importance of studying ecotoxicology at the community level.
• define community ecotoxicology and name specific phenomena at the community and ecosystem level.
• explain the indirect effects observed in community ecotoxicology.
• explain how communities can be studied and how data from experiments at the community level can be analyzed.
• interpret community ecotoxicity data and explain related challenges.
Keywords: Community ecotoxicology, species interactions, indirect effects, mesocosm, ecosystem processes.
Introduction
The motivation to study ecotoxicological effects at the community level is that the targets of environmental protection generally are populations, communities and ecosystems. Consequently, when scaling up research from the molecular level, via cells, organs and individual organisms, towards the population, community or even ecosystem level, the ecological and societal relevance of the obtained data strongly increases (Figure 1). Yet, the difficulty of obtaining data also increases, due to the higher complexity, lower reproducibility and the longer time needed to complete the research, which typically involves higher costs. Moreover, when effects are observed in the field, it may be difficult to link these to specific chemicals and to identify the drivers of the observed effects. Not surprisingly, ecotoxicological effects at the community and ecosystem level are understudied.
Community Ecotoxicology: definition and indirect effects
Community ecology is defined as the study of the organization and functioning of communities, which are assemblages of interacting populations of species living within a particular area or habitat. Building on this definition, community ecotoxicology is defined as the study of the effects of toxicants on patterns of species abundance, diversity, community composition and species interactions. These species interactions are unique to the community and ecosystem level and may cause direct effects of toxicants on specific species to propagate as indirect effects on other species. It has been estimated that the majority of effects at these levels of biological organization are indirect rather than direct. These indirect effects are exerted via:
• predator-prey relationships
• consumer-producer relationships
• competition between species
• parasite-host relationships
• symbiosis
• the biotic environment
As an example, Roessink et al. (2006) studied the impact of the fungicide triphenyltin acetate (TPT) on benthic communities in outdoor mesocosms. For several species a dose-related decrease in abundance directly after application was observed, followed by a gradual recovery coinciding with decreasing exposure concentrations, all implying direct effects of the fungicide. For some species, however, opposite results were obtained and abundance increased shortly after application, followed by a gradual decline; see the example of the Culicidae in Figure 2. In this case, these typical indirect effects were explained by a higher sensitivity of the predators and competitors of the Culicidae. Due to diminished predation and competition and higher food availability, abundances of the Culicidae temporarily increased after toxicant exposure.
With the decreasing exposure concentrations over time, the populations of the predators and competitors recovered, leading to a subsequent decline in numbers of the Culicidae.
The indirect effects described above are thus due to species-specific sensitivities to the compound of interest, which influence the interactions between species. Yet, at higher exposure concentrations also the less sensitive species will start to be affected by the chemical. This may lead to an "arch-shaped" relationship between the number of individuals of a certain species and the concentration of a toxicant. In a mesocosm study with the insecticide lambda-cyhalothrin this was observed for Daphnia, which are prey for the more sensitive phantom midge Chaoborus (Roessink et al., 2005; Figure 3). At low exposure concentrations the indirect effects, such as release from predation by Chaoborus, led to an increase in abundance of the less sensitive Daphnia. At intermediate exposure concentrations there was a balance between the positive indirect effects and the adverse direct effects of the toxicant. At higher exposure concentrations the adverse direct effects overruled the positive indirect effects, resulting in a decline in abundance of the Daphnia. These combined dose-dependent direct and indirect effects are inherent to community-level experiments, but are specific to the compound and the species interactions involved.
Investigating communities and analysing and interpreting community ecotoxicity data
To study community ecotoxicology, experiments have to be scaled up and are therefore often performed in mesocosms, artificial ponds, ditches and streams, or even in the field, sometimes accompanied by the use of in- and exclosures. Assessing the effects of toxicants on communities in such large systems requires meticulous sampling schemes, which often make use of artificial substrates and e.g. emergence traps for aquatic invertebrates with terrestrial adult life stages (see section on Community ecotoxicology in practice). As an alternative to scaling up the experiments in community ecotoxicology, the size of the communities may be scaled down. Algae and bacteria grown on coin-sized artificial substrates in the field or in experimental settings provide the unique advantage that the experimental unit is actually an entire community.
Given the large scale and complexity of experiments at the community level, such experiments generally generate overwhelming amounts of data, making appropriate analysis of the results challenging. Data analysis focusing on individual responses, so-called univariate analysis, which suffices in single-species experiments, obviously falls short in community ecotoxicology, where cosm or (semi-)field communities sometimes consist of over a hundred different species. Hence, multivariate analysis is often more appropriate, similar to the approaches frequently applied in field studies to identify possible drivers of patterns in species abundances. Alternative approaches are also applied, such as using ecological indices like species richness, or categorizing the responses of communities into effect classes (Figure 4). To determine whether species under semi-field conditions respond to toxicant exposure as sensitively as in the laboratory, the construction and subsequent comparison of species sensitivity distributions (SSD) (see section on SSDs) may be helpful.
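Such an SSD comparison can be sketched numerically. The snippet below fits a log-normal SSD to a set of invented EC50 values (they are not data from the studies cited above) and derives the concentration at which 5% of the species would be affected; the log-normal shape is a common but not obligatory assumption (see section on SSDs).

```python
# Hypothetical sketch: fitting a log-normal species sensitivity distribution
# (SSD) to laboratory EC50 values. All EC50 values are invented (ug/L).
import numpy as np
from scipy import stats

ec50 = np.array([1.2, 3.5, 5.0, 8.1, 12.0, 20.0, 33.0, 55.0])
log10_ec50 = np.log10(ec50)
mu, sigma = log10_ec50.mean(), log10_ec50.std(ddof=1)

# Concentration at which the EC50 of 5% of the species is exceeded (HC5).
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} ug/L")

# A lab-versus-field comparison would repeat the fit for semi-field toxicity
# values and compare the two fitted distributions.
```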
The analysis and interpretation of community ecotoxicity data is also challenged by the dynamic development of each individual replicate cosm, artificial pond, ditch or stream, including the controls. From the start of the experiment, each control replicate develops independently and matures, and at the end of experiments that generally last for several months, control replicates may differ not only from the treatments, but also from each other. The challenge is then to separate the toxic signal from the natural variability in the data. In experiments that include a recovery phase, it is frequently observed that previously exposed communities do recover, but develop in another direction than the controls, which actually challenges the definition of recovery. Moreover, recovery can be decelerated or accelerated depending on the dispersal capacity of the species potentially inhabiting the cosms and the distance to nearby populations within a metapopulation (see section on Metapopulations).
Other crucial factors that may affect the impact of a toxicant on communities, as well as their recovery from this toxicant exposure, include habitat heterogeneity and the state of the community in combination with the moment of exposure. Habitat heterogeneity may affect the distribution of toxicants over the different environmental compartments and may provide shelter to organisms. Communities generally exhibit temporal dynamics in species composition and in their contribution to ecosystem processes (see section on Structure versus function), as well as in the life-cycle stages of the individual species. Exponentially growing populations recover much faster than populations that have reached carrying capacity, and for almost all species, young individuals are up to several orders of magnitude more sensitive than adults or late-instar larvae (see section on Population ecotoxicology). Hence, the timing of exposure to toxicants may seriously affect the extent of the adverse effects, as well as the recovery potential of the exposed communities.
From community ecotoxicology towards ecosystems and landscapes
When scaling up from the community to the ecosystem level, again unique characteristics emerge: structural characteristics like biodiversity, but also ecosystem processes, quantified by functional endpoints like primary production, ecosystem respiration, nutrient cycling and decomposition. Although a good environmental quality is based on both ecosystem structure and functioning, there is definitely a bias towards ecosystem structure, both in science and in policy (see section on Structure versus function). Levels of biological organisation higher than ecosystems are covered by the field of landscape ecotoxicology (see section on Landscape ecotoxicology) and, in a more practical way, by the concept of ecosystem services (see section on Ecosystem services).
Further reading
Clements, W.H., Newman, M.C. (2002). Community Ecotoxicology. John Wiley & Sons, Ltd.
5.7.1. Question 1
Motivate the importance of studying ecotoxicology at the community level.
5.7.1. Question 2
Define community ecotoxicology and name specific phenomena at the community and ecosystem level.
5.7.1. Question 3
Roessink et al. (2006) studied the impact of the fungicide triphenyltin acetate (TPT) on benthic communities in outdoor mesocosms. For several species they observed a dose-related decrease in abundance directly after application, followed by a gradual recovery; see the example in the graph below.
For some species, however, opposite results were obtained and the abundance increased shortly after application, followed by a gradual decline; see the example in the graph below. Explain the results shown in the lower graph.
5.7.1. Question 4
Mention three ways to analyze data from experiments at the community level.
5.7.1. Question 5
Mention one advantage and three disadvantages of cosm experiments.
5.7.2. Community ecotoxicology in practice
Author: Martina G. Vijver
Reviewers: Paul J. van den Brink, Kees van Gestel
Learning objectives:
To be able to
• describe the variety of ecotoxicological test systems available to address different research questions.
• explain what type of information is gained from lower- as well as higher-level ecotoxicological tests.
• explain the advantages and disadvantages of different higher-level ecotoxicological test systems.
Keywords: microcosms, mesocosms, realism, different biological levels
Introduction: Linking effects at different levels of biological organization
It is generally anticipated that ecotoxicological tests should provide data useful for making realistic predictions of the fate and effects of chemicals in natural ecosystems (Landner et al., 1989). The ecotoxicological test, if used in an appropriate way, should be able to identify the potential environmental impact of a chemical before it has caused any damage to the ecosystem. In spite of the considerable amount of work devoted to this problem and the plethora of test methods published, there is still reason to question whether current procedures for testing and assessing the hazard of chemicals in the environment do answer the questions we have asked.
Most biologists agree that at each succeeding level of biological organization new properties appear that would not have been evident even by the most intense and careful examination of lower levels of organization (Cairns Jr., 1983). These levels of biological hierarchy might be crudely characterized as subcellular, cellular, organ, organism, population, multispecies, community, and ecosystem (Figure 1). At the lower biological levels, responses are faster than those occurring at higher levels of organization. Experiments at the lower biological levels are often performed under standard laboratory conditions (see Section on Toxicity testing). The laboratory setting has several advantages: it allows for replication, the relatively simple and simplified conditions yield outcomes that are rather robust across different laboratories, the stressor of interest is more traceable under optimal stable conditions, and experiments are easy to repeat. As a consequence, at the lower biological levels the responses of organisms to chemical stressors tend to be more tractable, or more causal, than those identified when studying effects at higher levels.
The merit of performing cosm studies, i.e. studies at the higher biological levels (see Figure 1), is to investigate the impact of a stressor on a variety of species that all interact with each other. This enables the detection of both direct and indirect effects of the chemicals on the structure of species assemblages. Indirect effects can become manifest as disruptions of species interactions, e.g. competition, predator-prey interactions and the like. A second important reason for conducting cosm studies is that abiotic interactions at the level of the ecosystem can be accounted for, allowing for measurement of effects of chemicals under more environmentally realistic exposure conditions.
Conditions that likely influence the fate and behaviour of a chemical are sorption to sediments and plants, photolysis, changes in pH (see section on Bioavailability for a more detailed description), and other natural fluctuations.
What are cosm studies?
Microcosm and mesocosm (or cosm) studies represent a bridge between the laboratory and the natural world (examples of aquatic cosms are given in Figure 2). The difference between micro- and mesocosms is mostly restricted to size (Cooper and Barmuta, 1993). Aquatic microcosms are 10^-3 to 10 m^3 in size, while mesocosms are 10 to 10^4 m^3 or even larger, equivalent to whole or natural systems. The originality of cosms is mainly based on the combination of ecological realism, achieved by the introduction of the basic components of natural ecosystems, and facilitated access to a number of physicochemical, biological, and toxicological parameters that can be controlled to some extent. The cosm approach also makes it possible to work with replicated treatments, enabling the study of multiple environmental factors that can be manipulated. The system allows the establishment of food webs, the assessment of direct and indirect effects, and the evaluation of effects of contamination on multiple trophic and taxonomic levels in an ecologically relevant context. Cosm studies make it possible to assess effects of contaminants by looking at the parts (individuals, populations, communities) and the whole (ecosystems) simultaneously.
As given in the OECD guidance document (OECD, 2004), the size to be selected for a meso- or microcosm study will depend on the objectives of the study and the type of ecosystem that is to be simulated. In general, smaller systems are more suitable for short-term studies of up to three to six months and for studies with smaller organisms (e.g. planktonic species). Larger systems are more appropriate for long-term studies (e.g. 6 months or longer). Numerous ecosystem-level manipulations have been conducted since the early 1970s (Hurlbert et al., 1972). The Experimental Lakes Area (ELA) situated in Ontario, Canada deserves special attention because of its significant contributions to the understanding of how natural communities respond to chemical stressors. The ELA consists of 46 natural, relatively undisturbed lakes, which were designated specially for ecosystem-level research. Many different questions have been tackled here, e.g. manipulations with nutrients (amongst others Levine and Schindler, 1999) and synthetic estrogens (e.g. Kidd et al., 2014); comparable ecosystem-level work on pesticides was done by Wallace and co-workers in streams of the Coweeta district (Wallace et al., 1996). It is nowadays realized that there is a need for testing more than just individual species and to take into account ecosystem elements such as fluctuations of abiotic conditions and biotic interactions when trying to understand the ecological effects of chemicals.
Therefore a selection of study parameters is often considered, as given by OECD (2004):
• Regarding the treatment regime:
• dosing regime, duration, frequency, loading rates, preparation of application solutions, application of test substance, etc.;
• meteorological records for outdoor cosms;
• physicochemical water parameters (temperature, oxygen saturation, pH, etc.).
• Regarding the biological levels, the sampling methods and taxonomic identification methods used should be recorded:
• phytoplankton: chlorophyll-a; total cell density; abundance of individual dominant taxa; taxa (preferably species) richness; biomass;
• periphyton: chlorophyll-a; total cell density; density of dominant species; species richness; biomass;
• zooplankton: total density per unit volume; total density of dominant orders (Cladocera, Rotifera and Copepoda); species abundance; taxa richness; biomass;
• macrophytes: biomass; species composition; % surface cover of individual plants;
• emergent insects: total number emerging per unit of time; abundance of individual dominant taxa; taxa richness; biomass; density; life stages;
• benthic macroinvertebrates: total density per unit area; species richness; abundance of individual dominant species; life stages;
• fish: total biomass at test termination; individual fish weights and lengths for adults or marked juveniles; condition index; general behaviour; gross pathology; fecundity, if necessary.
Two typical examples of results obtained in an aquatic cosm study
A cosm approach assists in identifying and quantifying direct as well as indirect effects. Here two different types of responses are described; for more examples the reader is referred to the Section on Multistress.
Joint interactions: Barmentlo et al. (2018) used an outdoor mesocosm system consisting of 65 L ponds. Using a full factorial design, they investigated the population responses of macroinvertebrate species assemblages exposed for 35 days to environmentally relevant concentrations of three commonly used agrochemicals (imidacloprid, terbuthylazine, and NPK fertilizers). A detritivorous food chain as well as an algal-driven food chain were inoculated into the cosms. At environmentally realistic concentrations of binary mixtures, the species responses could be predicted based on concentration addition (see Section on Mixture toxicity; a small numerical illustration is given at the end of this section). Overall, the effects of ternary mixtures were much more variable and counterintuitive. This was nicely illustrated by how the mayfly Cloeon dipterum reacted to the various combinations of the pesticides. In both binary mixtures, extremely low recovery of C. dipterum was seen (3.6% of control recovery), compared to the single-substance exposures. However, after exposure to the ternary mixture, recovery of C. dipterum no longer deviated from the control and therefore was higher than expected. Unexpected effects of the mixtures were also obtained for both zooplankton species (Daphnia magna and Cyclops sp.). As expected, the abundance of both zooplankton species was positively affected by nutrient applications, but pesticide addition did not lower their recovery. These types of unexpected results can only be identified when multiple species and multiple stressors are tested; they cannot be detected in a laboratory test with a single species.
Indirect cascading effects: Van den Brink et al. (2009) studied the effects of chronic applications of a mixture of the herbicide atrazine and the insecticide lindane in indoor freshwater plankton-dominated microcosms.
Both top-down and bottom-up regulation mechanisms of the selected species assemblage were affected by the pesticide mixture. Lindane exposure caused a decrease in sensitive detritivorous macro-arthropods and herbivorous arthropods. This allowed insensitive food competitors like worms, rotifers and snails to increase in abundance (although not always significantly). Atrazine inhibited algal growth and hence also affected the herbivores. Direct results of the inhibition of photosynthesis by atrazine exposure were lower dissolved oxygen and pH levels and an increase in alkalinity, nitrogen and electrical conductivity. See Figure 3 for a synthesis of all interactions observed in the study of Van den Brink et al. (2009).
Realism of cosm studies
There is a conceptual conflict between realism and replicability when applied to mesocosms. Replicability may be achieved, in part, by a relative simplification of the system. The crucial point in designing a model system may therefore not be to maximize realism, but rather to make sure that ecologically relevant information can be obtained. The reliability of information on ecotoxicological effects of chemicals tested in mesocosms closely depends on the representativeness of the biological processes or structures that are likely to be affected. This means that within cosms key features at both structural and functional levels should be preserved, as they ensure ecological representativeness. Extrapolation from small experimental systems to the real world seems generally more problematic than the use of larger systems, in which more complex interactions can be studied experimentally as well. For that reason, Caquet et al. (2000) claim that testing chemicals using mesocosms refines the classical methods of ecotoxicological risk assessment because they provide conditions for a better understanding of environmentally relevant effects of chemicals.
References
Barmentlo, S.H., Schrama, M., Hunting, E.R., Heutink, R., Van Bodegom, P.M., De Snoo, G.R., Vijver, M.G. (2018). Assessing combined impacts of agrochemicals: Aquatic macroinvertebrate population responses in outdoor mesocosms. Science of the Total Environment 631-632, 341-347.
Caquet, T., Lagadic, L., Sheffield, S.R. (2000). Mesocosms in ecotoxicology: outdoor aquatic systems. Reviews of Environmental Contamination and Toxicology 165, 1-38.
Cairns Jr., J. (1983). Are single species toxicity tests alone adequate for estimating environmental hazard? Hydrobiologia 100, 47-57.
Cooper, S.D., Barmuta, L.A. (1993). Field experiments in biomonitoring. In: Rosenberg, D.M., Resh, V.H. (Eds.), Freshwater Biomonitoring and Benthic Macroinvertebrates. Chapman and Hall, New York, pp. 399-441.
OECD (2004). Draft Guidance Document on Simulated Freshwater Lentic Field Tests (Outdoor Microcosms and Mesocosms) (July 2004). Organization for Economic Cooperation and Development, Paris. http://www.oecd.org/fr/securitechimique/essais/32612239.pdf
Hurlbert, S.H., Mulla, M.S., Willson, H.R. (1972). Effects of an organophosphorus insecticide on the phytoplankton, zooplankton, and insect populations of fresh-water ponds. Ecological Monographs 42, 269-299.
Kidd, K.A., Paterson, M.J., Rennie, M.D., Podemski, C.L., Findlay, D.L., Blanchfield, P.J., Liber, K. (2014). Direct and indirect responses of a freshwater food web to a potent synthetic oestrogen.
Philosophical Transactions of the Royal Society B: Biological Sciences 369, 20130578. DOI: 10.1098/rstb.2013.0578.
Landner, L., Blanck, H., Heyman, U., Lundgren, A., Notini, M., Rosemarin, A., Sundelin, B. (1989). Community testing, microcosm and mesocosm experiments: ecotoxicological tools with high ecological realism. In: Chemicals in the Aquatic Environment. Springer, pp. 216-254.
Levine, S.N., Schindler, D.W. (1999). Influence of nitrogen to phosphorus supply ratios and physicochemical conditions on cyanobacteria and phytoplankton species composition in the Experimental Lakes Area, Canada. Canadian Journal of Fisheries and Aquatic Sciences 56, 451-466.
Newman, M.C. (2008). Ecotoxicology: the history and present directions. In: Jørgensen, S.E., Fath, B.D. (Eds.), Ecotoxicology. Vol. 2 of Encyclopedia of Ecology. Elsevier, Oxford, pp. 1195-1201.
Van den Brink, P.J., Crum, S.J.H., Gylstra, R., Bransen, F., Cuppen, J.G.M., Brock, T.C.M. (2009). Effects of a herbicide-insecticide mixture in freshwater microcosms: risk assessment and ecological effect chain. Environmental Pollution 157, 237-249.
Wallace, J.B., Grubaugh, J.W., Whiles, M.R. (1996). Biotic indices and stream ecosystem processes: results from an experimental study. Ecological Applications 6, 140-151.
5.7.2. Question 1
Responses of organisms to long-term exposure can be detected at the sub-organism level by making use of biomarkers and then extrapolating these results to organism fitness and consequences at the population level. Mention two benefits of performing tests making use of biomarkers.
5.7.2. Question 2
Mention at least two benefits of performing tests at the higher biological levels, such as the community or ecosystem level.
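The concentration-addition baseline mentioned for the Barmentlo et al. (2018) example above can be made concrete with a toxic-unit calculation. The sketch below is a minimal illustration with invented concentrations and EC50 values; it is not a reconstruction of the actual study data.

```python
# Toxic units under concentration addition: TU_i = exposure_i / EC50_i.
# All concentrations and EC50 values below are invented (ug/L).
ec50 = {"imidacloprid": 10.0, "terbuthylazine": 50.0}
exposure = {"imidacloprid": 2.0, "terbuthylazine": 15.0}

tu_sum = sum(exposure[s] / ec50[s] for s in ec50)
print(f"Sum of toxic units = {tu_sum:.2f}")
# Under concentration addition, 50% effect is expected when the summed
# toxic units reach 1.0; here 0.5 TU indicates a mixture below that level.
```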
5.8. Structure versus function incl. ecosystem services
Author: Herman Eijsackers
Reviewers: Nico van den Brink, Kees van Gestel, Lorraine Maltby
Learning objectives:
You should be able to
• mention three levels of biodiversity
• describe the difference between structural and functional properties of an ecosystem
• explain why the functioning of an ecosystem generally tends to be less sensitive than its structure
• describe the term Functional Redundancy and explain its meaning for interpreting effects on the structure and functioning of ecosystems
Keywords: structural biodiversity, functional biodiversity, functional redundancy, food web interactions
Biodiversity at three different levels
In ecology, biodiversity describes the richness of natural life at three levels: genetic diversity, species diversity (the most well-known) and landscape diversity. The most commonly used index, the Shannon-Wiener index, expresses biodiversity in general terms as the number of species in relation to the number of individuals per species. More precisely, this index is the negative sum, over all species present, of the proportional abundance of each species multiplied by its natural logarithm: H = -∑(pi*ln(pi)) with pi = ni/N, in which ni is the number of individuals of species i and N the total number of individuals of all species combined (a short worked example is given below). In environmental toxicology, most attention is paid to species diversity. Genetic diversity plays a role in the assessment of more or less sensitive or resistant subspecies or local populations of a species, like in various mining areas. Landscape diversity has been receiving attention only recently and aims primarily at the total load of e.g. pesticides applied in an agronomic landscape (see Section on Landscape ecotoxicology), although it should more logically focus on the interactions between the various ecosystems in a landscape, for instance a lake surrounded partly by a forest and partly by a grassland.
Structural and functional diversity
In general, the various types of interactions between species do not play a major role in the study of biodiversity, either within ecology or in environmental toxicology. The diversity in interactions described in the food web or food chain is not expressed in a term like the Shannon-Wiener index. However, in aquatic as well as soil ecological research, extensive, quantitative descriptions have been made of various ecosystems. These model descriptions, like the one for arable soil below, are partly based on the taxonomic background of species groups and partly on their functional role in the food web, expressed as their way of feeding (see for instance the phytophagous nematodes feeding on plants, the fungivorous nematodes eating fungi and the predaceous nematodes eating other nematodes). The scheme in Figure 1 shows a very general soil food web and the different trophic levels. Much more detailed soil food web descriptions are also available, which not only link the different trophic groups but also describe the energy flows within the system and, through these flows, the intensity and thus strength of the interactions that together determine the stability of the system (see e.g. De Ruiter et al., 1995). The food web shown in Figure 1 illustrates that biodiversity not only has a structural side, the various types of species, but also a functional one: which species are involved in the execution of which process. At the species level this functional aspect has been further elaborated in specific feeding leagues.
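Before moving from the species to the ecosystem level, the Shannon-Wiener index defined above can be illustrated with a short worked example; the species counts below are invented.

```python
# Shannon-Wiener index H = -sum(pi * ln(pi)) with pi = ni / N.
# Counts per species are invented illustrative values.
import math

counts = [50, 30, 15, 5]  # individuals per species
N = sum(counts)
H = -sum((n / N) * math.log(n / N) for n in counts)
print(f"Shannon-Wiener index H = {H:.2f}")  # higher H = more diverse community
```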
At the ecosystem level this functional aspect has clearly been recognized in the last decades and has resulted in the development of the concept of ecosystem services (see Section on Ecosystem services). However, ecosystem services do not trace back to the individual species level, and as such not to the functional aspect of biodiversity. Another development worth mentioning is that of trait-based approaches, which attempt to group species according to certain traits that are linked not only to exposure and sensitivity but also to their functioning. With that, the trait-based approach may enable linking structural and functional biodiversity (see Section on Trait-based approaches).
Functional redundancy
When effects of contaminants on species are compared to effects on processes, the species effects are mostly more distinct than the process effects. In other words: effects on structural diversity will already be seen at lower concentrations, and probably also sooner, than effects on functional diversity. This can be explained by the fact that processes are executed by more than one species. When, with increasing chemical exposure levels, the most sensitive species disappear, their role is taken over by less sensitive species. This reasoning has been generalized in the concept of "functional redundancy", which postulates that not all species that can perform a specific process are always active, and thus necessary, in a specific situation. Consequently, they are superfluous or redundant. When a sensitive species disappears, a redundant species that can perform a similar function may take over, so the function is still covered. It has to be realized, however, that if this holds in situation A, it does not necessarily hold for situation B, with different environmental conditions and another species composition. Another consequence of functional redundancy is that when functional biodiversity is affected, there is (severe) damage to structural biodiversity: most likely several important species will have gone extinct or will be strongly inhibited.
Examples of the relation between structure and functioning
Redundant species are often less efficient in performing a certain function. Tyler (1984) observed, in a gradient of Cu contamination around a copper manufacturing plant in Gusum, Sweden, that specific enzyme functions as well as general processes like mineralisation decreased faster than the total fungal biomass (Figure 2B). The explanation was provided by experimental research by Rühling et al. (1984), who selected a number of these micro-fungi in the field and tested them for their sensitivity to copper. The various species showed different concentration-effect relationships, all going to zero, except for two species which increased in abundance at the higher concentrations, so that the total biomass stayed more or less the same (Figure 2A).
Another example of the importance of a combined approach to structural and functional diversity are the different ecological types of earthworms. According to their behaviour and role they are divided into the anecics (deep-burrowing earthworms moving up and down from deeper soil layers to the soil surface and consuming leaf litter), the endogeics (active in the deeper mineral and humus soil layers and consuming fragmented litter material and humus), and the epigeics (active in the upper soil litter layer and consuming leaf litter).
Adverse effects of contamination on the anecics will result in accumulation of litter at the soil surface, while effects on the epigeics will result in reduced litter fragmentation and effects on the endogeics in reduced humus formation. In various studies it has been shown that these earthworm types have different sensitivities to different types of pesticides. However, so far the ranking of more and less sensitive species differs between groups of pesticides. So, there is no general relation between the function of a species, e.g. surface-active earthworms (epigeics), and their exposure to and sensitivity to pesticides. Nevertheless, pesticide effects on anecics generally lead to reduced litter removal, effects on endogeics result in slower fragmentation, reduced humification etc., and an effect on earthworm communities in general may hamper soil aeration and lead to soil compaction.
Another example of the impact of contaminants on functional diversity comes from microbiological research on the impact of heavy metals by Doelman et al. (1994). They isolated fungi, bacteria and actinomycetes from various heavy-metal contaminated and clean areas, tested these species for their sensitivity to zinc and cadmium, and divided them accordingly into a sensitive and a resistant group. As a next step they measured to what extent both groups were able to degrade and mineralize a series of organic compounds. Figure 3 shows that the sensitive group is much more effective in degrading a variety of organic compounds, whereas the heavy-metal resistant microbes are far less effective. This would indicate that although functional redundancy may alleviate some of the effects that contaminants have on ecosystem functioning, the overall performance of the community generally decreases upon contaminant exposure. The latter example also shows that genetic diversity, expressed as the numbers of sensitive and resistant species, plays a role in the functional stability and sustainability of microbial degradation processes in the soil.
In conclusion, ecosystem services are worth studying in relation to contamination (Faber et al., 2019), but also more specifically in relation to functional diversity at the species level. A promising field of research in this framework would include microorganisms in relation to the variety of degradation processes they are involved in.
References
De Ruiter, P.C., Neutel, A.-M., Moore, J.C. (1995). Energetics, patterns of interaction strengths and stability in real ecosystems. Science 269, 1257-1260.
Doelman, P., Jansen, E., Michels, M., Van Til, M. (1994). Effects of heavy metals in soil on microbial diversity and activity as shown by the sensitivity-resistance index, an ecologically relevant parameter. Biology and Fertility of Soils 17, 177-184.
Faber, J.H., Marshall, S., Van den Brink, P.J., Maltby, L. (2019). Priorities and opportunities in the application of the ecosystem services concept in risk assessment for chemicals in the environment. Science of the Total Environment 651, 1067-1077.
Rühling, Å., Bååth, E., Nordgren, A., Söderström, B. (1984). Fungi in a metal-contaminated soil near the Gusum brass mill, Sweden. Ambio 13, 34-36.
Tyler, G. (1984). The impact of heavy metal pollution on forests: a case study of Gusum, Sweden. Ambio 13, 18-24.
5.8. Question 1
Describe the structural and functional diversity at the species and at the landscape level.
5.8. Question 2
What is meant by redundancy?
5.8. Question 3
Does redundancy have an impact on the sensitivity of species (structural diversity) versus processes (functional diversity)?
5.09: Landscape ecotoxicology
In preparation
6.1. Introduction - The Essence of Risk Assessment
Author: Ad Ragas
Reviewer: Martien Janssen
Learning Objectives
After this module, you should be able to:
• explain the terms risk, hazard, risk assessment, risk management and solution-focused risk assessment;
• explain the different steps of the risk assessment process, the relation between these steps and how the principle of tiering works;
• give an example of a risk indicator;
• indicate the most important advantages and disadvantages of the risk assessment paradigm.
Key words: Risk, hazard, tiering, problem definition, exposure assessment, effect assessment, risk characterization
Introduction
We assess risks on a daily basis, although we may not always be aware of it. For example, when we cross the street, we - often implicitly - assess the benefits of crossing and weigh these against the risks of getting hit by a vehicle. If the risks are considered too high, we may decide not to cross the street, or to walk a bit further and cross at a safer spot with traffic lights. Risk assessment is common practice for a wide range of activities in society, for example for building bridges, protection against floods, insurance against theft and accidents, and the construction of a new industrial plant. The principle is always the same: we use the available knowledge to assess the probability of potential adverse effects of an activity as well as we can. And if these risks are considered too high, we consider options to reduce or avoid the risk.
Terminology
Risk assessment of chemicals aims to describe the risks resulting from the use of chemicals in our society. In chemical risk assessment, risk is commonly defined as "the probability of an adverse effect after exposure to a chemical". This is a very practical definition that provides natural scientists and engineers the opportunity to quantify risk using "objective" scientific methods, e.g. by quantifying exposure and the likelihood of adverse effects. However, it should be noted that this definition ignores more subjective aspects of risk, typically studied by social scientists, e.g. the perceptions of people and (dealing with) knowledge gaps. This subjective dimension can be important for risk management. For example, risk managers may decide to take action if a risk is perceived as high by a substantial part of the population, even if the associated health risks have been assessed as negligible by natural scientists and engineers.
Next to the term "risk", the term "hazard" is often used. The difference between both terms is subtle, but important. A hazard is defined as the inherent capacity of a chemical (or agent/activity) to cause adverse effects. The labelling of a substance as "carcinogenic" is an example of a hazard-based action. The inherent capacity of the substance to trigger cancer, as for example demonstrated in an in vitro assay or an experiment with rats or mice, can be sufficient reason to label a substance as "carcinogenic". Hazard is thus independent of the actual exposure level of a chemical, whereas risk is not.
Risk assessment is closely related to risk management, i.e. the process of dealing with risks in society. Decisions to accept or reduce risks belong to the risk management domain and involve consideration of the socio-economic implications of the risks as well as the risk management options. Whereas risk assessment is typically performed by natural scientists and engineers, often referred to as "risk assessors", risk management is performed by policy makers, often referred to as "risk managers".
Risk assessment and risk management are often depicted as sequential processes, where assessment precedes management. However, a strict separation of both processes is not always possible and management decisions may be needed before risks are assessed. For example, risk assessment requires political agreement on what should be protected and at what level, which is a risk management issue (see Section on Protection Goals). Similarly, the identification, description and assessment of uncertainties in the assessment is an activity that involves risk assessors as well as risk managers. Finally, it is often more efficient to define alternative management options before performing a risk assessment. This enables the assessment of the current situation and alternative management scenarios (i.e., potential solutions) in one round. The scenario with the maximum risk reduction that is also feasible in practice would then be the preferred management option. This mapping of solutions and concurrent assessment of the associated risks is also known as solution-focused risk assessment.
Risk assessment steps and tiering
Chemical risk assessment is typically organized in a limited number of steps, which may vary depending on the regulatory context. Here, we distinguish four steps (Figure 1):
1. Problem definition (sometimes also called hazard identification), during which the scope of the assessment is defined;
2. Exposure assessment, during which the extent of exposure is quantified;
3. Effect assessment (sometimes also called hazard or dose-response assessment), during which the relationship between exposure and effects is established;
4. Risk characterization, during which the results of the exposure and effect assessments are combined into an estimate of risk and the uncertainty of this estimate is described.
The four risk assessment steps are explained in more detail below. They are often repeated multiple times before a final conclusion on the acceptability of the risk is reached. This repetition is called tiering (Figure 2). It typically starts with a simple, conservative assessment; in subsequent tiers, more data are added to the assessment, resulting in less conservative assumptions and risk estimates. Tiering is used to focus the available time and resources for assessing risks on those chemicals that potentially lead to unacceptable risks. Detailed data are gathered only for chemicals showing potential risk in the lower, more conservative tiers.
The order of the exposure and effect assessment steps has been a topic of debate among risk assessors and managers. Some argue that effect assessment should precede exposure assessment because effect information is independent of the exposure scenario and can be used to decide how exposure should be determined; e.g., information on toxicokinetics can be relevant to determine the exposure duration of interest. Others argue that exposure assessment should precede effect assessment since assessing effects is expensive and unnecessary if exposure is negligible. The current consensus is that the preferred order should be determined on a case-by-case basis, with parallel assessment of exposure and effects and exchange of information between the two steps as the preferred option.
Problem definition
The scope of the assessment is determined during the problem definition phase. Questions typically answered in the problem definition include:
• What is the nature of the problem and which chemical(s) is/are involved?
• What should be protected, e.g. the general population, specific sensitive target groups, aquatic ecosystems, terrestrial ecosystems or particular species, and at what level?
• What information is already available, e.g. from previous assessments?
• What are the available resources for the assessment?
• What is the assessment order and will tiering be applied?
• What exposure routes will be considered?
• What is the timeframe of the assessment, e.g. are acute or (sub)chronic exposures considered?
• What risk metric will be used to express the risk?
• How will uncertainties be addressed?
Problem definition is not a task for risk assessors only, but should preferably be performed in a collaborative effort between risk managers, risk assessors and stakeholders. The problem definition should try to capture the worries of stakeholders as well as possible. This is not always an easy task, as these worries may be very broad and sometimes also poorly articulated. Risk assessors need a clearly demarcated problem and they can only assess those aspects for which assessment methods are available. The dialogue should make transparent which aspects of the stakeholder concerns will be assessed and which will not. Being transparent about this can avoid disappointments later in the process, e.g. if aspects considered important by stakeholders were not accounted for because suitable risk assessment methods were lacking. For example, if stakeholders are worried about the acute and chronic impacts of pesticide exposure, but only the acute impacts will be addressed, this should be made clear at the beginning of the assessment. The problem definition phase results in a risk assessment plan detailing how the risks will be assessed given the available resources and within the available timeframe.
Exposure assessment
An important aspect of exposure assessment is the determination of an exposure scenario. An exposure scenario describes the situation for which the exposure is being assessed. In some cases, this exposure situation may be evident, e.g. soil organisms living at a contaminated site. However, especially when we want to assess potential risks of future substance applications, we have to come up with a typical exposure scenario. Such scenarios are for example defined before a substance is allowed to be used as a food additive or before a new pesticide is allowed on the market. Exposure scenarios are often conservative, meaning that the resulting exposure estimate will be higher than the expected average exposure.
The exposure metric used to assess the risk depends on the protection target. For ecosystems, a medium concentration is often used, such as the water concentration for aquatic systems, the sediment concentration for benthic systems and the soil concentration for terrestrial systems. These concentrations can either be measured or predicted using a fate model (see Section 3.8) and may or may not take into account bioavailability (see Section 3.6). For human risk assessment, the exposure metric depends on the exposure route. An air concentration is often used to cover inhalation, the average daily intake from food and water to cover oral exposure, and uptake through skin for dermal exposure. Uptake through multiple routes can also be combined in a dose metric for internal exposure, such as the Area Under the Curve (AUC) in blood (see Section 6.3.1). Exposure metrics for specific wildlife species (e.g. top predators) and farm animals are often similar to those for humans.
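The aggregation of oral exposure into an average daily intake, as described above, can be sketched as follows. All medium concentrations, intake rates and the body weight are invented illustrative values.

```python
# Average daily intake: dose = sum(concentration_i * intake_i) / body weight.
# All values are invented for illustration.
media = {
    # medium: (concentration in mg per kg or L, daily intake in kg or L per day)
    "drinking water": (0.002, 2.0),
    "vegetables": (0.050, 0.3),
    "fish": (0.200, 0.05),
}
body_weight_kg = 70.0

dose = sum(conc * intake for conc, intake in media.values()) / body_weight_kg
print(f"Average daily intake = {dose * 1000:.2f} ug/kg body weight per day")
```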
Measuring and modelling route-specific exposures is generally more complex than quantifying a simple medium concentration, because it requires not only the quantification of the substance concentration in the contact medium (e.g., the concentration in drinking water), but also quantification of the contact intensity (e.g., how much water is consumed per day). Especially oral exposure can be difficult to quantify because it covers a wide range of different contact media (e.g. food products) and intensities varying from organism to organism.
Effect assessment
The aim of the effect assessment is to estimate a reference exposure level, typically an exposure level which is expected to cause no or very limited adverse effects. There are many different types of reference levels in chemical risk assessment, each used in a different context. The most common reference level for ecological risk assessment is the Predicted No Effect Concentration (PNEC). This is the water, soil, sediment or air concentration at which no adverse effects at the ecosystem level are expected. In human risk assessment, a myriad of different reference levels are being used, e.g. the Acceptable Daily Intake (ADI), the oral and inhalatory Reference Dose (RfD), the Derived No Effect Level (DNEL), the Point of Departure (PoD) and the Virtually Safe Dose (VSD). Each of these reference levels is used in a specific context, e.g. for addressing a specific exposure route (the ADI is oral), regulatory domain (the DNEL is used in the EU for REACH, whereas the RfD is used in the USA), substance type (the VSD is typical for genotoxic carcinogens) or risk assessment method (the PoD is typical for the Margin of Safety approach).
What all reference levels have in common is that they reflect a certain level of protection for a specific protection goal. In ecological risk assessment, the protection goal typically is the ecosystem, but it can also be a specific species or even an organism. In human risk assessment, the protection goal typically comprises all individuals of the human population. The definition of protection goals is a normative issue and therefore is not a task of risk assessors, but of politicians. The protection goals defined by politicians typically involve a high level of abstraction, e.g. "the entire ecosystem and all individuals of the human population should be protected". Such abstract protection goals do not always match the methods used to assess the risks. For example, if one assumes that one molecule of a genotoxic carcinogen can trigger a lethal tumour, 100% protection of all individuals of the human population is feasible only by banning all genotoxic carcinogens (reference level = 0). Likewise, the safe concentration for an ecosystem is infinitely small if one assumes that the sensitivity of the species in the system follows a lognormal distribution which asymptotically approaches the x-axis. Hence, the abstract protection goals have to be operationalized, i.e. defined in more practical terms that match the methods used for assessing effects. This is often done in a dialogue between scientific experts and risk managers. An example is the "one in a million lifetime risk estimated with a conservative dose-response model", which is used by many (inter)national organizations as a basis for setting reference levels for genotoxic carcinogens.
Likewise, the concentration at which the no observed effect concentration (NOEC) of only 5% of the species is exceeded is often used as a basis for deriving a PNEC. Once a protection goal has been operationalized, it must be translated into a corresponding exposure level, i.e. the reference level. This is typically done using the outcomes of (eco)toxicity tests, i.e. tests with laboratory animals such as rats, mice and dogs for human reference levels, and with primary producers, invertebrates and vertebrates for ecological reference levels. Often, the toxicity data are plotted in a graph with the exposure level on the x-axis and the effect or response level on the y-axis. A mathematical function is then fitted to the data: the so-called dose-response relationship. This dose-response relationship is subsequently used to derive an exposure level that corresponds to a predefined effect or response level. Finally, this exposure level is extrapolated to the ultimate protection goal, accounting for phenomena such as differences in sensitivity between laboratory and field conditions, between the tested species and the species to be protected, and the (often very large) variability in sensitivity within the human population or the ecosystem. This extrapolation is done by dividing the exposure level that corresponds to a predefined effect or response level by one or more assessment or safety factors. These assessment factors do not have a purely scientific basis in the sense that they do not only account for physiological differences that have actually been proven to exist; they also account for uncertainties in the assessment and should make sure that the derived reference level is a conservative estimate. The determination of reference levels is an art in itself and is further explained in Sections 6.3.1 for human risk assessment and 6.3.2 for ecological risk assessment.
Risk characterization
The aim of risk characterization is to come up with a risk estimate, including associated uncertainties. A comparison of the actual exposure level with the reference level provides an indication of the risk:

risk indicator = exposure level / reference level

If the reference level reflects the maximum safe exposure level, then the risk indicator should be below unity (1.0). A risk indicator higher than 1.0 indicates a potential risk. It is a "potential risk" because many conservative assumptions may have been made in the exposure and effect assessments. A risk indicator above 1.0 can thus lead to two different management actions: (1) if the available resources (time, money) allow and the assessment was conservative, additional data may be gathered and a higher-tier assessment may be performed, or (2) mitigation options to reduce the risk may be considered. Assessment of the uncertainties is very important in this phase, as it reveals how conservative the assessment was and how it can be improved by gathering additional data or applying more advanced risk assessment tools.
Risks can also be estimated using a margin-of-safety approach. In this approach, the reference level used has not yet been extrapolated from the tested species to the protection goal, e.g. by applying assessment factors for interspecies and interindividual differences in sensitivity. As such, the reference level is not a conservative estimate. In this case, the risk indicator reflects the "margin of safety" between the actual exposure and the non-extrapolated reference level. Depending on the situation at hand, the margin of safety typically should be 100 or higher.
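The two approaches described above can be summarised in a few lines of code. The exposure, PNEC and point-of-departure values below are invented for illustration.

```python
# Traditional risk quotient versus margin-of-safety approach.
# All values are invented (ug/L).
exposure = 4.0   # actual or predicted exposure concentration
pnec = 2.0       # extrapolated reference level (assessment factors applied)
pod = 500.0      # non-extrapolated point of departure from a toxicity test

risk_quotient = exposure / pnec
margin_of_safety = pod / exposure

print(f"Risk quotient = {risk_quotient:.1f}")        # > 1.0: potential risk
print(f"Margin of safety = {margin_of_safety:.0f}")  # typically should be >= 100
```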
The main difference between the traditional and the margin-of-safety approach in risk assessment is the timing for addressing the uncertainties in the effect assessment.
Reflection
Figure 3 illustrates the risk assessment paradigm using the DPSIR chain (Section 1.2). It illustrates how reference exposure levels are derived from protection goals, i.e. the maximum level of impact that we consider acceptable. The actual exposure level is either measured or predicted using estimated emission levels and dispersion models. When measured exposure levels are used, this is called retrospective or diagnostic risk assessment: the environment is already polluted and the assessor wants to know whether the risk is acceptable and which substances are contributing to it. When the environment is not yet polluted, predictive tools can be used. This is called prospective risk assessment: the assessor wants to know whether a projected activity will result in unacceptable risks. Even if the environment is already polluted, the risk assessor may still prefer predicted over measured exposure levels, e.g. if measurements are too expensive. This is possible only if the pollution sources are well characterized. Retrospective (diagnostic) and prospective risk assessments can differ substantially in terms of problem definitions and methods used, and are therefore discussed in separate sections of this online book.
Figure 3 can also be used to illustrate some important criticism of the current risk assessment paradigm, i.e. the comparison between the actual exposure level and a reference level. In current assessments, only one point of the dose-response relationship is used to assess risk, i.e. the reference level. Critics argue that this is suboptimal and a waste of resources because the dose-response information is not used to assess the actual risk. A risk indicator with a value of 2.0 implies that the exposure is twice as high as the reference level, but this does not give an indication of how many individuals or species are being affected, or of the intensity of the effect. If the dose-response relationship were used to determine the risk, this would result in a better-informed risk estimate.
A final critical remark is that risk assessment is often performed on a substance-by-substance basis. Dealing with mixtures of chemicals is difficult because each mixture has a unique composition in terms of compounds and concentration ratios between compounds. This makes it difficult to determine a reference level for mixtures. Mixture toxicology is slowly progressing and several methods are now available to address mixtures, i.e. whole-mixture methods and compound-based approaches (Section 6.3.6). Another promising development is that of effect-based methods (Section 6.4.2). These methods do not assess risk based on chemical concentrations, but on the toxicity measured in an environmental sample. In terms of DPSIR, these methods assess risks at the level of impacts rather than at the level of state or pressures.
6.1. Question 1
Imagine the herbicide glyphosate would be banned based on its carcinogenic properties. Would this intervention be risk-based or hazard-based?
6.1. Question 2
Indicate whether the following activities should involve risk assessors, risk managers/politicians and/or stakeholders:
1. Determination of a safe dose level based on established protection goals;
2. Determination of protection goals;
3. Determination of intervention options;
Demarcation of the risk assessment problem; 5. Translation of abstract protection goals into operational goals. 6.1. Question 3 Indicate whether the following risk assessments are retrospective or prospective: 1. Determining the adverse impacts of a contaminated area on human health and the environment; 2. Quantifying the human health risk of current air pollution levels; 3. Determining whether the risks associated with a new pesticide are acceptable; 4. Predicting the risk of chemicals based on current emission levels. 6.1. Question 4 A risk assessment was performed for two different substances, i.e. A and B. The risk indicator value of substance A was 1.5 and that of substance B was 2.0. A risk manager proposes to first address substance B and subsequently substance A. Do you agree? Motivate your answer. 6.02: Ecosystem services and protection goals In preparation
6.3. Predictive risk assessment approaches and tools
under review
6.3.2. Environmentally realistic scenarios (PECs) – Eco
Authors: Jos Boesten, Theo Brock
Reviewer: Ad Ragas, Andreu Rico
Learning objectives:
You should be able to:
• explain the role of exposure scenarios in environmental risk assessment (ERA)
• describe the need for, and basic principles of, defining exposure assessment goals
• link exposure and effect assessments and describe the role of environmental scenarios in future ERAs
Keywords: pesticides, exposure, scenarios, assessment goals, effects
Role of exposure scenarios in environmental risk assessment (ERA)
An exposure scenario describes the combination of circumstances needed to estimate exposure by means of models. For example, scenarios for modelling pesticide exposure can be defined as a combination of abiotic (e.g. properties and dimensions of the receiving environment and related soil, hydrological and climate characteristics) and agronomic (e.g. crops and related pesticide application) parameters that are thought to represent a realistic worst-case situation for the environmental context in which the exposure model is to be run. A scenario for exposure of aquatic organisms could be, for example, a ditch with a minimum water depth of 30 cm alongside a crop growing on a clay soil, with annual applications of pesticide, using a 20-year time series of weather data and including pesticide exposure via spray drift deposition and leaching from drainpipes. Such a scenario would require modelling of spray drift, leaching from drainpipes and exposure in surface water, ending up in a 20-year time series of the exposure concentration. In this chapter, we explain the use of exposure scenarios in prospective ERA by giving examples for the regulatory assessment of pesticides in particular.
Need for defining exposure assessment goals
Between about 1995 and 2001, groundwater and surface water scenarios were developed for EU pesticide registration, also referred to as the FOCUS scenarios. The European Commission indicated that these should represent 'realistic worst-cases', a political concept which leaves considerable room for scientific interpretation. Risk assessors and managers agreed that the intention was to generate 90th percentile exposure concentrations. The concept of a 90th percentile exposure concentration assumes a statistical population of concentrations, of which 90% are lower than this 90th percentile (and thus 10% are higher). This 90th percentile approach has since been followed for most environmental exposure assessments for pesticides at EU level. The selection of the FOCUS groundwater and surface water scenarios involved a considerable amount of expert judgement because this selection could not yet be based on well-defined GIS procedures and databases on properties of the receiving environment. The EFSA exposure assessment for soil organisms was the first environmental exposure assessment that could be based on a well-defined GIS procedure, using EU maps of parameters like soil organic matter, density of crops and weather. During the development of this exposure assessment, it became clear that the concept of a 90th percentile exposure concentration is too vague: it is essential to also define the statistical population of concentrations from which this 90th percentile is taken.
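The sketch below illustrates this point with invented numbers: the same simulated dataset yields different "90th percentiles" depending on how the statistical population is defined (all year-by-water-body peaks versus one aggregated value per water body). It is a minimal illustration, not a FOCUS or EFSA procedure.

```python
# Minimal sketch (hypothetical numbers): the "90th percentile exposure
# concentration" is ambiguous until the statistical population is defined.
import numpy as np

rng = np.random.default_rng(42)
n_water_bodies, n_years = 1000, 20
# Hypothetical annual peak concentrations (ug/L), log-normally distributed.
peaks = rng.lognormal(mean=0.0, sigma=1.0, size=(n_water_bodies, n_years))

# Population 1: all individual year-by-water-body peaks (spatio-temporal).
p90_all = np.percentile(peaks, 90)

# Population 2: one multi-year median peak per water body (spatial only).
p90_spatial = np.percentile(np.median(peaks, axis=1), 90)

# Two different "90th percentiles" derived from the same simulated data:
print(p90_all, p90_spatial)
```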
Based on this insight, the EFSA Panel on Plant Protection Products and their Residues (PPR) developed the concept of exposure assessment goals, which has become the standard within EFSA for developing regulatory exposure scenarios for pesticides.
Procedure for defining exposure assessment goals
Figure 1 shows how an exposure assessment goal for the risk assessment of aquatic organisms can be defined following this EFSA procedure. The left part specifies the temporal dimensions and the right part the spatial dimensions. In box E1, the Ecotoxicologically Relevant type of Concentration (ERC) is defined, e.g. the freely dissolved pesticide concentration in water for pelagic organisms. In box E2, the temporal dimension of this concentration is defined, e.g. annual peak or time-weighted average concentration for a pre-defined period. Based on these elements, the multi-year temporal population of concentrations can be generated for one single water body (E5), which would consist of e.g. 20 peak concentrations in case of a time series of 20 years. The spatial part requires definition of the type of water body (e.g. ditch, stream or pond; box E3) and the spatial dimension of this body (e.g. having a minimum water depth of 30 cm; box E4). Based on these, the spatial population of water bodies can be defined (box E6), e.g. all ditches with a minimum water depth of 30 cm alongside fields treated with the pesticide. Finally, in box E7 the percentile combination to be taken from the spatio-temporal population of concentrations is defined. Specification of the exposure assessment goals involves not only scientific information but also political choices, because this specification influences the strictness of the exposure assessment. For instance, in case of exposure via spray drift, a minimum water depth of 30 cm in box E4 leads to an about three times lower peak concentration in the water than a minimum water depth of 10 cm. The schematic approach of Figure 1 can easily be adapted to other exposure assessment goals.
Interaction between exposure and effect assessment for organisms
Nearly all environmental protection goals for pesticides involve assessment of risk for organisms; only those for groundwater and drinking water from surface water are based on a concentration of 0.1 μg/L, which is not related to possible ecotoxicological effects. The risk assessment for organisms is a combination of an exposure assessment and an effect assessment, as illustrated by Figure 2. Both the effect and the exposure assessment are tiered approaches with simple and conservative first tiers and less simple and more realistic higher tiers. A lower exposure tier may consist of a simple conservative scenario, whereas a higher exposure tier may e.g. be based on a scenario selected using sophisticated spatial modelling. The top part of the scheme shows the link to the risk managers, who are responsible for the overall level of protection. This overall level of protection is linked to the so-called Specific Protection Goals, which will be explained in Section 6.5.3 and form the basis for the definition of the effect and exposure assessment goals. So the exposure assessment goals and resulting exposure scenarios should be consistent with the Specific Protection Goals (e.g. algae and fish may require different scenarios). When linking the two assessments, it has to be ensured that the type of concentration delivered by the exposure assessment is consistent with that required by the effect assessment (e.g.
do not use time-weighted average concentrations in acute effect assessment). Figure 2 shows that in the assessment procedure information always flows from the exposure assessment to the effect assessment, because the risk assessment conclusion is based on the effect assessment. A relatively new development is to assess exposure and effects at the landscape level. This typically is a combination of higher-tier effect and exposure assessments. In such an approach, first the dynamics in exposure is assessed for the full landscape, and then combined with the dynamics of effects, for example based on spatially explicit population models for species typical of that landscape. Such an approach makes a separate definition of the exposure and effect scenario redundant, because it aims to deliver the exposure and effect assessment in an integrated way in space and time. Such an integrated approach requires the definition of "environmental scenarios". Environmental scenarios integrate both the parameters needed to define the exposure (exposure scenario) and those needed to calculate direct and indirect effects and recovery (ecological scenario) (see Figure 3). However, it will probably take at least a decade before landscape-level approaches, including agreed-upon environmental scenarios, will be implemented for regulatory use in prospective ERA.
References
Boesten, J.J.T.I. (2017). Conceptual considerations on exposure assessment goals for aquatic pesticide risks at EU level. Pest Management Science 74, 264-274.
Brock, T.C.M., Alix, A., Brown, C.D., et al. (2010). Linking aquatic exposure and effects: risk assessment of pesticides. SETAC Press & CRC Press, Taylor & Francis Group, Boca Raton, FL, 398 pp.
Rico, A., Van den Brink, P.J., Gylstra, R., Focks, A., Brock, T.C.M. (2016). Developing ecological scenarios for the prospective aquatic risk assessment of pesticides. Integrated Environmental Assessment and Management 12, 510-521.
6.3.2. Question 1
Why is a detailed specification of exposure assessment goals needed?
6.3.2. Question 2
Why does specification of the exposure assessment goals include political choices?
6.3.2. Question 3
Why does the risk assessment of organisms consist of two parallel tiered schemes for effects and exposure?
in preparation
6.3.4. Setting safe standards
Authors: Els Smit, Eric Verbruggen
Reviewers: Alexandra Kroll, Inge Werner
Learning objectives
You should be able to:
• explain what a reference level for ecosystem protection is;
• explain the basic concepts underlying the assessment factor approach for deriving PNECs;
• explain why secondary poisoning needs specific consideration when deriving a PNEC using the assessment factor approach.
Key words: PNEC, quality standards, extrapolation, assessment factor
Introduction
The key question in environmental risk assessment is whether environmental exposure to chemicals leads to unacceptable risks for human and ecosystem health. This is assessed by comparing the measured or predicted concentrations in water, soil, sediment, or air with a reference level. Reference levels represent a dose (intake rate) or concentration in water, soil, sediment or air below which unacceptable effects are not expected. The definition of 'no unacceptable effects' may differ between regulatory frameworks, depending on the protection goal.
The focus of this section is the derivation of reference levels for aquatic ecosystems as well as for predators feeding on exposed aquatic species (secondary poisoning), but the derivation of reference values for other environmental compartments follows the same principles.
Terminology and concepts
Various technical terms are in use for reference values, e.g. the Predicted No Effect Concentration (PNEC) for ecosystems or the Acceptable Daily Intake (ADI) for humans (see section on Human toxicology). The term "reference level" is a broad and generic term, which can be used independently of the regulatory context or protection goal. In contrast, the term "quality standard" is associated with some kind of legal status, e.g. inclusion in environmental legislation like the Water Framework Directive (WFD). Other terms exist, such as 'guideline value' or 'screening level', which are used in different countries to indicate triggers for further action. While the scientific basis of these reference values may be similar, their implementation and the consequences of exceedance are not. It is therefore very important to clearly define the context of the derivation and the terminology used when deriving and publishing reference levels.
PNEC
A frequently used reference level for ecosystem protection is the Predicted No Effect Concentration (PNEC). The PNEC is the concentration below which adverse effects on the ecosystem are not expected to occur. PNECs are derived per compartment and apply to the organisms that are directly exposed. In addition, for chemicals that accumulate in prey, PNECs for secondary poisoning of predatory birds and mammals are derived. The PNEC for direct ecotoxicity is usually based on results from single-species laboratory toxicity tests. In some cases, data from field studies or mesocosms may be included. A basic PNEC derivation for the aquatic compartment is based on data from single-species tests with algae, water fleas and fish. Effects at the level of a complex ecosystem are not fully represented by effects on isolated individuals or populations in a laboratory set-up. However, data from laboratory tests can be used to extrapolate to the ecosystem level if it is assumed that protection of ecosystem structure ensures protection of ecosystem functioning, and that effects on ecosystem structure can be predicted from species sensitivity.
Accounting for Extrapolation Uncertainty: Assessment Factor (AF) Approach
To account for the uncertainty in the extrapolation from single-species laboratory tests to effects on real-life ecosystems, the lowest available test result is divided by an assessment factor (AF). In establishing the size of the AF, a number of uncertainties must be addressed to extrapolate from single-species laboratory data to a multi-species ecosystem under field conditions. These uncertainties relate to intra- and inter-laboratory variation in toxicity data, variation within and between species (biological variance), test duration, and differences between the controlled laboratory set-up and the variable field situation. The value of the AF depends on the number of studies, the diversity of species for which data are available, the type and duration of the experiments, and the purpose of the reference level. Different AFs are needed for reference levels for e.g. intermittent release, short-term concentration peaks or long-term (chronic) exposure.
In particular, reference levels for intermittent release and short-term exposure may be derived on the basis of acute studies, but short-term tests are less predictive for long-term exposure, and larger AFs are needed to cover this. Table 1 shows the generic AF scheme that is used to derive PNECs for long-term exposure of freshwater organisms in the context of the European regulatory framework for industrial chemicals (REACH; see section on REACH environment). This scheme is also applied for the authorisation of biocidal products and pharmaceuticals, and for the derivation of long-term water quality standards for freshwater under the EU Water Framework Directive. Further details on the application of this scheme, e.g. how to compare acute and chronic data and how to deal with irregular datasets, are presented in guidance documents (see suggested reading: EC, 2018; ECHA, 2008). Similar schemes exist for marine waters, sediment, and soil. However, for the latter two compartments often too little experimental information is available, and risk limits have to be calculated by extrapolation from aquatic data using the Equilibrium Partitioning concept. The derivation of Regulatory Acceptable Concentrations (RAC) for plant protection products (PPPs) is also based on the extrapolation of laboratory data, but follows a different approach focussing on generating data for specific taxonomic groups, taking account of the mode of action of the PPP (see suggested reading: EFSA, 2013).
Table 1. Basic assessment factor scheme used for the derivation of PNECs for freshwater ecosystems in several European regulatory frameworks. Consult the original guidance documents for full schemes and additional information (see suggested reading: EC, 2018; ECHA, 2008).
• At least one short-term L(E)C50 from each of three trophic levels (fish, invertebrates (preferably Daphnia) and algae): assessment factor 1000
• One long-term EC10 or NOEC (either fish or Daphnia): assessment factor 100
• Two long-term results (e.g. EC10 or NOECs) from species representing two trophic levels (fish and/or Daphnia and/or algae): assessment factor 50
• Long-term results (e.g. EC10 or NOECs) from at least three species (normally fish, Daphnia and algae) representing three trophic levels: assessment factor 10
Application of Species Sensitivity Distribution (SSD) and Other Additional Data
The AF approach was developed to account for the uncertainty arising from extrapolation from (potentially limited) experimental datasets. If enough data are available for species other than algae, daphnids and fish, statistical methods can be applied to derive a PNEC. Within the concept of the species sensitivity distribution (SSD), the distribution of the sensitivities of the tested species is used to estimate the concentration at which 5% of all species in the ecosystem are affected (HC5; see section on SSDs). When used for regulatory purposes in European regulatory frameworks, the dataset should meet certain requirements regarding the number of data points and the representation of taxa, and an AF is applied to the HC5 to cover the remaining uncertainty of the extrapolation from lab to field. Where available, results from semi-field experiments (mesocosms, see section on Community ecotoxicology) can also be used, either on their own or to underpin the PNEC derived from the AF or SSD approach. SSDs and mesocosm studies are also used in the context of the authorisation of PPPs.
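As a minimal numerical sketch of how the Table 1 scheme is applied (simplified, with invented toxicity values; the actual guidance contains additional rules, e.g. for comparing acute and chronic data):

```python
# Minimal sketch of the Table 1 logic (hypothetical values; not an official
# implementation of the REACH/WFD guidance).

def pnec_freshwater(toxicity_values_ug_per_l, assessment_factor):
    """PNEC = lowest relevant toxicity value divided by the assessment factor."""
    return min(toxicity_values_ug_per_l) / assessment_factor

# Three long-term EC10/NOECs covering three trophic levels
# (fish, Daphnia, algae) -> assessment factor 10 according to Table 1.
chronic_data = [120.0, 45.0, 300.0]   # hypothetical EC10/NOECs in ug/L
print(pnec_freshwater(chronic_data, assessment_factor=10))    # 4.5 ug/L

# Only short-term L(E)C50s for the same three trophic levels -> factor 1000.
acute_data = [900.0, 350.0, 1500.0]   # hypothetical L(E)C50s in ug/L
print(pnec_freshwater(acute_data, assessment_factor=1000))    # 0.35 ug/L
```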
Reference levels for secondary poisoning
Substances may be toxic to wildlife because of bioaccumulation in prey or a high intrinsic toxicity to birds and mammals. If this is the case, a reference level for secondary poisoning is derived for a simple food chain: water → fish or mussel → predatory bird or mammal. The toxicity data from bird or mammal tests are transformed into safe concentrations in prey. This can be done by simply recalculating concentrations in laboratory feed into concentrations in fish using default conversion factors (see e.g. ECHA, 2008). For the derivation of water quality standards under the WFD, a more sophisticated method was introduced that uses knowledge on the energy demand of predators and the energy content of their food to convert laboratory data to a field situation. Also, the inclusion of other, more complex and sometimes longer food chains is possible, for which field bioaccumulation factors are used rather than laboratory-derived values.
Suggested additional reading
EC (2018). Common Implementation Strategy for the Water Framework Directive (2000/60/EC). Guidance Document No. 27. Technical Guidance for Deriving Environmental Quality Standards. Updated version 2018. Brussels, Belgium. European Commission. circabc.europa.eu/ui/group/9...7a2a6b/details
ECHA (2008). Guidance on information requirements and chemical safety assessment. Chapter R.10: Characterisation of dose [concentration]-response for environment. Helsinki, Finland. European Chemicals Agency. May 2008. https://echa.europa.eu/documents/10162/13632/information_requirements_r10_en.pdf/bb902be7-a503-4ab7-9036-d866b8ddce69
EFSA (2013). Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters. EFSA Journal 2013; 11(7): 3290. efsa.onlinelibrary.wiley.com...efsa.2013.3290
Traas, T.P., Van Leeuwen, C. (2007). Ecotoxicological effects. In: Van Leeuwen, C., Vermeire, T.C. (Eds.). Risk Assessment of Chemicals: an Introduction, Chapter 7. Springer.
6.3.4. Question 1
What is a PNEC?
6.3.4. Question 2
How is a basic PNEC commonly derived in Europe?
6.3.4. Question 3
Why are assessment factors applied?
6.3.4. Question 4
Which aspects are covered by the assessment factor?
6.3.4. Question 5
Within the EU REACH/WFD regulatory framework, which assessment factor may be applied to derive a PNEC for freshwater if you have one LC50 value for Oncorhynchus mykiss, one EC50 value for Daphnia magna, one EC10 value for Oncorhynchus mykiss, and one NOEC value for Pseudokirchneriella subcapitata?
6.3.5. Species Sensitivity Distributions (SSDs)
Authors: Leo Posthuma, Dick de Zwart
Reviewers: Ad Ragas, Keith Solomon
Learning objectives:
You should be able to:
• explain that differences exist in the reaction of species to exposure to a chemical;
• explain that these differences can be described by a statistical distribution;
• derive a Species Sensitivity Distribution (SSD) from sensitivity data;
• derive a benchmark concentration from an SSD;
• derive a predicted impact from an SSD.
Keywords: Species Sensitivity Distribution (SSD), benchmark concentration, Potentially Affected Fraction of species (PAF)
Introduction
The relationship between dose or concentration (X) and response (Y) is key in the risk assessment of chemicals (see section on Concentration-response relationships).
Such relationships are often determined in laboratory toxicity tests: a selected species is exposed under controlled conditions to a series of increasing concentrations to determine endpoints such as the No Observed Effect Concentration (NOEC), the EC50 (the Effect Concentration causing 50% effect on a studied endpoint such as growth or reproduction), or the LC50 (the Lethal Concentration causing 50% mortality). For ecological risk assessment, multiple species are typically tested to characterise the (variation in) sensitivities across species or taxonomic groups within the ecosystem. In the mid-1980s it was observed that, like many natural phenomena, a set of ecotoxicity endpoint data, representing effect concentrations for various species, follows a bell-shaped statistical distribution. The cumulative distribution of these data is a sigmoid (S-shaped) curve. It was recognized that this distribution had particular utility for assessing, managing and protecting environmental quality regarding chemicals. The bell-shaped distribution was thereupon named a Species Sensitivity Distribution (SSD). Since then, the use of SSD models has grown steadily. Currently, the model is used for various purposes, providing important information for decision-making. Below, the dual utility of SSD models for environmental protection, assessment and management is shown first. Thereupon, the derivation and use of SSD models are elaborated in a stepwise sequence.
The dual utility of SSD models
A species sensitivity distribution (SSD) is a distribution describing the variance in sensitivity of multiple species exposed to a hazardous compound. The statistical distribution is often plotted using a log-scaled concentration axis (X) and a cumulative probability axis (Y, varying from 0 to 1; Figure 1). Figure 1 shows that different species (here the dots represent 3 test data for algal species, 2 for invertebrate species and 2 for fish species) have different sensitivities to the studied chemical. First, the ecotoxicity data are collected and log10-transformed. Second, the data set can be visually inspected by plotting the bell-shaped distribution of the log-transformed data; deviations from the expected bell shape can be visually identified in this step. They may originate from causes such as a low number of data points, or be indicative of a selective mode of action of the toxicant, such as a high sensitivity of insects to insecticides. Third, common statistical software for deriving the two parameters of the log-normal model (the mean and the standard deviation of the ecotoxicity data) can be applied, or the SSD can be described with a dedicated software tool such as ETX (see below), including a formal evaluation of the 'goodness of fit' of the model to the data. With the estimated parameters, the fitted model can be plotted, and this is often done in the intuitively attractive form of the S-shaped cumulative distribution. This curve then serves two purposes. First, the curve can be used to derive a so-called Hazardous Concentration on the X-axis: a benchmark concentration that can be used as a regulatory criterion to protect the environment (Y→X). That is, chemicals with different toxicities have different SSDs, with the more hazardous compounds plotted to the left of the less hazardous compounds. By selecting a protection level on the Y-axis, representing a certain fraction of species affected (e.g. 5%), one derives the compound-specific concentration standards.
Second, one can derive the fraction of tested species probably affected at an ambient concentration (X→Y), which can be measured or modelled. Both uses are popular in contemporary environmental protection, risk assessment, and management.
Step 1: Ecotoxicity data for the derivation of an SSD model
The SSD model for a chemical and an environmental compartment (e.g. surface water, soil or sediment) is derived based on pertinent ecotoxicity data. These are typically extracted from the scientific literature or ecotoxicity databases. Examples of such databases are the U.S. EPA's Ecotox database, the European REACH data sets and the EnviroTox database, which contains quality-evaluated studies. The researcher selects the chemical and the compartment of interest, and subsequently extracts all test data for the appropriate endpoint (e.g. ECx values). The set of test data is tabulated and ranked from most to least sensitive. Multiple data for the same species are assessed for quality and only the best data are used. If there is more than one toxicity value for a species after the selection process, the geometric mean value is commonly derived and used. A species should only be represented once in the SSD. Data are often available for frequently tested species, representing different taxonomic and/or trophic levels. A well-known triplet of frequently tested species is "Algae, Daphnids and Fish", as this triplet is a requested minimum set for various regulations in the realm of chemical safety assessment (see section on Regulatory frameworks). For various compounds, the number of test data can be more than a hundred, whilst for most compounds few data of acceptable quality may be available.
Step 2. The derivation and evaluation of an SSD model
Standard statistical software (a spreadsheet program) or a dedicated software model such as ETX can be used to derive an SSD from the available data. Commonly, the fit of the model to the data set is checked to avoid misinterpretation. Misfit may be shown using common statistical testing (goodness-of-fit tests) or by visual inspection and ecological interpretation of the data points. That is, when a chemical specifically affects one group of species (e.g. insects having a high sensitivity to insecticides), the user may decide to derive an SSD model for specific groups of species. In doing so, the outcome will consist of two or more SSDs for a single compound (e.g. an SSD-Insect and an SSD-Other when the compound is an insecticide, whilst the SSD-Other might be split further if appropriate). These may show a better goodness of fit of the model to the data but, more importantly, they reflect the use of key knowledge of mode of action and biology prior to 'blindly' applying the model fit procedure.
Step 3a. The SSD model used for environmental protection
The oldest use of the SSD model is the derivation of reference levels such as the PNEC (Y→X). That is, given the policy goal to fully protect ecosystems against adverse effects of chemical exposures (see section on Ecosystem services and protection goals), the protective use is as follows. First, the user defines which ecotoxicity data are used. In the context of environmental protection, these have often been NOECs or low-effect levels (ECx, with low x, such as EC10) from chronic tests. This yields an SSD-NOEC or SSD-ECx. Then, the user selects a level of Y, that is, the maximum fraction of species for which the defined ecotoxicity endpoint (NOEC or ECx) may be exceeded, e.g. 0.05 (a fraction of 0.05 equals 5% of the species).
Next, the user derives the Hazardous Concentration for 5% of the species (Y→X). At the HC5, 5% of the species are exposed to concentrations greater than their NOEC but, conversely, 95% of the species are exposed to concentrations less than their NOEC. It is often assumed that the structural and functional integrity of ecosystems is sufficiently protected at the HC5 level if the SSD is based on NOECs. Therefore, many authorities use this level to derive regulatory PNECs (Predicted No Effect Concentrations) or Environmental Quality Standards (EQS). These concepts are used as official reference levels in risk assessment; the former is the preferred term in the context of prospective chemical safety assessments, and the latter is used in retrospective environmental quality assessment. Sometimes an extra assessment factor varying between 1 and 5 is applied to the HC5 to account for remaining uncertainties. Using SSDs for a set of compounds yields a set of HC5 values, which, in fact, represent a relative ranking of the chemicals by their potential to cause harm.
Step 3b. The SSD model used for environmental quality assessment
The SSD model can also be used to explore how much damage is caused by environmental pollution. In this case, a predicted or measured ambient concentration is used to derive a Potentially Affected Fraction of species (PAF). The fraction ranges from 0 to 1 but, in practice, it is often expressed as a percentage (e.g. "24% of the species is likely affected"). In this approach, users often have monitored or modelled exposure data from various water bodies, or soil or sediment samples, so that they can evaluate whether any of the studied samples contains a concentration higher than the regulatory reference level (previous section) and, if so, how many species are affected. Evidently, the user must clearly express what type of damage is quantified, as damage estimates based on an SSD-NOEC or an SSD-EC50 quantify the fractions of species affected beyond the no-effect level and at the 50% effect level, respectively. This use of SSDs for a set of environmental samples yields a set of PAF values, which, in fact, represent a relative ranking of the pollution levels at the different sites in their potential to cause harm.
Practical uses of SSD model outcomes
SSD model outcomes are used in various regulatory and practical contexts.
1. The oldest use of the model, setting regulatory standards, is applied globally. Organizations like the European Union and the OECD, as well as many countries, apply SSD models to set (regulatory) standards. Those standards are then used prospectively, to evaluate whether the planned production, use or release of a (novel) chemical is sufficiently safe. If the predicted concentration exceeds the criterion, this is interpreted as a warning. Depending on the regulatory context, the compound may be regulated, e.g. prohibited from use, or its use limited. The data used to build SSD models for deriving regulatory standards are often chronic test data and no- or low-effect endpoints. The resulting standards have been evaluated in validation studies regarding the question of sufficient protection. Note that some jurisdictions have both protective standards and trigger values for remediation based on SSD modelling.
2. The next use is in environmental quality assessment and management. In this case, the predicted or measured concentration of a chemical in an environmental compartment is often first compared to the reference level.
If the reference values have a regulatory status, this may already trigger management activities, such as a clean-up operation. The SSD may, however, be used to provide more detailed information on the expected magnitude of impact, so that environmental management can prioritize the most-affected sites for earlier remediation. The use of SSDs needs to be tailored to the situation. That is, if the exposure concentrations form an array close to the reference value, the use of SSD-NOECs is a logical step, as this ranks the site pollution levels (via the PAFs) regarding the potentially affected fraction of species experiencing slight exceedances of the no-effect level. If the study area contains highly polluted sites, that approach may show that all measured concentrations are in the upper tail of the SSD-NOEC sigmoid (the horizontal part). In such cases, the SSD-EC50 provides information on across-site differences in expected impacts larger than the 50% effect level.
3. The third use is in Life Cycle Assessment of products. This use is comparative, so that consumers can select the most benign product, whilst producers can identify 'hot spots' of ecotoxicity in their production chains. A product often contains a suite of chemicals, so that the SSD model must be applied to all chemicals, by aggregating PAF-type outcomes over all chemicals. The model USEtox is the UN global consensus model for this application.
Today, these three forms of use of SSD models have an important role in the practice of environmental protection, assessment and management on the global scale, which relates to their intuitive meaning, their ease of use, and the availability of a vast number of ecotoxicity data in the global databases.
6.3.5. Question 1
What is the basic concept underlying SSD models?
6.3.5. Question 2
What is the main assumption underlying SSD models?
6.3.5. Question 3
What is meant by "the dual utility of an SSD model" in environmental protection, assessment and management?
6.3.5. Question 4
Given that the SSD model is a statistical description of ecotoxicological differences in sensitivity between species for a chemical, what is a critical step in the derivation and use of SSD model outputs?
6.3.5. Question 5
Does an SSD describe or explain differences in species sensitivity for a chemical?
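To close this section with a concrete illustration of Steps 2 and 3, the sketch below fits a log-normal SSD to a small set of invented NOECs and derives both an HC5 (Y→X) and a PAF at a given ambient concentration (X→Y). It is a minimal sketch, not the ETX procedure, and it omits the goodness-of-fit checks discussed above.

```python
# Minimal sketch of Steps 2-3 (hypothetical data; not the ETX procedure).
import numpy as np
from scipy import stats

# Hypothetical chronic NOECs (ug/L) for eight species, one value per species.
noecs = np.array([3.2, 5.1, 8.0, 12.0, 20.0, 33.0, 55.0, 140.0])
log_noecs = np.log10(noecs)

# Step 2: fit the log-normal SSD, i.e. a normal distribution on log10 data.
mu, sigma = log_noecs.mean(), log_noecs.std(ddof=1)

# Step 3a (Y -> X): HC5 = concentration at which 5% of species exceed
# their NOEC, i.e. the 5th percentile of the fitted distribution.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

# Step 3b (X -> Y): PAF at a measured ambient concentration of 10 ug/L.
paf = stats.norm.cdf(np.log10(10.0), loc=mu, scale=sigma)

print(f"HC5 = {hc5:.2f} ug/L; PAF at 10 ug/L = {paf:.0%}")
```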
6.4. Diagnostic risk assessment approaches and tools
Author: Michiel Kraak
Reviewers: Ad Ragas and Kees van Gestel
Learning objectives:
You should be able to
• define and distinguish hazard and risk
• distinguish predictive tools (toxicity tests) and diagnostic tools (bioassays)
• list bioassays and risk assessment tools at different levels of biological organization, ranging from laboratory to field approaches
Keywords: hazard assessment, risk assessment, prognosis, diagnosis, effect-based monitoring, bioassays, effect directed analysis, mesocosm, biomonitoring, TRIAD approach, eco-epidemiology.
To determine whether organisms are at risk when exposed to certain concentrations of hazardous compounds in the field, the toxicity of environmental samples can be analysed. To this purpose, several approaches and techniques have been developed, known as diagnostic tools. The tools described in Sections 6.5.1-6.5.8 have in common that they make use of living organisms to assess environmental quality. This is generally achieved by performing bioassays, in which the selected test species are exposed to (concentrates or dilutions of) environmental samples, after which their performance (survival, growth, reproduction, etc.) is measured. The species selected as test organisms for bioassays are generally the same as the ones selected for toxicity tests (see section on Selection of ecotoxicity test organisms). Each level of biological organization has its own battery of test methods. At the lowest level of biological organization, a wide variety of in vitro bioassays is available (see section Effect-based monitoring: in vitro bioassays). These comprise tests based on cell lines, but bacteria and zebrafish embryos are also employed. If the response of a bioassay to an environmental sample exceeds the predefined effect-based trigger value, the response is considered to be indicative of ecological risks. Yet, the compounds causing the observed toxicity are initially unknown. These can subsequently be elucidated with Effect-Directed Analysis (see section Effect-Directed Analysis): the sample causing the effect is subjected to fractionation and the fractions are tested again. This procedure is repeated until the sample is reduced to a few individual compounds, which can then be identified, allowing confirmation of their contribution to the observed toxic effects. At higher levels of biological organization, a wide variety of in vivo tests and test organisms are available, including terrestrial and aquatic plants and animals (see section Effect-based monitoring: in vivo bioassays). Yet, different test species tend to respond very differently to specific toxicants and specific field-collected samples. Hence, the results of a single-species bioassay may not reliably reflect the risk of exposure to a specific environmental sample. To avoid over- and underestimation of environmental risks, it is therefore advisable to employ a battery of in vitro and in vivo bioassays. In a case study on effect-based water quality assessment, we showed the great potential of this approach, resulting in the ranking of sites based on ecological risks rather than on the absence or presence of compounds (see section Effect-based water quality assessment).
At the higher levels of biological organization, effect-based monitoring tools include bioassays performed in mesocosms (see section Community ecotoxicology in practice) and in the field itself, the so-called in situ bioassays (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Cosm studies represent a bridge between the laboratory and the natural world. The strength of mesocosms lies in the combination of ecological realism, the ability to manipulate different environmental parameters, and the opportunity to replicate treatments. In the field, the aim of biomonitoring is the in situ assessment of environmental quality on a regular basis in time, using living organisms (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Organisms are collected from reference sites and exposed in cages or on artificial substrates at the study sites, after which they are recollected and either their condition is analysed (in situ bioassay), or the internal concentrations of specific target compounds are measured, or both (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Finally, two approaches will be introduced that help to bridge policy goals and ecosystem responses to perturbation: the TRIAD approach and eco-epidemiology. The TRIAD approach is a tool for site-specific ecological risk assessment, combining and integrating information on contaminant concentrations, bioassay results and ecological field inventories in a 'Weight of Evidence' approach (see section TRIAD approach). Eco-epidemiology is defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems, and the application of this study to reduce ecological impacts (see section Eco-epidemiology).
6.4. Question 1
Define and distinguish hazard and risk.
6.4. Question 2
Distinguish predictive tools (toxicity tests) and diagnostic tools (bioassays).
6.4. Question 3
What is the essence of diagnostic risk assessment?
6.4. Question 4
List and briefly describe bioassays at different levels of biological organisation.
6.4. Question 5
Name and briefly describe two approaches that help to bridge policy goals and ecosystem responses to perturbation.
6.4.1. Effect-based monitoring: In vitro bioassays
Author: Timo Hamers
Reviewer: Beate Escher
Learning objectives:
You should be able to
• explain why effect-based monitoring is "more comprehensive" than chemical-analytical monitoring
• name several characteristics which make in vitro bioassays suitable for effect-based monitoring purposes
• give examples of the most widely used bioassays
• describe the principles of a reporter gene assay, an enzyme induction assay, and an enzyme inhibition assay
• indicate how results from effect-based monitoring with in vitro bioassays can be interpreted in terms of environmental risk
Key words: effect-based monitoring; cell line; reporter gene assay; toxicity profile; trigger value
Effect-based monitoring
Diagnosis of the chemical status of the environment is traditionally performed by the analytical detection of a limited number of chemical compounds. Environmental quality is then assessed by making a compound-by-compound comparison between the measured concentration of an individual contaminant and its environmental quality standard (EQS).
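A minimal sketch of this compound-by-compound comparison is given below; the compounds, concentrations and EQS values are purely illustrative and not taken from any regulation.

```python
# Minimal sketch (hypothetical values) of the classical compound-by-compound
# assessment: each measured concentration is compared with its EQS.

eqs = {"atrazine": 0.6, "cadmium": 0.08, "benzo[a]pyrene": 0.00017}      # ug/L, illustrative
measured = {"atrazine": 0.2, "cadmium": 0.15, "benzo[a]pyrene": 0.0001}  # ug/L, hypothetical

for compound, conc in measured.items():
    ratio = conc / eqs[compound]
    status = "exceeds EQS" if ratio > 1.0 else "complies"
    print(f"{compound}: concentration/EQS = {ratio:.2f} ({status})")

# Compounds that were not targeted by the chemical analysis are simply absent
# from this assessment; that gap is what effect-based monitoring aims to fill.
```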
Such a compound-by-compound approach, however, cannot cover the full spectrum of contaminants, given the unknown identity of the vast majority of compounds released into the environment. It also ignores the presence of unknown breakdown products formed during degradation processes and the presence of compounds with concentrations below the analytical limit of detection. Furthermore, it overlooks combined effects of contaminants present in the complex environmental mixture. To overcome these shortcomings, effect-based monitoring has been proposed as a comprehensive and cost-effective strategy, complementary to chemical analysis, for the diagnosis of environmental chemical quality. In effect-based monitoring, the toxic potency of the complex mixture is determined as a whole by testing environmental samples in bioassays. Bioassays are defined as "biological test systems that consist of whole organisms or parts of organisms (e.g., tissues, cells, proteins), which show a measurable and potentially biologically relevant response when exposed to natural or xenobiotic compounds, or complex mixtures present in environmental samples" (Hamers et al. 2010). Bioassays making use of whole organisms are further referred to as in vivo bioassays (in vivo means "while living"). In vivo bioassays have relatively high ecological relevance, as they provide information on survival, reproduction, growth, or behaviour of the species tested. In vivo bioassays will be addressed in a separate section.
In vitro bioassays
Bioassays making use of tissues, cells, or proteins are called in vitro bioassays (in vitro means "in glass"), as, in the past, they were typically performed in test tubes or Petri dishes made of glass. Nowadays, in vitro bioassays are more often performed in microtiter plates containing multiple (6, 12, 24, 48, 96, 384, or 1536) test containers (called "wells") per plate. Most in vitro bioassays show a very mechanism-specific response, which is for instance indicative of the inhibition of a specific enzyme or the activation of a specific molecular receptor. In addition to providing mechanism-specific information about the complex mixture present in the environment, in vitro bioassays have several other advantages. Small test volumes, for instance, make in vitro assays suitable for testing small samples. If sampling volumes are not restricted, however, the small volumes of in vitro bioassays allow pre-concentrated samples (i.e. extracts) to be tested. Moreover, in vitro bioassays have short test durations (usually incubation periods range from 15 minutes to 48 hours) and can be performed at relatively high throughput, i.e. multiple samples can be tested per microtiter plate experiment. Microtiter plate experiments require an easy read-out (e.g. luminescence, fluorescence, optical density), which is typically a direct measure of the toxic potency to which the bioassay was exposed. Finally, using cells or proteins for toxicity testing raises fewer ethical objections than the use of intact organisms in in vivo bioassays. Cell-based in vitro bioassays can make use of different types of cells. Cells can be isolated from animal tissue and grown in medium in cell culture flasks. If a flask grows full, cells can be diluted in fresh medium and distributed over several new flasks (i.e. "passaging"). For cells freshly isolated from animal tissue (called primary cells), however, the number of passages is limited, due to the fact that the cells have a limited number of cell doublings.
Thus, the use of primary cells in environmental monitoring is not preferred, as the preparation of cell cultures is time-consuming and requires the use of animals. Moreover, the composition and activity of the cells may change from batch to batch. Instead, environmental monitoring often makes use of cell lines. A cell line is a cell culture derived from a single cell which has been immortalized, allowing the cell to divide indefinitely. Immortalization of cells is obtained either by selecting a (mutated) cancer cell from a donor animal or human being, or by causing a mutation in a healthy cell after isolation, using chemicals or viruses. The advantage of a cell line is that all cells are genetically identical and can be used for an indefinite number of experiments. The drawback of cell lines is that the cells are cancer cells that do not behave like a healthy cell in an intact organism. For instance, cancer cells have lost their differentiated properties and have a short cell cycle due to increased proliferation (see section on In vitro toxicity testing).
Examples
Reporter gene bioassays are a type of in vitro bioassay that is frequently used in effect-based monitoring. Such bioassays make use of genetically modified cell lines or bacteria that contain an incorporated gene construct encoding an easily measurable protein (i.e. the reporter protein). This gene construct is developed in such a way that its expression is triggered by a specific interaction between the toxic compound and a cellular receptor. If the receptor is activated by the toxic compound, transcription and translation of the reporter protein takes place, which can be easily measured as a change in colour, fluorescence, or luminescence. The most well-known reporter gene bioassays are steroid hormone-sensitive bioassays. These bioassays are based on the principle by which steroid hormones act, i.e. activation of a receptor protein followed by translocation of the hormone-receptor complex to the nucleus, where it binds to a hormone-responsive element of the DNA, thereby initiating transcription and translation of steroid hormone-dependent genes. In a hormone-responsive reporter gene bioassay, the reporter gene construct is also under transcriptional control of a hormone-responsive element. Activation of the steroid hormone receptor by an endocrine disrupting compound thus leads to expression of the reporter protein, which can easily be measured. Estrogenic activity, for instance, is typically measured in cell lines in which a plasmid encoding the reporter protein luciferase is stably transfected into the cellular genome (Figure 1). Expression of this enzyme is under transcriptional control of an estrogen-responsive element (ERE). Upon exposure to an environmental sample, estrogenic compounds present in the sample may enter the cell and bind and activate the estrogen receptor (ER). The activated ER forms a dimer with another activated ER and is translocated to the nucleus, where the dimer binds to the ERE, causing transcription and translation of the luciferase reporter gene. After 24 hours, the exposure is terminated and the amount of luciferase enzyme can be easily quantified by lysing the cells and adding the energy source ATP and the substrate luciferin. Luciferin is oxidized by luciferase, a reaction associated with the emission of light (i.e. the same reaction as occurs in fireflies or glow-worms).
The amount of light produced by the cells is quantified in a luminometer and is a direct measure of the estrogenic potency of the complex mixture to which the cells were exposed. Another classic bioassay, for the detection of dioxin-like compounds, is the ethoxyresorufin-O-deethylase (EROD) bioassay (Figure 2). The EROD bioassay is an enzyme induction bioassay that makes use of a hepatic cell line (i.e. derived from liver cells). As described above for the estrogenic compounds, dioxin-like compounds can enter these cells upon exposure to an environmental sample, and bind and activate a receptor protein, in this case the aryl hydrocarbon receptor (AhR) (see section on Receptor interactions). The activated AhR is subsequently translocated to the nucleus, where it forms a dimer with another transcription factor (ARNT) that binds to the dioxin-responsive element (DRE), causing transcription and translation of dioxin-responsive genes. One of these genes encodes CYP1A1, a typical Phase I biotransformation enzyme. Upon lysis of the cells and addition of the substrate ethoxyresorufin, CYP1A1 is capable of deethylating this substrate into the fluorescent reaction product resorufin, which can be measured easily. As such, the amount of fluorescence is a direct measure of the dioxin-like potency to which the cells were exposed. Another classic bioassay is the acetylcholinesterase (AChE) inhibition assay for the detection of organophosphate and carbamate insecticides (Figure 3). By forming a covalent bond with the active site of the AChE enzyme, these compounds are capable of inhibiting the hydrolysis of the neurotransmitter acetylcholine (ACh) (see section on Protein inactivation). The in vitro AChE inhibition assay makes use of the principle that AChE can also hydrolyse an alternative substrate called acetylthiocholine (ATCh) into acetic acid and thiocholine (TCh). AChE inhibition leads to a decreased rate of TCh formation, which can be measured using an indicator called Ellman's reagent. This indicator reacts with the thiol (-SH) group of TCh, resulting in a yellow reaction product that can easily be measured photometrically. In the bioassay, purified AChE (commercially available, for instance from electric eel) is incubated with an environmental sample in the presence of ATCh and Ellman's reagent. A decrease in the rate at which the yellow reaction product is formed is a direct measure of the inhibition of AChE activity. Another bioassay, used to detect mutagenic compounds in environmental samples, is the Ames assay, which has been described in the section on Carcinogenicity and Genotoxicity.
Interpretation of the toxicity profile
In practice, multiple mechanism-specific in vitro bioassays are often combined into a test battery to cover the spectrum of toxicological endpoints in an (eco)system. As such, the battery can be considered a safety net that signals the presence of toxic compounds at low concentrations. However, the question of what combination of in vitro tests provides a sufficient level of coverage for the toxicological endpoints of concern is still open. Still, testing an environmental sample in a battery of mechanism-specific in vitro bioassays yields a toxicity profile of the sample, indicating its toxic potency towards different endpoints. Two main strategies have been described to interpret in vitro toxicity profiles in terms of risk. In the "benchmark strategy", the toxicity profiles are compared to one or more reference profiles (Figure 4).
A reference profile may be defined as the profile that is generally observed in environmental samples from locations with a good chemical and/or ecological quality. The benchmark approach indicates to what extent the observed toxicity profile deviates from a toxicity profile corresponding to the desired environmental quality. It also indicates the endpoints that are most affected by the environmental sample. In the "trigger value strategy", the response of each individual bioassay is compared to a bioassay response level at which chemicals are not expected to cause adverse effects at higher levels of biological organization. This endpoint-specific "safe" bioassay response level is called an effect-based trigger (EBT) value. The method for deriving EBT values is still under development. It can be based on different criteria, such as laboratory toxicity data, field concentrations, or EU environmental quality standards (EQS) of individual compounds, which are translated into bioassay-specific effect levels (see section on Effect-based water quality assessment). In addition to the benchmark and trigger value approaches focusing on environmental risk assessment, effect-based monitoring with in vitro bioassays can also be used for effect-directed analysis (EDA). EDA focuses on samples that cause bioassay responses that cannot be explained by the chemicals that were analyzed in these samples. The goal of EDA is to detect and identify emerging contaminants that are responsible for the unexplained bioassay response and are not chemically analyzed because their presence or identity is unknown. In EDA, in vitro bioassay responses to fractionated samples are used to steer the chemical identification of unknown compounds with toxic properties in the bioassays (see section on Effect-Directed Analysis).
Further reading:
Hamers, T., Leonards, P.E.G., Legler, J., Vethaak, A.D., Schipper, C.A. (2010). Toxicity profiling: an integrated effect-based tool for site-specific sediment quality assessment. Integrated Environmental Assessment and Management 6, 761-773.
6.4.1. Question 1
Name advantages and disadvantages of effect-based and chemical-based monitoring strategies.
6.4.1. Question 2
Name at least three characteristics that make in vitro bioassays suitable for effect-based monitoring.
6.4.1. Question 3
Can the principle of the EROD assay also be used to develop a reporter gene assay? Explain your answer.
6.4.1. Question 4
In the benchmark approach (see text), toxicity profiles from sampling locations are compared to a reference profile. Should the reference profile always correspond to a clean situation? Motivate your answer.
6.4.2. Effect-Directed Analysis
Author: Marja Lamoree
Reviewers: Timo Hamers, Jana Weiss
Learning goals:
You should be able to
• explain the complementary nature of the analytical/chemical and biological/toxicological techniques used in Effect-Directed Analysis
• explain the purpose of Effect-Directed Analysis
• describe the steps in the Effect-Directed Analysis process
• describe when the application of Effect-Directed Analysis is most useful
Keywords: extraction, bioassay testing, fractionation, identification, confirmation
In general, the quality of the environment may be monitored by two complementary approaches: i) quantitative chemical analysis of selected (priority) pollutants and ii) effect-based monitoring using in vitro/in vivo bioassays.
Compared to the more classical chemical-analytical approach that has been used for decades, effect-based monitoring is currently applied in an explorative manner and has not yet matured into a routinely implemented monitoring tool that is anchored in legislation. However, in an international framework, developments to formalize the role of effect-based monitoring and to standardize the use of bioassay testing for environmental quality assessment are underway. A weakness of the chemical approach is that, because of the preselection of target compounds for quantitative analysis, other compounds that are relevant for environmental quality may be missed. In comparison, inclusiveness is one of the advantages of effect-based monitoring: all compounds having a specific effect, and not only a few pre-defined ones, will contribute to the total measured biological activity (see section In vitro bioassays). In turn, the effect-based approach strongly benefits from chemical-analytical support to pinpoint which compounds are responsible for the observed activity and to be able to take measures for environmental protection, e.g. the reduction of the emission or discharge of a specific toxic compound into the environment. In Effect-Directed Analysis (EDA), the strengths of analytical chemical techniques and effect-based testing are combined with the aim of identifying novel compounds that show activity in a biological analysis and that would have gone unnoticed using the chemical and the effect-based approach separately. A schematic representation of EDA is shown in Figure 1 and the various steps are described below in more detail. There is no limitation regarding the sample matrix: EDA has been applied to e.g. water, soil/sediment and biota samples. It is used for in-depth investigations at locations that are suspected to be contaminated, but where the compounds responsible for the observed adverse effects are not known. In addition to environmental quality assessment, EDA is applied in the fields of food safety analysis and drug discovery. In Table 1, examples of EDA studies are given.
1. Extract
The first step is the preparation of an extract of the sample. For soil/sediment samples, a sieving step prior to the actual extraction may be necessary in order to remove large particles and obtain a sample that is well-defined in terms of particle size (e.g. <200 μm). Examples of biota samples are whole-organism homogenates or parts of the organism, such as blood and liver. For the extraction of the samples, analytical techniques such as liquid/liquid or solid-phase extraction are applied to concentrate the compounds of interest and to remove matrix constituents that may interfere with the later steps of the EDA.
2. Biological analysis
The choice of endpoint to include in an EDA study is very important, as it dictates the nature of the toxicity of the compounds that may be identified (see section on Toxicodynamics and Molecular Interaction). For application in EDA, typically in vitro bioassays that are carried out in multiwell (≥96-well) plates are used, because of their low cost, high throughput and ease of use (see section on In vitro bioassays), although sometimes in vivo assays (see section on In vivo bioassays) are applied too.
Table 1. Examples of EDA studies; each entry is formatted as endpoint (type of bioassay; sample matrix): type of compounds identified.
Endpoint | Type of bioassay | Sample matrix | Type of compounds identified
In vitro:
Estrogenicity | Cell-based reporter gene | Sediment | Endogenous hormones
Anti-androgenicity | Cell-based reporter gene | Sediment | Plasticizers, organophosphorus flame retardants, synthetic fragrances
Anti-androgenicity | Cell-based reporter gene | Water | Pharmaceuticals, pesticides, plasticizers, flame retardants, UV filters
Mutagenicity | Bacterial luminescence reporter strain | Water | Benzotriazoles
Thyroid hormone disruption | Radioligand binding | Polar bear plasma | Metabolites of PCBs, nonylphenols
In vivo:
Photosystem II toxicity | Pulse Amplitude Modulation fluorometry | Water | Pesticides
Endocrine disruption | Snail reproduction | Sediment | Phthalates, synthetic fragrances, alkylphenols

3. Fractionation
Fractionation of the extract is achieved by the application of chromatography, resulting in the separation of the - in most cases - multitude of different compounds that are present in an extract of an environmental sample. Chromatographic separation is obtained after the migration of compounds through a sorbent bed. In most cases, the separation principle is based on the distribution of compounds between the liquid mobile phase and the solid stationary phase (liquid chromatography, or LC), but a chromatographic separation using the partitioning between the gas phase and a sorbent bed (gas chromatography, or GC) is also possible. At the end of the separation column, fractions can be collected at specified time intervals; these fractions are simpler in composition than the original extract: a reduction in the number of compounds per fraction is obtained. The collected fractions are tested in the bioassay and the responsive fractions are selected for further chemical analysis and identification (step 4). The time intervals for fraction collection vary between a few minutes in older applications and a few seconds in new applications of EDA, which enables fractionation directly into multiwell plates for high throughput bioassay testing. In cases where fractions are collected during time intervals in the order of minutes, the fractions are still so complex that a second round of fractionation to obtain fractions of reduced complexity is often necessary for the identification of the compounds that are responsible for the observed effect (see Figure 2).

4. Chemical analysis
Chemical analysis for the identification of the compounds that cause the effect in the bioassay is usually done by LC coupled to mass spectrometric (MS) detection. To obtain the high mass accuracy that facilitates compound identification, high resolution mass spectrometry (HR-MS) is generally applied. Fractions obtained after one or two fractionation steps are injected into the LC-MS system. In studies where fractionation into multiwell plates is used (and thus small fractions in the order of microliters are collected), only one round of fractionation is applied. In these cases, identification and fraction collection can be done in parallel, using a splitter after the chromatographic column that directs part of the eluent from the column to the well plate and the other part to the MS (see Figure 3). This is called high throughput EDA (HT-EDA).

5. Identification
The use of HR-MS is necessary to establish the molecular mass with high accuracy (e.g. a monoisotopic mass of 119.0483 Dalton) and thereby derive the molecular formula (e.g. C6H5N3) of the compound.
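As an illustration of this step, the sketch below computes the monoisotopic (exact) mass of a candidate formula and checks it against a measured accurate mass within a mass-error tolerance expressed in parts per million (ppm). The element masses are standard values; the measured mass and the 5 ppm tolerance are invented for illustration, and the function names are hypothetical.

# Minimal sketch: match a measured accurate mass to a candidate molecular formula.
# Monoisotopic element masses (Da) of the most abundant isotopes.
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def monoisotopic_mass(formula: dict) -> float:
    """Sum of monoisotopic element masses, e.g. {'C': 6, 'H': 5, 'N': 3}."""
    return sum(MONO[el] * n for el, n in formula.items())

def mass_error_ppm(measured: float, theoretical: float) -> float:
    return (measured - theoretical) / theoretical * 1e6

candidate = {"C": 6, "H": 5, "N": 3}      # benzotriazole, C6H5N3
measured = 119.0485                        # hypothetical HR-MS reading (Da)
theory = monoisotopic_mass(candidate)      # ~119.0483 Da
error = mass_error_ppm(measured, theory)
print(f"theoretical {theory:.4f} Da, error {error:.1f} ppm, match: {abs(error) < 5}")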
Optimally, HR-MS instrumentation is equipped with an MS-MS mode, in which compound fragmentation is induced by collisions with other molecules, resulting in fragments that are specific for the original compound. Fragmentation spectra obtained using the MS-MS mode of HR-MS instruments help to elucidate the structure of the compounds eluting from the column (see Figure 4 for an example). Other information, such as log Kow, may be calculated using dedicated software packages that use elemental composition and structure as input. To aid the identification process, compound and mass spectral libraries are used, as well as the more novel databases containing toxicity information (e.g. PubChem Bioassay, ToxCast). Mass spectrometry instrumentation vendor software, public/web-based databases and databases compiled in-house enable suspect screening to identify compounds that are known, e.g. because they are applied in consumer products or construction materials. When MS signals cannot be attributed to known compounds or their metabolites/transformation products, the identification approach is called non-target screening, in which additional techniques such as Nuclear Magnetic Resonance (NMR) may aid the identification. The identification process is complicated and often time consuming, and results in a suspect list that needs to be evaluated for further confirmation of the identification.

6. Confirmation
For an unequivocal confirmation of the identity of a tentatively identified compound, it is necessary to obtain a standard of the compound and to investigate whether its analytical chemical behaviour corresponds to that of the tentatively identified compound in the environmental sample. In addition, the biological activity of the standard should be measured and compared with the earlier obtained data. If both the chemical analysis and the bioassay testing results support the identification, confirmation of the compound identity is achieved. In principle, the confirmation step of an EDA study is very straightforward, but in current practice the required standards are mostly not commercially available. Dedicated synthesis is time consuming and costly; the confirmation step is therefore often a bottleneck in EDA studies.

The application of EDA is most suitable for samples collected at specific locations where comprehensive chemical analysis of priority pollutants and other chemicals of relevance has already been conducted, and where ecological quality assessment has revealed that the local conditions are compromised (see other Sections on Diagnostic risk assessment approaches and tools). Especially those samples that show a significant difference between the observed (in vitro) bioassay response and the activity calculated according to the concept of Concentration Addition (see Section on Mixture Toxicity), using the relative potencies and concentrations of the compounds known to be active in that bioassay, need further in-depth investigation. EDA can be implemented at these 'hotspots' of environmental contamination to unravel the identity of compounds that have an effect, but that were not included in the chemical monitoring of the environmental quality. Knowledge of the main drivers of toxicity at a specific location supports the accurate decision making that is necessary for environmental quality protection.

6.4.2. Question 1
Draw a scheme of EDA and name the different steps.

6.4.2. Question 2
What is the aim of the fractionation of an extract?

6.4.2. Question 3
Describe the confirmation step of EDA.
6.4.2. Question 4
Give an example of an EDA study with regard to endpoint, bioassay, matrix and type of compounds identified.

6.4.2. Question 5
Explain why quantitative chemical analysis of known pollutants of e.g. a water sample is complementary to effect-based testing of that sample using e.g. an in vitro bioassay.

6.4.3. Effect-based monitoring: in vivo bioassays
Authors: Michiel Kraak, Carlos Barata
Reviewers: Kees van Gestel, Jörg Römbke

Learning objectives:
You should be able to:
• define in vivo bioassays and to explain how in vivo bioassays are performed.
• give examples of the most commonly used in vivo bioassays per environmental compartment.
• motivate the necessity to incorporate several in vivo bioassays into a bioassay battery.

Key words: risk assessment, diagnosis, effect-based monitoring, in vivo bioassays, environmental compartment, bioassay battery

Introduction
To determine whether organisms are at risk when exposed to hazardous compounds present at contaminated field sites, the toxicity of environmental samples can be analysed. To this purpose, several diagnostic tools have been developed, including a wide variety of in vitro, in vivo and in situ bioassays (see sections on In vitro bioassays and on In situ bioassays). In vivo bioassays make use of whole organisms (in vivo means "in the living"). The species selected as test organisms for in vivo bioassays are generally the same as the ones selected for single species toxicity tests (see sections 4.3.4, 4.3.5, 4.3.6 and 4.3.7 on the Selection of ecotoxicity test organisms). Likewise, the endpoints measured in in vivo bioassays are the same as those in single species ecotoxicity tests (see section on Endpoints). In vivo bioassays therefore have a relatively high ecological relevance, as they provide information on the survival, reproduction, growth, or behaviour of the species tested.

A major difference between toxicity tests and bioassays is the selection of the controls. In laboratory toxicity experiments the controls consist of non-spiked 'clean' test medium (see section on Concentration response relationships). In bioassays the choice of the controls is more complicated, though. Non-treated test medium may be incorporated as a control in bioassays to check for the health and quality of the test organisms, but control media, like standard test water or artificial soil and sediment, may differ in numerous aspects from natural environmental samples. Therefore, the control should preferably be a test medium that has exactly the same physicochemical properties as the contaminated sample, except for the chemical pollutants being present. This ideal situation, however, hardly ever exists. Hence, it is recommended to also incorporate environmental samples from less or non-contaminated reference sites into the bioassay and to compare the response of the organism to samples from contaminated sites with those from reference sites. Alternatively, controls can be selected as the least contaminated environmental samples from a gradient of pollution, or as the dilution required to obtain no effect. As dilution medium, artificial control medium or medium from a reference site can be used.

The most commonly used in vivo bioassays
For the soil compartment, the earthworms Eisenia fetida, E. andrei and Lumbricus rubellus, the enchytraeid Enchytraeus crypticus and the collembolan Folsomia candida are most frequently selected as in vivo bioassay test organisms.
An example of employing earthworms to assess the ecotoxicological effects of Pb-contaminated soils is given in Figure 1. The figure shows the total Pb concentrations in different field soils taken from a soccer field (S), a bullet plot (B), grassland (G1, G3) and forest (F1-F3) sites near a shooting range. The pH of the grassland soils was near neutral (pHCaCl2 = 6.5-6.8), but the pH was rather low (3.2-3.7) for all other field sites. Earthworms exposed to these soils showed a significantly reduced reproductive output (Figure 1) at the most contaminated sites. At the less contaminated sites, earthworm responses were also affected by the difference in soil pH, leading to low juvenile numbers in the acid soil F0 but high numbers in the near-neutral reference R3 and the field soil G3. In fact, earthworm reproduction was highest in the latter soil, even though it did contain an elevated concentration of 355 ± 54 mg Pb/kg dry soil. In soil G1, which contained almost twice as much Pb (656 ± 60 mg Pb/kg dry soil), reproduction was much lower and also reduced compared to the control, suggesting the presence of an additional, unknown stressor (Luo et al., 2014).

For water, predominantly daphnids are employed, mainly Daphnia magna, but sometimes also other daphnid species or other aquatic invertebrates are selected. Bioassays with several primary producers are also available. An example of exposing the cladoceran Chydorus sphaericus to water samples is shown in Figure 2. The bars show the toxicity of the water samples and the diamonds the concentrations of cholinesterase inhibitors, as a proxy for the presence of insecticides. The toxicity of the water samples was higher when the concentrations of insecticides were also higher. Hence, in this case, the observed toxicity is well explained by the measured compounds. Yet, it has to be realized that this is the exception rather than the rule, since a large portion of the toxic effects observed in surface waters can mostly not be attributed to compounds measured by water authorities; moreover, interactions between compounds are not covered by such analytical data (see section on Effect-based water quality assessment).

For sediments, oligochaetes and chironomids are selected as test organisms, but sometimes also rooting macrophytes and benthic diatoms. An example of exposing chironomids (Chironomus riparius) to contaminated sediments is shown in Figure 3. Whole-sediment bioassays with chironomids allow the assessment of sensitive species-specific sublethal endpoints (see section on Chronic toxicity), in this case emergence. Figure 3 shows that more chironomids emerged on the reference sediment than on the contaminated sediment, and that the chironomids on the reference sediment also emerged faster than those on the contaminated sediment. Benthic diatoms are also selected as in vivo bioassay test organisms for sediment. Figure 4 shows the growth of the benthic diatom Nitzschia perminuta after 4 days of exposure to 160 sediment samples. The dotted line represents control growth. The growth of the diatoms ranged from higher than the controls to no growth at all, raising the question which deviation from the control should be considered a significant adverse effect.

In vivo bioassay batteries
Environmental quality assessments are often performed with a single test species, like the four examples given above.
Yet, toxicity is species- and compound-specific, which may result in large margins of uncertainty in environmental quality assessments, consequently leading to over- or underestimation of environmental risks. Obvious examples include the presence of herbicides, which would only induce responses in bioassays with primary producers, and, the other way around, the presence of insecticides, which induce strong effects on insects and to a lesser extent on other animals, but would be completely overlooked in bioassays with primary producers. To reduce these uncertainties and to increase ecological relevance, it is therefore advised to incorporate more test species belonging to different taxa in a bioassay battery (see section on Effect-based water quality assessment).

References
Luo, W., Verweij, R.A., Van Gestel, C.A.M. (2014). Determining the bioavailability and toxicity of lead to earthworms in shooting range soils using a combination of physicochemical and biological assays. Environmental Pollution 185, 1-9.
Pieters, B.J., Bosman-Meijerman, D., Steenbergen, E., Van den Brandhof, E.-J., Van Beelen, P., Van der Grinten, E., Verweij, W., Kraak, M.H.S. (2008). Ecological quality assessment of Dutch surface waters using a new bioassay with the cladoceran Chydorus sphaericus. Proceedings Netherlands Entomological Society Meetings 19, 157-164.

6.4.3. Question 1
Define in vivo bioassays and explain how in vivo bioassays are performed.

6.4.3. Question 2
Give examples of the most commonly used in vivo bioassays per environmental compartment.

6.4.3. Question 3
Motivate the necessity to incorporate several in vivo bioassays into a bioassay battery.

6.4.4. Effect-based water quality assessment
Authors: Milo de Baat, Michiel Kraak
Reviewers: Ad Ragas, Ron van der Oost, Beate Escher

Learning objectives:
You should be able to
• list the advantages and drawbacks of an effect-based monitoring approach in comparison to a compound-based approach for water quality assessment.
• motivate the necessity of employing a bioassay battery in effect-based monitoring approaches.
• explain the expression of bioassay responses in terms of toxic/bioanalytical equivalents of reference compounds.
• translate the outcome of a bioassay battery into a ranking of contaminated sites based on ecotoxicological risk.

Keywords: Effect-based monitoring, water quality assessment, bioassay battery, effect-based trigger values, ecotoxicological risk assessment

Introduction
Traditional chemical water quality assessment is based on the analysis of a list of a varying, but limited, number of priority substances. Nowadays, the use of many of these compounds is restricted or banned, and concentrations of priority substances in surface waters are therefore decreasing. At the same time, industries have switched to a plethora of alternative compounds, which may enter the aquatic environment, seriously impacting water quality. Hence, priority substances lists are outdated, as the selected compounds are frequently absent, while many compounds with higher relevance are not listed as priority substances. Consequently, a large portion of toxic effects observed in surface waters cannot be attributed to compounds measured by water authorities, and toxic risks to freshwater ecosystems are thus caused by mixtures of a myriad of (un)known, unregulated compounds.
Understanding these risks requires a paradigm shift towards new monitoring methods that do not depend solely on chemical analysis of priority substances, but first consider the biological effects of the entire micropollutant mixture. Therefore, there is a need for effect-based monitoring strategies that employ bioassays to identify environmental risk. Responses in bioassays are caused by all bioavailable (un)known compounds and their metabolites, whether or not they are listed as priority substances.

Table 1. Example of the bioassay battery employed by the SIMONI approach of Van der Oost et al. (2017) that can be applied to assess surface water toxicity. Effect-based trigger values (EBT) were previously defined by Escher et al. (2018) (PAH, anti-AR and ER CALUX) and Van der Oost et al. (2017).

Bioassay | Endpoint | Reference compound | EBT | Unit
in situ:
Daphnia in situ | Mortality | n/a | 20 | % mortality
in vivo:
Daphniatox | Mortality | n/a | 0.05 | TU
Algatox | Algal growth inhibition | n/a | 0.05 | TU
Microtox | Luminescence inhibition | n/a | 0.05 | TU
in vitro:
CALUX cytotox | Cytotoxicity | n/a | 0.05 | TU
DR | Dioxin(-like) activity | 2,3,7,8-TCDD | 50 | pg TEQ/L
PAH | PAH activity | benzo(a)pyrene | 6.21 | ng BapEQ/L
PPARγ | Lipid metabolism inhibition | rosiglitazone | 10 | ng RosEQ/L
Nrf2 | Oxidative stress | curcumin | 10 | µg CurEQ/L
PXR | Toxic compound metabolism | nicardipine | 3 | µg NicEQ/L
p53 -S9 | Genotoxicity | n/a | 0.005 | TU
p53 +S9 | Genotoxicity (after metabolism) | n/a | 0.005 | TU
ER | Estrogenic activity | 17ß-estradiol | 0.1 | ng EEQ/L
anti-AR | Antiandrogenic activity | flutamide | 14.4 | µg FluEQ/L
GR | Glucocorticoid activity | dexamethasone | 100 | ng DexEQ/L
in vitro antibiotics:
T | Bacterial growth inhibition (Tetracyclines) | oxytetracycline | 250 | ng OxyEQ/L
Q | Bacterial growth inhibition (Quinolones) | flumequine | 100 | ng FlqEQ/L
B+M | Bacterial growth inhibition (β-lactams and Macrolides) | penicillin G | 50 | ng PenEQ/L
S | Bacterial growth inhibition (Sulfonamides) | sulfamethoxazole | 100 | ng SulEQ/L
A | Bacterial growth inhibition (Aminoglycosides) | neomycin | 500 | ng NeoEQ/L

Bioassay battery
The regular application of effect-based monitoring largely relies on the ease of use, endpoint specificity, cost and scale of the bioassays used, as well as on the ability to interpret the measured responses. To ensure sensitivity to a wide range of potential stressors, while still providing specific endpoint sensitivity, a successful bioassay battery like the example given in Table 1 can include in situ whole-organism assays (see section on Biomonitoring and in situ bioassays), and should include laboratory-based whole-organism in vivo assays (see section on In vivo bioassays) and mechanism-specific in vitro assays (see section on In vitro bioassays). Adverse effects in the whole-organism bioassays point to general toxic pressure and have a high ecological relevance. In vitro or small-scale in vivo assays that respond to specific drivers of adverse effects allow for focused identification and subsequent confirmation of (groups of) toxic compounds with specific modes of action. Bioassay selection can also be based on the Adverse Outcome Pathway (AOP) concept (see section on Adverse Outcome Pathways), which describes relationships between molecular initiating events and adverse outcomes. Combining different types of bioassays, ranging from whole-organism tests to in vitro assays targeting specific modes of action, can thus greatly aid in narrowing down the number of candidate compound(s) that cause environmental risks.
For example, if bioanalytical responses at a higher organisational level are observed (the orange and black pathways in Figure 1), responses in specific molecular pathways (blue, green, grey and red in Figure 1) can help to identify certain (groups of) compounds responsible for the observed effects.

Toxic and bioanalytical equivalent concentrations
The severity of the adverse effect of an environmental sample in a bioassay is expressed as toxic equivalent (TEQ) concentrations for toxicity in in vivo assays, or as bioanalytical equivalent (BEQ) concentrations for responses in in vitro bioassays. The toxic and bioanalytical equivalent concentrations represent the joint toxic potency of all unknown chemicals present in the sample that have the same mode of action as the reference compound (see section on Toxicodynamics and molecular interactions) and act concentration-additively (see section on Mixture toxicity). They are expressed as the concentration of the reference compound that causes an effect equal to that of the entire mixture of compounds present in the environmental sample. Figure 2 depicts a typical dose-response curve for a molecular in vitro assay that is indicative of the presence of compounds with the specific mode of action targeted by this assay. A specific water sample induced an effect of 38% in this assay, equivalent to the effect of approximately 0.02 nM bioanalytical equivalents.
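In practice, this conversion amounts to reading the sample response back through the fitted concentration-response curve of the reference compound. The sketch below illustrates this for a log-logistic (Hill) curve; the curve parameters (maximum effect of 100%, an EC50 of 0.03 nM, slope 1) are invented for illustration and would normally be obtained by fitting reference-compound data.

# Minimal sketch: convert a measured bioassay response into a BEQ concentration
# by inverting the concentration-response curve of the reference compound.

def hill_effect(conc, top=100.0, ec50=0.03, slope=1.0):
    """Effect (%) as a log-logistic (Hill) function of concentration (nM)."""
    return top * conc**slope / (ec50**slope + conc**slope)

def beq_from_effect(effect, top=100.0, ec50=0.03, slope=1.0):
    """Inverse of hill_effect: reference-compound concentration (nM) giving 'effect' %."""
    return ec50 * (effect / (top - effect)) ** (1.0 / slope)

sample_effect = 38.0                   # % effect induced by the water sample
beq = beq_from_effect(sample_effect)   # ~0.018 nM, i.e. approximately 0.02 nM BEQ
print(f"BEQ = {beq:.3f} nM")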
Effect-based trigger values
The identification of ecological risks from bioassay battery responses follows from the comparison of the bioanalytical signals to previously determined thresholds, defined as effect-based trigger values (EBT), that should differentiate between acceptable and poor water quality. Since bioassays potentially respond to the mixture of all compounds present in a sample, effect-based trigger values are expressed as toxic or bioanalytical equivalents of concentrations of model compounds for the respective bioassay (Table 1).

Ranking of contaminated sites based on effect-based risk assessment
Once the toxic potency of a sample in a bioassay is expressed as toxic or bioanalytical equivalent concentrations, this response can be compared to the effect-based trigger value for that assay, thus determining whether or not there is a potential ecological risk from contaminants in the investigated water sample. The ecotoxicity profiles of the surface water samples generated by a bioassay battery allow for the calculation of a cumulative ecological risk and a ranking of the selected locations. In the example given in Figure 3, water samples from six locations were subjected to the SIMONI bioassay battery of Van der Oost et al. (2017), consisting of 17 in situ, in vivo and in vitro bioassays. Per site and per bioassay, the response is compared to the corresponding effect-based trigger value and classified as 'no response' (green), 'response below the effect-based trigger value' (yellow) or 'response above the effect-based trigger value' (orange). Next, the cumulative ecological risk per location is calculated. The resulting integrated ecological risk score allows ranking of the selected sites based on the presence of ecotoxicological risks rather than on the presence of a limited number of target compounds. This in turn permits water authorities to invest money where it matters most: identification of compounds causing adverse effects at locations with indicated ecotoxicological risks. Initially, the compounds causing the observed exceedance of the effect-based trigger value will not be known; however, this can subsequently be elucidated with targeted or non-target chemical analysis, which will only be necessary at locations with indicated ecological risks. A potential follow-up step could be to investigate the drivers of the observed effects by means of effect-directed analysis (see section on Effect-directed analysis). A sketch of this classification and ranking step is given below.
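The sketch below implements such a classification and ranking on invented monitoring results, assuming measured equivalent concentrations per bioassay and a simple cumulative score (0 for no response, 1 for a response below the EBT, 2 for an exceedance). The EBTs follow Table 1, but the site data and the scoring weights are hypothetical; the actual SIMONI risk score of Van der Oost et al. (2017) uses a more elaborate weighting.

# Minimal sketch: classify bioassay responses against EBTs and rank sites.
EBT = {"ER": 0.1, "anti-AR": 14400.0, "PAH": 6.21}   # ng EQ/L (anti-AR: 14.4 µg/L)

sites = {  # hypothetical measured equivalent concentrations (ng EQ/L); 0 = no response
    "site 1": {"ER": 0.02, "anti-AR": 0.0,     "PAH": 1.2},
    "site 2": {"ER": 0.35, "anti-AR": 21000.0, "PAH": 4.0},
    "site 3": {"ER": 0.0,  "anti-AR": 9000.0,  "PAH": 9.5},
}

def classify(value, ebt):
    if value == 0:
        return "green"                       # no response
    return "orange" if value > ebt else "yellow"

scores = {"green": 0, "yellow": 1, "orange": 2}
risk = {}
for site, responses in sites.items():
    classes = {assay: classify(v, EBT[assay]) for assay, v in responses.items()}
    risk[site] = sum(scores[c] for c in classes.values())
    print(site, classes)

ranking = sorted(risk, key=risk.get, reverse=True)   # highest cumulative risk first
print("ranking:", ranking)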
References
Escher, B.I., Aїt-Aїssa, S., Behnisch, P.A., Brack, W., Brion, F., Brouwer, A., et al. (2018). Effect-based trigger values for in vitro and in vivo bioassays performed on surface water extracts supporting the environmental quality standards (EQS) of the European Water Framework Directive. Science of the Total Environment 628-629, 748-765.
Van der Oost, R., Sileno, G., Suarez-Munoz, M., Nguyen, M.T., Besselink, H., Brouwer, A. (2017). SIMONI (Smart Integrated Monitoring) as a novel bioanalytical strategy for water quality assessment: part I - Model design and effect-based trigger values. Environmental Toxicology and Chemistry 36, 2385-2399.

Additional reading
Altenburger, R., Ait-Aissa, S., Antczak, P., Backhaus, T., Barceló, D., Seiler, T.-B., et al. (2015). Future water quality monitoring - Adapting tools to deal with mixtures of pollutants in water resource management. Science of the Total Environment 512-513, 540-551.
Escher, B.I., Leusch, F.D.L. (2012). Bioanalytical Tools in Water Quality Assessment. IWA Publishing, London (UK).
Hamers, T., Legradi, J., Zwart, N., Smedes, F., De Weert, J., Van den Brandhof, E.-J., Van de Meent, D., De Zwart, D. (2018). Time-Integrative Passive sampling combined with TOxicity Profiling (TIPTOP): an effect-based strategy for cost-effective chemical water quality assessment. Environmental Toxicology and Pharmacology 64, 48-59.

6.4.4. Question 1
List 5 advantages of an effect-based approach over a compound-based approach for water quality assessment.

6.4.4. Question 2
Motivate the necessity of employing a bioassay battery in effect-based monitoring approaches.

6.4.4. Question 3
Explain how bioassay responses are expressed in terms of toxicity equivalents of reference compounds (you may wish to draw a figure).

6.4.4. Question 4
Translate the outcome of a bioassay battery into a ranking of contaminated sites based on ecotoxicological risk (you may wish to draw a figure).

6.4.5. Biomonitoring: in situ bioassays and contaminant concentrations in organisms
Author: Michiel Kraak
Reviewers: Ad Ragas, Suzanne Stuijfzand, Lieven Bervoets

Learning objectives:
You should be able to
• name tools specifically designed for ecological risk assessment in the field.
• define biomonitoring and to describe biomonitoring procedures.
• list the characteristics of suitable biomonitoring organisms.
• list the most commonly used biomonitoring organisms per environmental compartment.
• argue the advantages and disadvantages of in situ bioassays.
• argue the advantages and disadvantages of measuring contaminant concentrations in organisms.

Key words: Biomonitoring, test organisms, in situ bioassays, contaminant concentrations in organisms, environmental quality

Introduction
Several approaches and tools are available for diagnostic risk assessment. Tools specially developed for field assessments include the TRIAD approach (see section on TRIAD approach), in situ bioassays and biomonitoring. In ecotoxicology, biomonitoring is defined as the use of living organisms for the in situ assessment of environmental quality. Passive and active biomonitoring are distinguished. For passive biomonitoring, organisms are collected at the site of interest and their condition is assessed or the concentrations of specific target compounds in their tissues are analysed, or both. By comparing individuals from reference and contaminated sites, an indication of the impact on local biota at the site of interest is obtained. For active biomonitoring, organisms are collected from reference sites and exposed in cages or on artificial substrates at the study sites. Ideally, reference organisms are simultaneously exposed at the site of origin to control for potential effects of the experimental set-up on the test organisms. As an alternative to field-collected animals, laboratory-cultured organisms may be employed. After exposure at the study sites for a certain period of time, the organisms are recollected and either their condition is analysed (in situ bioassay) or the concentrations of specific target compounds are measured in the organisms, or both.

The results of biomonitoring studies may be used for management decisions, e.g. when accumulation of contaminants has been demonstrated in the field and especially when the sources of the pollution have been identified. However, the use of biomonitoring studies in environmental management has not been captured in formal protocols or guidelines like those of the Water Framework Directive (WFD) or - to a lesser extent - the TRIAD approach and effect-based quality assessments. Biomonitoring studies are typically applied on a case-by-case basis and their application therefore strongly depends on the expertise and resources available for the assessment. The text below explains and discusses the most important aspects of biomonitoring techniques used in diagnostic risk assessment.

Selection of biomonitoring test organisms
The selection of adequate organisms for biomonitoring partly follows the selection of test organisms for toxicity tests (see section on the Selection of test organisms). Suitable biomonitoring organisms:
• Are sedentary, since sedentary organisms may adapt more easily to the in situ experimental setup than more mobile organisms, for which caging may be an additional stress factor. Moreover, for sedentary organisms the relationship between the accumulated compounds and the environmental quality at the exposure site is straightforward, although this is more relevant to passive than to active biomonitoring.
• Are representative for the community of interest and native to the study sites, since this will ensure that the biomonitoring organisms tolerate the local conditions other than contamination, preventing their performance from being affected by stressors other than poor environmental quality. Obviously, it is also undesirable to introduce exotic species into new environments.
• Are long-living, at least substantially longer than the exposure duration, and preferably large enough to obtain sufficient material for chemical analysis.
• Are easy to handle.
• Respond to a gradient of environmental quality, if the purpose of the biomonitoring study is to analyse the condition of the organisms after recollection (in situ bioassay).
• Accumulate contaminants without being killed, if the purpose of the biomonitoring study is to measure contaminant concentrations in the organisms after recollection.
• Are large enough to obtain sufficient biomass for the analysis of the target compounds above the limits of detection, if the purpose of the biomonitoring study is to measure contaminant concentrations in the organisms after recollection.

Based on the above-listed criteria, in the marine environment mussels belonging to the genus Mytilus are predominantly selected. The genus Mytilus has the additional advantage of a global distribution, although represented by different species. This facilitates the comparison of contaminant concentrations in organisms all around the globe. Lugworms have occasionally also been used for biomonitoring in marine systems. For freshwater, the cladoceran Daphnia magna is most frequently employed, although occasionally other species are selected, including mayflies, snails, worms, amphipods, isopods, caddisflies and fish. Given the positive experience with marine mussels, freshwater bivalves are also employed as biomonitoring organisms. Sometimes primary producers have been used, mainly periphyton. Due to the complexity of the sediment and soil compartments, few attempts have been made to expose organisms in situ, mainly restricted to chironomids on sediment.

In situ exposure devices
An obvious requirement of the in situ exposure devices is that the test organisms do not suffer from (sub)lethal effects of the experimental setup. If the organisms are large enough, cages may be used, as for freshwater and marine mussels. For daphnids, a simple glass jar with a permeable lid suffices. For riverine insects, the device should allow the natural flow of the stream to pass, while preventing the organisms from escaping. In the device shown in Figure 1a, tubes containing caddisfly larvae are connected to floating tubes, maintaining the larvae at a constant depth of 65 cm. In the tubes, the caddisfly larvae are able to settle and build nets on an artificial substrate, a plastic doormat with bristles standing out. An elegant device for the in situ colonization of periphyton was developed by Blanck (1985) (Figure 1b). Sand-blasted glass discs (1.5 cm2 surface) are used as artificial substratum for algal attachment. Substrata are placed vertically in the water, parallel to the current, by means of polyethylene racks, each rack supporting a total of 170 discs. After the colonization period, the periphyton-covered glass discs can be harvested, offering the unique possibility to perform laboratory or field experiments with entire algal and microbial communities, replicated 170 times.

In situ bioassays
After exposure at the study sites for a certain period of time, the organisms are recollected and their condition can be analysed (Figure 2). The endpoint is mostly survival, especially in routine monitoring programs. If the in situ exposure lasts long enough, effects on species-specific sublethal endpoints can also be assessed. For daphnids and snails this is reproduction, and for isopods growth. For aquatic insects (mayflies, caddisflies, damselflies, chironomids), emergence has been assessed as a sensitive, ecologically relevant endpoint (Barmentlo et al., 2018). In situ bioassays come closest to the actual field situation. Organisms are directly exposed at the site of interest and respond to all joint stressors present. Yet, this is also the limitation of the approach.
If organisms do respond, it remains unknown what caused the observed adverse effects. This could be (a combination of) any natural or anthropogenic physical or chemical stress factor. In situ bioassays are therefore best combined with laboratory bioassays (see section on Bioassays) and the analysis of physico-chemical parameters, in line with the TRIAD approach (see section on TRIAD approach). If the adverse effects are also observed in the bioassays under controlled laboratory conditions, then poor water quality is most likely the cause. The water sample may then be subjected to suspect screening, non-target analysis or effect-directed analysis (EDA). If adverse effects are observed in situ but not in the laboratory, then the presence of hazardous compounds is most likely not the cause. Instead, the effects may be attributable to e.g. low pH, low oxygen concentrations, high temperatures etc., which may be verified by physico-chemical analysis in the field.

Online biomonitoring
A specific application of in situ bioassays is formed by the online systems for continuous water quality monitoring. In these systems, behaviour is generally the endpoint (see section on Endpoints). Organisms are exposed in situ in a laboratory setting (on shore or on a boat), in an experimental device receiving a continuous flow of surface water. If the water quality changes, the organisms respond by changing their behaviour. Above a certain threshold an alarm may go off and, for instance, the intake of surface water for drinking water preparation can be temporarily stopped.

Contaminant concentrations in organisms
As an addition or an alternative to analysing the condition of the exposed biomonitoring organisms upon retrieval, contaminant concentrations in the organisms can be analysed. This has several advantages over chemical analysis of environmental samples. Biomonitoring organisms may be exposed for days to weeks at the site of interest, providing time-integrated measurements of contaminant concentrations, in contrast to the chemical analysis of grab samples. This way, biomonitoring organisms actually serve as 'biological passive samplers' (see section on Experimental methods of assessing available concentrations of organic chemicals). Another advantage of measuring contaminant concentrations in organisms is that organisms only take up the bioavailable (fraction of) substances, which provides ecologically very relevant information that remains unknown if chemical analysis is performed on water, sediment, soil or air samples. Yet, elevated concentrations in organisms do not necessarily imply toxic effects, and therefore these measurements are best complemented with determining the condition of the organisms, as described above. Moreover, analysing contaminants in organisms may be more expensive than measurements in environmental samples, due to a more complex sample preparation. Weighing the advantages and disadvantages, the explicit strength of biomonitoring programs is that they provide insight into the spatial and temporal variation in bioavailable contaminant concentrations. In Figure 3 two examples are given. The left panel shows the concentrations of PCBs in zebra mussels at different sampling sites in Flanders, Belgium (Bervoets et al., 2004). The right panel shows the rapid (within 2 weeks) Cd accumulation and depuration in biofilms translocated from a reference to a polluted site and from a polluted to a reference site, respectively (Ivorra et al., 1999).

References
Barmentlo, S.H., Parmentier, E.M., De Snoo, G.R., Vijver, M.G. (2018).
Thiacloprid-induced toxicity influenced by nutrients: evidence from in situ bioassays in experimental ditches. Environmental Toxicology and Chemistry 37, 1907-1915.
Bervoets, L., Voets, J., Chu, S.G., Covaci, A., Schepens, P., Blust, R. (2004). Comparison of accumulation of micropollutants between indigenous and transplanted zebra mussels (Dreissena polymorpha). Environmental Toxicology and Chemistry 23, 1973-1983.
Blanck, H. (1985). A simple, community level, ecotoxicological test system using samples of periphyton. Hydrobiologia 124, 251-261.
Ivorra, N., Hettelaar, J., Tubbing, G.M.J., Kraak, M.H.S., Sabater, S., Admiraal, W. (1999). Translocation of microbenthic algal assemblages used for in situ analysis of metal pollution in rivers. Archives of Environmental Contamination and Toxicology 37, 19-28.
Stuijfzand, S.C., Engels, S., Van Ammelrooy, E., Jonker, M. (1999). Caddisflies (Trichoptera: Hydropsychidae) used for evaluating water quality of large European rivers. Archives of Environmental Contamination and Toxicology 36, 186-192.
Vuori, K.M. (1995). Species- and population-specific responses of translocated hydropsychid larvae (Trichoptera, Hydropsychidae) to runoff from acid sulphate soils in the River Kyronjoki, western Finland. Freshwater Biology 33, 305-318.

6.4.5. Question 1
Define biomonitoring.

6.4.5. Question 2
Explain the difference between passive and active biomonitoring.

6.4.5. Question 3
What can be measured in biomonitoring organisms after (re)collection?

6.4.5. Question 4
List the characteristics of suitable biomonitoring organisms.

6.4.5. Question 5
Name the advantages and disadvantages of in situ bioassays.

6.4.5. Question 6
Name the advantages and disadvantages of measuring contaminant concentrations in organisms.

6.4.6. TRIAD approach for site-specific ecological risk assessment
Author: Michiel Rutgers
Reviewers: Kees van Gestel, Michiel Kraak, Ad Ragas

Learning goals:
You should be able to
• describe the principles of the TRIAD approach
• explain the importance of weight of evidence in risk assessment
• use the results for an assessment by applying the TRIAD approach

Keywords: Triad, site-specific ecological risk assessment, weight of evidence

Like the other diagnostic tools described in the previous sections (see sections on In vivo bioassays, In vitro bioassays, Effect-directed analysis, Effect-based water quality assessment and Biomonitoring), the TRIAD approach is a tool for site-specific ecological risk assessment of contaminated sites (Jensen et al., 2006; Rutgers and Jensen, 2011). Yet, it differs from the previous approaches by combining and integrating different techniques through a 'weight of evidence' approach. To this purpose, the TRIAD combines information on contaminant concentrations (environmental chemistry), the toxicity of the mixture of chemicals present at the site ((eco)toxicology), and observations of ecological effects (ecology) (Figure 1). The mere presence of contaminants is just an indication that ecological effects may occur. Additional data can help to better assess the ecological risks. For instance, information on the actual toxicity of the contaminated site can be obtained from the exposure of test organisms to (extracts of) environmental samples (bioassays), while information on ecological effects can be obtained from an inventory of the community composition at the specific site.
When these disciplines tend to converge to corresponding levels of ecological effects, a weight of evidence is established, making it possible to finalize the assessment and to support a decision for contaminated site management. The TRIAD approach thus combines the information obtained from three lines of evidence (LoE):
1. LoE Chemistry: risk information obtained from the measured contaminant concentrations and information on their fate in the ecosystem and how they can evoke ecotoxicological effects. This can include exposure modelling and bioavailability considerations.
2. LoE Toxicity: risk information obtained from (eco)toxicity experiments exposing test organisms to (extracted) samples of the site. These bioassays can be performed on site or in the laboratory, under controlled conditions.
3. LoE Ecology: risk information obtained from the observation of actual effects in the field. This is deduced from data of ecological field surveys, most often at the community level. This information may include data on the composition of soil communities or other community metrics and on ecosystem functioning.

The three lines of evidence form a weight of evidence when they are converging: when the independent lines of evidence indicate a comparable risk level, there is sufficient evidence for providing advice to decision makers about the ecological risk at a contaminated site. When the risk information obtained from the three lines of evidence does not converge, uncertainty is large. Further investigations are then required to provide unambiguous advice.

Table 1. Basic data for site-specific environmental risk assessment (SS-ERA) sorted per line of evidence (LoE). Data and methods are described in Van der Waarde et al. (2001) and Rutgers et al. (2001). Tests and abbreviations used in the table:
• Toxic Pressure metals (sum TP metals). The toxic pressure of the mixture of metals in the sample, calculated as the potentially affected fraction in a Species Sensitivity Distribution with NOEC values (see Section on SSDs) and a simple rule for mixture toxicity (response addition; Section on Mixture toxicity).
• Microtox. A bioassay with the luminescent bacterium Aliivibrio fischeri, formerly known as Vibrio fischeri. Luminescence is reduced when toxicity is high.
• Lettuce Growth and Lettuce Germination. A bioassay with the growth performance and the germination percentage of lettuce (seeds).
• Bait Lamina. The bait-lamina test consists of vertically inserting 16-hole-bearing plastic strips filled with a plant material preparation into the soil. This gives an indication of the feeding activity of soil animals.
• Nematodes abundance and Nematodes Maturity Index 2-5. The biomass and the Maturity Index (MI) of the nematode community in soil samples provide information about soil health (Van der Waarde et al. 2001).

The results of a site-specific ecological risk assessment (SS-ERA) applying the TRIAD approach are first organized in basic tables for each sample and line of evidence separately. Table 1 shows an example. Such a table also collects supporting data, such as soil pH and organic matter content. Subsequently, these basic data are processed into ecological risk values by applying a risk scale running from zero (no effects) to one (maximum effect). An example of a metric used is the multi-substance Potentially Affected Fraction of species (msPAF) for the mixture of contaminants (see Section on SSDs).
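To make the chemistry line of evidence concrete, the sketch below computes such a mixture toxic pressure, assuming a log-normal SSD per metal (characterized by the mean and standard deviation of the log10-transformed NOECs) and response addition for the mixture. All concentrations and SSD parameters are invented for illustration.

# Minimal sketch: toxic pressure (msPAF) of a metal mixture via SSDs and response addition.
import numpy as np
from scipy.stats import norm

# Hypothetical SSD parameters per metal: (mean, sd) of log10 NOECs (mg/kg soil).
ssd = {"Cd": (0.6, 0.7), "Pb": (2.3, 0.8), "Zn": (2.1, 0.75)}

# Hypothetical measured total concentrations in a soil sample (mg/kg).
conc = {"Cd": 4.0, "Pb": 650.0, "Zn": 180.0}

# PAF per metal: fraction of species with a NOEC below the exposure concentration.
paf = {m: norm.cdf((np.log10(conc[m]) - mu) / sd) for m, (mu, sd) in ssd.items()}

# Response addition: probability that a species is affected by at least one metal.
ms_paf = 1.0 - np.prod([1.0 - p for p in paf.values()])
print(paf, f"msPAF = {ms_paf:.2f}")  # risk value on the 0 (no effect) to 1 (maximum) scale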
These risk values are then collected in a TRIAD table (Table 2), for each endpoint separately, integrated per line of evidence individually, and finally integrated over the three lines of evidence. The level of agreement between the three lines of evidence is also given a score. Weighting values are applied, e.g. equal weights for all ecological endpoints (depending on the number of methods and endpoints), and equal weights for each line of evidence (33%). When differential weights are preferred, for instance when some data are judged as unreliable, or some endpoints are considered more important than others, the respective weight factors and the arguments to apply them must be provided in the same table and accompanying text. A sketch of this integration step is given below.
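The sketch below illustrates the integration, assuming equal weights within and across the lines of evidence, simple averaging of the scaled (0-1) risk values, and the spread between the LoE values (maximum minus minimum) as the measure of agreement against the default threshold of 0.4. The risk values are invented, and the exact aggregation rules of the standardized procedure (ISO 19204) differ in detail.

# Minimal sketch: integrate scaled (0-1) risk values over the three lines of evidence.
import numpy as np

# Hypothetical scaled risk values per endpoint, grouped per line of evidence (LoE).
loe_endpoints = {
    "Chemistry": [0.55, 0.40],        # e.g. msPAF metals, msPAF organics
    "Toxicity":  [0.30, 0.25, 0.35],  # e.g. Microtox, lettuce growth, germination
    "Ecology":   [0.45, 0.50],        # e.g. bait lamina, nematode maturity index
}

loe_risk = {loe: float(np.mean(vals)) for loe, vals in loe_endpoints.items()}
integrated_risk = float(np.mean(list(loe_risk.values())))    # equal LoE weights (33% each)
deviation = max(loe_risk.values()) - min(loe_risk.values())  # agreement between LoE
weight_of_evidence = deviation < 0.4                         # default threshold

print(loe_risk)
print(f"integrated risk {integrated_risk:.2f}, deviation {deviation:.2f}, WoE: {weight_of_evidence}")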
Table 2. Soil Quality TRIAD table demonstrating scaled risk values for two contaminated sites (A, B) and a Reference site (based on real data, for illustration purposes only). Risk values are collected per endpoint, grouped according to the respective Lines of Evidence (LoE), and finally integrated into a TRIAD value for risks. The deviation indicates the level of agreement between LoE (default threshold 0.4). For site B, a Weight of Evidence (WoE) is demonstrated (D<0.4), making decision support feasible. By default, equal weights can be used throughout. Differential weights should be indicated in the table and described in the accompanying text.

References
ISO (2017). ISO 19204: Soil quality -- Procedure for site-specific ecological risk assessment of soil contamination (soil quality TRIAD approach). International Standardization Organization, Geneva. https://www.iso.org/standard/63989.html.
Jensen, J., Mesman, M. (Eds.) (2006). LIBERATION, Ecological risk assessment of contaminated land, decision support for site specific investigations. ISBN 90-6960-138-9, Report 711701047, RIVM, Bilthoven, The Netherlands.
Rutgers, M., Bogte, J.J., Dirven-Van Breemen, E.M., Schouten, A.J. (2001). Locatiespecifieke ecologische risicobeoordeling - praktijkonderzoek met een Triade-benadering. RIVM-rapport 711701026, Bilthoven.
Rutgers, M., Jensen, J. (2011). Site-specific ecological risk assessment. Chapter 15, in: F.A. Swartjes (Ed.), Dealing with Contaminated Sites - from Theory towards Practical Application, Springer, Dordrecht. pp. 693-720.
Van der Waarde, J.J., Derksen, J.G.M., Peekel, A.F., Keidel, H., Bloem, J., Siepel, H. (2001). Risicobeoordeling van bodemverontreiniging met behulp van een triade benadering met chemische analyses, bioassays en biologische veldinventarisaties. Eindrapportage NOBIS 98-1-28, Gouda.

6.4.6. Question 1
A sediment sample was analyzed for Priority Hazardous Substances (PHSs), but none were detected. Yet, this sediment sample caused high mortality in laboratory bioassays with three sediment-inhabiting species. At the site where the sample was taken, biodiversity was very low. Explain these observations.

6.4.6. Question 2
In a sediment sample, the total concentration of Priority Hazardous Substances (PHSs) was shown to be very high. Yet, this sediment sample caused no mortality in laboratory bioassays with three sediment-inhabiting species. Moreover, at the sampling site in the field, species-rich communities were observed. Explain these observations.

6.4.6. Question 3
What is the added value of using bioassays and field observations over chemical analysis when assessing the potential risk of a contaminated site?

6.4.6. Question 4
What is the added value of performing an assessment along three independent Lines of Evidence (LoE)?

6.4.7. Eco-epidemiology
Authors: Leo Posthuma, Dick de Zwart
Reviewers: Allan Burton, Ad Ragas

Learning objectives:
You should be able to:
• explain how effects of chemicals and their mixtures can be demonstrated in monitoring data sets;
• explain that effects can be characterized with various impact metrics;
• formulate whether and how the choice of impact sensitivity metric is relevant for the sensitivity and outcomes of a diagnostic assessment;
• explain how ecological and ecotoxicological analysis methods relate;
• explain how eco-epidemiological analyses are helpful in validating ecotoxicological models utilized in ecotoxicological risk assessment and management.

Keywords: eco-epidemiology, mixture pollution, diagnosis, impact magnitude, probable causes, validation

Introduction
Approaches for environmental protection, assessment and management differ between 'classical' stressors (such as excess nutrients and pH) and chemical pollution. For the 'classical' environmental stress factors, ecologists use monitoring data to develop concepts and methods to prevent and reduce impacts. Although there are some clear-cut examples of chemical pollution impacts [e.g., the decline in vulture populations in South East Asia due to diclofenac (Oaks et al. 2004), and the suite of examples in the book 'Silent Spring' (Carson 1962)], ecotoxicologists have commonly assessed the stress from chemical pollution by evaluating exposures vis-à-vis laboratory toxicity data. Current pollution often consists of complex mixtures of chemicals, with highly variable patterns in space and time. This poses problems when one wants to evaluate whether observed impacts in ecosystems can be attributed to chemicals or their mixtures. Eco-epidemiological methods have been established to discern such pollution stress. These methods provide the diagnostic tools to identify the impact magnitude and the key chemicals that cause impacts in ecosystems. The use of these methods is further relevant for validating the laboratory-based risk assessment approaches developed in ecotoxicology.

The origins of eco-epidemiology
Risk assessments of chemicals provide insights into expected exposures and impacts, commonly for separate chemicals. These are predictive outcomes with a high relevance for decision making on environmental protection and management. The validation of those risk assessments is key to avoiding wrong protection and management decisions, but it is complex. It consists of comparing predicted risk levels to observed effects, which raises the question of how to discern effects of chemical pollution in the field. This question can be answered based on the principles of ecological bio-assessments combined with those of human epidemiology. A bio-assessment is a study of stressors and ecosystem attributes, made to delineate causes of impacts via (often) statistical associations between biotic responses and particular stressors. Epidemiology is defined as the study of the distribution and causation of health and disease conditions in specified populations. Applied epidemiology serves as a scientific basis to help counteract the spreading of human health problems. Dr. John Snow is often referred to as the 'father of epidemiology'. Based on observations on the incidence, locations and timing of the 1854 cholera outbreak in London, he attributed the disease to contaminated water taken from the Broad Street pump well, counteracting the prevailing idea that the disease was caused by transmission via air.
His proposals to control the disease were effective. Likewise, eco-epidemiology - in its ecotoxicological context - has been defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems. In its applied form, it supports the reduction of ecological impacts of chemical pollution. Human-health eco-epidemiology is concerned with environment-mediated disease. The first literature mention of eco-epidemiological analyses on chemical pollution stems from 1984 (Bro-Rasmussen and Løkke 1984). Those authors described eco-epidemiology as a discipline necessary to validate the risk assessment models and approaches of ecotoxicology. In its initial years, progress in eco-epidemiological research was slow due to practical constraints, such as a lack of monitoring data, computational capacity and epidemiological techniques.

Current eco-epidemiology
Current eco-epidemiological studies in ecotoxicology aim to diagnose the impacts of chemical pollution in ecosystems, utilizing a combination of approaches to delineate the role of chemical mixtures in causing ecological impacts in the field. The combination of approaches consists of:
1. Collection of monitoring data on abiotic characteristics and the occurrence and/or abundance of biotic species, for the environmental compartment under study;
2. If needed: data optimization, usually to align abiotic and biotic monitoring data, including the chemicals;
3. Statistical analysis of the data set using eco-epidemiological techniques to delineate impacts and probable causes, according to the approaches followed in 'classical' ecological bio-assessments;
4. Interpretation and use of the outcomes for either validation of ecotoxicological models and approaches, or for control of the impacts sensu Dr. Snow.

Key examples of chemical effects in nature
Although impacts of chemicals in the environment were known before 1962, Rachel Carson's book Silent Spring (see Section on the history of Environmental toxicology) can be seen as an early and comprehensive eco-epidemiological study that synthesized the available information on impacts of chemicals in ecosystems. She considered effects of chemicals a novel force in natural selection when she wrote: "If Darwin were alive today the insect world would delight and astound him with its impressive verification of his theories of survival of the fittest. Under the stress of intensive chemical spraying the weaker members of the insect populations are being weeded out."

Clear examples of chemical impacts on species are still reported. Amongst the best-known examples is a study on vultures. The population of Indian vultures declined more than 95% due to exposure to diclofenac, which was used intensively as a veterinary drug (Oaks et al. 2004). The analysis of chemical impacts in nature has, however, become more complex over time. The diversity of chemicals produced and used has vastly increased, and environmental samples contain thousands of chemicals, often at low concentrations. Hence, contemporary eco-epidemiology is complex. Nonetheless, various studies have demonstrated that contemporary mixture exposures affect species assemblages. Starting from large-scale monitoring data and following the four steps mentioned above, De Zwart et al. (2006) were able to show that effects on fish species assemblages could be attributed to both habitat characteristics and chemical mixtures.
Kapo and Burton Jr (2006) showed the impacts of multiple stressors and chemical mixtures on aquatic species assemblages with similar types of data, but slightly different techniques. Eco-epidemiological studies of the effects of chemicals and their mixtures currently cover different geographies, species groups, stressors and chemicals/mixtures. The potential utility of eco-epidemiological studies was reviewed by Posthuma et al. (2016). The review showed that mixture impacts occur, and that they can be separated from natural variability and multiple-stressor impacts. That means that water managers can develop management plans to counteract stressor impacts. The study outcomes are thereby used to prioritize management towards the sites that are most affected, and towards the chemicals that contribute most to those effects. Based on sophisticated statistical analyses, Berger et al. (2016) suggested that chemicals can induce effects in the environment at concentrations much lower than expected based on laboratory experiments. Schäfer et al. (2016) argued that eco-epidemiological studies that cover both mixtures and other stressors are essential for environmental quality assessment and management. In practice, however, the analysis of the potential impacts of chemical mixtures is often still separate from the analysis of impacts of other stressors.

Steps in eco-epidemiological analysis
Various regulations, such as the EU Water Framework Directive (see section on the Water Framework Directive), require the collection of monitoring data, followed by bio-assessment. Monitoring data sets are therefore increasingly available. The data set is subsequently curated and/or optimized for the analyses. Data curation and management steps imply, amongst others, that taxonomic names of species are harmonized, and that metrics for abiotic and biotic variables represent the conditions for the same place and time as much as possible. Next, the data set is expanded with novel variables, e.g. a metric for the toxic pressure exerted by chemical mixtures. An example of such a metric is the multi-substance Potentially Affected Fraction of species (msPAF). This metric transfers measured or predicted concentrations into the Potentially Affected Fraction of species (PAF), the values of which are then aggregated for the total mixture (De Zwart and Posthuma 2005). This is crucial, as adding each chemical of interest as a separate variable would imply an ever-expanding number of sampling sites required to maintain the statistical power to diagnose impacts and probable causation. The interpretation of the outcomes of the statistical analyses of the data set is the final step. Here, it must be acknowledged that statistical association is not equal to causation, and that care must be taken in explaining the findings as indicative of mixture effects. Depending on the context of the study, this may then trigger a refined assessment, alignment with other methods to collect evidence, or direct use in an environmental management program.

Eco-epidemiological methods
A very basic eco-epidemiological method is quantile regression. Whereas common regression methods explore the magnitude of the change of the mean of the response variable (e.g., biodiversity) in relation to a predictor variable (e.g., pollutant stress), quantile regression looks at the tails of the distribution of the response variable. How this principle operates is illustrated in Figure 1. When a monitoring data set contains one stressor variable at different levels (i.e., a gradient of data), the observations typically take the shape of a common stressor-response relationship (see section on Concentration-effect relationships). If the monitoring sites are affected by an extra stressor, the maximum performance under the first stressor cannot be reached, so that the area under the curve contains the XY-points for this situation. Further addition of stressor variables and levels fills this space under the curve. When the raw data plotted as XY show an 'empty area' lacking XY-points, e.g. in the upper right corner, it is likely that the stressor variable can be identified as a stressor that limits the response variable, for example: chemicals limit biodiversity. The quantile regression calculates an upper percentile (e.g., the 95th percentile) of the Y-values in assigned subgroups of X-values ("bins"). Such a procedure yields a picture such as Figure 1.
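The sketch below implements this binned upper-percentile procedure on invented monitoring data, in which an unmeasured second stressor pushes many sites below the ceiling set by the first stressor; the declining 95th percentile then traces the limiting effect of that first stressor.

# Minimal sketch: binned 95th-percentile ('quantile') analysis of a stressor-response data set.
import numpy as np

rng = np.random.default_rng(1)
n = 500
stress = rng.uniform(0.0, 1.0, n)          # stressor gradient, e.g. mixture toxic pressure (msPAF)
ceiling = 100.0 * (1.0 - stress)           # maximum performance allowed by this stressor
taxa = ceiling * rng.uniform(0.0, 1.0, n)  # other, unmeasured stressors push sites below the ceiling

bins = np.linspace(0.0, 1.0, 11)           # ten equally wide subgroups ('bins') of X-values
idx = np.digitize(stress, bins) - 1
p95 = [np.percentile(taxa[idx == i], 95) for i in range(10) if np.any(idx == i)]
print(np.round(p95, 1))  # the upper percentile declines along the gradient: the stressor limits the response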
Eco-epidemiological methods
A very basic eco-epidemiological method is quantile regression. Whereas common regression methods explore the magnitude of the change of the mean of the response variable (e.g., biodiversity) in relation to a predictor variable (e.g., pollutant stress), quantile regression looks at the tails of the distribution of the response variable. How this principle operates is illustrated in Figure 1. When a monitoring data set contains one stressor variable at different levels (i.e., a gradient of data), the observations typically take the shape of a common stressor-response relationship (see section on Concentration-effect relationships). If the monitoring sites are affected by an extra stressor, the maximum performance under the first stressor cannot be reached, so that the XY-points for this situation fall in the area under the curve. Further addition of stressor variables and levels fills this space under the curve. When the raw data plotted as XY show an 'empty area' lacking XY-points, e.g. in the upper right corner, it is likely that the stressor variable limits the response variable, for example: chemicals limit biodiversity. Quantile regression calculates an upper percentile (e.g., the 95th percentile) of the Y-values in assigned subgroups of X-values ("bins"). Such a procedure yields a picture such as Figure 1; a sketch of the binning procedure is given at the end of this subsection.

More complex methods for the analysis of (bio)monitoring data have been developed and applied. The methods are closely associated with those developed for, and utilized in, applied ecology. Well-known examples are 'species distribution models' (SDMs), which are used to describe the abundance or presence of species as a function of multiple environmental variables. A well-known SDM is the bell-shaped curve relating species abundance to water pH: numbers of individuals of a species are commonly low at low and high pH, and the SDM is characterized as an optimum model for species abundance (Y) versus pH (X). Statistical models can also describe species abundance, presence or biodiversity as a function of multiple stressors, for example via Generalized Linear Models. These have the general shape of:

log(Abundance) = (a·pH + a'·pH²) + (b·OM + b'·OM²) + … + e,

with a, a', b and b' being estimated by fitting the model to the data, whilst pH and OM are the abiotic stressor variables (acidity and Organic Matter, respectively); the quadratic terms are added to allow for optimum- and minimum-shaped relationships. When SSD models (see Section on Species Sensitivity Distribution) are used to predict the multi-substance Potentially Affected Fraction of species, the resulting mixture stress proxy can be analysed together with the other stressor variables. Analyses of monitoring data from the United States and the Netherlands have, for example, shown that the abundance of >60% of the taxa is co-affected by mixtures of chemicals. An example study is provided by Posthuma et al. (2016).
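How the binned quantile-regression idea behind Figure 1 works can be shown with a few lines of Python (numpy only). The data are synthetic, generated so that the stressor imposes a ceiling on the response: the upper percentile per bin traces the edge of the filled area, whereas a regression of the mean would be dragged down by the other, unmeasured stressors.

import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 500)            # stressor gradient (e.g., mixture toxic pressure)
ceiling = 100.0 - 8.0 * x              # the limit the stressor imposes on the response
y = ceiling * rng.uniform(0, 1, 500)   # other (unmeasured) stressors push Y below the ceiling

bin_edges = np.linspace(0, 10, 11)     # ten bins of X-values
bin_index = np.digitize(x, bin_edges)
for b in range(1, len(bin_edges)):
    y_in_bin = y[bin_index == b]
    if y_in_bin.size:
        print(f"bin {b}: 95th percentile of Y = {np.percentile(y_in_bin, 95):5.1f}")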
Prospective mixture impact assessments
In addition to the retrospective analysis of monitoring data in search of chemical impacts, recent studies also provide examples of prospective studies of the effects of mixtures. Different land uses imply different chemical use patterns, summarized as 'signatures'. For example, agricultural land use will yield intermittent emissions of crop-specific plant protection products, aligning with the growing season, whereas populated areas will show continuous emission of household chemicals and discontinuous emissions of chemicals in street run-off associated with heavy rain events. The application of emission, fate and ecotoxicity models showed that aquatic ecosystems are subject to these 'signatures', with associated predicted impact magnitudes (Holmes et al. 2018; Posthuma et al. 2018). Although such prospective assessments have not yet proven ecological impacts, they can assist in avoiding impacts by preventing the emission 'signatures' that are identified as potentially most hazardous.

The use of eco-epidemiological output
Eco-epidemiological analysis outputs serve two purposes, closely related to prospective and retrospective risk assessment of chemical pollution:
1. Validation of ecotoxicological models and approaches;
2. Derivation of control measures, to reduce impacts of diagnosed probable causes of impacts.
If needed, multiple lines of evidence can be combined, such as in the Triad approach (see section on TRIAD) or in approaches that consider more than three lines of evidence (Chapman and Hollert, 2006). The more important a sound diagnosis is, the more the user may want to rely on multiple lines of evidence. First, the validation of ecotoxicological models and approaches is crucial, to avoid that important environmental protection, assessment and management activities rely on approaches that bear little relationship to effects in the field. Eco-epidemiological analyses have, for example, been used to validate the protective benchmarks used in chemical-oriented environmental policies. Second, the outcomes of an eco-epidemiological analysis can be used to control the causes of impacts on ecosystems. Some studies have, for example, identified a statistical association between observed impacts (species expected but absent) and pollution of surface waters with mixtures of metals. Though local experts first doubted this association due to a lack of industrial activities with metals in the area, they later found the association relevant given the presence of old spoil heaps from past mining activities. Metals appeared to leach into the surface waters at low rates, but the leached mixtures appeared to co-vary with the missing species (De Zwart et al. 2006).

References
Berger, E., Haase, P., Oetken, M., Sundermann, A. (2016). Field data reveal low critical chemical concentrations for river benthic invertebrates. Science of The Total Environment 544, 864-873.
Bro-Rasmussen, F., Løkke, H. (1984). Ecoepidemiology - a casuistic discipline describing ecological disturbances and damages in relation to their specific causes; exemplified by chlorinated phenols and chlorophenoxy acids. Regulatory Toxicology and Pharmacology 4, 391-399.
Carson, R. (1962). Silent Spring. Boston, Houghton Mifflin.
Chapman, P.M., Hollert, H. (2006). Should the sediment quality triad become a tetrad, a pentad, or possibly even a hexad? Journal of Soils and Sediments 6, 4-8.
De Zwart, D., Dyer, S.D., Posthuma, L., Hawkins, C.P. (2006). Predictive models attribute effects on fish assemblages to toxicity and habitat alteration. Ecological Applications 16, 1295-1310.
De Zwart, D., Posthuma, L. (2005). Complex mixture toxicity for single and multiple species: Proposed methodologies. Environmental Toxicology and Chemistry 24, 2665-2676.
Holmes, C.M., Brown, C.D., Hamer, M., Jones, R., Maltby, L., Posthuma, L., Silberhorn, E., Teeter, J.S., Warne, M.S.J., Weltje, L. (2018). Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes. Environmental Toxicology and Chemistry 37, 674-689.
Kapo, K.E., Burton Jr, G.A. (2006).
A geographic information systems-based, weights-of-evidence approach for diagnosing aquatic ecosystem impairment. Environmental Toxicology and Chemistry 25, 2237-2249.
Oaks, J.L., Gilbert, M., Virani, M.Z., Watson, R.T., Meteyer, C.U., Rideout, B.A., Shivaprasad, H.L., Ahmed, S., Chaudhry, M.J., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2004). Diclofenac residues as the cause of vulture population decline in Pakistan. Nature 427(6975), 630-633.
Posthuma, L., Brown, C.D., de Zwart, D., Diamond, J., Dyer, S.D., Holmes, C.M., Marshall, S., Burton, G.A. (2018). Prospective mixture risk assessment and management prioritizations for river catchments with diverse land uses. Environmental Toxicology and Chemistry 37, 715-728.
Posthuma, L., De Zwart, D., Keijzers, R., Postma, J. (2016). Water systems analysis with the ecological key factor 'toxicity'. Part 2. Calibration. Toxic pressure and ecological effects on macrofauna in the Netherlands. Amersfoort, the Netherlands, STOWA.
Posthuma, L., Dyer, S.D., de Zwart, D., Kapo, K., Holmes, C.M., Burton Jr, G.A. (2016). Eco-epidemiology of aquatic ecosystems: Separating chemicals from multiple stressors. Science of The Total Environment 573, 1303-1319.
Posthuma, L., Suter, G.W. II, Traas, T.P. (Eds.) (2002). Species Sensitivity Distributions in Ecotoxicology. Boca Raton, FL, USA, Lewis Publishers.
Schäfer, R.B., Kühn, B., Malaj, E., König, A., Gergs, R. (2016). Contribution of organic toxicants to multiple stress in river ecosystems. Freshwater Biology 61, 2116-2128.

6.4.7. Question 1
Which motivations do you know for executing eco-epidemiological analyses?

6.4.7. Question 2
Pollution in the 1950s and 1960s showed clear evidence of major impacts of chemicals in nature, whilst regulatory management actions taken since then have reduced the clear recognition of chemical impacts in nature. Which ecotoxicological model has rejuvenated the development of eco-epidemiological methods?

6.4.7. Question 3
What is a simple approach via which even raw-data plots already show whether a stress factor (such as chemical mixture exposure) is limiting for ecology?
6.5. Regulatory Frameworks
Authors: Charles Bodar and Joop de Knecht
Reviewers: Kees van Gestel
Learning objectives:
You should be able to
• explain how the potential environmental risks of chemicals are legally being controlled in the EU and beyond
• mention the different regulatory bodies involved in the regulation of different categories of chemicals
• explain the purpose of the Classification, Labelling and Packaging (CLP) approach and its difference with the risk assessment of chemicals
Keywords: chemicals, environmental regulations, hazard, risk

Introduction
There is no single, overarching global regulatory framework to manage the risks of all chemicals. Instead, different regulations or directives have been developed for different categories of chemicals. These categories are typically related to the usage of the chemicals. Important categories are industrial chemicals (solvents, plasticizers, etc.), plant protection products, biocides and human and veterinary drugs. Some chemicals may belong to more than one category. Zinc, for example, is used in the building industry, but it also has biocidal applications (antifouling agent) and zinc oxide is used as a veterinary drug. In the European Union, each chemical category is subject to specific regulations or directives providing the legal conditions and requirements to guarantee safe production and use of chemicals. A key element of all legal frameworks is the requirement that sufficient data on a chemical be made available. Valid data on production and identity (e.g. chemical structure), use volumes, emissions, environmental fate properties and the (eco)toxicity of a chemical are the essential building blocks for a sound assessment and management of environmental risks. Rules for the minimum data set that should be provided by the actors involved (e.g. producers or importers) are laid down in the various regulatory frameworks. With these data, both hazard and risk assessments can be carried out according to specified technical guidelines. The outcome of the assessment is then used for risk management, which is focused on minimizing any risk by taking measures, ranging from requests for additional data to restrictions on a particular use or a full-scale ban of a chemical.

REACH
REACH is a regulation of the European Union, adopted to improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals. The REACH regulation entered into force on 1 June 2007 to streamline and improve the former legislative frameworks on new and existing chemical substances; it replaced approximately forty community regulations and directives by one single regulation. REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. REACH applies to a very broad spectrum of chemicals, from industrial chemicals to those used in household applications. It requires that EU manufacturers and importers register their chemical substances if produced or imported in annual amounts of > 1 tonne, unless the substance is exempted from registration under REACH. At quantities of > 10 tonnes, the manufacturers, importers, and downstream users are responsible for showing that their substances do not adversely affect human health or the environment.
The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported. Before testing on vertebrate animals like fish and mammals, the use of alternative methods must be considered. The European Chemicals Agency (ECHA) coordinates and facilitates the REACH program. For production volumes above 10 tonnes per year, industry has to prepare a risk assessment, in REACH terminology a chemical safety assessment (CSA), taking into account all risk management measures envisaged. A CSA should include an exposure assessment, a hazard or dose-response assessment, and a risk characterization showing risk ratios below 1.0, i.e. safe use (see sections on REACH Human and REACH Eco).

Classification, Labelling and Packaging (CLP)
The EU CLP regulation requires manufacturers, importers or downstream users of substances or mixtures to classify, label and package their hazardous chemicals appropriately before placing them on the market. When relevant information (e.g. ecotoxicity data) on a substance or mixture meets the classification criteria in the CLP regulation, the hazards of the substance or mixture are identified by assigning a certain hazard class and category. An important CLP hazard class is 'Hazardous to the aquatic environment', which is divided into categories based on toxicity criteria; Category Acute 1, for example, represents the most acutely toxic chemicals (LC50/EC50 ≤ 1 mg/L). CLP also sets detailed criteria for the labelling elements, such as the well-known pictograms (Figure 1).

Plant protection products regulation
Plant protection products (PPPs) are pesticides that are mainly used to keep crops healthy and prevent them from being damaged by disease and infestation. They include, among others, herbicides, fungicides, insecticides, acaricides, plant growth regulators and repellents (see section on Crop Protection Products). PPPs fall under EU Regulation (EC) No 1107/2009, which determines that PPPs cannot be placed on the market or used without prior authorization. The European Food Safety Authority (EFSA) coordinates the EU regulation on PPPs.

Biocides regulation
The distinction between biocides and PPPs is not always straightforward, but as a general rule of thumb the PPP regulation applies to substances used by farmers for crop protection while the biocides regulation covers all other pesticide applications. Different applications of the same active ingredient, one as a PPP and the other as a biocide, may thus fall under different regulations. Biocides are used to protect humans, animals, materials or articles against harmful organisms like pests or bacteria, by the action of the active substances contained in the biocidal product. Examples of biocides are antifouling agents, preservatives and disinfectants. According to the EU Biocidal Products Regulation (BPR), all biocidal products require an authorization before they can be placed on the market, and the active substances contained in the biocidal product must be previously approved. The European Chemicals Agency (ECHA) coordinates and facilitates the BPR. More or less similar to other legislation, the environmental risk assessment for biocides is mainly performed by comparing compartmental concentrations (PEC) with the concentration below which unacceptable effects on organisms will most likely not occur (PNEC).
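The PEC/PNEC comparison mentioned here returns in nearly every framework discussed in this section. A minimal, generic sketch of that check, with purely hypothetical values:

def risk_characterisation_ratio(pec, pnec):
    # RCR (also called risk quotient): values >= 1 flag a potential risk.
    return pec / pnec

pec_water = 0.8   # ug/L, predicted environmental concentration (illustrative)
pnec_water = 2.0  # ug/L, predicted no-effect concentration (illustrative)
rcr = risk_characterisation_ratio(pec_water, pnec_water)
print(f"RCR = {rcr:.2f} -> {'potential risk' if rcr >= 1 else 'no unacceptable risk expected'}")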
Veterinary and human pharmaceuticals regulation
Since 2006, EU law requires an environmental risk assessment (ERA) for all new applications for a marketing authorization of human and veterinary pharmaceuticals. For both product types, guidance documents have been developed for conducting an ERA in two phases. The first phase estimates the exposure of the environment to the drug substance; if the exposure estimate stays below an action limit, the assessment may be terminated. In the second phase, information about the fate and effects in the environment is obtained and assessed. For conducting an ERA, a base set of data, including ecotoxicity data, is required. For veterinary medicines, the ERA is part of a risk-benefit analysis, in which the positive therapeutic effects are weighed against any environmental risks, whereas for human medicines the environmental concerns are excluded from the risk-benefit analysis. The European Medicines Agency (EMA) is responsible for the scientific evaluation, supervision and safety monitoring of medicines in the EU.

Harmonization of testing
Testing chemicals is an important aspect of risk assessment, e.g. testing for toxicity, for degradation or for a physicochemical property like the Kow (see Chapter 3). The outcome of a test may vary depending on the conditions, e.g. temperature, test medium or light conditions. For this reason there is an incentive to standardize the test conditions and to harmonize the testing procedures between agencies and countries. This also avoids duplication of testing, leading to a more efficient and effective testing system. The Organisation for Economic Co-operation and Development (OECD) assists its member governments in developing and implementing high-quality chemical management policies and instruments. One of the key activities to achieve this goal is the development of harmonized guidelines to test and assess the risks of chemicals, leading to a system of mutual acceptance of chemical safety data among OECD countries. The OECD also developed Principles of Good Laboratory Practice (GLP) to ensure that studies are of sufficient quality and rigor and are verifiable. In addition, the OECD facilitates the development of new tools to obtain more safety information and maintain quality while reducing costs, time and animal testing, such as the OECD QSAR Toolbox.

6.5. Question 1
What are the major categories of chemicals for which regulatory frameworks are in place to control their environmental risks?

6.5. Question 2
Is a chemical producer in the EU allowed to put a new chemical on the market without a registration or authorization?

6.5. Question 3
Is the CLP regulation based on the hazard, exposure and/or risk of chemicals?

6.5. Question 4
The amount of minimum information that is required under REACH for making a proper hazard and risk assessment depends on what? And what do you think is the rationale behind this?

6.5. Question 5
Name three European agencies or authorities that coordinate important regulatory frameworks in the EU.

6.5.1. REACH human
Authors: Theo Vermeire
Reviewers: Tim Bowmer
Learning objective:
You should be able to:
• outline how human risk assessment of chemicals is performed under REACH;
• explain the regulatory function of human risk assessment in REACH.
Keywords: REACH, chemical safety assessment, human, RCR, DNEL, DMEL

Human risk assessment under REACH
The REACH Regulation aims to ensure a high level of protection of human health and the environment, including the promotion of alternative methods for the assessment of hazards of substances, as well as the free circulation of substances on the internal market, while enhancing competitiveness and innovation. Risk assessment under REACH aims to realize such a level of protection for humans that the likelihood of adverse effects occurring is low, taking into account the nature of the potentially exposed population (including sensitive groups) and the severity of the effect(s). Industry therefore has to prepare a risk assessment (in REACH terminology: chemical safety assessment, CSA) for all relevant stages in the life cycle of the chemical, taking into account all risk management measures envisaged, and document this in the chemical safety report (CSR). Risk characterization in the context of a CSA is the estimation of the likelihood that adverse effect levels occur due to actual or predicted exposure to a chemical. The human populations considered, or protection goals, are workers, consumers and humans exposed via the environment. In risk characterization, exposure levels are compared to reference levels to yield "risk characterization ratios" (RCRs) for each protection goal. RCRs are derived for all endpoints (e.g. skin and eye irritation, sensitization, repeated dose toxicity) and time scales. It should be noted that these RCRs have to be derived for all stages in the life cycle of a compound.

Environmental exposure assessment for humans
Humans can be exposed through the environment directly via inhalation of indoor and ambient air, soil ingestion and dermal contact, and indirectly via food products and drinking water (Figure 1). REACH does not consider direct exposure via soil. In the REACH exposure scenario, the assessment of human exposure through the environment can be divided into three steps (a numerical sketch is given at the end of this subsection):
1. Determination of the concentrations in intake media (air, soil, food, drinking water);
2. Determination of the total daily intake of these media;
3. Combining concentrations in the media with total daily intake (and, if necessary, using a factor for bioavailability through the route of uptake concerned).
A fourth step may be the consideration of aggregated exposure, taking into account exposure to the same substance in consumer products and at the workplace. Moreover, there may be similar substances, acting via the same mechanism of action, that may have to be considered in the exposure assessment, for instance, as a worst case, by applying the concept of dose or concentration addition. The section on Environmental realistic scenarios (PECs) - Human explains the concept of exposure scenarios and how concentrations in environmental compartments are derived.
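A minimal sketch of these three steps, combining media concentrations and daily intake rates into a total daily dose. The media, concentrations, intake rates and absorption fractions below are illustrative assumptions, not the actual REACH defaults.

BODY_WEIGHT = 70.0  # kg

# medium: (concentration, daily intake of the medium, absorption fraction)
media = {
    "air":            (2.4e-3, 20.0, 1.0),  # mg/m3 and m3/d
    "drinking_water": (3.2e-3,  2.0, 1.0),  # mg/L and L/d
    "food":           (4.0e-2,  1.5, 1.0),  # mg/kg and kg/d
}

# Step 3: combine concentrations with intakes (and bioavailability),
# normalised to body weight.
total_dose = sum(c * intake * f_abs for c, intake, f_abs in media.values()) / BODY_WEIGHT
print(f"total daily dose: {total_dose * 1000:.1f} ug/kg bw/d")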
Hazard identification and dose-response assessment
The aim of hazard identification is to classify chemicals and to select key data for the dose-response assessment to derive a safe reference level, which in REACH terminology is called the DNEL (Derived No Effect Level) or DMEL (Derived Minimal Effect Level). For human endpoints, a distinction is made between substances considered to have a threshold for toxicity and those without a threshold. For threshold substances, a No-Observed-Adverse-Effect Level (NOAEL) or Lowest-Observed-Adverse-Effect Level (LOAEL) is derived, typically from toxicity studies with laboratory animals such as rats and mice. Alternatively, a Benchmark Dose (BMD) can be derived by fitting a dose-response model to all observations. These toxicity values are then extrapolated to a DNEL using assessment factors to correct for uncertainty and variability. The most frequently used assessment factors are those for interspecies differences and those for intraspecies variability (see section on Setting safe standards). Additionally, factors can be applied to account for remaining uncertainties such as those due to a poor database. For substances considered to exert their effect by a non-threshold mode of action, especially mutagenicity and carcinogenicity, it is generally assumed, as a default, that even at very low levels of exposure residual risks cannot be excluded. That said, recent progress has been made in establishing scientific, 'health-based' thresholds for some genotoxic carcinogens. For non-threshold genotoxic carcinogens it is recommended to derive a DMEL, if the available data allow. A DMEL is a cancer risk value considered to be of very low concern, e.g. a one-in-a-million tumour risk after lifetime exposure to the chemical, derived using a conservative linear dose-response model. There is as yet no EU-wide consensus on acceptable levels of cancer risk.

Risk characterization
Safe use of substances is demonstrated when:
• RCRs are below one, both at local and regional level. For threshold substances, the RCR is the ratio of the estimated exposure (concentration or dose) and the DNEL; for non-threshold substances the DMEL is used.
• The likelihood and severity of an event such as an explosion occurring due to the physicochemical properties of the substance as determined in the hazard assessment is negligible.
A risk characterization needs to be carried out for each exposure scenario (see Section on Environmental realistic scenarios (PECs) - Human) and each human population. The assessment consists of a comparison of the exposure of each human population known to be or likely to be exposed with the appropriate DNELs or DMELs, and an assessment of the likelihood and severity of an event occurring due to the physicochemical properties of the substance.

Example of a deterministic assessment (Vermeire et al., 2001)
Exposure assessment
Based on an emission estimation for the processing of dibutylphthalate (DBP) as a softener in plastics, the concentrations in environmental compartments were estimated. Based on modelling as schematically presented in Figure 1, the total human dose was determined to be 93 µg.kgbw-1.d-1.
PEC-air: 2.4 µg.m-3
PEC-surface water: 2.8 µg.l-1
PEC-grassland soil: 0.15 mg.kg-1
PEC-porewater agricultural soil: 3.2 µg.l-1
PEC-porewater grassland soil: 1.4 µg.l-1
PEC-groundwater: 3.2 µg.l-1
Total Human Dose: 93 µg.kgbw-1.d-1

Effects assessment
The total dose should be compared to a DNEL for humans. DBP is not considered a genotoxic carcinogen but is toxic to reproduction, and the risk assessment is therefore based on endpoints assumed to have a threshold for toxicity. The lowest NOAEL of DBP was observed in a two-generation reproduction test in rats: at the lowest dose level in the diet (52 mg.kgbw-1.d-1 for males and 80 mg.kgbw-1.d-1 for females), a reduced number of live pups per litter and decreased pup weights were seen in the absence of maternal toxicity. The lowest dose level of 52 mg.kgbw-1.d-1 was chosen as the NOAEL.
The DNEL was derived by applying an overall assessment factor of 1000, accounting for interspecies differences, human variability and uncertainties due to a non-chronic exposure period.

Risk characterisation
The deterministic estimate of the RCR is based on the deterministic exposure estimate of 0.093 mg.kgbw-1.d-1 and the deterministic DNEL of 0.052 mg.kgbw-1.d-1. The deterministic RCR is then 1.8, based on the NOAEL. Since this is higher than one, the assessment indicates a concern, requiring a refinement of the assessment or risk management measures.
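Restated as a short calculation (all numbers taken directly from the example above):

noael = 52.0               # mg/kg bw/d, two-generation rat study
assessment_factor = 1000   # interspecies x intraspecies x exposure-duration uncertainty
dnel = noael / assessment_factor      # 0.052 mg/kg bw/d

exposure = 0.093           # mg/kg bw/d, total human dose via the environment
rcr = exposure / dnel
print(f"DNEL = {dnel} mg/kg bw/d, RCR = {rcr:.1f}")  # RCR = 1.8 -> concern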
Additional reading
Van Leeuwen, C.J., Vermeire, T.G. (Eds.) (2007). Risk assessment of chemicals: an introduction. Springer, Dordrecht, The Netherlands, ISBN 978-1-4020-6102-8 (e-book), https://doi.org/10.1007/978-1-4020-6102-8.
Vermeire, T., Jager, T., Janssen, G., Bos, P., Pieters, M. (2001). A probabilistic human health risk assessment for environmental exposure to dibutylphthalate. Human and Ecological Risk Assessment 7, 1663-1679.

6.5.1. Question 1
Uncertainty happens! It is inherent to risk assessment. Where, in your view, are the greatest sources of uncertainty in the process of risk assessment?

6.5.1. Question 2
Are there risks identified in the example for humans indirectly exposed via the environment? Assess to what extent these are potential or realistic risks by asking yourself the following questions:
• Do you have relevant toxicological and exposure data?
• Are these fixed values or not?
• How relevant or adverse are the toxicological effects observed?
• Were appropriate assessment factors used?
What would you recommend as a strategy to reduce the identified risks sufficiently?

6.5.2. REACH environment
Author: Joop de Knecht
Reviewers: Watze de Wolf
Keywords: REACH, European chemicals regulation

Introduction
REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. At quantities of > 10 tonnes, the manufacturers, importers, and downstream users must show that their substances do not adversely affect human health or the environment for the uses and operational conditions registered. The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported. This section explains how risks to the environment are assessed under REACH.

Data requirements
As a minimum requirement, all substances manufactured or imported in quantities of 1 tonne or more need to be tested in acute toxicity tests on Daphnia and algae, while information should also be provided on biodegradability (Table 1). Physical-chemical properties relevant for the environmental fate assessment that have to be provided at this tonnage level are water solubility, vapour pressure and the octanol-water partition coefficient. At 10 tonnes or more, this should be supplemented with an acute toxicity test on fish and an activated sludge respiration inhibition test. At this tonnage level, an adsorption/desorption screening and a hydrolysis test should also be performed. If the chemical safety assessment, performed at 100 tonnes or more in case a substance is classified based on hazard information, indicates the need to further investigate the effects on aquatic organisms, the chronic toxicity to these species should be determined. If the substance has a high potential for bioaccumulation (for instance a log Kow > 3), the bioaccumulation in aquatic species should also be determined. The registrant should also determine the acute toxicity to terrestrial species or, in the absence of these data, consider the equilibrium partitioning method (EPM) to assess the hazard to soil organisms. To further investigate the fate of the substance in surface water, sediment and soil, simulation tests on its degradation should be conducted and, when needed, further information on adsorption/desorption should be provided. At 1000 tonnes or more, chronic tests on terrestrial and sediment-living species should be conducted if further refinement of the safety assessment is needed. Before testing vertebrate animals like fish and mammals, the use of alternative methods and all other options must be considered to comply with the regulations regarding (the reduction of) animal testing.

Table 1. Required ecotoxicological and environmental fate information as defined in REACH.
1-10 t/y:
• Acute aquatic toxicity (invertebrates, algae)
• Ready biodegradability
10-100 t/y:
• Acute aquatic toxicity (fish)
• Activated sludge respiration inhibition
• Hydrolysis as a function of pH
• Adsorption/desorption screening test
100-1000 t/y:
• Chronic aquatic toxicity (invertebrates, fish)
• Bioaccumulation
• Surface water, soil and sediment simulation (degradation) tests
• Acute terrestrial toxicity
• Further information on adsorption/desorption
≥ 1000 t/y:
• Further fate and behaviour in the environment of the substance and/or degradation products
• Chronic terrestrial toxicity
• Sediment toxicity
• Avian toxicity

Safety assessment
For substances that are classified based on hazard information, the registrant should assess the environmental safety of the substance by comparing the predicted environmental concentration (PEC) with the Predicted No Effect Concentration (PNEC), resulting in a Risk Characterisation Ratio (RCR = PEC/PNEC). The use of the substance is considered to be safe when the RCR < 1. Chapter 16 of the ECHA guidance offers methods to estimate the PEC based on the tonnage, use and operational conditions, standardised through a set of use descriptors, particularly the Environmental Release Categories (ERCs). These ERCs are linked to conservative default release factors to be used as a starting point for a first-tier environmental exposure assessment. When substances are emitted via wastewater, the physical-chemical and fate properties of the substance are used to predict its behaviour in the Wastewater Treatment Plant (WWTP). Subsequently, the release of treated wastewater is used to estimate the concentration in fresh and marine surface water. The concentration in sediment is estimated from the PEC in water and an experimental or estimated sediment-water partition coefficient (Kpsed). Soil concentrations are estimated from deposition from air and the application of sludge from a WWTP. The guidance offers default values for all relevant parameters; thus a generic local PEC can be calculated that is considered applicable to all local emissions in Europe, although the default values can be adapted to specific conditions if justified. The local risk for wide-dispersive uses (e.g. from consumers or small, non-industrial companies) is estimated for a default WWTP serving 10,000 inhabitants; a sketch of such a first-tier local PEC estimate is given below.
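A minimal sketch of such a first-tier local estimate, strongly simplified relative to the ECHA guidance. The substance-specific numbers are hypothetical; the wastewater volume per inhabitant (200 L/d) and the dilution factor of 10 are the commonly used defaults, and the regional background concentration is ignored here.

release_to_wwtp = 500.0   # g/d of substance entering the WWTP (illustrative)
wwtp_removal = 0.87       # fraction removed in the WWTP (illustrative)
inhabitants = 10_000      # default WWTP size for wide-dispersive uses
wastewater = 200.0        # L per inhabitant per day (common default)
dilution = 10.0           # default dilution in the receiving surface water

effluent_flow = inhabitants * wastewater                                  # 2,000,000 L/d
c_effluent = release_to_wwtp * (1 - wwtp_removal) * 1e6 / effluent_flow   # ug/L
pec_local = c_effluent / dilution
print(f"local PEC (surface water) = {pec_local:.2f} ug/L")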
In addition, a regional assessment is conducted for a standard area: a region represented by a typical densely populated EU area located in Western Europe (i.e. about 20 million inhabitants, distributed over a 200 x 200 km2 area). For calculating the regional PECs, a multi-media fate-modelling approach is used (e.g. the SimpleBox model; see Section on Multicompartment fate modelling). All releases to each environmental compartment for each use, assumed to constitute a constant and continuous flux, are summed and averaged over the year, and steady-state concentrations in the environmental compartments are calculated. The regional concentrations are used as background concentrations in the calculation of the local concentrations.

The PNEC is calculated using the lowest toxicity value and an assessment factor (AF) related to the amount of information (see section on Setting safe standards or chapter 10 of the REACH guidance). If only the minimum set of aquatic acute toxicity data is available, i.e. LC50s or EC50s for algae, daphnia and fish, a default AF of 1000 is used. When one, two, or three or more long-term tests are available, default AFs of 100, 50 and 10, respectively, are applied to the No Observed Effect Concentrations (NOECs). The idea behind lowering the AF when more data become available is that the amount of uncertainty around the PNEC is reduced. In the absence of ecotoxicological data for soil and/or sediment-dwelling organisms, the PNECsoil and/or PNECsed may be provisionally calculated using the EPM. This method uses the PNECwater for aquatic organisms and the suspended matter/water partition coefficient as inputs. For substances with a log Kow > 5 (or with a corresponding log Kp value), the PEC/PNEC ratio resulting from the EPM is increased by a factor of 10 to take into account possible uptake through the ingestion of sediment. If the PEC/PNEC ratio is greater than 1, a sediment test must be conducted. If one, two or three long-term No Observed Effect Concentrations (NOECs) from sediment invertebrate species representing different living and feeding conditions are available, the PNEC can be derived using default AFs of 100, 50 and 10, respectively. For data-rich chemicals, the PNEC can be derived using Species Sensitivity Distributions (SSDs) or other higher-tier approaches. A sketch of the assessment-factor scheme described above is given below.
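The aquatic assessment-factor scheme as a small function; the toxicity values in the example are hypothetical.

def aquatic_pnec(lowest_tox_value_mg_l, n_longterm_noecs=0):
    # Default REACH assessment factors for the freshwater PNEC:
    # acute base set only -> 1000; 1, 2 or >= 3 long-term NOECs -> 100, 50, 10.
    if n_longterm_noecs == 0:
        af = 1000
    elif n_longterm_noecs == 1:
        af = 100
    elif n_longterm_noecs == 2:
        af = 50
    else:
        af = 10
    return lowest_tox_value_mg_l / af

acute_lec50s = [1.2, 3.4, 0.9]  # mg/L for algae, daphnia and fish (illustrative)
print(aquatic_pnec(min(acute_lec50s)))         # 0.0009 mg/L (AF = 1000)
print(aquatic_pnec(0.05, n_longterm_noecs=3))  # 0.005 mg/L (AF = 10)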
6.5.2. Question 1
Who is responsible within the EU for ensuring that industrial chemicals do not pose a risk to the environment?

6.5.2. Question 2
In which circumstances will an environmental risk be identified?

6.5.2. Question 3
Describe how the ecotoxicological safety levels (PNECs) for the aquatic environment are derived, depending on the ecotoxicological information available.

6.5.4. Environmental Risk Assessment of Pharmaceuticals in Europe
Author: Gerd Maack
Reviewers: Ad Ragas, Julia Fabrega, Rhys Whomsley
Learning objectives:
You should be able to
• explain the philosophy and objective of the environmental risk assessment of pharmaceuticals
• mention the key aspects of the tiered approach of the assessment
• identify the exposure routes for human and veterinary medicinal products and know the respective consequences for the assessment
Keywords: Human pharmaceuticals, veterinary pharmaceuticals, environmental impact, tiered approach

Introduction
Pharmaceuticals are a crucial element of modern medicine and confer significant benefits to society. About 4,000 active pharmaceutical ingredients are being administered worldwide in prescription medicines, over-the-counter medicines, and veterinary medicines. They are designed to be efficacious and stable, as they need to pass different barriers, i.e. the skin, the gastrointestinal tract (GIT), or even the blood-brain barrier, before reaching the target cells. Each target system has a different pH and different lipophilicity, and the GIT is in addition colonised with specific bacteria, specialized in digesting, dissolving and disintegrating organic molecules. As a consequence of this stability, most pharmaceutical ingredients are stable in the environment as well and could cause effects in non-target organisms. The active ingredients comprise a variety of synthetic chemicals produced by pharmaceutical companies in both the industrialized and the developing world, at a rate of 100,000 tons per year. While pharmaceuticals are stringently regulated in terms of efficacy and safety for patients, as well as for target animal, user and consumer safety, the potential effects on non-target organisms and the environment are regulated comparatively weakly.

The authorisation procedure requires an environmental risk assessment (ERA) to be submitted by the applicants for each new human and veterinary medicinal product. The assessment encompasses the fate and behaviour of the active ingredient in the environment and its ecotoxicity, based on a catalogue of standardised test guidelines. For veterinary pharmaceuticals, constraints to reduce risk and thus ensure safe usage can be stipulated in most cases. For human pharmaceuticals, it is far more difficult to ensure risk reduction through restriction of a drug's use, for practical and ethical reasons: because of their unique benefits, a restriction is not reasonable. This is reflected in the legal framework, as a potential effect on the environment is not included in the final benefit-risk assessment for a marketing authorisation.

Human pharmaceuticals
Human pharmaceuticals enter the environment mainly via surface waters through sewage systems and sewage treatment plants. The main exposure pathways are excretion and non-appropriate disposal. Typically, only a fraction of the medicinal product taken is metabolised by the patient, meaning that the main share of the active ingredient is excreted unchanged into the wastewater system. Furthermore, the metabolites themselves are sometimes pharmacologically active. Yet no wastewater treatment plant is able to degrade all active ingredients, so medicinal products are commonly found in surface water, to some extent in groundwater, and sometimes even in drinking water. However, the concentrations in drinking water are orders of magnitude lower than therapeutic concentrations. An additional exposure pathway for human pharmaceuticals is the spreading of sewage sludge on soil, if the sludge is used as fertilizer on farmland. See for more details the link "The Drugs We Wash Away: Pharmaceuticals, Drinking Water and the Environment".

Veterinary pharmaceuticals
Veterinary pharmaceuticals, on the other hand, enter the environment mainly via soil, either indirectly, when slurry and manure from livestock production are spread onto agricultural land as fertiliser, or directly from pasture animals. Moreover, pasture animals might additionally excrete directly into surface water. Pharmaceuticals can also enter the environment via the detour of manure used in biogas plants.

Assessment schemes
Despite the differences mentioned above, the general scheme of the environmental risk assessment of human and veterinary pharmaceuticals is similar. Both assessments start with an exposure assessment. Only if specific trigger values are reached is an in-depth assessment of the fate, behaviour and effects of the active ingredient necessary.
Environmental risk assessment of human pharmaceuticals
In Europe, an ERA for human pharmaceuticals has to be conducted according to the Guideline on Environmental Risk Assessment of Medicinal Products for Human Use (EMA 2006). This ERA consists of two phases. Phase I is a pre-screening that estimates the exposure in surface water; if this Predicted Environmental Concentration (PEC) does not reach the action limit of 0.01 µg/L, the ERA can, in most cases, stop. If this action limit is reached or exceeded, a base set of aquatic toxicity and fate and behaviour data needs to be supplied in Phase II Tier A. A risk assessment, comparing the PEC with the Predicted No Effect Concentration (PNEC), then needs to be conducted. If a risk is still identified for a specific compartment in this step, a substance- and compartment-specific refinement and risk assessment needs to be conducted in Phase II Tier B (Figure 2).

Phase I: Estimation of Exposure
In Phase I, the PEC calculation is restricted to the aquatic compartment. The estimation should be based on the drug substance only, irrespective of its route of administration, pharmaceutical form, metabolism and excretion. The initial calculation of the PEC in surface water assumes:
• The predicted amount used per capita per year is evenly distributed over the year and throughout the geographic area (DOSEai);
• A fraction of the overall market penetration (Fpen), in other words: how many people will take the medicinal product? Normally a default value of 1% is used;
• The sewage system is the drug's main route of entry into surface water.
The following formula is used to estimate the PEC in surface water (in mg/L):

PECsurfacewater = (DOSEai × Fpen) / (WASTEinhab × DILUTION)

where:
DOSEai = maximum daily dose consumed per capita [mg.inh-1.d-1]
Fpen = fraction of market penetration (= 1% by default)
WASTEinhab = amount of wastewater per inhabitant per day (= 200 L by default)
DILUTION = dilution factor (= 10 by default)

Three factors in this formula, i.e. Fpen, WASTEinhab and the dilution factor, are default values, meaning that the Phase I PECsurfacewater depends entirely on the dose of the active ingredient. The Fpen can be refined by providing reasonably justified market penetration data, e.g. based on published epidemiological data. If the PECsurfacewater value is equal to or above 0.01 μg/L (mean dose ≥ 2 mg.inh-1.d-1), a Phase II environmental fate and effect analysis should be performed; otherwise, the ERA can stop. However, in some cases the action limit may not be applicable. For instance, medicinal substances with a log Kow > 4.5 are potential PBT candidates and should be screened for persistence (P), bioaccumulation potential (B), and toxicity (T) independently of the PEC value. Furthermore, some substances may affect vertebrates or lower animals at concentrations below 0.01 μg/L. These substances should always enter Phase II, and a tailored risk assessment strategy should be followed which addresses the specific mechanism of action of the substance. This is often true for, e.g., hormonally active substances (see section on Endocrine disruption). The required tests in a Phase II assessment (see below) need to cover the most sensitive life stage, and the most sensitive endpoint needs to be assessed. This means, for instance, that for substances affecting reproduction, the organism needs to be exposed to the substance during gonad development and the reproductive output needs to be assessed.
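The Phase I calculation above can be condensed into a few lines of Python; with the default values, the outcome depends only on the maximum daily dose.

def pec_surfacewater_ug_l(dose_ai_mg, f_pen=0.01, waste_l=200.0, dilution=10.0):
    # PEC in mg/L = DOSEai * Fpen / (WASTEinhab * DILUTION); converted to ug/L.
    return dose_ai_mg * f_pen / (waste_l * dilution) * 1000.0

dose = 2.0  # mg per capita per day; exactly the dose at which the action limit is reached
pec = pec_surfacewater_ug_l(dose)
print(f"PEC = {pec:.3f} ug/L -> {'enter Phase II' if pec >= 0.01 else 'ERA can stop'}")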
Phase II: Environmental Fate and Effects Analysis
A Phase II assessment is conducted in Tier A by evaluating the PEC/PNEC ratio, based on a base set of data and the predicted environmental concentration. If a potential environmental impact is indicated, further testing might be needed to refine the PEC and PNEC values in Tier B. Under certain circumstances, an analysis of effects on sediment-dwelling organisms and of terrestrial environmental fate and effects is also required. Experimental studies should follow standard test protocols, e.g. OECD guidelines. It is not acceptable to use QSAR estimation, modelling or extrapolation from, e.g., a substance with a similar mode of action and molecular structure (read-across). This is in clear contrast to other regulations such as REACH.

Human pharmaceuticals are used all year round, without major fluctuations and peaks. The only exception is substances used against colds and influenza, which show a clear peak in consumption in autumn and winter. In developed countries in Europe and North America, antibiotics display a similar peak, as they are prescribed to support the substances used against viral infections. The guideline reflects this exposure scenario and asks explicitly for long-term effect tests for all three trophic levels: algae, aquatic invertebrates and vertebrates (i.e., fish). In order to assess the physico-chemical fate, the sorption behaviour and the fate in a water/sediment system should, amongst other tests, be determined.

If, after refinement, the possibility of environmental risks cannot be excluded, precautionary and safety measures may consist of:
• An indication of potential risks presented by the medicinal product for the environment.
• Product labelling, the Summary of Product Characteristics (SPC) and the Package Leaflet (PL) for patient use, covering product storage and disposal. Labelling should generally aim at minimising the quantity discharged into the environment by appropriate mitigation measures.

Environmental risk assessment of veterinary pharmaceuticals
In the EU, an Environmental Risk Assessment (ERA) is conducted for all veterinary medicinal products. The structure of an ERA for Veterinary Medicinal Products (VMPs) is quite similar to that for Human Medicinal Products. It is also tier-based and starts with an exposure assessment in Phase I. Here, the potential for environmental exposure is assessed based on the intended use of the product. It is assumed that products with limited environmental exposure will have negligible environmental effects, and their assessment can thus stop in Phase I. Some VMPs that might otherwise stop in Phase I as a result of their low environmental exposure may require additional hazard information to address particular concerns associated with their intrinsic properties and use. This approach is comparable to the assessment of human pharmaceutical products (see above).

Phase I: Estimation of Environmental Exposure
For the exposure assessment, a decision tree was developed (Figure 3). The decision tree consists of a number of questions, and the answers to the individual questions determine the extent of environmental exposure of the product. The goal is to determine whether the environmental exposure is sufficiently significant to require data on hazard properties for characterizing a risk. Products with a low environmental exposure are considered not to pose a risk to the environment, and hence these products do not need further assessment.
However, if the outcome of the Phase I assessment is that the use of the product leads to significant environmental exposure, then additional environmental fate and effect data are required. Examples of products with a low environmental exposure are, among others, products for companion animals only, and products that result in a Predicted Environmental Concentration in soil (PECsoil) of less than 100 µg/kg, based on a worst-case estimation.

Phase II: Environmental Fate and Effects Analysis
A Phase II assessment is necessary if either the trigger of 100 µg/kg in the terrestrial branch or the trigger of 1 µg/L in the aquatic branch is reached. It is also necessary if the substance is a parasiticide for food-producing animals. A Phase II is also required for substances that would in principle stop in Phase I, but for which there are indications that an environmental risk at very low concentrations is likely due to their hazardous profile (e.g., endocrine-active medicinal products); this is comparable to the assessment for human pharmaceutical products.

For veterinary medicinal products, the Phase II assessment is also subdivided into several tiers (Figure 4). For Tier A, a base set of studies assessing the physical-chemical properties, the environmental fate, and the effects of the active ingredient is necessary. In Tier A, acute effect tests are suggested, assuming a more peak-like exposure scenario due to, e.g., the application of manure and dung on fields and meadows, in contrast to the permanent exposure to human pharmaceuticals. If a risk is identified for a specific trophic level, e.g. dung fauna or algae (PEC/PNEC ≥ 1; see Introduction to Chapter 6), long-term tests for this level have to be conducted in Tier B. For trophic levels without an identified risk, the assessment can stop. If the risk still applies after these long-term studies, a further refinement with field studies can be conducted in Tier C. Here, co-operation with a competent authority is strongly recommended, as these tests are tailored to the specific case, reflected in the individual design of the field studies. In addition, and independent of this, risk mitigation measures can be imposed to reduce the exposure concentration (PEC). These can include, among others, the requirement that animals remain stabled for a certain amount of time after treatment, to ensure that the concentration of active ingredient in excreta is low enough to avoid adverse effects on dung fauna and their predators, or that treated animals are denied access to surface water if the active ingredient has harmful effects on aquatic organisms. A sketch of the Phase II trigger logic is given below.
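The Phase II triggers just described can be summarized in a small decision function, simplified from the guideline's decision scheme; the flags and trigger values follow the text above.

def phase_ii_required(pec_soil_ug_kg=0.0, pec_water_ug_l=0.0,
                      parasiticide_for_food_animals=False, hazard_concern=False):
    return (pec_soil_ug_kg >= 100.0           # terrestrial branch trigger
            or pec_water_ug_l >= 1.0          # aquatic branch trigger
            or parasiticide_for_food_animals  # parasiticides always go to Phase II
            or hazard_concern)                # e.g. endocrine-active products

print(phase_ii_required(pec_soil_ug_kg=120.0))                       # True
print(phase_ii_required(pec_soil_ug_kg=40.0))                        # False
print(phase_ii_required(pec_soil_ug_kg=40.0, hazard_concern=True))   # True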
Conclusion
The Environmental Risk Assessment of human and veterinary medicinal products is a straightforward, tier-based process with the possibility to exit at several steps in the assessment procedure. Depending on the dose, the physico-chemical properties, and the anticipated use, this exit can come quite early in the procedure. On the other hand, for very potent substances with specific modes of action, the guidelines are flexible enough to allow specific assessments covering these modes of action.

The ERA guideline for human medicinal products entered into application in 2006, and many data gaps exist for products approved prior to 2006. Although there is a legal requirement for an ERA dossier for all marketing authorisation applications, new applications for pharmaceuticals that were on the market before 2006 are only required to submit ERA data under certain circumstances (e.g. a significant increase in usage). Even for some of the blockbusters, like ibuprofen, diclofenac, and metformin, full information on fate, behaviour and effects on non-target organisms is currently lacking. Furthermore, systematic post-authorisation monitoring and evaluation of potential unintended ecotoxicological effects does not exist. The market authorisation for pharmaceuticals does not expire, in contrast to, e.g., the authorisation of pesticides, which needs to be renewed every 10 years.

For Veterinary Medicinal Products, an in-depth ERA is necessary for food-producing animals only. An ERA for non-food animals can stop with question 3 in Phase I (Figure 3), as it is considered that the use of products for companion animals leads to negligible environmental concentrations, which might not necessarily be the case. Here, the guideline does not reflect the state of the art of scientific and regulatory knowledge. For example, the market authorisation as a pesticide or biocide has been withdrawn or strongly restricted for some potent insecticides like imidacloprid and fipronil, which are both authorised for use in companion animals.

Further Reading
Pharmaceuticals in the Environment: https://www.umweltbundesamt.de/en/publikationen/pharmaceuticals-in-the-environment-the-global
Recommendations for reducing micro-pollutants in waters: https://www.umweltbundesamt.de/publikationen/recommendations-for-reducing-micropollutants-in

6.5.4. Question 1
Why are pharmaceuticals a problem for non-target organisms and for the environment?

6.5.4. Question 2
How do human pharmaceuticals enter the environment?

6.5.4. Question 3
How do veterinary pharmaceuticals enter the environment?

6.5.4. Question 4
What is the general scheme of the environmental risk assessment of human and veterinary pharmaceuticals?

6.5.4. Question 5
Why are long-term tests needed for the assessment of human pharmaceuticals, in contrast to the assessment of veterinary pharmaceuticals?

6.5.6. Policy on soil and groundwater regulation
Author: Frank Swartjes
Reviewers: Kees van Gestel, Ad Ragas, Dietmar Müller-Grabherr
Learning objectives:
You should be able to
• explain how different countries regulate soil contamination issues
• list some differences between different policy systems on soil and groundwater regulations
• describe how risk assessment procedures are implemented in policy
Keywords: Policy on soil contamination, Water Framework Directive, screening values comparison, Thematic Soil Strategy, Common Forum

History
Soil contamination hit the political agenda in the United States and in Europe like a bomb, through a number of disasters in the late 1970s and early 1980s. The starting point was the 1978 Love Canal disaster in upper New York State, USA, which became a national media event: a school and a number of residences had been built on a former landfill for chemical waste disposal, containing thousands of tonnes of dangerous chemical waste. In Europe in 1979, the residential site of Lekkerkerk in the Netherlands became an infamous national event. Again, a residential area had been built on a former waste dump, which included chemical waste from the painting industry, and channels and ditches had been filled in with chemical waste-containing materials. Since these events, soil contamination-related policies emerged one after the other in different countries around the world. Crucial elements of these policies were a benchmark date for a ban on bringing pollutants in or on the soil ('prevention'), including a strict policy (e.g. a duty of care) for contaminations caused after the benchmark date; financial liability for polluting activities; tools for assessing the quality of soil and groundwater; and management solutions (remediation technologies and facilities for disposal).
Evolution in soil policies
Objectives in soil policies often evolve over time, and changes go along with the development of new concepts and approaches for implementing policies. In general, soil policies develop from maximum risk control towards a functional approach. The corresponding tools for implementation usually develop from a set of screening values towards a systemic use of frameworks, enabling sound environmental protection while improving the cost-benefit balance. Consequently, soil policy implementation usually goes through different stages. In general terms, four different stages can be distinguished: maximum risk control, the use of screening values, the use of frameworks, and a functional approach. Maximum risk control follows the precautionary principle and is a stringent way of assessing and managing contamination by trying to avoid any risk. Procedures based on screening values allow for a distinction between polluted and non-polluted sites, of which the former, the polluted sites, require some kind of intervention. The scientific underpinning of the earliest generations of screening values was limited, and expert judgement played an important role. Later, more sophisticated screening values emerged, based on risk assessment. This resulted in screening values for individual contaminants within the contaminant groups of metals and metalloids, other inorganic contaminants (e.g., cyanides), polycyclic aromatic hydrocarbons (PAHs), monocyclic aromatic hydrocarbons (including BTEX: benzene, toluene, ethylbenzene and xylene), persistent organic pollutants (including PCBs and dioxins), volatile organic contaminants (including trichloroethylene, tetrachloroethylene, 1,1,1-trichloroethane, and vinyl chloride), petroleum hydrocarbons and, in a few countries only, asbestos. For some contaminants such as PAHs, sum-screening values for groups were derived in several countries, based on toxicity equivalents. In a procedure based on frameworks, the same screening values generally act as a trigger for further, more detailed site-specific investigations in one or two additional assessment steps. In the functional approach, soil and groundwater must be suited to the land use they relate to (e.g., agricultural or residential land) and the functions they perform (e.g., drinking water abstraction, irrigation). Some countries skip the maximum risk control stage, and sometimes also the screening values stage, and adopt a framework and/or functional approach directly.

European collaboration and legislation
In Europe, collaboration was strengthened by concerted actions such as CARACAS (concerted action on risk assessment for contaminated sites in the European Union; 1996-1998) and CLARINET (Contaminated Land Rehabilitation Network for Environmental Technologies; 1998-2001). These concerted actions were followed up by fruitful international networks that are still active today.
These are the Common Forum, a network of contaminated land policy makers, regulators and technical advisors from environment authorities in European Union member states and European Free Trade Association countries, and NICOLE (Network for Industrially Co-ordinated Sustainable Land Management in Europe), a leading forum on industrially co-ordinated sustainable land management in Europe. NICOLE promotes co-operation between industry, academia and service providers on the development and application of sustainable technologies. In 2000, the EU Water Framework Directive (WFD; Directive 2000/60/EC) was adopted, followed by the Groundwater Directive (Directive 2006/118/EC) in 2006 (European Parliament and the Council of the European Union, 2019b). The WFD defines the environmental objectives; in addition, 'good chemical status' and the 'no deterioration' clause apply to groundwater bodies. The 'prevent and limit' objective aims to control direct or indirect contaminant inputs to groundwater, and distinguishes between 'preventing' hazardous substances from entering groundwater and 'limiting' inputs of other, non-hazardous substances. Moreover, the European Commission adopted a Soil Thematic Strategy, with soil contamination being one of the seven identified threats. A proposal for a Soil Framework Directive, launched in 2006 with the objective to protect soils across the EU, was formally withdrawn in 2014 because of a lack of support from some countries.
Policies in the world
Today, most countries in Europe and North America, Australia and New Zealand, and several countries in Asia and Middle and South America have regulations on soil and groundwater contamination. The policies, however, differ substantially in stage, extent and format. Some policies only cover prevention, e.g. blocking or controlling the inputs of chemicals onto the soil surface and into groundwater bodies. Other policies cover prevention, risk-based quality assessment and risk management procedures, and include elaborate technical tools that enable a sound and uniform approach. In particular in larger countries such as the USA, Germany and Spain, policies differ between states or provinces within the country. And even in countries with a policy at the federal level, the responsibilities for the different steps in the soil contamination chain are distributed very differently over the different layers of authorities (at the national, regional and municipal level). Figure 1 shows the European countries that have a procedure based on frameworks (as described above), including risk-based screening values. It is difficult, if not impossible, to summarise all policies on soil and groundwater protection worldwide. Instead, some general aspects of these policies are given here. A first basic element in nearly all soil and groundwater policies, relating to prevention of contamination, is the declaration of a formal point in time after which polluting soil and groundwater is considered an illegal act. For soil and groundwater quality assessment and management, most policies follow the risk-based land management approach as the ultimate form of the functional approach described above. Central in this approach are the risks for specific targets that need to be protected up to a specified level. Different protection targets are considered. Not surprisingly, 'human health' is the primary protection target adopted in nearly all countries with soil and groundwater regulations.
Moreover, the ecosystem is an important protection target for soil, while for groundwater the ecosystem as a protection target is still under discussion. Another interesting general characteristic of mature soil and groundwater policies is the function-specific approach. The basic principle of this approach is that land must be suited for its purpose. As a consequence, the appraisal of a contaminated site in a residential area, for instance, follows a much more stringent concept than that of an industrial site.
Risk assessment tools
Risk assessment tools often form the technical backbone of policies. Since the late 1980s, risk assessment procedures for soil and groundwater quality appraisal have been developed. In the late 1980s the exposure model CalTOX was developed by the Californian Department of Toxic Substances Control in the USA, followed a few years later by the CSOIL model in the Netherlands (Van den Berg, 1991/1994/1995). In Figure 2, the flow chart of the Dutch CSOIL exposure model is given as an example. Three elements are recognized in CSOIL, as in most exposure models: (1) contaminant distribution over the soil compartments; (2) contaminant transfer from (the different compartments of) the soil into contact media; and (3) direct and indirect exposure of humans. The major exposure pathways are soil ingestion, crop consumption and inhalation of indoor vapours (Elert et al., 2011). Today, several exposure models exist (see Figure 3 for some 'national' European exposure models). However, these exposure models may give quite different exposure estimates for the same exposure scenario (Swartjes, 2007). Moreover, procedures were developed for ecological risk assessment, including Species Sensitivity Distributions (see section on SSDs), based on empirical relations between the concentration in soil or groundwater and the percentage of species or ecological processes that experience adverse effects (PAF: Potentially Affected Fraction). For site-specific risk assessment, the TRIAD approach was developed, based on three lines of evidence, i.e. chemically-based, toxicity-based and using data from ecological field surveys (see section on the TRIAD approach).
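To make the PAF concept concrete, the short sketch below evaluates a log-normal SSD: the PAF at a given concentration is simply the cumulative fraction of species whose sensitivity threshold lies below that concentration. The distribution parameters, the example concentration and the helper function itself are invented for illustration; real SSDs are fitted to measured toxicity data.

```python
import math

def paf(conc, mean_log10, sd_log10):
    """Potentially Affected Fraction for a log-normal SSD: the fraction of
    species whose toxicity threshold (e.g. a NOEC) lies below `conc`.
    This is the standard-normal CDF evaluated on log10-transformed data."""
    z = (math.log10(conc) - mean_log10) / sd_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical SSD for a metal in soil: median species NOEC = 100 mg/kg
# (mean of the log10 values = 2.0), with a spread of 0.6 log units.
mean_log10, sd_log10 = 2.0, 0.6

print(f"PAF at 50 mg/kg: {paf(50, mean_log10, sd_log10):.1%}")
# The HC5 (concentration protecting 95% of species) is the 5th percentile:
hc5 = 10 ** (mean_log10 - 1.645 * sd_log10)
print(f"HC5: {hc5:.1f} mg/kg")
```

Percentiles of this kind (such as the HC5) are what typically end up as risk-based screening values in the frameworks described above.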
In the framework of the HERACLES network, another attempt was made to summarise the different EU policies on polluted soil and groundwater. A strong plea was made for harmonisation of risk assessment tools (Swartjes et al., 2009). The authors also described a procedure for harmonisation based on the development of a toolbox with standardised and flexible risk assessment tools. Flexible tools are meant to cover national or regional differences in cultural, climatic and geological (e.g. soil type, depth of the groundwater table) conditions. It is generally acknowledged, however, that policy decisions should be taken at the national level. In 2007, an analysis of the differences between soil and groundwater screening values, and between the underlying regulatory frameworks and human health and ecological risk assessment procedures, was published (Carlon, 2007). Although screening values are difficult to compare, since the frameworks and objectives of screening values differ significantly, a general conclusion can be drawn for, e.g., the screening values at the potentially unacceptable risk level (often used as 'action' values, i.e. values that trigger further research or intervention when exceeded). For the 20 metals considered, most soil screening values (from 13 countries or regions) differ by a factor of between 10 and 100 between the lowest and highest values. For the 23 organic pollutants considered, most soil screening values (from 15 countries or regions) differ by a factor of between 100 and 1000, and for some organic pollutants these screening values differ by more than four orders of magnitude. These conclusions are mainly relevant from a policy viewpoint. Technically, they are less relevant, since the screening values are derived for a combination of different protection targets, with different tools and based on different policy decisions. Differences in screening values are explained by differences in geographical, biological and socio-cultural factors between countries and regions, different national regulatory and policy decisions, and variability in scientific/technical tools.
Further reading
Swartjes, F.A. (Ed.) (2011). Dealing with Contaminated Sites. From Theory towards Practical Application. Springer Publishers, Dordrecht.
Rodríguez-Eugenio, N., McLaughlin, M., Pennock, D. (2018). Soil Pollution: A Hidden Reality. Rome, FAO.
6.5.6. Question 1
What is the logical first step in policy on soil and groundwater protection, related to 'prevention'?
6.5.6. Question 2
What are the most frequently used protection targets in policies on soil and groundwater in the world?
6.5.6. Question 3
What role should screening values play in sophisticated risk assessment procedures?
6.5.6. Question 4
What is the ideal approach when developing a new soil and groundwater policy?
6.5.6. Question 5
Regarding the harmonisation of risk assessment tools: why are flexible risk assessment tools necessary?
6.06: Risk management and risk communication (in preparation)
6.07: Risk perception
Author: Fred Woudenberg
Reviewers: Ortwin Renn
Learning objectives:
• To list and memorize the most important determinants of risk perception
• To look at and try to understand the influence of risks in situations or activities which you or others encounter or undertake in daily life
• To actively look for as many situations and activities as possible in which the annual risk of getting sick, being injured or dying has little influence on risk perception
• To look for examples in which experts (if possible, ask them) react like lay people in their own daily lives
Key words: Risk perception, fear, worry, risk, context
Introduction
If risk perception had a first law, like toxicology has with Paracelsus' "Sola dosis facit venenum" (see section on History of Environmental Toxicology), it would be: "People fear things that do not make them sick and get sick from things they do not fear." People can, for instance, worry extremely over a newly discovered soil pollution site in their neighbourhood, which they hear about at a public meeting they have come to with their diesel car, and, when returning home, light an extra cigarette without thinking, to relieve the stress. The explanation for this first law is quite easy. The annual risk of getting sick, being injured or dying has only limited influence on the perception of a risk. Other factors are more important. Figure 1 is a model of risk perception in its most basic form. In the middle of this figure there is a list of factors which determine risk perception to a large extent. In any given situation, each of them can end up at the left, safe side or on the right, dangerous side. The model is a simplification. Research since the late sixties of the previous century has yielded a collection of many more factors that are often connected to each other.
Why do people fear soil pollution?
An example can show this interconnection and the discrepancy between the annual health risks (at the top of Figure 1) and other factors. The risk of harmful health effects for people living on polluted soil is often very small. The factor 'risk' thus ends up at the left, safe side. Most of the other factors end up at the right. People do not voluntarily choose to have soil pollution in their garden. They have absolutely no control over the situation and an eventual remediation. For this, they depend on authorities and companies. Nowadays, trust in authorities and companies is very low. Many people will suspect that these authorities care more about their money than about the health and well-being of their citizens and neighbours. A newly discovered soil pollution will get local media attention, and this will certainly be the case if there is controversy. If the untrusted authorities share their conclusion that the risks are low, many people will suspect that they withhold information and are not completely open. Especially saying that there is 'no cause for alarm' will only make people worry more. People will not believe the conclusion of authorities that the risk is low, so effectively all factors end up at the dangerous side.
Why smokers are not afraid
For smoking a cigarette, the evaluation is the other way around. Almost everybody knows that smoking is dangerous, but people light their cigarettes themselves. Most people at least think they have control over their smoking habit, as they can decide to stop at any moment (but, being addicted, they probably highly overestimate their level of control). For their information or for taking measures, people do not depend on others, and no information is withheld about smoking. Some smokers suffer from what is called optimistic bias, the idea that misery only happens to others. They always have the example of their grandfather who started smoking at 12 and still ran the marathon at 85. People can be upset if they learn that cigarette companies purposely make cigarettes more addictive. It makes them feel the company takes over control, which people greatly resent. This, and not the health effects, can make people decide to quit smoking. This also explains why passive smoking is more effective than active smoking in influencing people's perceptions. Although the risk of passive smoking is 100 times smaller than the risk of active smoking, most factors end up at the right, dangerous side, making passive smoking maybe 100 times more objectionable and worrisome than active smoking.
Experts react like lay people at home
Many people are surprised to find out that calculated or estimated health risks influence risk perception so little. But we experience it in our own daily lives, especially when we add another factor to the model: advantages. All of us perform risky activities because they are necessary, come with advantages or sometimes out of sheer fun. Most of us take part in daily traffic, with an annual risk of dying far higher than 1 in a million. Once, twice or even more times a year we go on a holiday with a multitude of risks: transport, microbes, robbery, divorce. The thrill seekers among us go diving, mountain climbing or parachute jumping without even knowing the annual fatality rates. If the stakes are high, people can knowingly risk their lives in order to improve them, as the thousands of immigrants trying to cross the Mediterranean illustrate, or even give their lives for a higher cause, like soldiers at war (Winston Churchill in 1940: "I have nothing to offer but blood, toil, tears and sweat"). An example from the other side can show this maybe even more clearly. No matter how small a risk is, it can be totally unacceptable and nonsensical. Suppose the government starts a new lottery with an extremely small chance of winning, say one in a billion. Every citizen must play and tickets are free. So far nothing strange, but there is a twist. The main and only prize of the lottery is a public execution, broadcast live on national TV. The government will probably not make itself very popular with this absurd lottery. When the government, as still happens, tells people they have to accept a small risk because they accept larger risks from activities they choose themselves, it makes people feel they have been given a ticket in the above-mentioned lottery. This is how people can feel if the government tells them the risk of the polluted soil they live on is extremely small and that it would be wiser for them to quit smoking.
All risks have a context
A main lesson which can be learned from the study of risk perception is that risks always occur in a context.
A risk is always part of a situation or activity which has many more characteristics than only the chance of getting sick, being injured or dying. We do not judge risks; we judge situations and activities of which the risk is often only a small part. Risk perception occurs in a rich environment. After 50 years of research a lot has been discovered, but predicting how angry or afraid people will be in a new, unknown situation is still a daunting task.
6.7. Question 1
Name at least 5 important determinants of risk perception.
6.7. Question 2
Do people judge risks in isolation, or as part of the situations and activities in which they occur?
6.7. Question 3
How large, in general, is the influence of the risk on risk perception?
6.7. Question 4
Do experts react differently from lay people if they encounter risks in their own daily lives?
1.01: Composition and Structure of the Earth
The earth has been in a state of continual change since its formation. The major part of this change, involving volcanism and tectonics, has been driven by heat produced from the decay of radioactive elements within the earth. The other source of change has been solar energy, which acts as the driving force of weathering and is the ultimate source of energy for living organisms. The solar system was probably formed about 4.6 billion years ago, and the oldest known rocks have an age of 3.8 billion years. There is thus a gap of 0.8 billion years for which there is no direct evidence. It is known that the earth was subjected to extensive bombardment earlier in its history; computer simulations suggest that the moon could have resulted from an especially massive collision with another body. Although these major collisions have diminished in magnitude as the matter in the solar system has become more consolidated, they continue to occur, with the most recent major one being responsible for the annihilation of the dinosaurs and much of the other life on Earth. The lack of many overt signs of these collisions (such as craters, for example) testifies to the dynamic processes at work on the Earth's surface and beneath it.
Chemical Composition of the Earth
The earth is composed of 90 chemical elements, of which 81 have at least one stable isotope. The unstable elements are technetium (Z = 43) and promethium (Z = 61), together with all elements heavier than bismuth (Z = 83). [A chart, not reproduced here, gives the abundances of the elements in the solar system, in the earth as a whole, and in the various geospheres; its logarithmic vertical axis greatly reduces the visual impression of the differences between the elements.] Of particular interest are the differences between the terrestrial and cosmic abundances, which are especially notable in the cases of the lighter elements (H, C, N) and the noble gas elements (He, Ne, Ar, Kr, Xe).
Example $1$
Given the mix of elements that are present in the earth, how might they combine so as to produce the chemical composition we now observe?
Solution
Thermodynamics allows us to predict the composition that any isolated system will eventually reach at a given temperature and pressure. Of course the earth is not an isolated system, although most parts of it can be considered approximately so in many respects, on time scales sufficient to make thermodynamic predictions reasonably meaningful. The equilibrium states predicted by thermodynamics differ markedly from the observed compositions. The atmosphere, for example, contains 0.03% $\ce{CO2}$, 78% $\ce{N2}$ and 21% $\ce{O2}$; in a world at equilibrium the air would be 99% $\ce{CO2}$. Similarly, the oceans, containing about 3.5% NaCl, would have a salt content of 35% if they were in equilibrium with the atmosphere and the lithosphere. Trying to understand the mechanisms that maintain these non-equilibrium states is an important part of contemporary environmental geochemistry.
Structure of the Earth
Studies based on the reflection and refraction of the acoustic waves resulting from earthquakes show that the interior of the earth consists of four distinct regions. A combination of physical and chemical processes led to the differentiation of the earth into these major parts. This is believed to have occurred approximately 4 billion years ago.
The Earth's Core
The Earth's core is believed to consist of two regions. The inner core is solid, while the outer core is liquid. This phase difference probably reflects a difference in pressure and composition, rather than one of temperature. Density estimates obtained from seismological studies indicate that the core is metallic, and mainly iron, with 8-10 percent of lighter elements. Hypotheses about the nature of the core must be consistent with the core's role as the source of the earth's magnetic field. This field arises from convective motion of the electrically conductive liquid comprising the outer core. Whether this convection is driven by differences in temperature or composition is not certain. The estimated abundance of radioactive isotopes (mainly U-238 and K-40) in the core is sufficient to provide the thermal energy required to drive the convective dynamo. Laboratory experiments on the high-pressure behavior of iron oxides and sulfides indicate that these substances are probably metallic in nature, and hence conductive, at the temperatures (4000-5000 K) and pressures (1.3-3.5 million atm) that are estimated for the core. Their presence in the core, alloyed with the iron, would be consistent with the observed density, and would also resolve the apparent lack of sulfur in the earth, compared to its primordial abundance.
The mantle
The region extending from the outer part of the core to the crust of the earth is known as the mantle. The mantle is composed of oxides and silicates, i.e., of rock. It was once believed that this rock was molten, and served as a source of volcanic magma. It is now known, on the basis of seismological evidence, that the mantle is not in the liquid state. Laboratory experiments have shown, however, that when rock is subjected to the high temperatures and pressures believed to exist in the mantle, it can be deformed and flows very much like a liquid. The upper part of the mantle consists of a region of convective cells whose motion is driven by the heat from the decay of radioactive potassium, thorium, and uranium, which were selectively incorporated in the crystal lattices of the lower-density minerals that form the mantle. There are several independent sources of evidence of this motion. First, there are gravitational anomalies: the force of gravity, measured by changes in elevation of the sea surface, is different over upward- and downward-moving regions, and this has permitted the mapping of some of the convective cells. Secondly, numerous isotopic-ratio studies have traced the exchange of material from oceanic sediments into upper mantle rock and back into the continental crust, which forms from melting of the upper mantle. Thirdly, the composition of the basalt formed by upper-mantle melting is quite uniform everywhere, suggesting complete mixing of diverse materials incorporated into the mantle over periods of 100 million years. High-pressure studies in the laboratory have revealed that olivine, a highly abundant substance in the mantle composed of Fe, Mg, Si, and O (and also the principal constituent of meteorites), can undergo a reversible phase change between two forms differing in density. Estimates of conditions within the upper mantle suggest that this phase change could occur within this region in such a way as to contribute to convection. The most apparent effect of mantle convection is the motion it imparts to the earth's crust, as evidenced by the external topography of the earth.
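As a rough illustration of why this radiogenic heat engine slows down over geological time, the short sketch below computes the surviving fraction of the main heat-producing isotopes over the age of the Earth, using the standard decay law $N/N_0 = 2^{-t/t_{1/2}}$. The half-lives are standard reference values, not figures from this text; the 4.6-billion-year age follows the figure used earlier in this chapter.

```python
# Surviving fraction of the principal heat-producing isotopes since the
# Earth formed, from the radioactive decay law N/N0 = 2**(-t / t_half).
HALF_LIVES_GYR = {"K-40": 1.25, "U-235": 0.70, "U-238": 4.47, "Th-232": 14.0}
AGE_GYR = 4.6  # age of the Earth used in this chapter

for isotope, t_half in HALF_LIVES_GYR.items():
    surviving = 2 ** (-AGE_GYR / t_half)
    print(f"{isotope:>6}: {surviving:6.1%} of the primordial amount remains")
```

Most of the primordial K-40 and nearly all of the U-235 have already decayed, so the radiogenic heat supply driving mantle convection has declined substantially since the differentiation of the Earth.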
The crust
The outermost part of the earth, known also as the lithosphere, is broken up into plates that are supported by the underlying mantle, and are moved by the convective cells within the mantle at a rate of a few centimetres per year. New crust is formed where plates move away from each other under the oceans, and old crust is recycled back into the mantle where plates collide.
The oceanic crust
The parts of the crust that contain the world's oceans are very different from the parts that form the continents. The continental crust is 10-70 km thick, while oceanic crust averages only 5-7 km in thickness. Oceanic crust is more dense (3.0-3.1 g cm$^{-3}$) and therefore "floats" on the mantle at a greater depth than does continental crust (density 2.7-2.8 g cm$^{-3}$). Finally, oceanic crust is much younger; the oldest oceanic crust is about 200 million years old, while the most ancient continental rocks were formed 3.8 billion years ago. New crust is formed from molten material in the upper mantle at the divergent boundaries that exist at undersea ridges. The melting is due to the rise in temperature associated with the nearly adiabatic decompression of the upper 50-70 km of mantle material as separation of the plates reduces the pressure below. The molten material collects in a magma pocket which is gradually exuded in undersea lava flows. The solidified lava is transformed into crust by the effects of heat and the action of seawater, which selectively dissolves the more soluble components.
Plate collisions
Where two plates collide, one generally plunges under the other and returns to the mantle in a process known as subduction. Since the continental plates have a lower density, they tend to float above the oceanic plates and resist subduction. At continental boundaries such as that of the North American west coast, where an oceanic plate pushes under the continental crust, oceanic sediments may be sheared off, resulting in a low coastal mountain range. Also, the injection of water into the subducting material lowers its melting point, resulting in the formation of shallow magma pockets and volcanic activity. Divergent plate boundaries can cross continents, however; temporary divergences create rift valleys such as the Rhine and Rio Grande, while permanent ones eventually lead to new oceanic basins. Collision of two continental plates can also occur; the most notable example is the one resulting in the formation of the Himalayan mountain chain.
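The flotation argument above can be made quantitative with a simple Archimedean estimate: for a crustal block floating freely on mantle rock, the submerged fraction equals the ratio of the densities. The upper-mantle density of about 3.3 g cm$^{-3}$ used below is a typical textbook value, not given in the text.

$$\frac{h_{\text{submerged}}}{h_{\text{total}}} = \frac{\rho_{\text{crust}}}{\rho_{\text{mantle}}}, \qquad \frac{3.05}{3.3} \approx 0.92\ \text{(oceanic)}, \qquad \frac{2.75}{3.3} \approx 0.83\ \text{(continental)}$$

A continental block thus rides with a larger fraction of its (much greater) thickness above the level of compensation, which is why the continents stand high while the dense, thin oceanic crust forms basins.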
1.02: Origin of the Elements
The Earth is composed of 90 chemical elements, of which 81 have at least one stable isotope. Most of these elements have also been detected in stars. Where did these elements come from? The accepted scenario is that the first major element to condense out of the primordial soup was helium, which still comprises about one-quarter of the mass of the known universe. Hydrogen is the least thermodynamically stable of the elements, and at very high temperatures will combine with itself in a reaction known as nuclear fusion to form the next element, $\ce{^{4}_2He}$. "Heavier" nuclei (that is, those having higher atomic numbers) are more stable than "lighter" ones, so this fusion process can continue up to $\ce{^{56}_{26}Fe}$, which is the most energetically stable of all the nuclides. Beyond this point, heavier nuclei slowly become less stable, so fission becomes more likely. Fission, however, is not considered an important mechanism of primordial nucleosynthesis, so other processes are invoked, as discussed further below.
Primordial Chemistry
According to the "big bang" theory, for which there is now overwhelming evidence, the universe as we know it (that is, all space, time, and matter) had its origin in a point source or singularity that began an explosive expansion about 12-15 billion years ago, and which is still continuing. Following a brief period of extremely rapid expansion called inflation, protons and neutrons condensed out of the initial quantum soup after about $10^{-32}$ s. Helium and hydrogen became stable during the first few minutes, along with some of the very lightest nuclides up to $\ce{^7Li}$, which were formed through various fusion and neutron-absorption processes. Formation of most heavier elements was delayed for about $10^6$ years, until nucleosynthesis commenced in the first stars. Hydrogen still accounts for about 93% of the atoms in the universe. The main lines of observational evidence that support this theory are the 2.7 K background radiation that permeates the cosmos (the cooled-down remnant of the initial explosion), and the abundances of the lightest elements. Conventional physics is able to extrapolate back to about the first $10^{-33}$ second; what happened before then remains speculative.
Stellar Nucleosynthesis
All elements beyond hydrogen were formed in regions where the concentration of matter was large and the temperature was high; in other words, in stars. The formation of a star begins when the gravitational forces due to a large local concentration of hydrogen bring about a contraction and compression to densities of around $10^5$ g cm$^{-3}$. This is a highly exothermic process in which the gravitational potential energy is released as heat, about 1200 kJ per gram, raising the temperature to about $10^7$ K. Under these conditions, hydrogen nuclei possess sufficient kinetic energy to overcome their electrostatic repulsion and undergo nuclear fusion:
$\ce{4 ^{1}_1H -> ^{4}_2He + 2 \beta^{+} + 2\gamma + 2\nu}$
Hydrogen burning
There is a net mass loss in the above process, which is therefore highly exothermic; it is known as "hydrogen burning". As hydrogen burning proceeds, the helium collects in the core of the star, raising the density to $10^8$ g cm$^{-3}$ and the temperature to $10^8$ K. This temperature is high enough to initiate helium burning, which proceeds in several steps:
$\ce{2 ^{4}_2He -> ^{8}_4Be + \gamma}$
The first product, $\ce{^{8}_4Be}$, has a half-life of only $10^{-16}$ s, but a sufficient amount accumulates to drive the following two reactions:
$\ce{^{8}_4Be + ^{4}_2He -> ^{12}_6C + \gamma}$
$\ce{^{12}_6C + ^{1}_1H -> ^{13}_7N -> ^{13}_6C + \beta^{+} + \gamma}$
The size of a star depends on the balance between the kinetic energy of its matter and the gravitational attraction of its mass. As the helium burning runs its course, the temperature drops and the star begins to contract. The course of further nucleosynthesis, and the subsequent fate of the star itself, depends on the star's mass.
Small stars
If the mass of the star is no greater than 1.4 times the mass of our sun, the star collapses to a white dwarf, and eventually cools to a dark, dense dead star.
Big stars
In larger stars, the gravitational forces are sufficiently strong to overcome the normal repulsion between atoms, and so gravitational collapse continues. The gravitational energy released in this process produces temperatures of $6 \times 10^8$ K, which are sufficient to initiate a complex series of nuclear reactions known as the carbon-nitrogen cycle.
The net reaction of this cycle is the further fusion of hydrogen to helium, in which $\ce{^{12}C}$ acts as a catalyst, and various nuclides of nitrogen and oxygen are intermediates. The temperature is sufficiently high, however, to initiate fusion reactions of some of these intermediates:
$\ce{^{12}_6C + ^{12}_6C -> ^{20}_{10}Ne + ^{4}_2He}$
$\ce{2 ^{16}_8O -> ^{28}_{14}Si + ^{4}_2He}$
$\ce{2 ^{16}_8O -> ^{31}_{16}S + ^{1}_0n}$
Supernovas
The intense gamma radiation that is produced in some of these reactions breaks some of the product nuclei into smaller fragments, which can then fuse into a variety of heavier species, up to the limit of $\ce{^{56}_{26}Fe}$, beyond which fusion is no longer exothermic. The greater relative abundance of elements such as $\ce{^{12}_6C}$, $\ce{^{16}_8O}$, and $\ce{^{20}_{10}Ne}$, which differ by a $\ce{^{4}_2He}$ nucleus, reflects the participation of the latter species in these processes. These exothermic reactions eventually produce temperatures of $8 \times 10^9$ K, while contraction continues until the central core is essentially a ball of neutrons having a radius of about 10 km and a density of $10^{14}$ g cm$^{-3}$. At the same time the outer shell of the star is blasted away in an explosion known as a supernova.
Note
Only six supernovas have been observed in our galaxy. The supernova of 1987 was the most recent; the one before this occurred in 1604, prior to the invention of the telescope. Tycho Brahe's observation of a supernova in 1572 was crucial in overturning the Aristotelian tradition of the immutability of the "fixed stars", or "firmament". The remains of these supernovas have been detected and studied by X-ray observations. Thus all of the elements in our solar system that are heavier than iron are the recycled remnants of former stars.
Elements heavier than iron
Since $\ce{^{56}_{26}Fe}$ has the highest binding energy per nucleon of any nuclide, there are no exothermic processes which can lead to the formation of heavier elements. Fusion into heavier species is also precluded by the electrostatic repulsion of the highly charged nuclei. However, the process of neutron capture can still take place (this is the same process that is used to make synthetic elements). The neutrons are by-products of a large variety of stellar processes, and are present in a wide range of energies. Two general types of neutron capture processes are recognized. In an "s" (slow) process, only a single neutron is absorbed at a time, and the product usually decomposes by $\beta$-decay into a more proton-rich species:
$\ce{^{56}_{26}Fe + ^{1}_0n -> ^{57}_{26}Fe}$; after two further captures, $\ce{^{59}_{26}Fe -> ^{59}_{27}Co + \beta^-}$
This process occurs at rates of about $10^{5}$ yr$^{-1}$, and accounts for the lighter isotopes of many elements. The other process (the "r", or rapid process) occurs in regions of high neutron density and involves multiple captures at rates of 0.1-10 s$^{-1}$:
$\ce{^{56}_{26}Fe + 13 ^{1}_0n -> ^{69}_{26}Fe -> ^{69}_{27}Co + \beta^-}$
This mechanism favors the heavier, neutron-rich isotopes and the heaviest elements.
Other elements
A few nuclei are not accounted for by any of the processes mentioned. These are all low-abundance species, and they probably result from processes having low rates. Examples are $\ce{^{112}Sn}$ and $\ce{^{114}Sn}$, which may be produced through proton capture, and $\ce{^2H}$, $\ce{^6Li}$, $\ce{^7Li}$, Be, $\ce{^{10}B}$ and $\ce{^{11}B}$, which may come from spallation processes resulting from collisions of cosmic-ray particles with heavier elements.
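As a rough worked check on the energy scale of the hydrogen-burning reaction given above (a back-of-the-envelope estimate using standard atomic masses; these numbers are not from the text):

$$\Delta m = 4(1.00783\ \mathrm{u}) - 4.00260\ \mathrm{u} \approx 0.0287\ \mathrm{u}, \qquad E = \Delta m\, c^2 \approx 0.0287 \times 931.5\ \mathrm{MeV} \approx 26.7\ \mathrm{MeV}$$

per helium nucleus formed. Spread over the four grams of hydrogen consumed per mole of helium produced, this amounts to roughly $6 \times 10^{11}$ J per gram of hydrogen, about five orders of magnitude more than the ~1200 kJ per gram released by the gravitational contraction that ignites the star.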
1.03: Formation and evolution of the Earth
The solar system is believed to have formed about 5 billion years ago as a result of aggregation of cosmic dust and interstellar atoms in a region of space in which the density of such material happened to be greater than average. Over 99.8% of this mass, which consisted mostly of hydrogen, collapsed into a proto-sun; the gravitational energy released in this process raised the temperature sufficiently to initiate the hydrogen fusion reactions discussed above.
The planets
The remaining material probably formed a disk that rotated around the sun. As the temperature dropped to around 2000 K, some of the most stable combinations of the elements began to condense out. These substances might have been calcium aluminum silicates, followed by the more volatile iron-nickel system, and then magnesium silicates. The further aggregation of these materials, together with the other constituents of the cooling disk, is now believed to be the origin of the planets. Density estimates indicate that the planets closest to the sun are predominantly rocky in nature, and probably condensed first. The outer planets (Uranus, Neptune and Pluto) appear to consist largely of water ice, methane, and ammonia, with a smaller rocky core.
Formation of the Earth
The Earth formed by accretion of solid and particulate material that remained after the much more massive amounts of hydrogen and helium present in the original protoplanets had been dispersed out of the solar system. Gradually, the heat produced by decay of radioactive elements brought about partial melting of the silicate rocks; these lower-density molten materials migrated upward, leaving the more dense, iron-containing minerals below. This process, which took about 2 million years, was the first of the three stages into which the chemical evolution of the earth is usually divided:
1. Primary differentiation of the elements between the core and mantle.
2. Secondary differentiation of the elements, reflecting relative ionic sizes, bonding properties, and solubilities (influencing phase behavior such as fractional crystallization, etc.).
3. Tertiary differentiation, still operative, involving the interaction of the crust with the hydrosphere and atmosphere.
The above listing should not be taken too literally; all three kinds of processes have probably proceeded simultaneously, and over a number of cycles. Since the earth is losing approximately four times as much heat as is generated by radioactive decay, the principal driving force of primary and secondary differentiation has gradually slowed down. Partial melting of the upper mantle brought about further fractionation as silicon-containing materials of low density migrated outward to form a crust. In its early stages the stronger granitic rocks had not yet appeared, and the crust was mechanically weak. Upwelling flows of lava would break the surface, and the weight of the solidified lava would cause the crust to subside. In some places, magma would solidify underground, forming low-density rock (batholiths) that would eventually rise by buoyancy and push up the overlying crust. These mountain-building periods probably occurred in 6-8 major episodes, each lasting about 800 million years.
Rain
At the same time, outgassing of solids released large amounts of HCl, CO, CO2, H2S, CH4, SO2, and SO3 into the primitive atmosphere. Large amounts of water were present in the primeval rocks in the form of hydrates, which were broken down as the result of the heating.
Eventually, when the outer crust cooled enough to permit condensation of the water vapor as rain, a new stage of chemical evolution began. The rain was initially highly acidic, equivalent to about 1 M HCl; this reacted readily with the basic rocks having high contents of K, Na, Mg, and Ca, leaching them away and forming what would eventually evolve into the oceans. The partial dissolution of the rocks also resulted in large amounts of sediments, which played their own role in the transformation of the earth's surface.
The continents
Within the crust, the lighter materials, being in isostatic equilibrium with the upper mantle, floated higher, and gradually became the nuclei of continents, which grew by accumulating similar material around their boundaries. This picture of continental development is supported by isotopic-ratio studies which indicate that the nucleus of the North American continent, the Canadian Shield, is over 2.5 billion years old, while the peripheral parts are less than 0.6 billion years of age.
Primary differentiation of the elements
The more traditional geochemical view of primary differentiation begins with the assumption that the core of the earth is in a chemically reduced state, while the metallic elements constituting the mantle are almost entirely oxidized to their lower-free-energy cationic forms. Oxygen and sulfur acted as the major electron acceptors in this process, but the abundance of these elements was insufficient to oxidize much of the nickel or iron.
Iron as a reductant
Iron itself is believed to have played a crucial role in the primary differentiation of other metals and of oxidized metallic elements that iron is able to reduce. As the dense molten iron migrated in toward the core, it dissolved (formed a liquid alloy with) any other metals with which it came in contact, and it reduced (donated electrons to) those metallic cations that are less "active" metals than iron under these conditions. The resulting metal would then mix with more of the migrating liquid iron, and be carried along with it into the core.
Redox power of the elements
Accordingly, elements whose reduction potentials are more positive than that of iron (i.e., that are lower-free-energy electron sinks) are called siderophiles; these elements have a low abundance in the crust and upper mantle. The other two important classes of solid-forming elements are the lithophiles and chalcophiles (see below). These generally have more negative reduction potentials than iron, and are distinguished mainly by their relative affinity for oxygen or sulfur. The chalcophiles, of which Cu, Cd, and Sb are examples, tend to form larger, more polarizable ions which can associate with the sulfide ion. The lithophiles comprise those elements such as K, Al, Mn, and Si, which have smaller ions and which combine preferentially with oxygen. This broad classification is reflected in the dominant forms in which many of these elements occur in nature.
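This ordering can be illustrated with ordinary standard reduction potentials. The values below are the familiar room-temperature aqueous ones, used here only as a qualitative guide; the actual potentials under core-forming conditions would differ:

$$E^\circ(\mathrm{Ni^{2+}\!/Ni}) = -0.26\ \mathrm{V} \;>\; E^\circ(\mathrm{Fe^{2+}\!/Fe}) = -0.44\ \mathrm{V} \;>\; E^\circ(\mathrm{Al^{3+}\!/Al}) = -1.66\ \mathrm{V}$$

Metallic iron can therefore reduce Ni$^{2+}$ to the metal ($\ce{Fe + Ni^2+ -> Fe^2+ + Ni}$), so nickel is carried into the core as a siderophile, while aluminum, being far harder to reduce, remains oxidized in the silicate mantle and crust as a lithophile.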
Secondary differentiation of the elements
The differential distribution of the elements within one of the main regions of the earth has been studied in detail only in that portion that is accessible, namely the upper crust. It is clear that fractional crystallization from the cooling magma has played an important role. The relative temperatures at which minerals crystallize are determined in large part by their lattice energies, which are in turn related to ionic sizes and charges. Minerals with small, highly charged ions will have higher melting points and should crystallize first. Thus the sodium-containing feldspar albite (NaAlSi3O8) is found nearer the surface than its calcium analog anorthite (CaAl2Si2O8). The less abundant elements often do not form minerals of their own, but may replace the ion of a more abundant mineral in its crystal lattice. This is known as isomorphous replacement, and it naturally depends on the relative ionic radii. Some ion pairs that undergo isomorphous replacement in minerals are K+ and Ba2+, and Si4+ and Ge4+.
Phase behavior
The phase rule can be invoked to explain, in a very rough way, the differentiation of the elements into distinct solid phases:
P = C + 2 - F
Taking the degrees of freedom F as 2 (fixed temperature and pressure), the six major elemental components (O, Si, Al, Fe, Mg and Na) can form up to six phases. Actually, more than 99% of igneous rocks comprise seven principal mineral phases: the silica minerals, feldspars, feldspathoids, olivine, pyroxenes, amphiboles and micas. The differential deposition of minerals is also influenced by temperature-composition phase relations, as exemplified by the ordinary two-component phase diagram. If the mineral that is rich in one component and which first crystallizes out is also more dense, then the richer ore will occur near the bottom of the deposit, while a more mixed ore (approaching the eutectic) will remain near the top.
Geochemical classification of the elements
Whether an element is concentrated in the crust or elsewhere depends on its chemical behavior and on the physical properties of its stable compounds. Geochemists have found it convenient to establish the following general classifications (a toy lookup based on these groupings is sketched after the list):
• lithophile ("rock-loving") elements are those such as Fe, Al, and Si which tend to occur as oxides (and to a lesser extent as chlorides and carbonates). Elements in this, the largest of all the groups, are concentrated in the crust.
• chalcophiles also occur in the crust, but mainly in combination with sulfur and the other chalcogen elements (Group 16).
• siderophiles refer to the elements such as Ni which have concentrated in the core along with Fe.
• atmophiles consist of N, H and their volatile compounds and the noble gases, which concentrate in the atmosphere.
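A minimal sketch of such a lookup, populated only with the example elements named above (the assignments are illustrative; real classifications are more nuanced, and iron in particular shows lithophile as well as siderophile behavior):

```python
# Goldschmidt-style geochemical classes, using only the example elements
# mentioned in the text above (illustrative, not a complete classification).
GEOCHEMICAL_CLASSES = {
    "lithophile":  {"Fe", "Al", "Si", "K", "Mn"},
    "chalcophile": {"Cu", "Cd", "Sb"},
    "siderophile": {"Ni", "Fe"},
    "atmophile":   {"N", "H", "He", "Ne", "Ar", "Kr", "Xe"},
}

def classify(symbol):
    """Return every class in which the element appears (possibly several)."""
    return [name for name, members in GEOCHEMICAL_CLASSES.items()
            if symbol in members]

print(classify("Fe"))  # ['lithophile', 'siderophile'] (depends on conditions)
```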
The structure and composition of the outer part of the lithosphere has been profoundly affected by interactions with the atmosphere over one-quarter of the surface area of the earth, and with the hydrosphere over the remaining area. Further modification of the outermost parts of the crust has occurred as the result of the activities of living organisms. These changes have transformed much of the outermost part of the crust into an unconsolidated surface region called the regolith. Further weathering and translocation of soluble substances often results in a sequence of horizons consisting of sediments, soils, or evaporites. Chemically, the earth's crust consists of about 80 elements distributed in approximately 2000 compounds or minerals, many of which are of variable composition. Over 99% of the mass of the crustal material is made up of only eight of these elements, however:
Table \(1\): Average amounts of elements in crustal rocks, mg/kg.
| Element | Abundance (mg/kg) |
|---|---|
| O | 466,000 |
| Si | 277,200 |
| Al | 81,300 |
| Fe | 50,000 |
| Ca | 36,300 |
| Na | 28,300 |
| K | 25,900 |
| Mg | 20,900 |
| Ti | 4,400 |
| H | 1,400 |
The crust has its origin in the upwelling convection currents that bring mantle material near to the surface at the mid-ocean ridges. The reduced pressure causes it to melt into magma. The magma may solidify before it reaches the surface, forming basalt, or it may emerge from the surface in a volcanic eruption. The oceanic crust consists mostly of the simpler silicate minerals, which are said to be basic or mafic. The more evolved, silicon-rich rocks found in the continental crust are known as acidic or sialic. Oceanic crust is continually being extruded from regions of the plastic mantle that intrude upward to just beneath the ocean's floor at the mid-ocean ridges. A corresponding amount of this crust is returned to the mantle at subduction zones off the west coasts of the Americas, in the process pushing up the mountain ranges that lie along these coasts. The subducted oceanic crust is reheated and combined with sedimentary material to undergo partial remelting and reworking; this is believed by some to be the origin of granite. Subduction proceeds at a rate of a few cm per year, and the complete cycle time is on the order of a few hundred million years. Both oceanic and continental crusts float on the more dense upper lithosphere, and gradually shift their positions as they push against each other, and in response to the slow convective motions in the medium that supports them. The continental crust is thicker than the oceanic crust, but it is also less dense, which allows it to float higher (and thus to differentiate continents from oceans). The lower density also prevents it from being subducted. Recycling can occur indirectly as continental material erodes and is deposited as sediments on the ocean floor, but this is a much slower process, one that takes billions instead of millions of years. Some of the very oldest rocks, found in Greenland and Labrador, have been dated at 3.9 billion years, and thus approach the age of the Earth itself.
Chemistry of the crust
When magma crystallizes it forms igneous rock, the major component of the Earth's crust. The crystallization is a complex process which is not entirely understood, due largely to the lack of sufficient thermodynamic data on the various components at high temperatures and pressures.
It is known that the different components of magma have differing melting points and densities, and that the phase behavior of multicomponent systems based on some of these substances is quite complex, involving binary and ternary eutectics, solid solutions, the presence of dissolved water (under pressure), and incongruent melting. One consequence of this complexity is that the composition of the magma will change as crystallization takes place; different substances will crystallize at various stages, and the resulting solids may migrate toward the top or bottom of the region if their densities differ greatly from that of the magma. It is well known that larger crystals form when a melt cools more slowly. This principle affords a simple distinction between the coarser-grained plutonic rocks, which are believed to have been formed by gradual cooling of magma pockets within the crust, and the fine-grained volcanic rocks such as basalt. Under the influence of heat and pressure, particularly at plate boundaries, solid crustal material may undergo partial or complete remelting, followed by cooling and transformation into metamorphic rocks such as gneiss, micas, quartzite, and possibly granites. Granite was once thought to be an igneous rock, originating from the crystallization of a particular kind of magma. The association of granitic rocks with mountainous regions, and the similarity of their compositions in widely scattered regions, lends credence to the more recent hypothesis that granitic rocks are of metamorphic origin. Another class of rock is sedimentary rock, formed from the consolidation of material produced by weathering and other chemical and biological processes. Sedimentary rocks cover about three-quarters of the land area of the earth; 80% are shales, 15% sandstones and 5% limestones.
Composition of rock
The chemical composition of rocks tends to be complex and variable, and can only be specified in a precise way at the structural level. The traditional way of expressing rock compositions is in terms of the mass percent of the oxides of the elements present in the rock.
Table \(2\): Chemical composition of a typical rock (quartz-feldspar-biotite gneiss), in percent by weight.
| Oxide | Common name | Fresh rock | Weathered rock |
|---|---|---|---|
| SiO2 | silica | 71.54 | 70.30 |
| Al2O3 | alumina | 14.62 | 18.34 |
| Fe2O3 | ferric oxide | 0.69 | 1.55 |
| FeO | ferrous oxide | 1.64 | 0.22 |
| MgO | magnesia | 0.77 | 0.21 |
| CaO | lime | 2.08 | 0.10 |
| Na2O | soda | 3.84 | 0.09 |
| K2O | potash | 3.92 | 2.47 |
| H2O | water | 0.32 | 5.88 |
| others | | 0.65 | 0.54 |
| total | | 100.07 | 99.70 |
This does not mean that these oxides, or the structural units they represent, are actually present as such in a rock. In the chemical analysis of rocks, oxygen is generally not determined separately. When it is, however, it is found in an amount that would be expected to combine stoichiometrically with the other elements present. Thus the composition of albite can be written as either NaAlSi3O8 or Na2O·Al2O3·6SiO2. Some rocks contain varying ratios of certain elements. For example olivine, which can be considered a solid solution of Mg2SiO4 and Fe2SiO4, can be represented by (Mg,Fe)2SiO4; this implies that the ratio of metal to silica is constant, and that magnesium is ordinarily present in greater amount than iron. The major structural elements of rock (both in the crust and in the mantle) are the silicate minerals, built from silicon atoms surrounded tetrahedrally by four oxygens.
The simplest silicate minerals consist just of isolated SiO44− tetrahedra interspersed with positive ions to achieve electroneutrality; olivine, (Mg,Fe)2SiO4, is a well known example. More commonly, the silicate groups polymerize by sharing one or more oxygen atoms at adjacent tetrahedral corners. Depending on the number of joined corners per silicate unit, this can lead to the formation of a wide variety of chains (pyroxenes, amphiboles) and sheets (micas), culminating in the complete tetrahedral polymerization that produces quartz, SiO2. Higher degrees of polymerization are associated with higher ratios of Si to O, smaller quantities of positive ions, and lower melting points. Thus when magma cools, the first silicates to crystallize are the olivines, followed by chain and sheet minerals having progressively higher degrees of polymerization and smaller fractions of cations of metals such as Fe and Mg.

Distribution of elements; ores

Although some elements are distributed fairly uniformly throughout the crust, others occur at greatly enhanced concentrations in localized areas. There are two general processes that result in these localized excesses, which are called ores when their extraction and refining is economically feasible. The first of these relates to how well a metallic ion can fit into the silicate lattice structure. Ions having the right charge and size can readily enter this structure, displacing the more common ions of Fe, Al and Mg. Such ions (of which Ga3+ is an example) are readily soluble in other minerals and thus are widely distributed and do not concentrate into ores. Other ions may be too large (Cs and La), too small (Li, Be, B) or too highly charged (Nb, Ta, W) to be accommodated in silicate mineral structures; these elements tend to remain in the magma as it solidifies, finally forming solid minerals only in the last stages of cooling. The other major source of ores is hydrothermal formation. Magma contains some water itself, and additional water from the surface is able to reach the heated rock near magma chambers. At the very high temperatures and pressures that prevail in these regions, the water can dissolve many compounds such as sulfides which are normally considered highly insoluble. When these superheated solutions rise to the surface the solids are re-deposited, often in highly concentrated form. Ores of Cu, Sn, W, and possibly some iron ores, as well as some native metals such as gold, are believed to be formed in this way. Hydrothermal vents known as "black smokers" have been observed at sites of sea-floor spreading; the "smoke" consists of metallic sulfides which precipitate in the cold seawater. The veins of pyrites (FeS2) and similar sulfide minerals that are often observed in rock formations are the result of hydrothermal solutions that once penetrated cracks and fissures in the rock.

Chemical weathering

The weathering of rocks at the earth's surface is a complex process involving both physical and chemical changes. The latter tend in principle to be rather simple kinds of reactions involving dissolution, reaction with carbon dioxide, hydrolysis, hydration, and oxidation. The difficulty in studying them and in arriving at a quantitative description is that these reactions occur very slowly and may never reach an equilibrium state.
A comparison of the two rightmost columns in Table 2 above provides some illustration of the overall effect of these changes, although it must be emphasized that these are relative composition data, and thus cannot show how much of a given component has been lost. In general, sodium, calcium and magnesium seem to be lost more rapidly than potassium and silicon, while iron and aluminum decrease very slowly. Individual rates are of course dependent on the particular structural units containing the element, and also vary somewhat with grain size and condition of the surface.

Action of water

Water is undoubtedly the most important weathering agent. Not only does it act as a solvent for ionic dissolution products, but it also brings other active agents such as carbon dioxide and oxygen into intimate contact with the rock material. As water percolates into the outermost layers of the crust, it extends the zone of weathering beneath the surface; the effects of this are quite noticeable in a number of buried sedimentary materials such as Paleozoic sandstones, which tend to be depleted of all but the most resistant minerals. Dissolution, the simplest of all the weathering processes, usually results in ionic species, some of which may react with water to yield acidic or alkaline solutions. Dissolution of silica, however, results in the neutral species H4SiO4. Reactions involving hydration and dehydration are very common, and since the free energy changes tend to be small, these reactions can usually take place in either direction under slightly different conditions. Thus gypsum and anhydrite are interconvertible at observable rates under common environmental conditions:

CaSO4·2H2O ⇌ CaSO4 + 2 H2O

In many cases, however, the reaction products are not very well characterized, thermodynamic data are lacking, and the reactions proceed so slowly that they are not entirely understood. For example, both hydrous and anhydrous iron oxides can be found in similar geologic environments, but little is known about the interconversion process, represented approximately as

Fe2O3 + H2O → 2 FeOOH

Solid carbonates tend to dissolve in acidic solutions, including those produced when atmospheric carbon dioxide dissolves in water. Thus the major surface limestone deposits (largely CaCO3, with some admixture of MgCO3) tend to be highly eroded in non-arid regions, and the local groundwater may contain Ca2+ at levels as high as 0.1-0.2 g/L (a few mmol per liter). Thermodynamics can unambiguously predict the most stable oxidation state of a metal ion under given conditions of pH and oxidant concentration. The mechanisms tend to be very uncertain, however. For one thing, both the reactant and product can often exist in various states of hydration, and the dissolved species (which probably undergo the actual oxidation) often consist of polycations and complexed species.

Oxidation of iron

Compounds of Fe(II), for example, will always tend to oxidize to Fe(III) in the presence of air; the various oxides of iron are responsible for the bright colors seen in many geological formations, and in certain soils. Some of the net reactions that probably occur are

Fe2SiO4 + 1/2 O2 + 2 H2O → Fe2O3 + H4SiO4
2 FeCO3 + 1/2 O2 + 2 H2O → Fe2O3 + 2 H2CO3

An environmental side effect of the first process is the release of hydrated silica.
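Reactions like the two above are easy to misbalance when written by hand. A minimal element-balance check in Python (atom counts entered manually for this illustration; this is not a general formula parser) confirms the ferrous-silicate oxidation as written:

```python
# Element-balance check for
#   Fe2SiO4 + 1/2 O2 + 2 H2O -> Fe2O3 + H4SiO4
# Species compositions are entered by hand as {element: atom count}.
from collections import Counter

def side_total(species):
    """Sum atoms over (coefficient, composition) pairs on one side."""
    total = Counter()
    for coeff, comp in species:
        for el, n in comp.items():
            total[el] += coeff * n
    return total

left = [
    (1.0, {"Fe": 2, "Si": 1, "O": 4}),   # Fe2SiO4 (fayalite)
    (0.5, {"O": 2}),                      # 1/2 O2
    (2.0, {"H": 2, "O": 1}),              # 2 H2O
]
right = [
    (1.0, {"Fe": 2, "O": 3}),             # Fe2O3
    (1.0, {"H": 4, "Si": 1, "O": 4}),     # H4SiO4
]

assert side_total(left) == side_total(right), "reaction is unbalanced"
print("balanced:", dict(side_total(left)))
# balanced: {'Fe': 2.0, 'Si': 1.0, 'O': 7.0, 'H': 4.0}
```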
Also, where both starting materials are present, the carbonic acid produced in the second reaction is believed to promote the dissolution of ferrous silicate, creating a source of Fe(II) ions that can be rapidly oxidized:

Fe2SiO4 + 4 H2CO3 → 2 Fe2+ + 4 HCO3− + H4SiO4
2 Fe2+ + 4 HCO3− + 1/2 O2 + 2 H2O → Fe2O3 + 4 H2CO3

The oxidation of sulfides can produce strongly acidic solutions:

2 FeS2 + 15/2 O2 + 4 H2O → Fe2O3 + 4 SO42− + 8 H+

The effects of this can be seen in formations containing outcrops of pyrite veins, where the surrounding rocks are heavily stained with yellow and brown Fe(III) oxides, and the groundwater tends to be highly acidic. This process is mediated by microorganisms, and is an important source of acid pollution associated with mines and mine tailings.

Sequence of weathering

The various components of rocks weather at different rates. The more basic components such as CaO and MgO tend to disappear first, especially if in contact with groundwaters containing high CO2 concentrations. For rocks in general, the first reaction is usually hydration, followed by hydrolysis, which can be summarized by

4 KAlSi3O8(s) + 22 H2O → Al4Si4O10(OH)8(s) + 8 H4SiO4(aq) + 4 K+ + 4 OH−

in which other Group 1 or 2 cations might replace potassium. The product Al4Si4O10(OH)8 is kaolinite, a form of clay (see below). In general, the rocks which crystallized first from the magma (the Ca-feldspars and olivines) weather more rapidly than do the lower-melting rocks.

Clays

Clays are the solid end products of the weathering of rocks. They are basically composed of alternating sheets of tetrahedral "SiO4" and octahedral "AlO6" units in ratios of 1:1 (kaolinite), 2:1 (montmorillonite and vermiculite) and 2:2 (chlorite). In between the sheets, holding them together by hydrogen bonding, are water molecules. Also present are cations such as K+, Ca2+ and Mg2+ which act to neutralize the negative charges of the oxide ions.

Physical weathering

The major agents of physical weathering of exposed rocks are rapid changes in temperature (promoting fracture by differential expansion), the abrasive action of windborne material and glacier movement, and especially the penetration of water into cracks and its subsequent freezing. The expansion of water on freezing can exert a pressure of 150 kg cm−2, whereas the tensile strength of a typical rock is around 120 kg cm−2. The roots of some plants are able to penetrate rock quite effectively, producing comparable expansive pressures in subsurface rocks.

Composition and structure of soils

Soils are a product of the interaction of water, air, and living organisms with exposed rocks or sediments at the earth's surface. A typical soil contains about 45% inorganic solids and 5% organic solids by volume; water and air each make up about 20-30%.

Mineral Components of soil

The primary inorganic components of soils consist of sand and silt particles that come directly from the parent rocks. This fraction is dominated by quartz and feldspars (aluminosilicates). Secondary components are formed by chemical changes within the soil itself, or in sediments from which the soil derived. These are most commonly clays, but may also include calcite, gypsum, and sulfide minerals such as pyrites; the latter are formed by bacterial action under reducing conditions in the presence of organic matter. The clays have an especially important effect on both the physical properties of the soil and on its ability to store plant nutrients, including trace nutrients such as Mo and Mn.
These properties are due to the high ion-exchange capacity of clays. The more highly charged cations such as Al3+ and Fe3+ tend to be more strongly bound within the inter-sheet regions than are Mg2+ or K+. As plants withdraw these latter cations from the soil water, more are released by the clay components, which thus act as nutrient reservoirs. The ion-exchange properties of clays also help to maintain the pH balance of soils, through the exchange of H+ and cations such as Ca2+. The soil pH, in turn, strongly affects the solubility of nutrient cations, and thus their availability to plants. For example, the uptake of phosphorus (in the form of H2PO4−) is only efficient within the rather narrow pH range between 6 and 7; below 6, dihydrogen phosphates of Fe and Al are precipitated, while insoluble Ca3(PO4)2 forms at higher pH (a speciation sketch at the end of this section makes the acid-base part of this window quantitative).

Organic Components

Part of the organic matter of soil consists of organisms (mainly bacteria and fungi) and roots and root hairs. The remainder is largely in the form of fulvic and humic acids. These substances of indefinite composition are classified on the basis of their solubility behavior; fulvic acids remain in solution at pH 2, but humic acids, having molecular weights of 20,000 to 100,000, are precipitated. Both are flexible polyelectrolytes that interact strongly with their own kind and with inorganic ions. Associated with the fulvic and humic fractions are a wide variety of smaller molecules such as alkanes, amino acids, amino sugars, and sulfur and phosphorus derivatives of sugars. Part of the organic carbon in a fertile soil is recycled in 1-2 years; plant residues, which are the major source of soil organic matter, have a half-life of days to months. Once carbon gets incorporated into humic substances, it is locked into a much slower recycling process; the turnover times of fulvic acids are a hundred years or more, while those of humic acids are around a thousand years. For this reason, humic substances are the major reservoir of organic carbon in soil. Organic matter, particularly polysaccharides, binds strongly to the cation components of clay colloids; the two together act as cementing agents and strongly influence the consistency and structure of the soil.

Water

Soil water is held by capillary action and adsorption with varying degrees of tenacity. This water-binding strength is traditionally expressed in terms of the pressure, or "tension", that would be required to force the water out of the soil. The tension of capillary water varies over a wide range of 0.1-32 atm; only in the lower half of this range will it be available to plants, which can exert an osmotic pressure of up to about 15 atm. Water in excess of the capillary capacity fills larger voids and is called gravitational water. Its presence in surface soils corresponds to a flooded condition that inhibits plant growth by reducing soil aeration.

Air

The gas phase within soil pores generally has a CO2 content of 5-50 times that of the atmosphere, due to the action of organisms. O2 tends to be depleted to roughly the extent that CO2 is present in excess. Under conditions of poor aeration (i.e., limited exchange with the atmosphere), considerable quantities of N2O, NO, H2, CH4, C2H4, and H2S may also be present.
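As promised above, the acid-base part of the phosphorus-availability window can be made quantitative from the stepwise dissociation of phosphoric acid. A minimal sketch, using standard textbook pKa values (assumed here; they are not given in this text):

```python
# Fraction of total dissolved phosphate present as H2PO4- versus pH,
# computed from the stepwise dissociation of H3PO4.
# pKa values ~2.15, 7.20 and 12.35 are standard textbook figures (assumed).
K1, K2, K3 = 10**-2.15, 10**-7.20, 10**-12.35

def frac_h2po4(pH):
    h = 10**-pH
    # Terms enumerate H3PO4, H2PO4-, HPO4(2-) and PO4(3-) respectively.
    denom = h**3 + K1*h**2 + K1*K2*h + K1*K2*K3
    return K1 * h**2 / denom

for pH in (5, 6, 7, 8):
    print(pH, round(frac_h2po4(pH), 2))
# about 0.99 at pH 5, 0.94 at pH 6, 0.61 at pH 7, 0.14 at pH 8
```

H2PO4− thus dominates dissolved phosphate through the pH 5-7 range; the sharper 6-7 uptake window quoted above reflects the additional precipitation reactions with Fe, Al and Ca, not speciation alone.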
02: The Hydrosphere

Water is the most abundant substance at the earth's surface. Almost all of it is in the oceans, which cover 70% of the surface area of the earth. However, the amounts of water present in the atmosphere and on land (as surface runoff, lakes and streams) are great enough to make it a significant agent in transporting substances between the lithosphere and the oceans. Water interacts with both the atmosphere and the lithosphere, acquiring solutes from each, and thus provides the major chemical link between these two realms. The various transformations undergone by water through the different stages of the hydrologic cycle act to transport both dissolved and particulate substances between different geographic locations.

Composition of Seawater

The composition of the ocean has attracted the attention of some of the more famous names in science, including Robert Boyle, Antoine Lavoisier and Edmund Halley. Their early investigations tended to be difficult to reproduce, owing to the different conditions under which they crystallized the various salts. As many as 54 salts, double salts and hydrated salts can be obtained by evaporating seawater to dryness. At least 73 elements are now known to be present in seawater.

Table 1: Major ions of seawater. These values, expressed in parts per thousand (g/kg), are for seawater of 35‰ salinity.
cations   g/kg       anions    g/kg
Na+       10.77      Cl−       19.354
Mg2+       1.29      SO42−      2.712
Ca2+       0.412     Br−        0.087
K+         0.399
Sr2+       0.0079
Al3+       0.005

The best way of characterizing seawater is in terms of its ionic content, shown above. The remarkable thing about seawater is the constancy of its relative ionic composition. The overall salt content, known as the salinity (grams of salts contained in 1 kg of seawater), varies slightly within the range of 32-37.5‰, corresponding to a total salt content of about 3.5% by mass (an ionic strength of roughly 0.7 M). The ratios of the concentrations of the different ions, however, are quite constant, so that a measurement of the Cl− concentration (the chlorinity) is sufficient to determine the overall composition and total salinity; for many years salinity was in fact computed from the empirical relation S‰ = 1.80655 × Cl‰. Although most elements are found in seawater only at trace levels, marine organisms may selectively absorb them and make them more detectable. Iodine, for example, was discovered in marine algae (seaweeds) 14 years before it was found in seawater. Other elements that were not detected in seawater until after they were found in marine organisms include barium, cobalt, copper, lead, nickel, silver and zinc.
Radioactive Si-32, presumably deriving from cosmic-ray bombardment of Ar, has been discovered in marine sponges.

pH balance. Reflecting this constant ionic composition is the pH, which is usually maintained in the narrow range of 7.8-8.2, compared with 1.5 to 11 for fresh water. The major buffering action derives from the carbonate system, although ion exchange between Na+ in the water and H+ in clay sediments has recently been recognized to be a significant factor.

Conservative and non-conservative substances

The major ionic constituents whose concentrations can be determined from the salinity are known as conservative substances. Their constant relative concentrations are due to the large amounts of these species in the oceans in comparison to their small inputs from river flow. This is another way of saying that their residence times are very large. For example, dividing the ocean's Na+ inventory (479,000 µmol/L × 1.37 × 10^9 km3 of seawater) by the annual river input (315 µmol/L × roughly 37,400 km3 of runoff per year) gives a residence time of about 5.6 × 10^7 years, in agreement with the value of 55 in the table below.

Table 2: Replacement time with respect to river addition for some components of seawater. Concentrations in micromoles per liter.
component   river water   seawater    residence time, 10^6 y
Cl−             250         558,000      87
Na+             315         479,000      55
Mg2+            150          54,300      13
SO42−           120          28,900       8.7
Ca2+            367          10,500       1
K+               36          10,400      10
HCO3−           870           2,000       0.083
H4SiO4          170             100       0.021
NO3−             10              20       0.072
H2PO4−            0.7             1       0.080

A number of other species, mostly connected with biological activity, are subject to wide variations in concentration. These include the nutrients NO3−, NO2−, NH4+, and HPO42−, which may become depleted near the surface in regions of warmth and light. As is explained in the later subsection on coastal upwelling, offshore prevailing winds tend to drive Western coastal surface waters out to sea, causing deeper and more nutrient-rich water to be drawn to the surface. This upwelled water can support a large population of phytoplankton and thus of zooplankton and fish. The best-known example of this is the anchovy fishery off the coast of Peru, but the phenomenon occurs to some extent on the West coasts of most continents, including our own. Other non-conservative components include Ca2+ and dissolved silica. These species are incorporated into the solid parts of marine organisms, which sink to greater depths after the organisms die. The silica gradually dissolves, since the water is everywhere undersaturated in this substance. Calcium carbonate dissolves at intermediate depths, but may reprecipitate in deep waters owing to the higher pressure. Thus the concentrations of Ca and of dissolved silica tend to vary with depth. The gases O2 and CO2, being intimately involved with biological activity, are also non-conservative, as are N2O and CO.

Organic matter

Most of the organic carbon in seawater is present as dissolved material, with only about 1-2% in particulates. The total organic carbon content ranges from 0.5 mg/L in deep water to 1.5 mg/L near the surface. There is still considerable disagreement about the composition of the dissolved organic matter; much of it appears to be of high molecular weight, and may be polymeric. Substances qualitatively similar to the humic acids found in soils can be isolated. The greenish color that is often associated with coastal waters is due to a mixture of fluorescent, high molecular weight substances of undetermined composition known as "Gelbstoffe". It is likely that the significance of the organic fraction of seawater may be much greater than its low abundance would suggest. For one thing, many of these substances are lipid-like and tend to adsorb onto surfaces.
It has been shown that any particle entering the ocean is quickly coated with an organic surface film that may influence the rate and extent of its dissolution or decomposition. Certain inorganic ions may be strongly complexed by humic-like substances. The surface of the ocean is mostly covered with an organic film, only a few molecular layers thick. This is believed to consist of hydrocarbons, lipids, and the like, but glycoproteins and proteoglycans have been reported. If this film is carefully removed from a container of seawater, it will quickly be reconstituted. How significant this film is in its effects on gas exchange with the atmosphere is not known.

Regulation of ocean composition

The salinity of the ocean appears to have been about the same for at least the last 200 million years. There have been changes in the relative amounts of some species, however; the ratio of Na/K has increased from about 1:1 in ancient ocean sediments to its present value of 28:1. Incorporation of calcium into sediments by the action of marine organisms has depleted the Ca/Mg ratio from 1:1 to 1:3.

Table: Mass balance of P, C, and Ca for the oceans. Amounts and fluxes are in relative units; residence times are in years.
element      input to ocean                 dissolved in seawater   in dead organisms   loss to sediments                residence time, y
phosphorus   1 (erosion)                    1                       1                   1 (organic)                      10,000
carbon       100 as CO2, 500 as carbonate   1000                    125                 100 (organic), 600 (carbonate)   165,000
calcium      500 (erosion)                  5000                    25                  500 (as CaCO3)                   10^6

If the composition of the ocean has remained relatively unchanged with time, the continual addition of new mineral substances by the rivers and other sources must be exactly balanced by their removal as sediment, possibly passing through one or more biological systems in the process.

Where does the Salt Come From?

In 1715 Edmund Halley suggested that the age of the ocean (and thus presumably of the world) might be estimated from the rate of salt transport by rivers. When this measurement was actually carried out in 1899, it gave an age of only 90 million years. This is somewhat better than the calculation made in 1654 by James Ussher, the Anglican Archbishop of Armagh, Ireland, based on his interpretation of the Biblical book of Genesis, that the world was created at 9 A.M. on October 23, 4004 BC, but it is still far too recent, being about when the dinosaurs became extinct. What Halley actually described was the residence time, which is about right for Na but much too long for some of the minor elements of seawater. The commonly stated view that the salt content of the oceans derives from surface runoff that contains the products of weathering and soil leaching is not consistent with the known compositions of the major river waters (see Table 2 above). The halide ions are particularly over-represented in seawater, compared to fresh water. These were once referred to as "excess volatiles", and were attributed to volcanic emissions. With the discovery of plate tectonics, it became apparent that the locations of seafloor spreading at which fresh basalt flows up into the ocean from the mantle are also sources of mineral-laden water. Some of this may be seawater that has cycled through a hot porous region and has been able to dissolve some of the mineral material owing to the high temperature. Much of the water, however, is "juvenile" water that was previously incorporated into the mantle material and has never before been in the liquid phase. The substances introduced by this means (and by volcanic activity) are just the elements that are "missing" from river waters.
Estimates of what fraction of the total volume of the oceans is due to juvenile water (most of it added in the early stages of mantle differentiation, billions of years ago) range from 30 to 90%.

Geochemical processes involving the oceans

The oceans can be regarded as the product of a giant acid-base titration in which the carbonic acid present in rain reacts with the basic materials of the lithosphere. The juvenile water introduced at locations of ocean-floor spreading is also acidic, and is partly neutralized by the basic components of the basalt with which it reacts. Surface rocks mostly contain aluminum, silicon and oxygen combined with alkali and alkaline-earth metals, mainly potassium, sodium and calcium. The CO2 and volcanic gases in rainwater react with this material to form a solution of the metal ion and HCO3−, in which is suspended some hydrated SiO2. The solid material left behind is a clay such as kaolinite, Al2Si2O5(OH)4. This first forms as a friable coating on the surface of the weathered rock; later it becomes a soil material, then an alluvial deposit, and finally it may reach the sea as a suspended sediment. Here it may undergo a number of poorly-understood transformations to other clay sediments such as illites. Sea floor spreading eventually transports these sediments to a subduction region under a continental block, where the high temperatures and pressures permit reactions that transform them into hard rock such as granite, thus completing the geochemical cycle. Deep-sea hydrothermal vents are now recognized to be another significant route for both the addition and removal of ionic substances from seawater.

Distribution and cycling of elements in the oceans

Although the relative concentrations of most of the elements in seawater are constant throughout the oceans, there are certain elements that tend to have highly uneven distributions vertically, and to a lesser extent horizontally. Neglecting the highly localized effects of undersea springs and volcanic vents, these variations are direct results of the removal of these elements from seawater by organisms; if the sea were sterile, its chemical composition would be almost uniform. Plant life can exist only in the upper part of the ocean where there is sufficient light available to drive photosynthesis. These plants, together with the animals that consume them, extract nutrients from the water, reducing the concentrations of certain elements in the upper part of the sea. When these organisms die, they fall toward the lower depths of the ocean as particulate material. On the way down, some of the softer particles, deriving from tissue, may be consumed by other animals and recycled. Eventually, however, the nutrient elements that were incorporated into organisms in the upper part of the ocean will end up in the colder, dark, and essentially lifeless lower part. Mixing between the upper and lower reservoirs of the ocean is quite slow, owing to the higher density of the colder water; the average residence time of a water molecule in the lower reservoir is about 1600 years. Since the volume of the upper reservoir is only about 1/20 of that of the lower, a water molecule stays in the upper reservoir for only about 80 years. Except for dissolved oxygen, all elements required by living organisms are depleted in the upper part of the ocean with respect to the lower part.
In the case of the major nutrients P, N and Si, the degree of depletion is sufficiently complete (around 95%) to limit the growth of organisms at the surface. These three elements are said to be biolimiting. A few other biointermediate elements show partial depletion in surface waters: Ca (1%), C (15%), Ba (75%). The organic component of plants and animals has the average composition C80N15P. It is remarkable that the ratio of N:P in seawater (both surface and deep) is also 15:1; this raises the interesting question of to what extent the ocean and life have co-evolved. In the deep part of the ocean the elemental ratio corresponds to C800N15P, but of course with much larger absolute amounts of these elements. Eventually some of this deeper water returns to the surface, where the N and P are quickly taken up by plants. But since plants can only utilize 80 out of every 800 carbon atoms, 90 percent of the carbon will remain in dissolved form, mostly as HCO3−. To work out the balance of Ca and Si used in the hard parts of organisms, we add these elements to the average composition of the lower reservoir to get Ca3200Si50C800N15P. Particulate carbon falls into the deep ocean in the ratio of about two atoms in organic tissue to one atom in the form of calcite. This makes the overall composition of detrital material something like C120N15P; i.e., 80 organic C's and 40 in CaCO3. Accompanying these 40 calcite units will be 40 Ca atoms, but this represents a minor depletion of the 3200 Ca atoms that eventually return to the surface, so this element is only slightly depleted in the upper waters. Silicon, being far less abundant, is depleted to a much greater extent.

The oceanic sediments

The particulate shower

A continual rain of particulate material from dead organisms falls through the ocean. This shower is comprised of three major kinds of material: calcite (CaCO3), silica (SiO2), and organic matter. The first two come from the hard parts of both plants and animals (mainly microscopic animals such as foraminifera and radiolarians). The organic matter is derived mainly from the soft tissues of organisms, and from animal fecal material. Some of this solid material dissolves before it reaches the ocean floor, but not usually before it enters the deep ocean, where it will remain for about 1600 years. The remainder of this material settles onto the floor of the sea, where it forms one component of a layer of sediments that provide important information about the evolution of the sea and of the earth. Over a short time scale of months to years, these sediments are in quasi-equilibrium with the seawater. On a scale of millions of years, the sediments are merely way-stations in the geochemical cycling of material between the earth's surface and its interior. The oceanic sediments have three main origins:

• Detrital material is derived largely from particles deposited in the ocean by rivers and also directly by the wind. These materials are mostly aluminosilicates (clays), along with some quartz. These substances accumulate on the deep ocean floor at a rate of about 0.3 g cm−2 per 1000 years (see the sketch following this list for the layer thickness this implies).
• Authigenic materials are formed by precipitation within the ocean or by crystallization within the sediment itself. These constitute only a small fraction of the total sediment.
• Biogenic components consist mainly of the calcium carbonate and silica that make up the hard parts of organisms.
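As a rough feel for what the detrital flux quoted in the first item implies, it can be converted to a layer thickness. A one-line sketch, assuming a compacted sediment density of about 2.5 g/cm3 (a typical figure, not given in the text):

```python
flux = 0.3      # g per cm^2 per 1000 years (detrital rate quoted above)
density = 2.5   # g/cm^3; assumed typical value for compacted clay-rich sediment
cm_per_kyr = flux / density
print(cm_per_kyr * 1000, "cm per million years")  # 120 cm, i.e. about 1 m per Myr
```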
Scavenging by other organisms is so efficient that only about 0.3% of the total deep-sea sediment consists of actual organic material. Our main interest lies with the silica and calcium carbonate, since these substances form a crucial part of the biogeological cycle. Also, their distributions in the ocean are not uniform, a fact that must tell us something. The skeletons of diatoms and radiolarians are the principal sources of silica sediments. Since the ocean is everywhere undersaturated with respect to silica, only the most resistant parts of these skeletons reach the bottom of the deep ocean and get incorporated into sediments. Silica sediments are less common in the Atlantic ocean, owing to its lower content of dissolved silica. The parts of the ocean where these sediments are accumulating most rapidly correspond to regions of upwelling, where deep water that is rich in dissolved silica rises to the surface and the silica is rapidly fixed by organisms. Where upwelling is absent, the growth of the organisms is limited, and little silica is precipitated. Since deep waters tend to flow from the Atlantic into the Pacific ocean, where most of the upwelling occurs, Atlantic waters are depleted in silica, and silica sediments are not commonly found in this ocean. For calcium carbonate, the situation is quite different. In the first place, surface waters are everywhere supersaturated with respect to both calcite and aragonite, the two common crystal forms of CaCO3. Secondly, Ca2+ and HCO3− are never limiting factors in the growth of the coccoliths (plants) and forams (animals) that precipitate CaCO3; their production depends on the availability of phosphate and nitrogen. Because these elements are efficiently recycled before they fall into the deep ocean, their supply does not depend on upwelling, and so the production of solid CaCO3 is more uniformly distributed over the world's oceans. More importantly, however, the chances that a piece of carbonate skeleton will end up as sediment are highly dependent on both the local CO32− concentration and the depth of the ocean floor (see the saturation-state sketch at the end of this section). These factors give rise to variations in the accumulation of carbonate sediments that can be quite wide-ranging.

Ocean sediments and continental drift

New crust is being generated and moving away from the crests of the mid-ocean ridges at a rate of a few centimetres per year. Although the crests of these ridges are relatively high points, projecting to within about 3000 m of the surface, the continual injection of new material prevents sediments from accumulating in these areas. Farther from the crests, carbonate sediments do build up, eventually reaching a thickness of about 500 m, but by this time the elevation has dropped off below the saturation horizon, so from this point on the carbonate sediments are overlaid by red clay. If we drill a hole down through a part of the ocean floor that is presently below the saturation horizon, the top part of the drill core will consist of clay, followed by CaCO3 at greater depths. The core may also contain regions in which silica predominates. Since silica production is very high in equatorial regions, the appearance of such a layer suggests that this particular region of the oceanic crust has moved across the equator.
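The depth dependence of carbonate preservation described above is conventionally expressed through the saturation state Ω = [Ca2+][CO32−]/Ksp*. The sketch below uses ballpark literature values for a warm surface ocean; the CO32− concentration and the stoichiometric solubility product Ksp* are assumed figures, not taken from this text:

```python
# Saturation state of surface seawater with respect to calcite.
# Omega > 1 means supersaturated: precipitation is favored over dissolution.
ca  = 1.05e-2   # mol/kg Ca2+, consistent with the seawater table earlier
co3 = 2.0e-4    # mol/kg CO3(2-), typical warm surface value (assumed)
ksp = 4.3e-7    # mol^2/kg^2, stoichiometric Ksp* of calcite in seawater at
                # roughly 25 C and salinity 35 (assumed literature figure)

omega = ca * co3 / ksp
print(f"Omega = {omega:.1f}")   # about 5: strongly supersaturated at the surface
```

Because Ksp* increases with pressure and decreasing temperature while the CO32− concentration falls with depth, Ω drops below 1 in deep water; the depth at which this happens is the saturation horizon referred to above.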
How inappropriate to call this planet Earth, when clearly it is Ocean. (Arthur C. Clarke)

Table \(1\): Inventory of water on Earth
Reservoir                 Volume / 10^6 km3    Percent of total
Oceans                    1370                 97.25
Ice caps and glaciers     39                   2.05
Deep groundwater          5.3                  0.38
Shallow groundwater       4.3                  0.30
Lakes                     0.125                0.01
Soil moisture             0.065                0.005
Atmosphere                0.013                0.001
Rivers                    0.0017               0.0001
Biosphere                 0.0006               0.00004
Total                     1408.7               100

Where did the water come from?

It appears to have been bound up in the silica-based materials such as micas and amphiboles which accreted to form the Earth. The heat released during this process would have been sufficient to drive off this water, which amounted to about 0.01% by mass of the primordial material.

The hydrologic cycle

The hydrologic cycle refers to the steady state that exists between evaporation, condensation, percolation, runoff, and circulation of water on the earth's surface and through the atmosphere. The cycle is driven by solar energy, mainly through direct vaporization, but also by convective motion induced by uneven heating. The major interphase transport process of the hydrologic cycle is evaporation of water from the ocean. However, 90% of this vapor falls directly back into the ocean as rain, while 10% is transported over the land. Of the latter, about two-thirds evaporates again and one-third runs off to the ocean. Water is taken up by the atmosphere from the earth's surface in vapor form through evaporation. It may then be moved from place to place by the wind until it is condensed back to its liquid phase to form clouds. Water then returns to the surface of the earth in the form of either liquid (rain) or solid (snow, sleet, etc.) precipitation. Water transport can also take place on or below the earth's surface by flowing glaciers, rivers, and groundwater flow. The amounts of water precipitated onto the land and oceans are in approximate proportion to the relative surface areas, but evaporation from the ocean exceeds that from the land by about 37,400 km3 per year. This difference is the amount of water transported to the oceans by river runoff. When water condenses from the atmosphere in the form of rain, it is slightly enriched in H2O18 relative to the vapor; the vapor itself, and hence the precipitation that builds high-latitude ice sheets, is depleted in H2O18 relative to the oceans. During epochs of glacial buildup the fraction of H2O18 in the oceans consequently increases. Observation of H2O18/H2O16 ratios in marine sediments is thus one way of studying the timing and extent of past glaciations. Since the degree of heavy-isotope enrichment of condensed water is temperature dependent, this same method can be used to estimate mean world temperatures in the distant past. The hydrologic cycle also has important effects on the energy budget of the earth.
Atmospheric water vapor (along with carbon dioxide and methane) tends to absorb the long-wavelength infrared radiation emitted by the earth's surface, partially trapping the incoming shorter-wavelength energy and thus maintaining the mean surface temperature about 30 °C higher than would be the case in the absence of water vapor. Of the 51% of the solar radiation incident on the atmosphere that reaches the earth's surface, about half (23% of the total) is used to evaporate water. During the ten days that an average molecule resides in the atmosphere, it will travel about 1000 km. The atmospheric transport of water from equatorial to subtropical regions serves as an important mechanism for the transport of thermal energy; at latitudes of about 40°, as much as one-third of the energy input comes from release of latent heat from water vapor formed in equatorial regions.

Oceanic circulation

About 97% of the earth's water is contained in the two reservoirs which comprise the oceans. The upper mixed layer contains about 5% of the total; it is separated from the deeper and colder layer by the thermocline. Mixing between these two stratified layers is very slow; of the total ocean volume of 6.8 × 10^18 m3, only about 0.7 × 10^15 m3, or about 0.01%, moves between the two layers per year. The mean residence time of a water molecule in the deep layer is about 1600 years. The large-scale motions of ocean water are the primary means by which chemical substances, especially those taken up and excreted by organisms, are transported within the ocean. An understanding of the general patterns of this circulation is essential in order to analyze the observed distribution of many of the chemical elements in different parts of the ocean and in the oceanic sediments.

Atmospheric circulation

The circulation of the surface waters of the ocean is driven by the prevailing winds. The latter arise from uneven heating of the earth's surface, and are arranged in bands that parallel the equator. Although the motions of the waters at the surface of the ocean are driven by the winds, they do not follow them in a simple manner. The reasons are threefold: the Coriolis effect, the presence of land masses, and unevenness in the sea level due to regional differences in temperature and atmospheric pressure. The most intense heat input into the atmosphere occurs near the equator, where the heated air rises and cools, producing intense local precipitation but little surface wind. After cooling and losing moisture, this air moves north and south and descends at a latitude of about 30°. As it descends, it warms (largely by adiabatic compression) and its relative humidity decreases. The extreme dryness of this air gives rise to the subtropical desert regions between about 15° and 30°. Part of this air flows back toward the equator, giving rise to the northeast and southeast trade winds; the deflection to the east or west is caused by the Coriolis effect. Another part of the descending air travels poleward, producing the prevailing westerlies. Eventually these collide with cold air masses moving away from the polar regions, producing a region of unstable air and storm activity known as a polar front. Some of this polar air picks up enough heat to rise and enter into polar cell circulation patterns. The flow of air in the prevailing westerlies is subject to considerable turbulence, which gives rise to planetary waves.
These are moving regions in which warm surface air is lifted to higher levels, producing lines of storms that travel from west to east, and exchanging more air between the polar and temperate regions.

Surface currents of the oceans

In the Northern hemisphere, the Coriolis effect deflects moving objects to the right of their direction of motion (south-moving objects, for example, are deflected toward the west), and it causes currents flowing parallel to the equator to veer to the right of the direction of flow, i.e. to the north or south. In addition, prevailing westerly winds and the eastward rotation of the earth cause water to pile up by a few centimeters at the western edges of the oceans. The resultant downhill flow, interacting with Coriolis forces, produces a western boundary current that runs south-to-north in the northern hemisphere. A similar but opposite effect gives rise to a south-flowing eastern boundary current at the eastern margins of the ocean basins, along the west coasts of the continents.

Thermohaline circulation

In contrast to the upper levels of the ocean, the deep ocean is stratified; the density increases with depth so as to inhibit the vertical transport of water. This stratification divides the deep oceans into several distinct water masses which undergo movement in a more or less horizontal plane, with adjacent masses sometimes moving in opposite directions. The winds and atmospheric effects outlined above affect only the upper part of the ocean. Below 100 meters or so, oceanic circulation is driven by the density of the seawater, which is determined by its temperature and its salinity. Variations in these two quantities give rise to the thermohaline circulation of the deep currents of the ocean. It all starts when seasonal ice begins to form in the polar regions. Because the salts dissolved in seawater cannot be accommodated within the ice structure, they are largely excluded from the new ice and remain in solution. This increases the density of the surrounding unfrozen water, causing it to sink toward the bottom of the ocean. There are two major locations at which surface waters enter the deep ocean. The northern entry point is in the Norwegian Sea off Greenland; this water forms a mass known as the North Atlantic Deep Water (NADW), which flows southward across the equator. Most of the transport into the deep ocean, however, takes place in the Weddell Sea off the coast of Antarctica, where highly saline water flows down the submerged Antarctic Slope to begin a 5000-year trip to the north across the bottom of the ocean. This water mass, known as the Antarctic Bottom Water (AABW), can be traced into all three oceans, and its formation is the major route by which dissolved CO2 and O2 (which are more soluble in this cold water) are transported into the deep ocean. The Pacific Ocean lacks any major identifiable direct source of cold water, so it is less differentiated and its deep circulation is sluggish and poorly defined. The vertical profiles of temperature and especially of salinity are not uniform, and to some extent these two parameters have opposite effects: in equatorial regions, temperatures are higher (leading to lower density) but evaporation rates are also higher (leading to higher density). In polar regions, the formation of sea ice raises the density of the seawater (because only a small proportion of salt is incorporated into the ice).
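The density effect of brine rejection can be estimated with a linearized equation of state, Δρ ≈ ρ0 β ΔS, where β is the haline contraction coefficient. A minimal sketch; the values of ρ0 and β below are standard oceanographic figures, assumed here rather than taken from this text:

```python
rho0 = 1028.0   # kg/m^3, reference density of cold surface seawater (assumed)
beta = 7.8e-4   # per (g/kg), haline contraction coefficient (assumed)

def density_increase(delta_s):
    """Density change (kg/m^3) for a salinity increase delta_s in g/kg."""
    return rho0 * beta * delta_s

# Brine rejection that raises the local salinity by 1 g/kg:
print(round(density_increase(1.0), 2), "kg/m^3")  # about 0.8 kg/m^3, enough to sink
```

The temperature term of the equation of state is omitted here because near-freezing seawater expands very little with temperature, so salinity dominates the density change in polar surface waters.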
The nature and extent of the deep ocean currents differ in the Atlantic, Pacific, and Indian oceans. These currents are much slower than the surface currents, and in fact have not been measured directly; their existence is however clearly implied by the chemical composition and temperature of water samples taken from various parts of the ocean. Estimated rates are of the order of kilometers per month, in contrast to the few kilometers per hour of surface waters. The deep currents are the indirect results of processes occurring at the surface in which cold water of high salinity is produced as sea ice forms in the arctic and antarctic regions. This water is so dense that it sinks to the bottom, displacing warmer or less saline water as it moves.

Coastal Upwelling

Recirculation of deep water to the surface occurs to a very small extent in many regions, but it is especially pronounced where water entering the Antarctic Bottom mass displaces other bottom water, and where prevailing winds drive surface water away from the west coasts of continents; the displaced surface water is replaced by colder water drawn up from the deep ocean. The deep ocean contains few organisms to deplete the water of the nutrients it receives from the remains of the dead organisms floating down from above; this upwelled water is therefore exceptionally rich in nutrients, and strongly encourages the growth of new organisms that extend up the food chain to fish. Thus the wind-driven upwelling that occurs off the west coast of South America is responsible for the Peruvian fishing and guano fertilizer industry. About every seven years these prevailing winds disappear for a while, allowing warm equatorial waters to move in. This phenomenon is known as El Niño, and it results in massive kills of plankton and fish. Decomposition of the dead organisms reduces the oxygen content of the water, causing the death of still more fish, and allowing reduced compounds such as hydrogen sulfide to accumulate. Matthias Tomczak's Introduction to physical oceanography is an excellent source of more information on the topics covered above.
textbooks/chem/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.02%3A_The_hydrosphere_and_the_oceans.txt
Composition of seawater The composition of the ocean has attracted the attention of some of the more famous names in science, including Robert Boyle, Antoine Lavoisier and Edmund Halley. Their early investigations tended to be difficult to reproduce, owing to the different conditions under which they crystallized the various salts. As many as 54 salts, double salts and hydrated salts can be obtained by evaporating seawater to dryness. At least 73 elements are now known to be present in seawater. cations g/kg anions g/kg Na+ 10.77 Cl 19.354 Mg2+ 1.29 SO42 2.712 Ca2+ 0.412 Br 0.087 K+ 0.399 Sr2+ 0.0079 Al3+ 0.005 Major ions of seawater These values, expressed in parts per thousand, are for seawater of 35% salinity. The best way of characterizing seawater is in terms of its ionic content, shown above. The remarkable thing about seawater is the constancy of its relative ionic composition. The overall salt content, known as the salinity (grams of salts contained in 1 kg of seawater), varies slightly within the range of 32-37.5%, corresponding to a solution of about 0.7% salt content. The ratios of the concentrations of the different ions, however, are quite constant, so that a measurement of Cl concentration is sufficient to determine the overall composition and total salinity. Although most elements are found in seawater only at trace levels, marine organisms may selectively absorb them and make them more detectable. Iodine, for example, was discovered in marine algae (seaweeds) 14 years before it was found in seawater. Other elements that were not detected in seawater until after they were found in marine organisms include barium, cobalt, copper, lead, nickel, silver and zinc. Si32, presumably deriving from cosmic ray bombardment of Ar, has been discovered in marine sponges. pH balance. Reflecting this constant ionic composition is the pH, which is usually maintained in the narrow range of 7.8-8.2, compared with 1.5 to 11 for fresh water. The major buffering action derives from the carbonate system, although ion exchange between Na+ in the water and H+ in clay sediments has recently been recognized to be a significant factor. Conservative and non-conservative substances The major ionic constituents whose concentrations can be determined from the salinity are known as conservative substances. Their constant relative concentrations are due to the large amounts of these species in the oceans in comparison to their small inputs from river flow. This is another way of saying that their residence times are very large. Ion Concentration in river water micromols/L Concentration in sea water micromols/L Residence time/106 years Cl 250 558,000 87 Na+ 315 479,000 55 Mg2+ 150 54,300 13 SO42 120 28,900 8.7 Ca2+ 367 10,500 1 K+ 36 10,400 10 HCO3 870 2000 0.083 H4SiO4 170 100 0.021 NO3 10 20 0.072 H2PO4 0.7 1 0.080 Replacement time with respect to river addition for some components of seawater A number of other species, mostly connected with biological activity, are subject to wide variations in concentration. These include the nutrients NO3, NO2, NH4+, and HPO42, which may become depleted near the surface in regions of warmth and light. As was explained in the preceding subsection on coastal upwelling, offshore prevailing winds tend to drive Western coastal surface waters out to sea, causing deeper and more nutrient-rich water to be drawn to the surface. This upwelled water can support a large population of phytoplankton and thus of zooplankton and fish. 
The best-known example of this is the anchovy fishery off the coast of Peru, but the phenomenon occurs to some extent on the West coasts of most continents, including our own. Other non-conservative components include Ca2+ and SiO42. These ions are incorporated into the solid parts of marine organisms, which sink to greater depths after the organisms die. The silica gradually dissolves, since the water is everywhere undersaturated in this substance. Calcium carbonate dissolves at intermediate depths, but may reprecipitate in deep waters owing to the higher pressure. Thus the concentrations of Ca and of SiO42 tend to vary with depth. The gases O2 and CO2, being intimately involved with biological activity, are also non-conservative, as are N2O and CO. Organic matter Most of the organic carbon in seawater is present as dissolved material, with only about 1-2% in particulates. The total organic carbon content ranges between 0.5 mg/L in deep water to 1.5 mg/L near the surface. There is still considerable disagreement about the composition of the dissolved organic matter; much of it appears to be of high molecular weight, and may be polymeric. Substances qualitatively similar to the humic acids found in soils can be isolated. The greenish color that is often associated with coastal waters is due to a mixture of fluorescent, high molecular weight substances of undetermined composition known as “Gelbstoffe”. It is likely that the significance of the organic fraction of seawater may be much greater than its low abundance would suggest. For one thing, many of these substances are lipid-like and tend to adsorb onto surfaces. It has been shown that any particle entering the ocean is quickly coated with an organic surface film that may influence the rate and extent of its dissolution or decomposition. Certain inorganic ions may be strongly complexed by humic-like substances. The surface of the ocean is mostly covered with an organic film, only a few molecular layers thick. This is believed to consist of hydrocarbons, lipids, and the like, but glycoproteins and proteoglycans have been reported. If this film is carefully removed from a container of seawater, it will quickly be reconstituted. How significant this film is in its effects on gas exchange with the atmosphere is not known. Regulation of ocean composition The salinity of the ocean appears to have been about the same for at least the last 200 million years. There have been changes in the relative amounts of some species, however; the ratio of Na/K has increased from about 1:1 in ancient ocean sediments to its present value of 28:1. Incorporation of calcium into sediments by the action of marine organisms has depleted the Ca/Mg ratio from 1:1 to 1:3. Mass balance of P, C, and Ca for the oceans Element input to ocean dissolved in seawater in dead organisms loss to sediments residence time, y phosphorus 1 (erosion) 1 1 1 (organic) 10,000 carbon 100 as CO2 500 as carbonate 1000 125 100 (organic) 600 (carbonate) 165,000 calcium 500 (erosion) 5000 25 500 (as CaCO3) 106 If the composition of the ocean has remained relatively unchanged with time, the continual addition of new mineral substances by the rivers and other sources must be exactly balanced by their removal as sediment, possibly passing through one or more biological systems in the process. Where does the salt come from? In 1715 Edmund Halley suggested that the age of the ocean (and thus presumably of the world) might be estimated from the rate of salt transport by rivers. 
When this measurement was actually carried out in 1899, it gave an age of only 90 million years. This is somewhat better than the calculation made in 1654 by James Ussher, the Anglican Archbishop of Armagh, Ireland, based on his interpretation of the Biblical book of Genesis, that the world was created at 9 A.M. on October 23, 4004 BC, but it is still far too recent, being about when the dinosaurs became extinct. What Halley actually described was the residence time, which is about right for Na but much to long for some of the minor elements of seawater. The commonly stated view that the salt content of the oceans derives from surface runoff that contains the products of weathering and soil leaching is not consistent with the known compositions of the major river waters (See Table). The halide ions are particularly over-represented in seawater, compared to fresh water. These were once referred to as “excess volatiles”, and were attributed to volcanic emissions. With the discovery of plate tectonics, it became apparent that the locations of seafloor spreading at which fresh basalt flows up into the ocean from the mantle are also sources of mineral-laden water. Some of this may be seawater that has cycled through a hot porous region and has been able to dissolve some of the mineral material owing to the high temperature. Much of the water, however, is “juvenile” water that was previously incorporated into the mantle material and has never before been in the liquid phase. The substances introduced by this means (and by volcanic activity) are just the elements that are “missing” from river waters. Estimates of what fraction of the total volume of the oceans is due to juvenile water (most of it added in the early stages of mantle differentiation that began a billion years ago) range from 30 to 90%. Geochemical processes involving the oceans The oceans can be regarded as a product of a giant acid-base titration in which the carbonic acid present in rain reacts with the basic materials of the lithosphere. The juvenile water introduced at locations of ocean-floor spreading is also acidic, and is partly neutralized by the basic components of the basalt with which it reacts. Surface rocks mostly contain aluminum, silicon and oxygen combined with alkali and alkaline-earth metals, mainly potassium, sodium and calcium. The CO2 and volcanic gases in rainwater react with this material to form a solution of the metal ion and HCO3, in which is suspended some hydrated SiO2. The solid material left behind is a clay such as kaolinite, Al2Si2O5(OH)4. This first forms as a friable coating on the surface of the weathered rock; later it becomes a soil material, then an alluvial deposit, and finally it may reach the sea as a suspended sediment. Here it may undergo a number of poorly-understood transformations to other clay sediments such as illites. Sea floor spreading eventually transports these sediments to a subduction region under a continental block, where the high temperatures and pressures permit reactions that transform it into hard rock such as granite, thus completing the geochemical cycle. Deep-sea hydrothermal vents are now recognized to be another significant route for both the addition and removal of ionic substances from seawater. 
Distribution and cycling of elements in the oceans

Although the relative concentrations of most of the elements in seawater are constant throughout the oceans, certain elements have highly uneven distributions vertically, and to a lesser extent horizontally. Neglecting the highly localized effects of undersea springs and volcanic vents, these variations are direct results of the removal of these elements from seawater by organisms; if the sea were sterile, its chemical composition would be almost uniform.

Plant life can exist only in the upper part of the ocean, where there is sufficient light available to drive photosynthesis. These plants, together with the animals that consume them, extract nutrients from the water, reducing the concentrations of certain elements in the upper part of the sea. When these organisms die, they fall toward the lower depths of the ocean as particulate material. On the way down, some of the softer particles, deriving from tissue, may be consumed by other animals and recycled. Eventually, however, the nutrient elements that were incorporated into organisms in the upper part of the ocean end up in the colder, dark, and essentially lifeless lower part.

Mixing between the upper and lower reservoirs of the ocean is quite slow, owing to the higher density of the colder water; the average residence time of a water molecule in the lower reservoir is about 1600 years. Since the volume of the upper reservoir is only about 1/20 of that of the lower, a water molecule stays in the upper reservoir for only about 80 years.

Except for dissolved oxygen, all elements required by living organisms are depleted in the upper part of the ocean with respect to the lower part. In the case of the major nutrients P, N and Si, the degree of depletion is sufficiently complete (around 95%) to limit the growth of organisms at the surface. These three elements are said to be biolimiting. A few other, biointermediate elements show partial depletion in surface waters: Ca (1%), C (15%), Ba (75%).

The organic component of plants and animals has the average composition C80N15P. It is remarkable that the ratio of N:P in seawater (both surface and deep) is also 15:1; this raises the interesting question of to what extent the ocean and life have co-evolved. In the deep part of the ocean the elemental ratio corresponds to C800N15P, but of course with much larger absolute amounts of these elements. Eventually some of this deeper water returns to the surface, where the N and P are quickly taken up by plants. But since plants can utilize only 80 out of every 800 carbon atoms, 90 percent of the carbon remains in dissolved form, mostly as HCO3–.

To work out the balance of Ca and Si used in the hard parts of organisms, we add these elements to the average composition of the lower reservoir to get Ca3200Si50C800N15P. Particulate carbon falls into the deep ocean in the ratio of about two atoms in organic tissue to one atom in the form of calcite. This makes the overall composition of detrital material something like C120N15P; i.e., 80 organic C's and 40 in CaCO3. Accompanying these 40 calcite units will be 40 Ca atoms, but this represents a minor depletion of the 3200 Ca atoms that eventually return to the surface, so this element is only slightly depleted in the upper waters. Silicon, being far less abundant, is depleted to a much greater extent.
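The two-reservoir bookkeeping used above can be captured in a couple of lines. This is only a sketch of the steady-state argument, using the mixing figures already quoted (a 1600-year deep-water residence time and an upper reservoir 1/20 the volume of the lower):

```python
# Two-box ocean: a single exchange flux connects the upper and lower
# reservoirs, so at steady state residence time scales with volume.

t_lower_yr = 1600      # residence time of water in the lower reservoir
volume_ratio = 1 / 20  # upper-reservoir volume relative to the lower

t_upper_yr = t_lower_yr * volume_ratio
print(f"Upper-reservoir residence time: {t_upper_yr:.0f} yr")  # 80 yr
```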
The oceanic sediments

The particulate shower. A continual rain of particulate material from dead organisms falls through the ocean. This shower comprises three major kinds of material: calcite (CaCO3), silica (SiO2), and organic matter. The first two come from the hard parts of both plants and animals (mainly microscopic animals such as foraminifera and radiolarians). The organic matter is derived mainly from the soft tissues of organisms, and from animal fecal material. Some of this solid material dissolves before it reaches the ocean floor, but not usually before it enters the deep ocean, where it will remain for about 1600 years. The remainder of this material settles onto the floor of the sea, where it forms one component of a layer of sediments that provide important information about the evolution of the sea and of the earth. Over a short time scale of months to years, these sediments are in quasi-equilibrium with the seawater. On a scale of millions of years, the sediments are merely way-stations in the geochemical cycling of material between the earth's surface and its interior.

The oceanic sediments have three main origins:

• Detrital material is derived largely from particles deposited in the ocean by rivers and also directly by the wind. These materials are mostly aluminosilicates (clays), along with some quartz. These substances accumulate on the deep ocean floor at a rate of about 0.3 g cm–2 per 1000 years.
• Authigenic materials are formed by precipitation within the ocean or by crystallization within the sediment itself. These constitute only a small fraction of the total sediment.
• Biogenic components consist mainly of the calcium carbonate and silica that make up the hard parts of organisms. Scavenging by other organisms is so efficient that only about 0.3% of the total deep-sea sediment consists of actual organic material.

Our main interest lies with the silica and calcium carbonate, since these substances form a crucial part of the biogeological cycle. Also, their distributions in the ocean are not uniform, a fact that must tell us something.

The skeletons of diatoms and radiolarians are the principal sources of silica sediments. Since the ocean is everywhere undersaturated with respect to silica, only the most resistant parts of these skeletons reach the bottom of the deep ocean and become incorporated into sediments. The parts of the ocean where these sediments are accumulating most rapidly correspond to regions of upwelling, where deep water that is rich in dissolved silica rises to the surface and the silica is rapidly fixed by organisms. Where upwelling is absent, the growth of the organisms is limited, and little silica is precipitated. Since deep waters tend to flow from the Atlantic into the Pacific ocean, where most of the upwelling occurs, Atlantic waters are depleted in silica, and silica sediments are not commonly found in that ocean.

For calcium carbonate, the situation is quite different. In the first place, surface waters are everywhere supersaturated with respect to both calcite and aragonite, the two common crystal forms of CaCO3. Secondly, Ca2+ and HCO3– are never limiting factors in the growth of the coccoliths (plants) and forams (animals) that precipitate CaCO3; their production depends on the availability of phosphate and nitrogen. Because these elements are efficiently recycled before they fall into the deep ocean, their supply does not depend on upwelling, and so the production of solid CaCO3 is more uniformly distributed over the world's oceans.
More importantly, however, the chances that a piece of carbonate skeleton will end up as sediment depend strongly on both the local CO32– concentration and the depth of the ocean floor. These factors give rise to variations in the accumulation of carbonate sediments that can be quite wide-ranging.

Ocean sediments and continental drift

New crust is being generated and moving away from the crests of the mid-ocean ridges at a rate of a few centimetres per year. Although the crests of these ridges are relatively high points, projecting to within about 3000 m of the surface, the continual injection of new material prevents sediments from accumulating in these areas. Farther from the crests, carbonate sediments do build up, eventually reaching a thickness of about 500 m; but by this time the elevation has dropped below the saturation horizon, so from this point on the carbonate sediments are overlaid by red clay. If we drill a hole down through a part of the ocean floor that is presently below the saturation horizon, the top part of the drill core will consist of clay, followed by CaCO3 at greater depths. The core may also contain regions in which silica predominates. Since silica production is very high in equatorial regions, the appearance of such a layer suggests that this particular region of the oceanic crust has moved across the equator.
To the extent that the composition of the ocean remains constant, the rate at which any one element is introduced into seawater must equal the rate of its removal. A listing of the various routes of addition and removal, together with the estimated rate of each process, constitutes the budget for a given element. If that budget is greatly out of balance and no other transport routes are apparent, then it is likely that the ocean is not in a steady state with respect to that element, at least on a short time scale. It is important to understand, however, that short-term deviations from constant composition are not necessarily inconsistent with a long-term steady state. Deviations from the latter condition are most commonly inferred from geological evidence.

The major input of elements to the oceans is river water. Groundwater seepage constitutes a very small secondary source. These were considered the only sources until the 1970s, when the existence of hydrothermal springs at sites of seafloor spreading became known. There are presently no reliable estimates of the magnitude of this source. Pollution represents an additional input, mainly dissolved in river water, but also sometimes in rain and by dry deposition.

Routes of removal are formation and burial of sediments, formation of evaporite deposits, direct input to the atmosphere by sea-salt aerosol transfer associated with bubble-breaking, and burial with sediments, either in interstitial water or adsorbed onto active surfaces. Reaction with newly-formed basalt associated with undersea volcanic activity appears to be an important removal mechanism for some elements.

Table 7: Input and output rates of major elements in seawater by two routes (units: Tg/yr). The other major routes involve interaction with the seafloor through hydrothermal vents and incorporation into sediments, but only river input and loss to the atmosphere are quantifiable with any degree of accuracy.

| Species | Addition from rivers | Loss to atmosphere |
|------------------|------|-----|
| Cl– | 308 | 40 |
| Na+ | 269 | 21 |
| S (mostly SO42–) | 143 | 4 |
| Mg2+ | 137 | 3 |
| K+ | 52 | 1 |
| Ca2+ | 550 | 0.5 |
| HCO3– | 1980 | — |
| H4SiO4 | 180 | — |
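A quick way to read Table 7 is to ask what fraction of each river input is returned directly to the atmosphere; the sketch below does just that, using only the table's figures. For most species the answer is a few percent at most, which is why sediment burial and seafloor reactions must carry the rest of the removal.

```python
# Fraction of river input returned to the atmosphere as sea-salt
# aerosol, from the Table 7 figures (Tg/yr).

river_in = {"Cl-": 308, "Na+": 269, "S": 143,
            "Mg2+": 137, "K+": 52, "Ca2+": 550}
to_atm = {"Cl-": 40, "Na+": 21, "S": 4,
          "Mg2+": 3, "K+": 1, "Ca2+": 0.5}

for sp, riv in river_in.items():
    print(f"{sp:>5}: {to_atm[sp] / riv:6.1%} of river input to atmosphere")
```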
Phosphorus

The major elements undergoing steady-state dynamic change in the ocean are connected with biological processes. The key limiting element in the development of oceanic biomass is phosphorus, in the form of the phosphate ion. (For terrestrial plant life, nitrogen, taken up in the form of the nitrate ion, is more commonly the limiting element.) In the ocean, however, the ratio of the nitrate ion concentration to that of phosphorus has been found to be everywhere the same; this implies that the concentration of one controls that of the other. The source of nitrate ion is atmospheric N2, which is always present in abundance in dissolved form. The conversion of N2 to NO3– is presumed to be biologically mediated, probably by bacteria. The constancy of the NO3–/P concentration ratio implies that the phosphorus concentration controls the activity of the nitrogen-fixing organisms, and thus the availability of nitrogen to oceanic life.

Photosynthetic activity in the upper part of the ocean causes inorganic phosphate to be incorporated into biomass, reducing the concentration of phosphorus; in warm surface waters, phosphate may become totally depleted. A given phosphorus atom may be traded several times among the plant, animal and bacterial populations before it eventually finds itself in biodebris (a dead organism or a fecal pellet) that falls into the deep part of the sea.

Only about 1% of the phosphorus atoms that descend into deeper waters actually reach the bottom, where they are incorporated into sediments and permanently removed from circulation. The other 99% are released in the form of soluble phosphate, which is eventually brought to the surface in regions of upwelling. An average phosphorus atom will undergo one cycle of this circulation in about 1000 years; only a few months of each cycle will be spent in biomass. After an average of 100 such cycles, the atom will be removed from circulation and locked into the bottom sediment, and a new one will have entered the sea with river or juvenile water.

Phosphorus is unique in that its major source of input to the oceans derives ultimately from pollution; in the long term, this represents a transfer of land-based phosphate deposits to the oceans. About half of the phosphorus input is in the form of suspended material, both organic and inorganic, the latter being in a variety of forms including phosphates adsorbed onto clays and iron oxide particles, and calcium phosphate (apatite) eroded from rocks. These various particulates are known to dissolve to some extent once they reach the ocean, but there is considerable uncertainty about the rates of these processes under various conditions.

The major sink for oceanic phosphorus is burial with organic matter; this accounts for about two-thirds of the phosphorus removed. Most of the remainder is due to deposition with CaCO3. A minor removal route is through reaction with Fe(II) formed when seawater attacks hot basalt, and in the formation of evaporite deposits. However, there is more phosphorus in evaporite deposits in the western U.S. than in all of the ocean, so it is apparent that the long-term phosphorus budget is still not clearly understood.

Carbon

Carbon enters the ocean from both the atmosphere (as CO2) and river water, in which the principal species is HCO3–. Once in solution, the carbonate species are in equilibrium with each other and with H3O+, and the concentrations of all of these are influenced by the partial pressure of atmospheric CO2. The mass budget for calcium is linked to that of carbon through solubility equilibria with the various solid forms of CaCO3 (mainly calcite).

During photosynthesis, C12 is taken up slightly more readily than the rare isotope C13. Since the rate of photosynthesis is controlled by the phosphate concentration, the C13/C12 ratio in the dissolved carbon dioxide of surface waters is slightly higher than in the ocean as a whole. Observations of carbon isotope ratios in buried sediments have been useful in tracking historical changes in phosphate concentrations.

The C:P ratio and the regulation of atmospheric CO2

The ratio of carbon to phosphorus in sea salt is about eight times greater than the same ratio measured in organic debris. This implies that in exhausting the available phosphate, the living organisms in the upper ocean consume only 12.5% of the dissolved carbon. Even this relatively small withdrawal of carbon from the carbonate system is sufficient to noticeably reduce the partial pressure of gaseous CO2 in equilibrium with the ocean; it has been estimated that if all life in the ocean should suddenly cease, the atmospheric CO2 content would rise to about three times its present level.
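The withdrawal fraction quoted above follows directly from the ratios; a minimal check, which also compares it with the C800N15P and C80N15P compositions given earlier:

```python
# Carbon withdrawal implied by the C:P ratios. If sea salt carries ~8x
# more C per P than organic debris, exhausting the phosphate consumes
# 1/8 = 12.5% of the dissolved carbon; the deep-water (C800N15P) and
# organic (C80N15P) compositions quoted earlier give the similar ~10%.

print(f"From the 8:1 ratio: {1 / 8:.1%}")      # 12.5%
print(f"From C800 vs C80:   {80 / 800:.1%}")   # 10.0%
```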
The regulation of atmospheric CO2 pressure by the oceans also works the other way: since the amount of dissolved carbonate in the oceans is so much greater than the amount of CO2 in the atmosphere, the oceans act to buffer the effects of additions of CO2 to the atmosphere. Calculations indicate that about half of the CO2 that has been produced by burning fossil fuel since the Industrial Revolution has ended up in the oceans.

Bicarbonate

This is of course the major carbonate species in the ocean. Although HCO3– is interconvertible with CO2 and is thus coupled to the carbon and photosynthetic cycles, it can itself neither be taken up nor produced by organisms, and so it can be treated somewhat independently of biological activity. In this sense the only major input of HCO3– into the oceans is river water. The two removal mechanisms are the formation of CO2

$H^+ + HCO_3^- \rightarrow H_2O + CO_2$

and the (biologically mediated) formation of CaCO3:

$Ca^{2+} + HCO_3^- \rightarrow CaCO_3 + H^+$

Since the pH of the oceans does not change, H+ is conserved, and the removal of HCO3– by biogenic secretion of CaCO3 can be expressed by the sum of these reactions:

$Ca^{2+} + 2\, HCO_3^- \rightarrow CaCO_3 + CO_2 + H_2O$

whose reverse direction represents the dissolution of the skeletal remains of dead organisms as they fall to lower depths.

Calcite

The upper parts of the ocean tend to be supersaturated in CaCO3. Solid CaCO3, in the form of calcite, is manufactured by a large variety of organisms such as foraminifera. A constant rain of calcite falls through the ocean as these organisms die. The solubility of calcite increases with pressure, so only that portion of the calcite that falls to shallow regions of the ocean floor is incorporated into sediments and removed from circulation; the remainder dissolves after sinking past a depth known as the lysocline. At the present time, the amounts of carbonate and Ca2+ supplied by erosion and volcanism appear to be only about one-third as great as the amount of calcite produced by organisms. As the carbonate concentration in a given region of the ocean becomes depleted owing to higher calcite production, the lysocline moves up, tending to replenish the carbonate and reducing the amount that is withdrawn by burial in sediments. Organic residues that fall into the deep sea are mostly oxidized to CO2, presumably by bacterial activity.

Other Elements

Calcium

Calcium is removed from seawater solely by biodeposition as CaCO3, a process whose rate can be determined quite accurately, both at present and in the past.

Table 1: Present-day budget for oceanic calcium (Tg Ca/yr).

| Inputs | | Outputs | |
|-----------------|-----|--------------------------|-----|
| rivers | 550 | CaCO3 into shallow water | 520 |
| volcanic basalt | 191 | CaCO3 into deep water | 440 |
| cation exchange | 37 | | |
| total | 778 | total | 960 |

As explained above, the upper part of the ocean is supersaturated in calcite but the lower ocean is not. For this reason, less than 20% of the CaCO3 that is formed ends up as sediment and is eventually buried. The main questions about the calcium budget tend to focus on the rates and locales at which dissolution of skeletal carbonates occurs, and on how to interpret the various kinds of existing carbonate sediments. For example, the crystalline form aragonite is less stable than calcite, and will presumably dissolve at a higher elevation. The absence of aragonite-containing pteropod shells in deeper deposits seems to confirm this, but in the absence of rate data it is difficult to know at what elevations these particular organisms originated.
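Summing the Table 1 entries makes the imbalance explicit; this simple check anticipates the net-removal point discussed next:

```python
# Balance check on the Table 1 calcium budget (Tg Ca/yr).

inputs = {"rivers": 550, "volcanic basalt": 191, "cation exchange": 37}
outputs = {"CaCO3, shallow water": 520, "CaCO3, deep water": 440}

total_in, total_out = sum(inputs.values()), sum(outputs.values())
print(f"in {total_in}, out {total_out}, net {total_in - total_out}")
# in 778, out 960, net -182 Tg Ca/yr: a present-day net removal
```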
The data in the above table indicate that at the present time there is a net removal of calcium from the oceans. This is due to the rise in sea level since the decline of the most recent glacial epoch during the past 11,000 years. The additional water has covered the continental shelves, increasing the area of shallow ocean where the growth of organisms is most intense. Over the more distant past (25 million years) the calcium budget appears to be well balanced.

Chloride

Evidence from geology and paleontology indicates that the salinity, and hence the chloride concentration, of seawater has been quite constant for about 600 million years. There have been periods when climatic conditions and coastal topography have led to episodes of evaporite formation, but these have evidently been largely compensated by the eventual return of the evaporite deposits to the sea. The natural input of chloride from rivers is about 215 Tg/yr, but the present input is about half again as great (see Table 7 above), owing to pollution. Also, there are presently no significant areas where seawater is evaporating to dryness. Thus the oceanic chloride budget is considerably out of balance. However, the replacement time of Cl– in the oceans is so long (87 million years) that this will probably have no long-term effect.

Sodium

Although sodium is tied to chloride, it is also involved in the formation of silicate minerals, the weathering of rocks, and in cation exchange with clay sediments. Its short-term budget is quite out of balance for the same reasons as that of chloride. On a longer time scale, removal of sodium by reaction with hot basalt associated with undersea volcanic activity may be of importance.

Sulfate

Considerably more sulfate is being added to seawater than is being removed by the major mechanisms of sediment formation (mainly CaSO4 and pyrites). The natural river input is 82 Tg of S per year, while that due to pollution is 61 Tg/yr from rivers and 17 Tg/yr from rain and dry deposition.

Magnesium

This element is unusual in that its river-water input is balanced mostly by reaction with volcanic basalt; removal through biogenic formation of magnesian calcite (dolomite) accounts for only 11% of its total removal from the ocean. The present-day magnesium budget seems to have been balanced for the past 100 million years. However, most of the extensive dolomite deposits were formed prior to this time, so the longer-term magnesium budget is poorly understood.

Potassium

The potassium budget of the ocean is not well understood. The element is unusual in that only about 60% of its input is by rivers; the remainder is believed to come from newly formed undersea basalt. The big question about potassium is how it is removed; fixation by ion exchange with illite clays seems to be a major mechanism, and its uptake by basalt (at lower temperatures than are required for its release) is also believed to occur.

Silica

About 85% of the silicon input to the oceans comes from river water in the form of silicic acid, H4SiO4. The remainder probably comes from basalt. It is removed by biogenic deposition as opaline silica, SiO2·nH2O, produced mainly by planktonic organisms (radiolaria and diatoms). Unlike the case for CaCO3, the ocean is everywhere undersaturated in silica, especially near the surface, where these organisms deplete it with greater efficiency than any other element. Because opaline silica dissolves so rapidly, only a small fraction makes it to the bottom.
The major deposits occur in shallower waters where coastal upwelling provides a good supply of N and P nutrients for siliceous organisms. Thus over half of the biogenic silica deposits are found in the Antarctic ocean. In spite of the fact that dissolved silica has the shortest replacement time (21,000 years) of any major element in the ocean, its concentration appears to have been remarkably constant during geological time. This is taken as an indication of the ability of siliceous organisms to respond quickly to changes in local concentrations of dissolved silica.

Nitrogen

This element's budget is complicated by its biologically mediated exchange with atmospheric nitrogen, and by its existence in several oxidation states, all of which are interconvertible. Unlike the other major elements, nitrogen does not form extensive sedimentary deposits; most of the nitrogen present in dead organic material seems to be removed before it can be buried. Through this mechanism there is extensive cycling of nitrogen between the shallow and deep parts of the oceans. The real difficulty in constructing a budget for oceanic nitrogen is the very large uncertainty in the rates of the major input (fixation) and output. Both of these processes are biologically mediated, but little is known about what organisms are responsible, where they thrive, and how they are affected by local nutrient supply and other conditions.
Life as we know it on the Earth is entirely dependent on the tenuous layer of gas that clings to the surface of the globe, adding about 1% to its diameter and an insignificant amount to its total mass. And yet the atmosphere serves as the earth's window and protective shield, as a medium for the transport of heat and water, and as source and sink for the exchange of carbon, oxygen, and nitrogen with the biosphere. The atmosphere acts as a compressible fluid tied to the earth by gravitation; as a receptor of solar energy and a thermal reservoir, it constitutes the working fluid of a heat engine that transports and redistributes matter and energy over the entire globe. The atmosphere is also a major temporary repository of a number of chemical elements that move in a cyclic manner between the hydrosphere, atmosphere, and the upper lithosphere. Finally, the atmosphere is a site for a large variety of complex photochemically initiated reactions involving both natural and anthropogenic substances.

On the scale of cubic meters the air is a homogeneous mixture of its constituent gases, but on a larger scale the atmosphere is anything but uniform. Variations of temperature, pressure, and moisture content in the layers of air near the earth's surface give rise to the dynamic effects we know as the weather. Although the density of the atmosphere decreases without limit with increasing height, for most practical purposes one can roughly place its upper boundary at about 500 km. However, half the mass of the atmosphere lies within 5 km, and 99.99% within 80 km, of the surface. The average atmospheric pressure at sea level is 1.01 × 10^5 pascals, or 1010 millibars.
A 1-cm2 cross section of the earth's surface supports a column of air weighing 1030 g; the total mass of the atmosphere is about 5.27 × 10^21 g. About 80% of the mass of the atmosphere resides in the first 10 km; this well-mixed region of fairly uniform composition is known as the troposphere.

Solar irradiation of the Earth. The gases ozone, water vapor, and carbon dioxide are only minor components of the atmosphere, but they exert a huge effect on the Earth by absorbing radiation in the ranges indicated by the shading. Ozone in the upper atmosphere filters out the ultraviolet light below about 360 nm that is destructive of life. O3, H2O, CO2 and CH4 are "greenhouse" gases that trap some of the heat absorbed from the Sun and prevent it from re-radiating back into space.

Structure of the atmosphere

We commonly think of gas molecules as moving about in a completely random manner, but the Earth's gravitational field causes downward motions to be very slightly favored, so that the molecules in any thin layer of the air collide more frequently with those in the layer below. This gives rise to a pressure gradient that is the most predictable and well-known structural characteristic of the atmosphere. This gradient is described by an exponential law which predicts that the atmospheric pressure should decrease by 50% for every 6 km increase in altitude. This law also predicts that the composition of a gas mixture will change with altitude, the lower-molecular-weight components being increasingly favored at higher altitudes. However, this gravitational fractionation effect is completely obliterated below about 160 km owing to turbulence and convective flows (winds).

The atmosphere is divided vertically into several major regions which are distinguished by the sign of the temperature gradient. In the lowermost region, the troposphere, the temperature falls with increasing altitude. The major source of heat input into this part of the atmosphere is long-wave radiation from the earth's surface, while the major loss is radiation into space. At higher elevations the temperature begins to rise with altitude as we move into a region in which heat is produced by exothermic chemical reactions, mainly the decomposition of ozone that is formed photochemically from dioxygen in the stratosphere. At still higher elevations the ozone gives out and the temperature begins to drop; this is the mesosphere, which is finally replaced by the thermosphere, consisting largely of a plasma (gaseous ions). This outer section of the atmosphere, which extends indefinitely to perhaps 2000 km, is heated by absorption of intense UV radiation from the Sun and also by the solar wind, a continual rain of electrons, protons, and other particles emitted from the Sun's surface.

Structure of the atmosphere. The main divisions of the atmosphere are defined by the elevations at which the sign of the temperature gradient changes. The chemical formulas at the right show the major species of interest in the various regions. The shaded D- and E-layers are regions of high ion concentrations that reflect radio waves and are important in long-distance communication.
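The 50%-per-6-km rule quoted above is easy to put to work. The sketch below evaluates it at a few altitudes; the rule is equivalent to an exponential scale height of 6/ln 2 ≈ 8.7 km.

```python
import math

# Barometric half-height rule from the text: pressure halves every ~6 km.
P0_PA = 1.01e5        # sea-level pressure (from the text)
HALF_HEIGHT_KM = 6.0  # altitude interval over which P drops by half

def pressure_pa(z_km: float) -> float:
    """Atmospheric pressure at altitude z_km, by the half-height rule."""
    return P0_PA * 0.5 ** (z_km / HALF_HEIGHT_KM)

print(f"scale height: {HALF_HEIGHT_KM / math.log(2):.1f} km")
for z in (0, 5, 10, 20, 80):
    print(f"{z:>3} km: {pressure_pa(z):9.0f} Pa")
# Pressure at altitude z is proportional to the mass of air above z,
# so roughly half of the atmosphere lies below the ~5-6 km level.
```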
Composition of the atmosphere

Except for water vapor, whose atmospheric abundance varies from practically zero up to 4%, the fractions of the major atmospheric components N2, O2, and Ar are remarkably uniform below about 100 km. At greater heights, diffusion becomes the principal transport process, and the lighter gases become relatively more abundant. In addition, photochemical processes result in the formation of new species whose high reactivities would preclude their existence in significant concentrations at the higher pressures found at lower elevations.

The atmospheric gases fall into three abundance categories: major, minor, and trace. Nitrogen, the most abundant component, has accumulated over time as a result of its geochemical inertness; a very small fraction of it passes into the other phases as a result of biological activity and natural fixation by lightning. It is believed that denitrifying bacteria in marine sediments may provide the major route for the return of N2 to the atmosphere. Oxygen is almost entirely of biological origin, and cycles through the hydrosphere, the biosphere, and sedimentary rocks. Argon consists mainly of Ar40, which is a decay product of K40 in the mantle and crust.

Table 1: Major components

| Component | Formula | Abundance |
|-----------|---------|-----------|
| nitrogen | N2 | 78.08 % |
| oxygen | O2 | 20.95 % |
| argon | Ar | 0.93 % |

The most abundant of the minor gases aside from water vapor is carbon dioxide, about which more will be said below. Next in abundance are neon and helium. Helium is a decay product of radioactive elements in the earth, but neon and the other inert gases are primordial, and have probably been present in their present relative abundances since the earth's formation. Two of the minor gases, ozone and carbon monoxide, have abundances that vary with time and location. A variable abundance implies an imbalance between the rates of formation and removal. In the case of carbon monoxide, whose major source is anthropogenic (a small amount is produced by biological action), the variance is probably due largely to localized differences in fuel consumption, particularly in internal combustion engines. The nature of the carbon monoxide sink (removal mechanism) is not entirely clear; it may be partly microbial.

Table 2: Minor components

| Component | Formula | Abundance |
|-----------|---------|-----------|
| water | H2O | 0-4 % |
| carbon dioxide | CO2 | 325 ppm |
| neon | Ne | 18 ppm |
| helium | He | 5 ppm |
| methane | CH4 | 2 ppm |
| krypton | Kr | 1 ppm |
| hydrogen | H2 | 0.5 ppm |
| nitrous oxide | N2O | 0.3 ppm |
| carbon monoxide | CO | 0.05-0.2 ppm |
| ozone | O3 | 0.02-10 ppm |
| xenon | Xe | 0.08 ppm |

Table 3: Trace components

| Component | Formula | Abundance |
|-----------|---------|-----------|
| ammonia | NH3 | 4 ppb |
| nitrogen oxide | NO | 1 ppb |
| sulfur dioxide | SO2 | 1 ppb |
| hydrogen sulfide | H2S | 0.05 ppb |

Ozone

Ozone is formed by the reaction of O2 with oxygen atoms produced photochemically. As a consequence the abundance of ozone varies with the time of day, the concentration of O atoms from other sources (photochemical smog, for example), and particularly with altitude; at 30 km, the ozone concentration reaches a maximum of 12 ppm.

Carbon dioxide

The concentration of atmospheric carbon dioxide, while fairly uniform globally, is increasing at a rate of 0.2-0.7% per year as a result of fossil fuel burning. The present CO2 content of the atmosphere is about 2.6 × 10^18 g. Most of the CO2, however, is of natural origin, and represents the smallest part of the total carbonate reservoir that includes oceanic CO2, HCO3–, and carbonate sediments. The latter contain about 600 times as much CO2 as the atmosphere, and the oceans contain about 50 times as much. These relative amounts are controlled by the rates of the reactions that interconvert the various forms of carbonate. The surface conditions on the earth are sensitively dependent on the atmospheric CO2 concentration. This is due mainly to the strong infrared absorption of CO2, which promotes the absorption and trapping of solar heat (see below).
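The CO2 inventory quoted above can be cross-checked from two other figures in this chapter: the total atmospheric mass (5.27 × 10^21 g) and the 325 ppm mixing ratio of Table 2. A minimal sketch:

```python
# Atmospheric CO2 inventory from the mixing ratio. The 325 ppm figure is
# by volume (i.e., a mole fraction), so it is converted to a mass
# fraction with the molar masses of CO2 (44) and of mean air (~29).

M_ATM_G = 5.27e21           # total mass of the atmosphere (from the text)
X_CO2 = 325e-6              # CO2 mole fraction (Table 2)
M_CO2, M_AIR = 44.0, 28.97  # g/mol

co2_mass_g = M_ATM_G * X_CO2 * (M_CO2 / M_AIR)
print(f"CO2 mass: {co2_mass_g:.2e} g")  # ~2.6e18 g
```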
Since CO2 acts as an acid in aqueous solution, the pH of the oceans is also dependent on the concentration of CO2 in the atmosphere; it has been estimated that if only 1% of the carbonate presently in sediments were still in the atmosphere, the pH of the oceans would be 5.9, instead of the present 8.2.

Energy balance of the atmosphere and earth

The amount of energy (the solar flux) impinging on the outer part of the atmosphere is 1367 watts m–2. About 30% of this is reflected or scattered back into space by clouds, dust, and the atmospheric gas molecules themselves, and by the earth's surface. About 19% of the radiation is absorbed by clouds or the atmosphere (mainly by H2O and O3, but not CO2), leaving 51% of the incident energy available for absorption by the earth's surface. If one takes into account the uneven illumination of the earth's surface and the small flux of internal heat to the surface, the assumption of thermal equilibrium requires that the earth emit about 240 watts m–2. This corresponds to the power that would be emitted by a black body at 255 K, or –18°C, which is the average temperature of the atmosphere at an altitude of 5 km. The observed mean global surface temperature of the earth is 13°C, and is presumably the temperature required to maintain thermal equilibrium between the earth and the atmosphere.
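The 255 K figure is just the Stefan-Boltzmann law applied to the 240 W m–2 emission; a one-line check:

```python
# Effective radiating temperature from the Stefan-Boltzmann law.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
FLUX = 240.0      # required emission, W m^-2 (from the text)

t_eff_k = (FLUX / SIGMA) ** 0.25
print(f"T_eff = {t_eff_k:.0f} K ({t_eff_k - 273.15:.0f} C)")  # 255 K, -18 C
```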
The greenhouse effect

The energy radiated by the earth has a longer wavelength (maximum about 12 μm) than the incident radiation. Most gases absorb radiation in this range quite efficiently, including those gases such as CO2 and N2O that do not absorb the incident radiation. The energy absorbed by atmospheric gases is re-radiated in all directions; some of it therefore escapes into space, but a portion returns to the earth and is reabsorbed, thus raising its temperature. This is commonly called the greenhouse effect. If the amount of an infrared-absorbing gas such as carbon dioxide increases, a larger fraction of the incident solar radiation is trapped, and the mean temperature of the earth will increase. Any significant increase in the temperature of the oceans would increase the atmospheric concentrations of both water and CO2, producing the possibility of a runaway process that would be catastrophic from a human perspective. Fossil fuel combustion and deforestation during the last two hundred years have increased the atmospheric CO2 concentration by 25%, and this increase is continuing.

The same combustion processes responsible for the increasing atmospheric CO2 concentration also introduce considerable quantities of particulate materials into the upper atmosphere. The effect of these is to scatter more of the incoming solar radiation, reducing the amount that reaches and heats the earth's surface. The extent to which this process counteracts the greenhouse effect is still a matter of controversy; all that is known for sure is that the average temperature of the Earth is increasing.

Nitrous oxide

Carbon dioxide is not the only atmospheric gas of anthropogenic origin that can affect the heat balance of the earth; other examples are SO2 and N2O. Nitrous oxide is of particular interest, since its abundance is fairly high and is increasing at a rate of about 0.5% per year. It is produced mainly by bacteria, and much of the increase is probably connected with the introduction of increased nitrate into the environment through agricultural fertilization and sewage disposal. Besides being a strong infrared absorber, N2O is photochemically active, and can react with ozone. Any significant depletion of the ozone content of the upper atmosphere would permit more ultraviolet radiation to reach the earth. This would have numerous deleterious effects on present life forms, as well as contributing to a temperature increase.

The warming effect attributed to anthropogenic additions of greenhouse gases to the atmosphere is estimated to be about 2 watts per m2, or about 1.5% of the 150 watts per m2 trapped by clouds and atmospheric gases. This is a relatively large perturbation compared to the maximum variation in solar output of 0.5 watts per m2 that has been observed during the past century. Continuation of greenhouse gas emission at present levels for another century could increase the atmospheric warming effect by 6-8 watts per m2.

A less-appreciated side effect of the increase in atmospheric carbon dioxide (and of other plant nutrients such as nitrates) may be a reduction in plant species diversity, by selectively encouraging the growth of species which are ordinarily held in check by other species that are able to grow well with fewer nutrients. This effect, for which there is already some evidence, could be especially pronounced when the competing species utilize the C3 and C4 photosynthetic pathways, which differ in their sensitivity to CO2.
Prebiotic atmosphere

The atmosphere of the Earth (and also of Venus and Mars) is generally believed to have its origin in relatively volatile compounds that were incorporated into the solids from which these planets accreted. Such compounds could include nitrides (a source of N2), water (which can be taken up by silica, for example), carbides, and hydrogen compounds of nitrogen and carbon. Many of these compounds (and also some noble gases) can form clathrate complexes with water and some minerals, which are fairly stable at low temperatures. The high temperatures developed during the later stages of accretion, as well as subsequent heating produced by decay of radioactive elements, presumably released these gases to the surface. Even at the present time, large amounts of CO2, water vapor, N2, HCl, SO2 and H2S are emitted from volcanos. The more reactive of these gases would be selectively removed from the atmosphere by reaction with surface rocks or dissolution in the ocean, leaving an atmosphere enriched in its present major components, with the exception of oxygen, which is discussed in the next section. Any hydrogen present would tend to escape into space, causing the atmosphere to gradually become less reducing.

However, there is now some doubt that hydrogen and other volatiles (mainly the inert gases) were present in the newly accreted planets in anything like their cosmic abundance. The main evidence against this is the observation that gases such as helium, neon and argon, which are among the ten most abundant elements in the universe, are depleted on the earth by factors of 10^-7 to 10^-11. This implies that there was a selective removal of these volatiles prior to or during the planetary accretion process. The overall oxidation state of the earth's mantle is not consistent with what one would expect from equilibration with highly reduced volatiles, and there is no evidence to suggest that the composition of the mantle has ever changed. If this is correct, then the primitive atmosphere may well have had about the same composition as the gases emitted by volcanos at the present time. These consist mainly of water and CO2, together with small amounts of N2, H2, H2S, SO2, CO, CH4, NH3, HCl and HF.

If water vapor was a major component of the outgassing of the accreted earth, it must have condensed quite rapidly into rain. Any significant concentration of water vapor in the atmosphere would have led to a runaway greenhouse effect, resulting in temperatures as high as 400°C.

Origin of Atmospheric Oxygen

Free oxygen is never more than a trace component of most planetary atmospheres. Thermodynamically, oxygen is much happier when combined with other elements as oxides; the pressure of O2 in equilibrium with basaltic magmas is only about 10^-7 atm. Photochemical decomposition of gaseous oxides in the upper atmosphere is the major source of O2 on most planets. On Venus, for example, CO2 is broken down into CO and O2. On the earth, the major inorganic source of O2 is the photolysis of water vapor; most of the resulting hydrogen escapes into space, allowing the O2 concentration to build up. An estimated 2 × 10^11 g of O2 per year is generated in this way. Integrated over the earth's history, this amounts to less than 3% of the present oxygen abundance. The partial pressure of O2 in the prebiotic atmosphere is estimated to be no more than 10^-3 atm, and may have been several orders of magnitude less.
The major source of atmospheric oxygen on the earth is photosynthesis carried out by green plants and certain bacteria:

x H2O + x CO2 → (CH2O)x + x O2

A historical view of the buildup of atmospheric oxygen concentration since the beginning of the sedimentary record (3.7 × 10^9 ybp) can be worked out by making use of the fact that the carbon in the product of the above reaction has a slightly lower C13 content than does carbon of inorganic origin. Isotopic analysis of carbon-containing sediments thus provides a measure of the amounts of photosynthetic O2 produced at various times in the past.

Figure 1: Evolution of atmospheric oxygen content. Note carefully that the curve plots cumulative O2 production, but that until a few hundred million years ago, most of this was taken up by Fe(II) compounds in the crust and by reduced sulfur; only after this massive "oxygen sink" became filled did free O2 begin to accumulate in the atmosphere.

Carbon Dioxide

Carbon dioxide has probably always been present in the atmosphere in the relatively small absolute amounts now observed (around 54 × 10^15 mol = 54 Pmol). The reaction of CO2 with silicate-containing rocks to form Precambrian limestones suggests a possible moderating influence on its atmospheric concentration throughout geological time:

$\ce{CaSiO3 + CO2 -> CaCO3 + SiO2}$

About ten percent of the atmospheric CO2 is taken up each year by photosynthesis. Of this, all except 0.05 percent is returned by respiration, almost entirely due to microorganisms. The remainder leaks into the slow part of the geochemical cycle, mostly as buried carbonate sediments. Since the advent of the industrial revolution in 1860, the amount of CO2 in the atmosphere has been increasing. Isotopic analysis shows that most of this has been due to fossil-fuel combustion; in recent years, the mass of carbon released to the atmosphere by this means has been more than ten times the estimated rate of natural removal into sediments. The large-scale destruction of tropical forests, which has accelerated greatly in recent years, is believed to exacerbate this effect by removing a temporary sink for CO2.

The oceans have a large absorptive capacity for CO2 owing to its reaction with carbonate:

$\ce{CO2 + H2O + CO3^{2-} <=> 2 HCO3^-}$

There is about 60 times as much inorganic carbon in the oceans as in the atmosphere. However, efficient transfer takes place only into the topmost (100 m) wind-mixed layer of the ocean, which contains only about one atmosphere-equivalent of CO2. Further uptake is limited by the very slow transport of water into the deep ocean, which takes around 1000 years. For this reason, the buffering effect of the oceans on atmospheric CO2 is not very effective; only about ten percent of the added CO2 is taken up by the oceans.

Figure: This set of plots, taken from an ETE page with data from NASA, shows the dramatic increase in atmospheric CO2 since the beginning of the industrial age. The "squiggles" on the Mauna Loa data reflect seasonal variations; more CO2 is taken up during the longer-daylight periods of the summer.
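The relative sizes of the reservoirs mentioned in this section fit together as follows; a sketch using only figures quoted above (54 Pmol in the atmosphere, a 60-fold larger oceanic pool, and a mixed layer holding about one atmosphere-equivalent):

```python
# Fast carbon reservoirs, in atmosphere-equivalents of CO2.
ATM_MOL = 54e15             # atmospheric CO2, mol (from the text)
ATM_G = ATM_MOL * 44.0      # the same inventory expressed in grams

ocean_total = 60 * ATM_MOL  # whole-ocean inorganic carbon (60x figure)
mixed_layer = 1 * ATM_MOL   # "about one atmosphere equivalent"

print(f"Atmospheric CO2: {ATM_G:.1e} g")                 # ~2.4e18 g
print(f"Ocean / mixed layer: {ocean_total / mixed_layer:.0f}x")
# The capacity is mostly in the deep ocean, which ventilates on a
# ~1000-yr time scale; hence only ~10% of added CO2 is absorbed quickly.
```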
The biosphere comprises the various regions near the earth's surface that contain and are dynamically affected by the metabolic activity of the approximately 1.5 million animal species and 0.5 million plant species that are presently known, with new species still being discovered at a rate of about 10,000 per year. The biosphere is the youngest of the dynamical systems of the earth, having had its genesis about 2 billion years ago. It is also the one that has most profoundly affected the other major environmental systems, particularly the atmosphere and the hydrosphere.

"The tree of life"

About a third of the chemical elements cycle through living organisms, which are responsible for massive deposits of silicon, iron, manganese, sulfur, and carbon. Large quantities of methane and nitrous oxide are introduced into the atmosphere by bacterial action, and plants alone inject about 400,000 tons of volatile substances (including some metals) into the atmosphere annually.

It has been suggested that biological activity might be responsible for the deficiency of hydrogen on Earth, compared to its very high relative abundance in the solar system as a whole. Bacteria capable of converting hydrogen compounds into H2 transform this element into a form in which it can escape from the earth; such bacteria may have been especially active in the reducing atmosphere of the early planet. A second mechanism might be the microbial production of methane, which presently injects about 10^9 T of CH4 into the atmosphere each year. Some of this reaches the stratosphere, where it is oxidized to CO2 and H2O. The water vapor is photolyzed to H2, which escapes into space.
This may be the major mechanism by which water vapor (and thus hydrogen) is transported to the upper atmosphere; the low temperature of the upper atmosphere causes most of the water originating at lower levels to condense before it can migrate to the top of the atmosphere.

The increase in the abundance of atmospheric oxygen from its initial value of essentially zero has without question been the most important single effect of life on earth, and the time scale of this increase parallels the development of life forms from their most primitive stages up to the appearance of the first land animals about 0.5 billion years ago.

Bioenergetics

All life processes involve the uptake and storage of energy, and its subsequent orderly release in small steps during the metabolic process. This energy is taken up in the combination of ADP with inorganic phosphate to form ATP, in which form the energy is stored and eventually delivered to sites where it can provide the free energy needed for driving non-spontaneous reactions such as protein and carbohydrate synthesis.

ADP + PO4^3– → ATP (ΔG° = +30 kJ)

The three main metabolic processes are glycolysis, respiration, and photosynthesis. The first two of these extract free energy from glucose by breaking it up into smaller, more thermodynamically stable fragments. Photosynthesis reverses this process by capturing the energy of sunlight in ATP, which then drives the buildup of glucose from CO2.

Glycolysis and fermentation

As its name implies, this most primitive (and least efficient) of all metabolic processes is based on the breakdown of a sugar into fragments having a smaller total free energy. Thus the 6-carbon sugar glucose can be broken down into two 3-carbon lactic acid units, or into two 2-carbon ethanol molecules (plus two molecules of CO2):

C6H12O6 → 2 CH3CHOHCOOH (ΔG° = –197 kJ)

In this process, two molecules of ATP are produced, thus capturing 61 kJ of free energy. Since the standard free energy of glucose (with respect to its elements) is –2870 kJ, this represents an overall efficiency of about 2 percent. The net reaction of glycolysis is essentially a rearrangement of the atoms initially present in the energy source. This is a form of fermentation, which is defined as the enzymatic breakdown of organic molecules in which other organic compounds serve as electron acceptors. Since there is no need for an external oxidizing or reducing agent, there is no change in the oxidation state of the environment.

Respiration

When the enzymatic degradation of organic molecules is accompanied by transfer of electrons to an external (and usually better) electron acceptor, the process is known as respiration. The overall reaction of respiration is the oxidation of glucose to carbon dioxide:

C6H12O6 + 6 O2 → 6 CO2 + 6 H2O (ΔG° = –2830 kJ)

In this process, 36 molecules of ADP are converted into ATP, thus capturing 1100 kJ of free energy: an efficiency of 38 percent. The oxidizing agent (electron sink) need not be oxygen; some bacteria reduce nitrates to NO or to N2, and sulfates or sulfur to H2S. These metabolic products can have far-reaching localized environmental effects, particularly if hydrogen ions are involved. "Falling down the respiratory ladder" presents a succinct picture of oxidation-reduction and the role of non-O2 electron sinks in biological energy capture.
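The efficiency figures quoted for glycolysis and respiration follow from the ATP count; a minimal sketch, taking ~30.5 kJ captured per ATP (consistent with the +30 kJ figure above) and the –2830 kJ respiration free energy:

```python
# Free-energy capture efficiency of glycolysis vs. respiration.
G_ATP_KJ = 30.5        # kJ captured per mol ATP formed (approximate)
G_GLUCOSE_KJ = 2830.0  # kJ/mol released by complete oxidation of glucose

for name, n_atp in (("glycolysis", 2), ("respiration", 36)):
    captured = n_atp * G_ATP_KJ
    print(f"{name:>11}: {captured:6.0f} kJ, {captured / G_GLUCOSE_KJ:5.1%}")
# glycolysis: ~61 kJ, ~2%; respiration: ~1100 kJ, ~39% -- matching the
# roughly 2% and 38% figures quoted in the text.
```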
Photosynthesis

The energy of sunlight is trapped in the form of an intermediate which is able to deliver electrons to successively lower free-energy levels through the mediation of various molecules (mainly cytochromes) comprising an electron-transport chain. The free energy thus gained is utilized in part to reduce CO2 to glucose, which is then available to supply metabolic energy by glycolysis or respiration. In green plants and eukaryotic algae, the source of hydrogen is water, the net reaction being

6 CO2 + 6 H2O → C6H12O6 + 6 O2 (ΔG° = +2830 kJ)

For every CO2 molecule fixed in this way, 469 kJ of free energy must be supplied. Red light of 680 nm wavelength has an energy of 176 kJ/mol; this implies that about three photons must be absorbed for every carbon atom taken up, but experiment indicates that about ten seem to be required.

The photosynthetic-respiration cycle. Photosynthesis utilizes the energy of red light to add hydrogen (from H2O) and electrons (from the O in H2O) to CO2, reducing it to carbohydrate. Respiration is the reverse of this process; electrons are removed from the carbohydrate (food is "burned") in small steps, each one releasing a small amount of energy. Some of this energy is liberated as heat, but part of it is used to add a phosphate group to ADP, converting it into ATP. ATP (adenosine triphosphate) is to an organism's energy needs as money is to our material needs; it circulates to wherever it is required in order to bring about energy-requiring reactions or to make muscle cells contract. Each increment of energy given up by ATP converts it back to ADP and phosphate, ready to repeat the cycle. Green plants are able to operate in both modes during the daylight hours, reverting to respiration only at night. Animals carry out only the right side of the cycle, and thus require a source of carbohydrate ("food"), either directly (by eating plants) or indirectly (by eating other animals that eat plants).

There are many kinds of photosynthetic bacteria, but with one exception (the cyanobacteria) they are incapable of using water as a source of hydrogen for reducing carbon dioxide. Instead, they consume hydrogen sulfide or other reduced sulfur compounds, organic molecules, or elemental hydrogen itself, excreting the reducing agent in an oxidized state. Green plants, cyanobacteria, green filamentous bacteria and the purple nonsulfur bacteria utilize glucose by respiration during periods of darkness, while the green sulfur bacteria and the purple sulfur bacteria are strictly anaerobic.
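As a closing check on the photon arithmetic above, the energy of a mole of 680-nm photons and the implied minimum photon count per CO2 can be computed directly:

```python
# Photon bookkeeping for photosynthesis at 680 nm.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number

e_photon_kj_mol = H * C / 680e-9 * N_A / 1000
n_min = 469 / e_photon_kj_mol    # 469 kJ needed per CO2 fixed (text)

print(f"680-nm photons: {e_photon_kj_mol:.0f} kJ/mol")   # ~176 kJ/mol
print(f"Minimum photons per CO2: {n_min:.1f}")           # ~2.7, i.e. 3
# The observed ~10 photons per carbon reflects the inefficiency of the
# real photosynthetic machinery relative to this thermodynamic floor.
```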
Present evidence suggests that blue-green algae, and possibly other primitive microbial forms of life, were flourishing 3 billion years ago. This brackets the origin of life to within one billion years; prior to 4 billion years ago, surface temperatures were probably above the melting point of iron, and there was neither an atmosphere nor a hydrosphere. By about 3.8 billion years ago, or one billion years after the earth was formed, cooling had occurred to the point where rain was possible, and primitive warm, shallow oceans had formed. The atmosphere was anoxic and highly reducing, containing mainly CO2, N2, CO, H2O and H2S, traces of H2, NH3 and CH4, and less than 1% of the present amount of O2, probably originating from the photolysis of water vapor. This oxygen would have been taken up quite rapidly by the many abundant oxidizable substances such as Fe(II), H2S, and the like.

Timeline for development of the major life forms.

Evidence for early life

The fossil record that preserves the structural elements of organisms in sedimentary deposits has for some time provided a reasonably clear picture of the evolution of life during the past 750 million years. In more recent years, this record has been considerably extended, as improved techniques have made it possible to study the impressions made by single-celled microorganisms embedded in rock formations. The main difficulty in studying fossil microorganisms extending back beyond a billion years is in establishing that the relatively simple structural forms one observes are truly biogenic. There are three major kinds of evidence for this.

• Many of the most primitive life forms are still thriving, and these provide useful models with which some of the fossil forms can be compared.
• Carbon isotope ratios provide a second, independent line of evidence for early life, or at least life of photosynthetic origin. In photosynthesis, C12O2 is taken up slightly more readily than the heavier (and rarer) isotope C13O2; thus all but the very earliest life forms have left an isotopic fossil record even where the structural fossil is no longer identifiable.
• A third kind of evidence for early life is any indication of the presence of free oxygen in the local environment. Easily oxidizable species such as Fe(II) were very widely distributed on the primitive earth, and could not remain in contact with oxygen for very long without being oxidized. The oldest known formations of oxidized iron pyrite and of uraninite are in sediments that were laid down between 2.0 and 2.3 billion years ago.

If all three of these lines of evidence are present in samples that can be shown to be contemporaneous with the sediments in which they are found, then the argument for life is incontrovertible. One of the most famous of these sites was discovered near Thunder Bay, Ontario in the early 1950s. The Gunflint Formation consists of an exposed layer of chert (largely silica) from which the overlying shale of the Canadian Shield had been removed. Microscopic examination of thin sections of this rock revealed a variety of microbial cell forms, including some resembling present freshwater blue-green algae. Also present in the Gunflint deposits are the oldest known examples of metazoa, organisms which display a clear differentiation into two or more types of cell. These deposits have been dated at 1.9-2.0 billion years.

These filaments are believed to be the fossilized imprints of blue-green algae, one of the earliest life forms.
One of the most famous sites where these lines of evidence converge was discovered near Thunder Bay, Ontario, in the early 1950s. The Gunflint Formation consists of an exposed layer of chert (largely silica) from which the overlying shale of the Canadian Shield had been removed. Microscopic examination of thin sections of this rock revealed a variety of microbial cell forms, including some resembling present-day freshwater blue-green algae. Also present in the Gunflint deposits are the oldest known examples of metazoa, organisms that display a clear differentiation into two or more types of cell. These deposits have been dated at 1.9-2.0 billion years.

Figure: These filaments, believed to be fossilized imprints of blue-green algae, one of the earliest life forms, occur in the Bitter Springs Formation in Australia and are about 850 million years old.

The evidence from very old paleomicrobiotic deposits is less clear. Western Australia has yielded fossil forms that are apparently 2.8 billion years old, and other deposits in the same region contain structures resembling living blue-green algae. Other forms, heavily modified by chemical infiltration, bear some resemblance to a present-day iron bacterium and are found in sediments laid down 3.5 billion years ago, but evidence that these fossils are contemporaneous with the sediments in which they are found is not convincing. The oldest evidence of early life is the observed depletion of 13C in 3.8-billion-year-old rocks found in southwestern Greenland.

Origin of life: the organic environment

Under the conditions that prevailed at this time, most organic molecules would be thermodynamically stable, and there is every indication that a rich variety of complex molecules would have been present. The most direct evidence of this comes from laboratory experiments that attempt to simulate the conditions of the primitive environment of this period, the first and most famous being the one carried out by Stanley Miller in 1953.

Figure: Schematic of the Miller experiment. A mixture of the various reduced gases believed to compose the early atmosphere circulates through an apparatus in which spark discharges (intended to simulate lightning) create a complex mixture of organic compounds. (S. L. Miller, "A Production of Amino Acids Under Possible Primitive Earth Conditions," Science 117, 528-529, 1953.)

Since that time, other experiments of a similar nature have demonstrated the production of a wide variety of compounds under prebiotic conditions, including nearly all of the monomeric components of the macromolecules present in living organisms. In addition, small macromolecules, including peptides and sugars, as well as structural entities such as lipid-based micelles, have been prepared in this way. The discovery in 1989 of a number of amino acids in the iridium-rich clay layer at the Cretaceous-Tertiary boundary suggests that bio-precursor molecules can be formed or deposited during a meteoric impact. Although this particular event occurred only 65 million years ago (and is presumed to be responsible for the extinction of the dinosaurs), the Earth has always been subject to meteoric impacts, and it is conceivable that these have played a role in the origin of life.

The presence of clays, whose surfaces are both asymmetric and chemically active, could have favored the formation of species of a particular chirality; a number of experiments have shown that clay surfaces can selectively adsorb amino acids, which then form small peptides. It has been suggested that the highly active and ordered surfaces of clays not only played a crucial role in the formation of life, but might actually have served as parts of the first primitive self-replicating life forms, which only later evolved into fully organic species.

The first organisms

Since no laboratory experiment has yet succeeded in producing a self-replicating species that can be considered living, the mechanism by which this came about in nature must remain speculative.
Infectious viruses have been made in the laboratory by simply mixing a variety of nucleotide precursors with a template nucleic acid and a replicase enzyme; the key to the creation of life is how to do the same thing without the template and the enzyme. Smaller polynucleotides may have formed adventitiously, possibly on the active surface of an inorganic solid. These could form complementary base-paired polymers, which might then serve as the templates for larger molecules. Non-enzymatic, template-directed synthesis of polynucleotides has been demonstrated in the laboratory, but the resulting polymers have linkages that are not present in natural nucleic acids. It has been suggested that these linkages could have been selectively hydrolyzed by a long period of cycling between warm, cool, wet, and dry environmental conditions. The earth at that time was rotating more rapidly than it is now; cycles of hydration-dehydration and of heating-cooling would have been more frequent and more extreme.

The first organisms would of necessity have been heterotrophs; that is, they derived their metabolic energy from organic compounds in the environment. Their capacity to synthesize molecules was probably very limited, and they would have had to absorb many key substances from their surroundings in order to maintain their metabolic activity. Among the most primitive organisms of this kind are the archaea, which are believed to be predecessors of both bacteria and eucaryotes. DNA sequencing of one such organism, a methane-producer that lives in ocean-bottom sediments at 200 atm and 48-94°C, reveals that only about a third of its genes resemble those of bacteria or eucaryotes. It has been estimated that about 50 genes are required to define the minimal biochemical and structural machinery that a hypothetical simplest possible cell would need.

Development of photosynthetic organisms

The earliest organisms derived their metabolic energy from the organic substances present in their environment; once they began to reproduce, this nutrient source began to become depleted. Some species had probably by this time developed the ability to reduce carbon dioxide to methane; the hydrogen source could at first have been H2 itself (at that time much more abundant in the atmosphere), and later various organic metabolites from other species could have served. Before the food supply neared exhaustion, some of these organisms must have developed at least a rudimentary means of absorbing sunlight and using this energy to synthesize metabolites. The source of hydrogen for the reduction of CO2 was at first small organic molecules; later photosynthetic organisms were able to break this dependence on organic nutrients and obtain the hydrogen from H2S. These bacterial forms were likely the dominant form of life for several hundred million years. Eventually, due perhaps to the failing supply of H2S, plants capable of mediating the photochemical extraction of hydrogen from water developed. This represented a large step in biochemical complexity; it takes about 10 times as much energy to abstract hydrogen from water as from hydrogen sulfide, but the supply is virtually limitless. It appears that photosynthesis evolved in a kind of organism whose present-day descendants are known as cyanobacteria.

Procaryotes and eucaryotes

The five "kingdoms" into which living organisms are classified are Monera, Protista (protozoans, algae), Fungi, Plantae, and Animalia.
The genetic (and thus evolutionary) relations between these and the subcategories within them are depicted below. Superimposed on this, however, is an even more fundamental division between the procaryotes and eucaryotes.

Procaryotes

In this group are primitive organisms whose single cells contain no nucleus; the gene-bearing structure is a single long DNA chain that is folded irregularly throughout the cell. Procaryotic cells usually reproduce by budding or division; where sexual reproduction does occur, there is a net transfer of some genetic material from one cell to another, but there is never an equal contribution from both parents. In spite of their primitive nature, procaryotes constitute the majority of organisms in the biosphere. The division between bacteria and archaea within the procaryotic group is a fairly recent one. Archaea are now believed to be the most primitive of all organisms, and include the so-called extremophiles that occupy environmental niches in which life was at one time thought to be impossible; they have been found in sedimentary rocks, hot springs, and highly saline environments.

Eucaryotes

All other organisms, including seaweeds (algae), protozoa, molds, fungi, animals, and plants, are composed of eucaryotic cells. These all have a membrane-bound nucleus, and with a few exceptions they all reproduce by mitosis, in which the chromosomes split longitudinally and move toward opposite poles. Other organelles unique to eucaryotes are mitochondria, chloroplasts, and structural elements such as microtubules.

Oxygen and biogeochemical evolution

Oxygen is poisonous to all forms of life in the absence of enzymes that can reduce the highly reactive byproducts of oxidation and oxidative metabolism (peroxides, superoxides, etc.). All organic compounds are thermodynamically unstable in the presence of oxygen; carbon-carbon double bonds in lipids are subject to rapid attack. Prebiotic chemical evolution leading to the development of biopolymers was possible only under the reducing, anoxic conditions of the primitive atmosphere.

The rise of atmospheric oxygen

Once organisms existed that could use water as a hydrogen source for the reduction of carbon dioxide, O2 began to be introduced into the atmosphere. The widespread occurrence of ferrous compounds in surface rocks and sediments provided a sink for this oxygen that probably did not become saturated until about 2 billion years ago, when the atmospheric oxygen abundance first rose above about 1 percent.

As the oxygen concentration began to rise, organisms in contact with the atmosphere had to develop protective mechanisms in order to survive. One indication of such adaptation is the discovery of fossil microbes whose cell walls are unusually thick. A more useful kind of adaptation was the synthesis of compounds that would detoxify oxygen by reacting rapidly either with O2 itself or with peroxides and other active species derived from it. Isoprenoids (the precursors of steroids) and porphyrins are examples of two general classes of compounds that are found in nearly all organisms, and which may have originated in this way. Later, highly efficient oxygen-mediating enzymes such as peroxidase and catalase developed. The widespread phenomenon of bioluminescence may be the result of a very early adaptation to oxygen.
The compound luciferin is a highly efficient oxygen detoxifier that also happens to be able to emit light under certain conditions. Bioluminescence probably developed as a by-product of oxygen detoxification in early procaryotic organisms, but was gradually lost as more efficient detoxifying mechanisms became available.

Development of the eucaryotes

In spite of the deleterious effects of oxygen on cell biomolecules, O2 is nevertheless an excellent electron sink, capable of releasing large quantities of energy through the oxidation of glucose. This energy can be efficiently captured through oxidative phosphorylation, the key process in respiration. A cell that utilizes oxygen must have a structural organization that isolates the oxygen-consuming respiratory centers from the other parts of the cell that would be poisoned by oxygen or its reaction products. Some procaryotic organisms have developed in this way; a number of cyanobacteria and other species are facultative anaerobes that can survive both in the presence and in the absence of oxygen. It is in the eucaryotic cell, however, that this organization is fully elaborated; here, respiration occurs in membrane-bound organelles called mitochondria. With only a few exceptions, all eucaryotic organisms are obligate aerobes; they can rarely survive, and can never reproduce, in the absence of oxygen. Mitotic cell division depends on the contractile properties of the protein actomyosin, which only forms when oxygen is present.

The development of the eucaryotic cell about 1.4 billion years ago is regarded as the most significant event in the evolution of the earth and of the biosphere since the appearance of photosynthesis and the origin of life itself. How did it come about? The present belief, supported by an increasing amount of evidence, is that it began when one species of organism engulfed another. The ingested organism possessed biochemical machinery not present in the host, but it was retained in such a way that it conferred a selective evolutionary advantage on the host. Eventually the two organisms became able to reproduce as one, and so effectively became a single organism. This process is known as endosymbiosis.

According to this view, mitochondria represent the remains of a primitive oxygen-tolerant organism that was incorporated into one that could produce the glucose fuel for the oxygen to burn. Chloroplasts were once free-living photosynthesizing procaryotes similar to present-day cyanobacteria. It is assumed that some of these began parasitizing respiratory organisms, conferring upon them the ability to synthesize their carbohydrate food during daylight. The immense selective advantage of this arrangement is evident in the extent of the plant kingdom.

Consequences of the oxygen increase

It is interesting that an atmospheric oxygen concentration of about 1 percent, known as the Pasteur point, is both the maximum that obligate anaerobes can tolerate and the minimum required for oxidative phosphorylation. (Louis Pasteur discovered that some bacteria are anaerobic and unable to tolerate oxygen above about 1% concentration.) As was mentioned previously, the oxygen produced by the first photosynthetic organisms was taken up by ferrous iron in sediments and surface minerals.
The widespread deposits known as banded iron formations consist of alternating layers of Fe(III)-containing oxides (hematite and magnetite) and silica-rich chert that were laid down between 1 and 2 billion years ago; the layering may reflect changing climatic or other environmental conditions that brought about a cycling of the organism population. During the buildup of oxygen, an equivalent amount of carbon had to be deposited in sediments in order to avoid the thermodynamically spontaneous back reaction, which would consume the O2 through oxidation of the organic matter. Thus the present levels of atmospheric oxygen are due to a time lag in the geochemical cycling of photosynthetic products. As the oxygen concentration increased, evolution seems to have speeded up; this may reflect both the increased metabolic efficiency and the greater biochemical complexity of the eucaryotic cell. The oldest metazoan (multiple-celled) fossils are coelenterates that appeared about 700 million years ago. Modern representatives of this group, such as marine worms and jellyfish, can tolerate oxygen concentrations as low as 7%, thus placing a lower boundary on the atmospheric oxygen content of that era. The oldest fossil organisms believed to have possessed gills, which function only above 10% oxygen concentration, appeared somewhat later. Carbon dioxide decreased as oxygen increased, as indicated by the prevalence of dolomite over limestone in early marine sediments.
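To make the burial argument above explicit, the overall stoichiometry can be written with the usual {CH2O} shorthand for biomass carbon; this bookkeeping is standard chemistry rather than something given in this text:

CO2 + H2O → {CH2O} + O2 (photosynthesis)
{CH2O} + O2 → CO2 + H2O (respiration and oxidation of organic matter)

Every carbon atom buried in sediments before the second reaction can run leaves exactly one O2 molecule in the atmosphere, which is why the oxygen inventory of the air tracks the cumulative mass of buried organic carbon.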
Source: Geochemistry (Lower), 4.02: Biogeochemical Evolution.
The Gaia Hypothesis

The physical conditions under which life as we know it can exist encompass a relatively narrow range of temperature, pH, osmotic pressure, and ultraviolet radiation intensity. It seems remarkable enough that life was able to get started at all; it is even more remarkable that it has continued to thrive in the face of all the perils that have occurred, or could have occurred, during the past 3 billion years or so.

During the time that life has been evolving, the sun has also been going through the process of evolution characteristic of a typical star; one consequence of this is an increase in its energy output by about 30 percent during this time. If the sun's output should suddenly drop to what it was 3 billion years ago, the oceans would freeze. How is it that the earth was not in a frozen state for the first 1.5 billion years of life's existence? Alternatively, if conditions were somehow suitable 3 billion years ago, why have the oceans not long since boiled away?
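The freezing half of this puzzle can be made quantitative with a simple radiative-equilibrium estimate. The sketch below is illustrative only; the solar constant and albedo are assumed round values, not figures from this text, and the 30 percent reduction is the one quoted above.

SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S_NOW = 1361.0        # assumed present-day solar constant, W m^-2
ALBEDO = 0.30         # assumed planetary albedo

def t_effective(solar_constant):
    # Radiative equilibrium: absorbed sunlight S(1 - A)/4 = sigma * T^4
    return (solar_constant * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25

t_now = t_effective(S_NOW)          # about 255 K
t_early = t_effective(0.7 * S_NOW)  # sun ~30% fainter 3 billion years ago
print(f"Effective temperature now:   {t_now:.0f} K")
print(f"Effective temperature early: {t_early:.0f} K (about {t_now - t_early:.0f} K colder)")

With the same greenhouse warming as today (roughly 33 K), a drop of about 22 K in effective temperature would put the mean surface temperature well below the freezing point of water, so some additional warming mechanism, or a very different early atmosphere, is needed to keep the young oceans liquid.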
A rather non-traditional answer to this kind of question is that the biosphere is far from playing a passive role in which it is continually at the mercy of environmental conditions. Instead, the earth's atmosphere, and to a lesser extent the hydrosphere, may be actively maintained and regulated by the biosphere. This view has been championed by the British geochemist J. E. Lovelock and is known as the Gaia hypothesis. Gaia is another name for the Greek earth-goddess Ge, from whose root the sciences of geography, geometry, and geology derive their names. Lovelock's book Gaia: A New Look at Life on Earth (Oxford, 1979) is a short and highly readable discussion of the hypothesis.

Evidence in support of this hypothesis is entirely circumstantial, but it nevertheless points to important questions that must be answered: how have the climatic and chemical conditions on the earth remained optimal for life during all this time, and how can the chemical composition of the atmosphere remain in a state that is tens of orders of magnitude from equilibrium?

Note: Although the Gaia hypothesis has received considerable publicity in the popular press, it has never been very well received by the scientific community, many of whom feel that there is no justification for proposing a special hypothesis to describe a set of connections that can be quite adequately explained by conventional geochemical processes. More recently, even Lovelock has backed away from the teleological interpretation of these relations, so that the Gaia hypothesis should now be more properly described as a set of loosely connected effects rather than as a hypothesis. Nevertheless, these effects and the mechanisms that might act to connect them are sufficiently interesting that it seems worthwhile to provide an overview of the major observations that led to the development of the hypothesis. (Teleology is the doctrine that natural processes operate with a purpose. See "No Longer Willful, Gaia Becomes Respectable," Science 240, 393-395, 1988.)

Bioregulation of the Atmosphere

The increase in the oxygen content of the atmosphere as a result of the development of the eucaryotic cell was discussed above. Why has the oxygen content leveled off at 21 percent? It is interesting to note that if the oxygen concentration in the atmosphere were only four percent higher, even damp vegetation, once ignited by lightning, would continue to burn, enveloping vast areas of the earth in a firestorm. Evidence for such a worldwide firestorm, possibly related to the extinction of the dinosaurs, has in fact been discovered: charcoal layers found in widely distributed sediments laid down about 65 million years ago coincide with the iridium anomaly believed to be due to the collision of a large meteor with the earth.

• Oxygen: Regulation of the oxygen partial pressure is probably achieved by a balance between its production through photosynthesis and its consumption during oxidation of organic matter; the present steady state requires the burial of about 0.1% of the carbon that is fixed annually, leaving one O2 molecule in the air for each atom of carbon removed from the photosynthetic cycle (a rough numerical check follows this list). The large quantities of microbially produced methane and N2O also constitute important oxygen sinks; if methanogenic bacteria should suddenly cease to exist, the O2 concentration would rise by 1% in about 12,000 years. This type of regulation implies a negative feedback mechanism, in which an increase in atmospheric oxygen would increase the activity of organisms capable of generating metabolic products that react with it.
• Nitrous oxide: Nitrous oxide, in addition to serving as an oxygen sink, might also be a factor in regulating the intensity of the ultraviolet component of sunlight. N2O acts as a catalytic intermediate in the decomposition of stratospheric ozone, which shields the earth from excessive ultraviolet radiation.
• Ammonia: Ammonia, another atmospheric gas, is produced by the biosphere in approximately the same quantities as methane, about 10^9 tons per year, and at the expense of a considerable amount of metabolic energy. The function of NH3 could well be to regulate the pH of the environment; in the absence of ammonia, the large amounts of SO2 and HCl produced by volcanic action would reduce the pH of rain to about 3. The fact that the atmospheric concentration of ammonia is only 10^-8 times that of N2 does not imply that this "trace" component plays a less significant role in the overall nitrogen cycle than does N2. In fact, the annual rates of production of the two gases are roughly the same; the much lower steady-state concentration of NH3 is due to its much shorter turnover time.
• Nitrogen: As stable as the triply-bonded N2 molecule is, there is a still more stable form of nitrogen: the hydrated nitrate ion. How is this stability consistent with the predominance of nitrogen in the atmosphere? The answer is that it is not: if it were not for nitrogen-fixing bacteria (powered directly or indirectly by the free energy of ATP captured from sunlight), the nitrogen content of the atmosphere would fall to almost zero. This would raise the oxygen fraction to disastrously high levels, and the additional NO3- concentration would increase the ionic strength and osmotic pressure of seawater to levels inconsistent with most forms of life.
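The oxygen bullet above can be checked with a rough calculation. The 0.1 percent burial fraction is from the text; the global carbon-fixation rate and the atmospheric O2 inventory below are assumed round numbers, so the result should be read only as an order-of-magnitude figure.

C_FIXED_T_PER_YR = 1.0e11   # assumed: ~100 billion tons of carbon fixed annually
BURIAL_FRACTION = 0.001     # from the text: ~0.1% of fixed carbon is buried
ATMOS_O2_T = 1.2e15         # assumed: total O2 in the atmosphere, tons

# One O2 molecule stays in the air per buried carbon atom, so convert
# buried carbon (atomic mass 12) to the equivalent mass of O2 (mass 32).
o2_added_t_per_yr = C_FIXED_T_PER_YR * BURIAL_FRACTION * (32.0 / 12.0)

# Years for net burial alone to shift the O2 inventory by 1 percent:
years_per_percent = 0.01 * ATMOS_O2_T / o2_added_t_per_yr
print(f"O2 added by carbon burial: {o2_added_t_per_yr:.1e} t/yr")
print(f"Time to change the O2 inventory by 1%: about {years_per_percent:,.0f} years")

The result, a few tens of thousands of years per percent, is the same order of magnitude as the 12,000-year figure quoted above for the loss of the methane sink; this is the point of the feedback argument, namely that small imbalances move atmospheric oxygen on geologically short timescales.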
Bioregulation of the Oceans

The input of salts into the sea from streams and rivers is about 5.4 × 10^8 tons per year, into a total ocean volume of about 1.2 × 10^9 km3. Upwelling of juvenile water and hydrothermal action at oceanic ridges provide additional inputs of salts. With a few bizarre exceptions, such as the brine shrimp and halophilic bacteria, 6 percent is about the maximum salinity level that organisms can tolerate. The internal salinities of cells must be maintained at much lower levels (around 1%) to prevent denaturation of proteins and other macromolecules whose conformations depend on electrostatic forces. At higher levels than this, the electrostatic interaction between the salt ions and the cell membrane destroys the integrity of the latter, so that it can no longer pump out the salt ions that leak in along the osmotic gradient. At the present rate of salt input, the oceans would have reached their present levels of salinity millions of years ago, and would by now have an ionic strength far too high to support life, as is presently the case in the landlocked Dead Sea.

The present average salinity of seawater is 3.4 percent. The salinity of blood, and of many other intra- and intercellular fluids in animals, is about 0.8 percent. If we assume that the first organisms were approximately in osmotic equilibrium with seawater, then our body fluids might represent "fossilized" seawater as it existed at the time our predecessors moved out of the sea and onto the land.

By what processes is salt removed from the oceans in order to maintain a steady-state salinity? This remains one of the major open questions of chemical oceanography. There are a number of possible answers, mostly based on strictly inorganic processes, but none is adequately supported by available evidence. For example, Na+ and Mg2+ ions could adsorb to particulate debris as it drops to the seafloor and become incorporated into sediments. The requirement for charge conservation might be met by the involvement of negatively charged silicate and hydroxyaluminum ions. Another possible mechanism might be the burial of salt beds formed by evaporation in shallow, isolated arms of the sea, such as the Persian Gulf. Extensive underground salt deposits are certainly found on most continents, but it is difficult to see how this very slow mechanism could have maintained an unfluctuating salinity over shorter periods of highly variable climatic conditions.

The possibility of biological control of oceanic salinity starts with the observation that about half of the earth's biomass resides in the sea, and that a significant fraction of this consists of diatoms and other organisms that build skeletons of silica. When these organisms die, they sink to the bottom of the sea and add about 300 million tons of silica to sedimentary rocks annually. It is for this reason that the upper levels of the sea are undersaturated in silica, and that the ratio of silica to salt in dead salt lakes is much higher than in the ocean. These facts could constitute a basis for a biological control of the silica content of seawater; any link between silica and salt could lead to the control of the latter substance as well. For example, the salt ions might adsorb onto the silica skeletons and be carried down with them; if the growth of these silica-containing organisms is itself dependent on salinity, we would have our negative feedback mechanism. The continual buildup of biogenic sedimentary deposits on the ocean floor might possibly deform the thin oceanic crust by its weight and cause local heating by its insulating properties. This could conceivably lead to volcanic action and the formation of new land mass, thus linking the lithosphere into Gaia.
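The claim that the oceans would have reached their present salinity "millions of years ago" can be checked directly from the figures given above; only the seawater density is an assumed value here.

OCEAN_VOLUME_KM3 = 1.2e9     # from the text
SALINITY = 0.034             # from the text: 3.4 percent by mass
SALT_INPUT_T_PER_YR = 5.4e8  # from the text
DENSITY_T_PER_M3 = 1.03      # assumed typical seawater density

ocean_mass_t = OCEAN_VOLUME_KM3 * 1.0e9 * DENSITY_T_PER_M3  # km^3 -> m^3
salt_mass_t = ocean_mass_t * SALINITY
accumulation_time_yr = salt_mass_t / SALT_INPUT_T_PER_YR
print(f"Dissolved salt in the ocean: {salt_mass_t:.1e} t")
print(f"Time to accumulate at the present input rate: {accumulation_time_yr:.1e} yr")

The answer, on the order of 10^8 years, is far shorter than the several-billion-year age of the oceans, so salt must be removed at very nearly the rate at which it is supplied; the open question is by what mechanism.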
Source: Geochemistry (Lower), 4.03: Gaia - Bioregulation of the Environment.
"If we do not change direction, we are likely to end up where we are headed" (old Chinese proverb). "If we make the effort to learn its language, Earth will speak to us and tell us what we must do to survive."

01: Sustainability and the Environment

The old Chinese proverb certainly applies to modern civilization and its relationship to the world resources that support it. Evidence abounds that humans are degrading the Earth life support system upon which they depend for their existence. The emission to the atmosphere of carbon dioxide and other greenhouse gases is almost certainly causing global warming and climate change. Discharge of pollutants has degraded the atmosphere, the hydrosphere, and the geosphere in industrialized areas and has placed great stress on parts of the biosphere. Natural resources including minerals, fossil fuels, fresh water, and biomass have become stressed and depleted. The productivity of agricultural land has been diminished by water and soil erosion, deforestation, desertification, contamination, and conversion to non-agricultural uses. Wildlife habitats including woodlands, grasslands, estuaries, and wetlands have been destroyed or damaged.

About 3 billion people (half of the world's population) live in dire poverty on less than the equivalent of U.S. $2/day. The majority of these people lack access to sanitary sewers, and the conditions under which they live give rise to debilitating viral, bacterial, and protozoal diseases. At the other end of the living-standard scale, a relatively small fraction of the world's population consumes an inordinate amount of resources, with lifestyles that involve living too far from where they work in energy-wasting houses far larger than they need, commuting long distances in large "sport utility vehicles" that consume far too much fuel, and overeating to the point of unhealthy obesity, with accompanying problems of heart disease, diabetes, and other obesity-related maladies.

As We Enter the Anthropocene

Humans have gained an enormous capacity to alter Earth and its support systems. Their influence is so great that we are now entering a new epoch, the Anthropocene, in which human activities largely determine conditions on the planet. The major effects of humans upon Earth have taken place within a minuscule period of time relative to that during which life has been present on the planet or, indeed, relative to the time that modern humans have existed. These effects are largely unpredictable, but it is essential for humans to be aware of the enormous power in their hands, and of their limitations if they get it wrong and ruin Earth and its climate as life support systems.

Achieving Sustainability

Although the condition of the world and its human stewards outlined above sounds rather grim and pessimistic, this is not a grim and pessimistic book. That is because the will and ingenuity of humans that have given rise to conditions leading to deterioration of Planet Earth can be, and indeed are being, harnessed to preserve the planet, its resources, and its characteristics that are conducive to healthy and productive human life. The key is sustainability, or sustainable development, defined by the Brundtland Commission in 1987 as industrial progress that meets the needs of the present without compromising the ability of future generations to meet their own needs.1
A key aspect of sustainability is the maintenance of Earth's carrying capacity, that is, its ability to maintain an acceptable level of human activity and consumption over a sustained period of time. Although change is a normal characteristic of nature, sudden and dramatic change can cause devastating damage to Earth support systems. Change that occurs faster than such systems can adjust can cause irreversible damage to them. In addition to its main theme of green chemistry, a major purpose of this book is to serve as an overview of the science and technology of sustainability, emphasizing sustainable chemistry within the general science and technology of sustainability.

Rethinking Environmentalism and Sustainability

The common view of a good, sustainable environment as a rural, low-population-density area may be misleading. A convincing argument for this proposition is made in the 2009 book Green Metropolis: Why Living Smaller, Living Closer, and Driving Less Are the Keys to Sustainability.2 Classified as an "eco-urbanist manifesto," this book makes the somewhat surprising case that New York City's Manhattan is a model of sustainability for the modern overpopulated world. This densely populated, compact city emits less than one third as much greenhouse gas per person as the average for the United States. One reason is that the large apartment buildings and other large structures in New York City are very efficient at conserving heat; heat that leaks from one tends to end up warming another. Cold air produced by air conditioning in the summer is similarly conserved. Another reason the city is energy-efficient stems from its outrageously congested traffic and lack of affordable parking, which make the automobile impractical for most residents and force reliance on far more efficient public transportation. Only about one-fifth of New York City's residents regularly commute by private automobile. In contrast, those who live "close to nature" in rural settings tend to dwell in free-standing houses that are inherently less energy-efficient than apartment buildings, and by necessity they must commute in energy-wasting vehicles. If they live on unimproved roads, they may require especially inefficient large, rugged four-wheel-drive vehicles. Such a lifestyle cannot be compensated for by measures advocated by many environmentalists, such as backyard compost piles and fuel-efficient vehicles. According to Green Metropolis, New York City, which has a population density more than 800 times that of the U.S. as a whole and about 30 times that of Los Angeles, offers a model for a growing world population to exist within the confines of Earth's limited resources. The prescription for sustainability is to "live smaller, live closer, and drive less." To that may be added "reproduce less," in that dense urban environments tend to discourage large families.

A major culprit in the development of modern environmental problems is the public obsession with the private automobile, which enables destructive urban sprawl and excessive consumption of gasoline. One of the unintended consequences of the laudable goal of increased fuel economy in automobiles is to make them more affordable to use, thus facilitating destructive urban sprawl. The automobile-based societies of the U.S. and many other industrialized nations have been made possible by the exploitation of relatively abundant and inexpensive petroleum.
In years to come, as petroleum inevitably becomes scarcer and more expensive, these societies will have to undergo wrenching changes, the best end result of which would be much more sustainable, compact urban societies.
Source: Green Chemistry and the Ten Commandments of Sustainability (Manahan), 1.01: Sustainability.
Since this book deals with the environment, it is important to know what is meant by the environment. Essentially, the environment consists of our surroundings, which may affect us and which, in turn, we may affect. A part of the environment may consist of rock formations several kilometers below Earth's surface, so deep that humans cannot reach them and of which they are scarcely aware, except in those instances when the rock formations shift along a fault line and cause an earthquake, which may be very destructive and take many lives. Another part of the environment is the atmosphere touching Earth's surface, a part of the environment with which humans are always in contact and which supplies the life-giving oxygen that they require. In discussing the environment it is helpful to regard it as consisting of the following five spheres, as shown in Figure 1.1: (1) the hydrosphere, (2) the atmosphere, (3) the geosphere, (4) the biosphere, and (5) the anthrosphere (that part of the environment constructed and operated by humans). These spheres overlap and interact strongly. For example, fish are part of the biosphere, dwell in the hydrosphere, and acquire the dissolved oxygen that they need from the atmosphere. Mineral nutrients required by the fish, and by the algae upon which the fish feed, come from the geosphere. The part of the hydrosphere in which the fish reside may be a reservoir constructed by impounding a stream with a dam that is part of the anthrosphere. Many other such examples may be cited.

Biogeochemical cycles describe the exchange of materials among the five environmental spheres. Aspects of these environmentally crucial cycles are covered in various parts of this book. As the name implies, these cycles involve biological and geochemical phenomena, but they may also include processes that occur in the atmosphere and the hydrosphere, as well as human influences on them. An important part of these cycles consists of the interfaces between the environmental spheres. The interfaces are often very thin with respect to Earth's whole environment. An important example of such an interface is the one between the geosphere and the atmosphere, where most of the plants grow that support life on Earth. Typically this region extends into the geosphere for only the meter or less penetrated by plant roots, and into the atmosphere only to the height of the plants. Within this region there are other interfaces, including the biosphere/geosphere boundary between plant roots and soil and the biosphere/atmosphere boundary across which oxygen and carbon dioxide gas are exchanged between leaf surfaces and the atmosphere.

The study of the environment is environmental science, in its broadest sense the science of the complex interactions that occur among the terrestrial, atmospheric, aquatic, living, and anthropological systems that compose Earth and the surroundings that may affect living things.3 It includes all the disciplines, such as chemistry, biology, ecology, sociology, and government, that affect or describe these interactions. For the purposes of this book, environmental science will be defined as the study of the earth, air, water, and living environments, and the effects of technology thereon. To a significant degree, environmental science has evolved from investigations of the ways by which, and places in which, living organisms carry out their life cycles.
This discipline used to be known as natural history, which later evolved into ecology, the study of environmental factors that affect organisms and how organisms interact with these factors and with each other.
Source: Green Chemistry and the Ten Commandments of Sustainability (Manahan), 1.02: The Environment and the Five Environmental Spheres.
Given the dependence of humans upon a livable environment, it is essential that it be maintained in a healthy state. The maintenance of a healthy environment is commonly termed sustainability, an area that has seen a great deal of activity in recent years. Earlier efforts in the sustainability arena centered around pollution and its effects. Degradation of the environment has been a concern of thoughtful people for many decades. Dating back to the early 1800s and even before, the widespread use of high-sulfur coal for fuel was noted as a cause of bad air quality and impaired visibility in urban areas such as London. Water polluted by pathogenic microorganisms sickened and killed millions of people over many centuries. By the end of World War II, the atmosphere of Los Angeles had become noxious, irritating, and unhealthy due to the presence of ozone and other chemical oxidants, aldehydes, and small particulate matter. In some respects, this condition resembled the pollution of the London atmosphere observed earlier, a combination of smoke and fog that some called "smog." So the condition afflicting Los Angeles and a number of similar cities also came to be known as smog, but a kind of smog that developed in air having low humidity and exposed to intense sunlight, conditions opposite those under which London smog formed. Chemically, the two kinds of smog are totally different: London smog formed in a reducing atmosphere with high concentrations of chemically reducing SO2, whereas Los Angeles smog is oxidizing, and any SO2 emitted into it is rapidly oxidized to sulfuric acid.

Concern over deterioration of the environment increased with the 1962 publication of Rachel Carson's classic book Silent Spring,4 the theme of which was that DDT and other mostly pesticidal chemicals were becoming concentrated through the food chain, with the result that birds at the top of the chain, such as eagles and hawks, were producing eggs with soft shells that failed to produce viable baby birds. The implication was that substances harming bird populations might harm humans as well. By about 1970 it was generally recognized that air, water, and land pollution was reaching intolerable levels. As a result, various countries passed and implemented laws designed to reduce pollutants and to clean up waste chemical sites, at a cost that has easily exceeded one trillion dollars globally.

More recently, concern over environmental pollution has extended beyond a narrow focus upon pollution and its effects to include the broader area of sustainability. The achievement of sustainability certainly requires avoiding pollution and counteracting its effects. But it also means maintaining flows of essential materials, energy, food, safe water, healthy air, and the other things that humans and other organisms on Planet Earth require for their survival and well-being. The term "green" has come to stand for sustainability in its various forms and is used throughout this book. Most of sustainability has to do with matter, and chemistry is the science of matter. It is only natural, therefore, that sustainable chemistry is now known as green chemistry, a discipline that has developed rapidly since about the mid-1990s. This book is about green chemistry. But the practice of green chemistry involves, more broadly, green science and technology, which are discussed in this book and related to green chemistry.
Source: Green Chemistry and the Ten Commandments of Sustainability (Manahan), 1.03: Seeing Green.
All of the very small group of humans who have been privileged to view Earth from outer space have been struck with a sense of awe at the sight. Photographs of Earth taken at altitudes high enough to capture its entirety reveal a marvelous sphere, largely blue in color, white where covered by clouds, with desert regions showing up in shades of brown and red. But Earth is far more than a beautiful globe that inspires artists and poets. In a very practical sense it is the source of the life support systems that sustain humans and all other known forms of life. Earth obviously provides the substances required for life, from water, atmospheric oxygen, and the carbon dioxide from which billions of tons of biomass are made each year by photosynthesis, all the way down to the trace levels of micronutrients such as iodine and chromium that organisms require for their metabolic processes. But more than materials are involved. Earth provides temperature conditions conducive to life and a shield against incoming ultraviolet radiation, its potentially deadly photons absorbed by molecules in the atmosphere and their energy dissipated as heat. Earth also has a good capacity to deal with waste products that are discharged to the atmosphere, into water, or into the geosphere.

The capacity of Earth to provide materials, protection, and conditions conducive to life is known as its natural capital, which can be regarded as the sum of two major components: natural resources and ecosystem services. This recognition is giving rise to a new business model termed natural capitalism. Early hunter-gatherer and agricultural human societies made few demands upon Earth's natural capital. As the industrial revolution developed from around 1800, natural resources were abundant, and production of material goods was limited largely by labor and the capacity of machines to process materials. But now population is in excess, computerized machines have an enormous capacity to process materials, and the availability of natural capital is the limiting factor in production: the availability of natural resources, the vital life-support ability of ecological systems, and the capacity of the natural environment to absorb the byproducts of industrial production, most notably the greenhouse gas carbon dioxide.

Rather than the adversarial relationship that has prevailed between the traditional business community and environmentalists with regard to economic development, a functioning system of natural capitalism properly values natural and environmental resources. The goal of natural capitalism is to increase well-being, productivity, wealth, and capital while reducing waste, consumption of resources, and adverse environmental effects. The traditional capitalist economic system has proven powerful in delivering consumer goods and services using the leverage of individual and corporate incentives. A functional system of natural capitalism retains these economic drivers while incorporating sustainable practices such as recycling wastes back into the raw material stream and emphasizing the provision of services rather than just material goods. In so doing, a system of natural capitalism emulates nature's systems through the practice of industrial ecology, discussed in detail in Chapter 14, and the application of the principles of green chemistry (see Chapter 2). The development of a functional system of natural capitalism requires several important changes in business practices. These include the following:
1. Implement technologies that are highly productive with greatly reduced use of nonrenewable minerals and energy.
2. Develop systems in which waste materials and energy from one sector are utilized by another sector (a functioning system of industrial ecology).
3. Change business models from those that emphasize selling goods to those that concentrate upon providing services, for example, by selling fewer automobiles and providing better mass transportation systems.
4. Reinvest in natural capital to increase production of ecosystem services. An example is the provision of constructed wetlands as part of wastewater treatment systems to provide wildlife breeding grounds along with finishing of treated wastewater effluent.

Evolution of the Utilization of Natural Capital

Figure 1 shows the burden on Earth's natural capital as a function of the progression of economic development. During pre-industrial times the capacity of humans to deplete natural capital was minimal, largely because of limitations on the rates at which energy could be used. As the industrial revolution developed and humans learned how to harness energy sources, particularly fossil fuels, Earth's natural capital was increasingly consumed in areas such as exploitation of depleting resources and utilization of the hydrosphere, geosphere, and atmosphere for the disposal of wastes. As industrialization progressed, it became increasingly obvious that it was causing problems in areas such as air and water pollution, soil erosion exacerbated by the capability of fossil-fueled tillage machinery to disturb soil, and depletion of rich ore sources, necessitating the mining of much larger amounts of leaner ores to obtain needed quantities of metals and other geospheric resources. As a consequence, laws were passed and regulations put into place to reduce pollution and to conserve resources. Particularly after the devastating Dust Bowl years of the 1930s, in which much productive topsoil was lost to wind erosion, the U.S. government initiated programs of soil conservation with incentives to preserve the essential soil resource. Efforts to reduce air and water pollution concentrated initially on the most obvious pollutants, such as particles emitted from smokestacks, followed by greater emphasis upon more insidious pollutants, such as heavy metals in water. The regulatory approach has been evolving into one that emphasizes pollution prevention, recycling, and conservation of energy and materials. A final phase is sustainable development and utilization of green technology that can support growing economic development while substantially reducing exploitation of Earth's natural capital.
Source: Green Chemistry and the Ten Commandments of Sustainability (Manahan), 1.04: Natural Capital of the Earth.
The achievement of sustainability and preservation of natural capital requires intense efforts by both individuals and groups. This was illustrated centuries ago in England by "the tragedy of the commons."5 The commons consisted of a pasture shared by village residents to provide forage for their cows, sheep, and horses. An individual family could increase its wealth (in meat, milk, or horsepower) by adding an animal. For example, a one-cow family could double its wealth in cows by buying another and putting it to graze in the pasture. If the pasture was accommodating 100 cows, this would have an apparent cost of only about 1% for the small community as a whole. The natural tendency was for families to keep adding cows until the pasture became exhausted and unproductive from overgrazing, the animals died or had to be slaughtered, and the entire support system for milk and meat based upon the natural capital of the pasture collapsed. During the fourteenth century this unfortunate circumstance became so widespread that the economies of many villages collapsed, with whole populations no longer able to provide for their basic food needs.

History has many examples of the tragedy of the commons. For example, when settlers began to cultivate what was formerly open rangeland in Edwards County, Texas, in the 1880s, the ranchers who had used it for pasture met and proclaimed the following: "Resolved that none of us know, or care to know, anything about grasses, native or otherwise, outside of the fact that for the present, there are lots of them, the best on record, and we are getting the most of them while they last."6 Soon the combined effects of overgrazing and drought reduced the yield of grass to the point that the ranchers' livelihoods were threatened and the newly cultivated land became unproductive. Shortsighted attitudes toward Earth's natural capital similar to those expressed by the ranchers continue to lead to many tragedies of the commons. In modern times, heavy cultivation of marginal land is turning large areas to desert (desertification); the Amazon rain forest is being cut down and burned to provide a one-time harvest of wood and a few years of crop production (deforestation); severe deterioration of the global ocean fisheries resource is occurring; congested freeways at times become great linear parking lots; and, of much direct concern to many university students and faculty, some parking facilities have become so oversold that their utility is seriously curtailed because paying customers cannot find parking space.

The logic of the commons holds true in modern times, in which the global commons consist of the air humans must breathe, water resources, agricultural lands, mineral resources, the capacity of the natural environment to absorb wastes, and all other facets of natural capital. According to the logic of the commons, each consumer has the right to acquire a segment of natural capital, the cost of which is distributed throughout the commons and shared by all. The natural competition among consumers results in some consumers acquiring relatively more of Earth's natural capital and becoming wealthier. Within limits this is a healthy consequence of capitalist systems. However, if enough consumers use too much natural capital, it becomes exhausted and unsustainable, unable to support the society as a whole, so that all suffer, including those at the top of the consumer food chain. (A small numerical illustration of this logic follows below.)
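The logic of the commons can be illustrated with a toy model. In the sketch below, each of N families captures the full yield of any cow it adds but bears only 1/N of the grazing loss that the new cow imposes on everyone else; the capacity and yield numbers are invented for illustration and are not from the text.

N_FAMILIES = 10
CAPACITY = 100  # number of cows the pasture can sustain at full yield

def yield_per_cow(total_cows):
    # Full yield below capacity; yield collapses steadily above it.
    if total_cows <= CAPACITY:
        return 1.0
    return max(0.0, 1.0 - 0.05 * (total_cows - CAPACITY))

cows = 50  # starting herd on the shared pasture
while True:
    # Private gain to one family from one more cow: the new cow's yield...
    gain = yield_per_cow(cows + 1)
    # ...minus that family's 1/N share of the yield lost by all other cows.
    shared_loss = cows * (yield_per_cow(cows) - yield_per_cow(cows + 1)) / N_FAMILIES
    if gain - shared_loss <= 0:
        break
    cows += 1

print(f"Herd stabilizes at {cows} cows; per-cow yield has fallen to {yield_per_cow(cows):.2f}")

Because each family weighs its full private gain against only a tenth of the communal loss, the herd grows well past the pasture's capacity and everyone's yield falls; with a steeper collapse function, or more families, the pasture fails completely.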
Automotive transportation illustrates a modern tragedy of the commons. Acquisition of an automobile adds to an individual's possessions and mobility. The materials required to make a single automobile, the fuel to run it, and its exhaust pollutants make a minuscule impression on Earth's natural capital. However, when millions of people acquire automobiles, the demand on Earth's natural capital of materials, fuel, and ability to absorb pollutants becomes severe, heavy traffic turns the automobile from a convenience into a burden, and, in some places at some times, the whole transportation system collapses.

These tragedies of the commons illustrate the limitations of unregulated, "free-for-all" capitalist economic systems in achieving sustainable development and make a strong case for collective actions in the public sector to ensure that humankind can exist within the limits of Earth's natural capital. However, the collapse of Communist economic systems around 1990 left a legacy of abandoned, inefficient factories, poverty, and environmental degradation, showing the adverse effects of discouraging private enterprise. In addition to enlightened regulations that ensure preservation of Earth's essential support systems, successful economic systems require human ingenuity, initiative, and even greed. Getting these and other incentives to work well on a planet in which natural capital is the major limiting economic factor is the huge challenge facing modern and developing economies.

There is an old African proverb that translates to, "It takes a village to raise a child." The idea, of course, is that successful child-rearing requires the efforts of more than just the parents; it requires an entire village. The same principle applies to Planet Earth, except that in this case billions of children are being raised, and it will take the efforts of a very large village (the population of the entire world) to preserve the planet and the resources upon which those billions of children must depend for their existence.
Source: Green Chemistry and the Ten Commandments of Sustainability (Manahan), 1.05: Sustainability as a Group Effort: It Takes a (Very Big) Village.
As discussed in more detail in Chapter 15, the key to sustainability is abundant, environmentally safe energy. The evolution of humankind's utilization of energy is illustrated in Figure 1. Until very recently in the history of humankind, we have depended upon the sun to meet our energy needs. The sun has kept most of the land mass of Earth at a temperature that enables human life to exist. Solar radiation has provided the energy for photosynthesis to convert atmospheric carbon dioxide to plant biomass, providing humans with food, fiber, and wood employed for dwelling construction and fuel. Animals feeding on this biomass provided meat for food and hides and wool that humans used for clothing. Eventually humans developed means of using solar energy indirectly. This was especially true of wind, driven by solar heating of air masses and used to propel sailing vessels and eventually to power windmills. The solar-powered hydrologic cycle provided flowing water, the energy of which was harnessed by water wheels. Virtually all the necessities of life came from the utilization of solar energy.

The Brief Era of Fossil Fuels

Dating from around 1800, humankind began to exploit fossil fuels for its energy needs. Initially, coal was burned for heating and to power newly developed steam engines, providing mechanical energy for manufacturing and steam locomotives. After about 1900, petroleum developed rapidly as a source of fuel and, with the development of the internal combustion engine, became the energy source of choice for transportation needs. Somewhat later, natural gas developed as an energy source. The result was a massive shift from solar and biomass energy sources to fossil fuels.

Utilization of fossil carbon-based materials resulted in a revolution that went far beyond energy utilization. One important example was the invention by Carl Bosch and Fritz Haber in Germany in the early 1900s of a process for converting elemental nitrogen from air to ammonia by the reaction N2 + 3H2 → 2NH3. This high-pressure, high-temperature process required large amounts of fossil fuel, both to provide energy and to react with steam to produce the elemental hydrogen. The discovery of synthetic nitrogen fixation enabled the production of huge quantities of relatively inexpensive nitrogen fertilizer, and the resulting increase in agricultural production may well have saved Europe, with its rapidly growing population at the time, from widespread starvation. (It also enabled the facile synthesis of great quantities of nitrogen-based explosives that killed millions of people in World War I and subsequent conflicts.)

Fossil fuel, which has been described as "fossilized sunshine,"7 brought an era of unprecedented material prosperity and an increase in human population from around 1 billion to over 6 billion. By the year 2000 it had become obvious that the era of fossil fuels was not sustainable. One reason is that fossil fuel is a depleting resource that cannot last indefinitely as the major source of energy for the industrial society to which it has led. Approximately half of the world's total petroleum resource has already been consumed, so petroleum will continue to become scarcer and more expensive and can last for only a few more decades as the dominant fuel and organic chemicals raw material.
Coal is much more abundant, but its utilization points to the second reason that the era of fossil fuels must end: coal combustion is the major source of anthropogenic atmospheric carbon dioxide, greatly increased levels of which will almost certainly lead to global warming and massive climate change. Natural gas (methane, CH4) is an ideal, clean-burning fossil fuel that produces the least amount of carbon dioxide per unit of energy generated. Rapidly expanding new discoveries of natural gas, largely from previously inaccessible tight shale formations, mean that it can serve as a "bridging fuel" for several decades until other sources can be developed. Nuclear energy, properly used with nuclear fuel reprocessing, can take on a greater share of energy production, especially for base-load electricity generation. But clearly drastic shifts must occur in the ways in which energy is obtained and used.

Back to the Sun

With the closing of the brief but spectacular era of fossil hydrocarbons, the story of humankind and its relationship to Planet Earth is becoming one of "from the sun to fossil fuels and back again," as humankind returns to the sun as the dominant source of energy and to photosynthesis to convert atmospheric carbon dioxide into biomass raw materials. In addition to direct uses for solar heating and for photovoltaic power generation, there is enormous potential to use the sun for the production of energy and materials. Arguably the fastest-growing energy source in the world is wind-generated electricity; wind is produced when the sun heats masses of air, causing the air to expand. Once the dominant source of energy and materials, biomass produced by solar-powered photosynthesis is beginning to live up to its potential as a source of feedstocks to replace petroleum in petrochemicals manufacture and as a source of energy in synthetic fuels (see Chapter 14, "Feeding the Anthrosphere: Utilizing Renewable and Biological Materials," and Chapter 15, "Sustainable Energy: The Essential Basis of Sustainable Systems").

Biomass is still evolving as a practical source of liquid fuels, the two main ones being ethanol produced by fermentation and biodiesel fuel made from plant lipid oils. Although ethanol made from sugar derived from sugar cane, which grows prolifically in some areas such as Brazil, is an economical gasoline substitute, ethanol derived from cornstarch relies on the grain (the most valuable part of the plant, otherwise used for food and animal feed), and its net energy gain is marginal. The economics of producing synthetic biodiesel fuel from sources such as soybeans may be somewhat better. However, production of this fuel from oil palm trees in countries such as Malaysia is resulting in destruction of rain forests and diversion of palm oil from the food supply.

Practical means do exist to utilize biomass for energy and materials without seriously disrupting the food supply. Arguably the best way to do so is to thermochemically convert biomass to synthesis gas, a mixture of CO and H2 that can be combined chemically by long-established synthetic routes to produce methane, larger-molecule hydrocarbons, alcohols, and other products (see Chapter 15; representative reactions are sketched at the end of this section). The main pathway for doing so is to utilize biomass from renewable non-food biosources, which include crop byproducts (wheat straw, rice straw, and cornstalks produced in surplus during the production of grain) and dedicated crops, among which are highly productive hybrid poplar trees and switchgrass.
Microscopic algae are especially promising as a biomass source because of their much higher productivity than terrestrial plants, their ability to grow in brackish (somewhat saline) water in containments in desert areas, and their ability to utilize sewage as a nutrient source. When biomass is used to produce synthesis gas, the essential nutrients, especially potassium and phosphorus, can be reclaimed from the biomass residues and used as fertilizer to promote the growth of additional biomass.

Future scientific discoveries and technological advances will play key roles in the achievement of energy sustainability. Three areas in which Nobel-level breakthroughs are needed were identified in a February 2009 interview by Dr. Steven Chu, a Nobel Prize-winning physicist who had just been appointed Secretary of Energy in U.S. President Barack Obama's new administration. The first is solar power, in which the efficiency of solar energy capture and conversion to electricity needs to improve several-fold. A second area of need is improved electric batteries to store electrical energy generated by renewable means and to enable practical driving ranges in electric vehicles. A third area in need of a quantum leap is improved crops capable of converting solar energy to chemical energy in biomass at much higher efficiencies; the potential for improvement is enormous because most plants convert less than 1% of the solar energy falling on them to chemical energy through photosynthesis. Through genetic engineering, it is likely that this efficiency could be improved several-fold, leading to vastly increased generation of biomass. Clearly, the achievement of sustainability employing high-level scientific developments will be an exciting endeavor in decades to come.
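For reference, the overall photosynthetic conversion discussed above is conventionally summarized by the following net reaction, a simplification of a long sequence of light-driven and enzymatic steps:

\[ 6CO_2 + 6H_2O + h\nu \rightarrow C_6H_{12}O_6 + 6O_2 \]

The efficiencies of less than 1% quoted above refer to the fraction of incident solar energy ultimately stored as chemical energy in products such as the glucose shown here.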
As shown in Figure \(1\), science, engineering, and technology are closely interrelated. Science refers to a set of rigorously organized bodies of knowledge and their acquisition based upon several criteria. Dealing with the discovery, explanation, and development of theories pertaining to interrelated natural phenomena of energy, matter, time, and space, science consists of an organized body of facts consistent with a number of general laws that are verifiable by rigorously defined systematic experimental processes composing the scientific method. Although science seeks to avoid value judgements, scientific methods are used for worthy goals, such as development of genetically based cures for disease, as well as for more sinister purposes, such as the synthesis of more deadly military poisons. Pure science is conducted to extend knowledge without defined practical goals, and applied science is directed toward practical, usually commercial, objectives; these two aspects of science are commonly applied together. Especially with the explosive growth of the internet, which allows dissemination of both information and misinformation, it is important to beware of “junk science” often used to attempt to support political, economic, or theological agendas in areas such as climate change and evolution.

The practice of science dedicated to sustainability, including the maintenance of environmental quality, the reduction of hazards, the efficient use of environmentally benign sources of energy, and the minimization of consumption of non-renewable resources, is the basis of green science, which is one of the main themes of this book. Chemical science, the science of matter, has great accomplishments to its credit in improving human welfare, but it has also led to environmental pollution, exposure to hazardous substances, consumption of resources such as petroleum feedstocks, and other unpleasant aspects of modern industrialized societies. These problems can be mitigated by the constructive application of chemical science known as green chemistry, defined as the practice of chemistry in a manner that maximizes its benefits while eliminating or at least greatly reducing its adverse impacts.8 Based upon “twelve principles of green chemistry,”9 the science of green chemistry has, since the mid-1990s, spawned a number of books, journal articles, and symposia as well as journals devoted to the topic and centers and societies of green chemistry. This book is primarily about green chemistry and how it relates to sustainability, green science, and green technology.

1.08: Green Technology

Humans direct technology toward practical ends to make things they need with materials and to utilize energy in manufacturing, transportation, and the maintenance of hospitable living conditions. Long a matter of applied human ingenuity, technology is now mostly the product of engineering based on the fundamental knowledge of science and application of scientific principles. Technology uses the plans and means provided by engineering to achieve specific practical objectives. Technology obviously has enormous importance in determining how human activities affect Earth and its life support systems. Three great “growth spurts” in human populations that have taken place since modern humans first appeared on Earth have been enabled by developments in technology that allowed successively higher levels of human populations to exist.
The first of these, culminating about 10,000 years ago with a human population of perhaps 2 or 3 million, was enabled by the primitive but remarkably effective tools that early humans developed. One such technology was the bow and arrow, which allowed for killing game at a much safer distance from the unwilling and often dangerous source of meat than that required for spearing or clubbing the quarry. The technology for making garments from animal hides allowed humans to avoid potentially fatal hypothermia from exposure to Ice Age climates.

About 10,000 years ago a second population growth spurt got underway as the “hunter/gatherer” societies that had sustained humans evolved into more reliable agricultural societies when humans learned to cultivate crops and domesticate animals used for meat, milk, and wool. These societies ensured a generally dependable food supply in smaller areas. As a result, humans were able to gather food from relatively small agricultural fields rather than having to scout large expanses of forest or grasslands for game to kill or berries to gather. Agricultural economic systems were based upon newly evolved technologies for cultivating soil, utilizing irrigation, and transporting food for trade by primitive sailing vessels. Ancillary technologies such as spinning wheels and looms for turning wool and plant fibers into cloth and water-powered mills for grinding grain also appeared. The development of agricultural systems had the major effect of allowing humans to remain in one place in settlements; freed from the necessity of constantly hunting and gathering food from their natural surroundings, humans could apply their ingenuity in areas such as developing more sophisticated tools. The agricultural revolution allowed a second large increase in the number of humans and enabled the development of a human population of around 100 million 1000 years ago.

The third great growth spurt in human populations came with the industrial revolution, beginning slowly several centuries ago and made possible by the ability to utilize energy other than that provided by human labor and animal power. Initially wind power and water power were harnessed for mills and factories to produce goods. After about 1800 this power potential was multiplied many-fold by the steam engine and later the internal combustion engine, turbines, nuclear energy, and electricity, enabling the current world population of almost 7 billion, growing at an eventually unsustainable rate (though not as fast as some of the more pessimistic projections from past years).

Unintended Consequences and the Need for Green Technology

According to the law of unintended consequences, the predicted benefits of new technologies are often accompanied by substantial problems, sometimes called revenge effects, due to the unforeseen ways in which people interact with new technologies. The individual freedom of movement and huge economic boost that resulted from the nascent automobile industry were accurately predicted by visionaries in the early 1900s, but they did not foresee the millions of deaths from automobile accidents, unhealthy polluted air in urban areas, urban sprawl, and depletion of petroleum resources that occurred in the following century. The tremendous educational potential of personal computers was visualized when the first of these came on the market.
Less predictable were the mind-numbing hours that young people (and some not so young) would waste playing senseless computer games or viewing questionable content on the internet.

Defined as technology applied in a manner that minimizes environmental impact and resource consumption while maximizing economic output relative to materials and energy, green technology is designed to foresee and avoid revenge effects. Aided by increasingly sophisticated computer methodologies, the practitioners of green technology attempt to predict undesirable consequences of new technologies and put preventative measures in place before revenge effects have a chance to develop and cause major problems.

A key component of the implementation of green technology is careful consideration of the practice of industrial ecology, which integrates the principles of science, engineering, and ecology in industrial systems through which goods and services are provided in a way that minimizes environmental impact and optimizes utilization of resources, energy, and capital. Above all a sustainable means of providing goods and services, industrial ecology considers every aspect of doing so, from concept, through production, to the final fate of products remaining after use. It is most successful in its application when it mimics natural ecosystems, which are inherently sustainable. Rather than organisms and populations of organisms working in natural ecosystems, industrial ecology works through industrial ecosystems consisting of groups of industrial concerns, distributors, and other enterprises functioning to mutual advantage, using each other's products, recycling each other's potential waste materials, and utilizing energy as efficiently as possible. Industrial ecology is discussed in some detail in Chapter 13, “The Anthrosphere, Green Chemistry, and Industrial Ecology.”
An important component of green technology is life-cycle analysis (assessment), which considers process and product design in the management of materials from their source through manufacturing, distribution, use, reuse (recycling), and ultimate fate. The objective of life-cycle analysis is to determine, quantify, and minimize adverse resource, environmental, economic, and social impacts. The four major facets of life-cycle analysis are (1) determination of the scope of the assessment; (2) inventory analysis of materials mass and energy to enable development of mass and energy balances; (3) analysis of impacts on the environment, human health, and other affected areas; and (4) improvement analysis to determine ways in which greater efficiencies may be achieved. Life-cycle analysis is summarized in Figure \(1\). Note that there are several possible recycling loops, ranging from simple product reuse through material reprocessing and fabrication to waste mining, in which wastes are processed to reclaim useful materials that can go back into the manufacturing process.

The eco-economy is one in which the production of goods and services is totally integrated with the natural world. The practice of eco-efficiency (Figure \(2\)) enables provision of affordable goods and services to satisfy human needs sustainably, doing so with the minimum consumption of Earth's natural capital and the most efficient utilization of energy. The practice of eco-efficiency has several major aspects related to sustainability. Dematerialization seeks to meet economic needs with minimum amounts of material, using renewable and recycled sources wherever possible. Analogous to dematerialization is “de-energyization,” which uses only minimum amounts of energy and takes energy from renewable sources. Service and knowledge flows are substituted for material flows wherever possible. Using natural ecosystems as models, production loops are closed to the maximum extent possible. Sustainability is enhanced by shifting from a supply-driven to a demand-driven economy: rather than manufacturing large quantities of material goods that are then vigorously marketed, the emphasis is placed upon finding the real needs of consumers, then meeting them in the most efficient way possible. Functional extension is achieved by manufacturing products with enhanced functionality and selling services to increase product functionality. Eco-efficient products are designed to be as durable as possible consistent with their intended uses, to be long-lived, and to allow ease of recycling of components and materials. Dispersion of toxic materials is minimized or eliminated in eco-efficient systems.

1.10: Green Products and Services - Design for Sustainability

Green products use relatively small amounts of material and energy and, throughout their life cycles from manufacture to disposal, minimize exposure of humans and the environment to hazardous substances, pollutants, and wastes. A green service fulfills these criteria in providing a service. A hybrid fuel/electric automobile with the capability of recharging its battery from the electrical grid has minimal environmental and sustainability impact, so it is a green product. The function of such an automobile can be replaced by a green service consisting of efficient rail and bus transportation. Green products have several characteristics in common. One is high durability, so long as it does not pose undue disposal problems. Another is low potential for exposure to toxic substances.
A green product comes with minimal, recyclable packaging and is generally reusable, repairable, and capable of being remanufactured. Green materials used in consumer applications are relatively concentrated, with minimum inert ingredients, so that they are economical to transport (compare a concentrated liquid laundry detergent to a granular detergent formulation containing a lot of filler). Business and governmental practices and infrastructures can determine product sustainability. For example, a sustainable product requires that it be easily repaired and that repair parts be readily available. The sustainability of electrical batteries requires drop-off points and infrastructure for recycling of the materials in the batteries. A good example is the need for facilities to reclaim and recycle the lithium in lithium ion batteries as these sources of electrical energy storage become more popular in a world where sources of lithium are limited. Efforts have been made in some countries to require product take-back to enforce recycling of components and materials in spent products.

The practices of design for environment and design for sustainability are key aspects of eco-efficiency.10 In the practice of design for environment, environmental performance and potential environmental impacts are given priority consideration in the earliest stages of product development, including raw materials acquisition, manufacturing, packaging, distribution, installation, operation, and ultimate fate at the end of the useful product lifetime. Design for sustainability does much the same thing with emphasis upon minimizing impact upon Earth's natural capital. The following is a list of some specific design characteristics that go into design for environment and design for sustainability:

• Material substitution to use readily available materials from renewable sources that require relatively less energy for processing and that are recyclable, nontoxic, and environmentally friendly
• Minimal packaging
• Products with long lifetimes
• Promotion of recycling and reuse, with products designed for separability, disassembly, reuse, remanufacture, and recyclability
• Consumptive materials that are biodegradable or capable of being burned for energy without emitting harmful byproducts (avoiding bound halogens and heavy metals in plastics)

LITERATURE CITED

1. World Commission on Environment and Development, Our Common Future, Oxford University Press, New York, 1987.
2. Owen, David, Green Metropolis: Why Living Smaller, Living Closer, and Driving Less are the Keys to Sustainability, Riverhead, New York, 2009.
3. Raven, Peter H., Linda R. Berg, and David M. Hassenzahl, Environment, 6th ed., Wiley, 2008.
4. Carson, Rachel, Silent Spring, Fawcett Crest, New York, 1964.
5. Hardin, Garrett, “The Tragedy of the Commons,” Science, 162, 1243 (1968).
6. Duncan, Dayton, Miles from Nowhere, Viking, New York, 1994, p. 145.
7. Baum, Rudy M., “Sustainability: Learning to Live Off the Sun in Real Time,” Chemical and Engineering News, 86, 42-46 (2008).
8. Anastas, Paul, Irvin J. Levy, and Kathryn E. Parent, Green Chemistry Education: Changing the Course of Chemistry, Oxford University Press, Oxford, UK, 2009.
9. Anastas, Paul T., and John C. Warner, Green Chemistry: Theory and Practice, Oxford University Press, Oxford, UK, 2009.
10. Jasch, Christine, Environmental and Material Flow Cost Accounting: Principles and Procedures (Eco-Efficiency in Industry and Science), Springer, Berlin, 2008.
Access to and use of the internet is assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text, so in some cases there may be several “right” answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, it can be considered to be “right.”

1. According to some environmentalists, Earth is now entering the Anthropocene epoch. What is an epoch? In which epoch have humans lived until now? What are some of the past epochs that Earth has been through? Approximately when was the term Anthropocene coined and who is responsible for it?
2. It is noted in this chapter that the narrow layer between the geosphere and atmosphere where plants grow is an important environmental interface. The microclimate in this interface may vary significantly in its characteristics from the microclimate of the atmosphere just above it. What is microclimate? How does the microclimate in the narrow layer just described tend to vary from the climate just above it?
3. Thin layers and interfaces are very important in Earth's environment. Describe some of these layers.
4. The assertion has been made that if Earth were a classroom globe, the layer of soil covering it would be about as thick as the dimensions of a human cell. Assume that a classroom globe is 25 centimeters in diameter. Look up Earth's dimensions and, using the assertion made above, calculate the average thickness of soil in cm.
5. What was the London smog disaster? When did it occur? What interaction between the atmosphere and anthrosphere caused it? Approximately how many people died as a result?
6. Approximately when did the steam engine become a practical machine? How did it enable the industrial revolution to occur? What are the analogies with current conditions in which increasingly large and sophisticated farm machinery is enabling an ongoing revolution in agriculture?
7. What was the Love Canal affair and how did it likely influence environmental laws and implementation of environmental regulations in the U.S.?
8. One of the most active areas in energy development in many nations including the U.S. is the development of new sources of natural gas. What is happening in the area of natural gas utilization? Does burning natural gas emit carbon dioxide to the atmosphere? How is it superior in that respect to coal and petroleum? What is meant by the term “bridging fuel” as applied to natural gas?
9. How are desertification and deforestation related to global warming? How do they contribute to each other? How does destruction of a rain forest contribute carbon dioxide to the atmosphere?
10. The process for chemical fixation of atmospheric nitrogen developed by Bosch and Haber in the early 1900s led to the ability to make huge quantities of explosives that subsequently took millions of lives in warfare. However, the initial product, NH3, is not explosive. Look up the formulas of several common explosives and suggest what is done to use NH3 to make explosives.
11. Secretary of Energy Steven Chu has suggested three areas in which Nobel-level breakthroughs are needed in the achievement of energy sustainability. Considering conditions on Earth and the rate of depletion of natural capital, suggest other areas in the general area of sustainability in which breakthroughs are needed.
12. Of all nations, Brazil has been the most successful in using fuels from biological sources as renewable energy sources. What are the conditions in Brazil that have made that possible?
13. What are the major crops that enabled humans to transition from hunter/gatherer societies to agricultural societies? Where is it believed that this transition first took place and how long ago was it?
14. How does the photochemical smog that plagues Los Angeles, Mexico City, and many other urban areas around the world illustrate the law of unintended consequences and revenge effects?
15. A basic premise of green science and technology is that “human welfare must be measured in terms of quality of life, not just acquisition of material possessions.” Suggest how the dwellings of humans and their living surroundings in general might reflect such a transition.
16. Figure 1.5 reflects various levels of materials use in which the innermost loops are most desirable. The most efficient materials use is product reuse, in which a product or component is put directly back into the manufacturing loop. Suggest how a high level of component reuse at the manufacturing site might in fact reflect a less than optimum manufacturing process.
17. Gross domestic product per unit of energy use is a measure of the efficiency of the economic systems of various countries and probably reflects the degree to which needed goods and services are provided relative to the burden on natural capital. Look up the rank of various nations with respect to this ratio. Are there any surprises in this list? Are there nations on this list with a high rank that are probably not very desirable in terms of living amenities, and others for which the opposite is true? Suggest two or three nations that have both a high rank and a generally high quality of life.
18. Much concern is being expressed over deteriorating infrastructure in the U.S. What is infrastructure and how may its degeneration contribute to a lack of sustainability?
19. Are paper grocery bags necessarily “green”? After doing some research on paper manufacture, suggest ways in which they may not be ideal for sustainability. What is a greener alternative?
20. One recipe for sustainability may be expressed as “electrons, not paper.” Suggest what is meant by this expression and how it may be implemented for sustainability.
“We have no choice but to deal with chemistry because all things are chemical. The human body itself is a remarkably complex chemical system composed of thousands of chemicals, the main one of which is water.”

02: The Key Role of Chemistry and Making Chemistry Green

Chapter 1 has provided an overview of environmental science and sustainability in general. Sustainability is all about the way that we deal — often poorly and wastefully — with matter and energy. As the science of matter and how it interacts with energy, chemistry is a crucial science in sustainability. Most of this book is about the sustainable practice of chemistry.

Many people are freaked out by the idea of chemistry and try to avoid it. But avoiding chemistry is impossible. That is because all matter, all things, the air we must breathe, the water we must drink, and all living organisms are made of chemicals. People who try to avoid all things that they regard as chemical may fail to realize that chemical processes are continuously being carried out in their own bodies, processes that far surpass in complexity and variety those that occur in chemical manufacturing operations. So, even those people who want to do so cannot avoid chemistry; they are, themselves, complex chemical factories. The best course of action with anything that cannot be avoided and that might have an important influence on our lives (one's chemistry professor may come to mind) is to try to understand it and to deal with it. To gain an understanding of chemistry is probably one of the main reasons why you are reading this book.

As one of its major functions, this book seeks to present a body of chemical knowledge from the most fundamental level within a framework of the relationship of chemical science to human beings, their surroundings, and their environment. Face it: the study of chemistry, based upon facts about elements, atoms, compounds, molecules, chemical reactions, and other fundamental concepts needed to understand this science, though enticing to some, is found by many to be less than exciting. However, these concepts and many more are essential to a meaningful understanding of chemistry. Anyone interested in green chemistry clearly wants to know how chemistry influences people in the world around us. So this book discusses real-world chemistry, introducing chemical principles as needed.

During the approximately two centuries that chemical science has been practiced on an ever-increasing scale, it has enabled the production of a wide variety of goods that are valued by humans. These include such things as pharmaceuticals that have improved health and extended life, fertilizers that have greatly increased food productivity and prevented widespread starvation, and semiconductors that have made possible computers and other electronic devices. Without the persistent efforts of chemists and the enormous productivity of the chemical industry, nothing approaching the high standard of living enjoyed in modern industrialized societies would be possible. But there can be no denying that in years past, and even at present, chemistry has been misused in many respects, such as the release of pollutants and toxic substances and the production of nonbiodegradable materials, resulting in harm to the environment and living things, including humans.
It is now obvious that chemical science must be turned away from emphasis upon the exploitation of limited resources and the production of increasing amounts of products that ultimately end up as waste, and toward the application of chemical science in ways that provide for human needs without damaging the Earth support system and depleting its natural capital (defined in Chapter 1) upon which all living things depend. Fortunately, the practice of chemical science and industry is moving steadily in the direction of environmental friendliness and resource sustainability. The practice of chemistry in a manner that maximizes its benefits while eliminating or at least greatly reducing its adverse impacts has come to be known as green chemistry, the central theme of this book.

As will be seen in later chapters, the practice of chemistry is divided into several main categories. Most elements other than carbon are the concern of inorganic chemistry. Common examples of inorganic chemicals are water, salt (sodium chloride), the air pollutant sulfur dioxide, and lime. Carbon occupies a special place in chemistry because it is so versatile in the kinds of chemical species (compounds) that it forms. Most of the tens of millions of known chemicals are substances based on carbon. These compounds are organic chemicals, addressed by the subject of organic chemistry. The unique chemistry of carbon is addressed specifically in Chapter 6, “The Wonderful World of Carbon: Organic Chemistry and Biochemicals.” The underlying theory and physical phenomena that explain chemical processes are the concern of physical chemistry. Living organisms carry out a vast variety of chemical processes that are important in green chemistry and environmental chemistry. The chemistry that living organisms perform is biochemistry, addressed specifically in Chapter 7, “Chemistry of Life and Green Chemistry.” It is always important to know the identities and quantities of the various chemical species present in a system, including environmental systems. Often the quantities of significant chemical species are very low, so sophisticated means must be available to detect and quantify them. The branch of chemistry dealing with the determination of the kinds and quantities of chemical species is analytical chemistry.

As the chemical industry developed and grew during the early and mid-1900s, most practitioners of chemistry remained unconcerned with and largely ignorant of the potential for harm — particularly damage to the outside environment — of their products and processes. Environmental chemistry was essentially unknown and certainly not practiced by most chemists. Incidents of pollution and environmental damage, which were many and severe, were commonly accepted as a cost of doing business or blamed upon the industrial or commercial sectors. The unfortunate attitude that prevailed is summarized in a quote from a standard book on industrial chemistry from 1954 (American Chemical Industry — A History, W. Haynes, Van Nostrand Publishers, 1954): “By sensible definition any by-product of a chemical operation for which there is no profitable use is a waste. The most convenient, least expensive way of disposing of said waste — up the chimney or down the river — is best.” Nobody is more qualified to accept responsibility for environmental damage from chemical products or processes than chemists, who have the knowledge to understand how such harmful effects come about.
As the detrimental effects of chemical manufacture and use became more obvious and severe, chemists were forced, often reluctantly, to deal with them. At present, enlightened chemists and chemical engineers do not view the practice of environmentally beneficial chemistry and manufacturing as a burden, but rather as an opportunity that challenges human imagination and ingenuity.
It was noted in Section 1.2 that the environment consists of our surroundings, which may affect us and which, in turn, we may affect. Obviously the chemical nature and processes of matter in the environment are important. Compared to the generally well defined processes that chemists study in the laboratory, those that occur in the environment are rather complex and must be viewed in terms of simplified models. A large part of this complexity is due to the fact that the environment consists of the five overlapping and interacting spheres — the atmosphere, the hydrosphere, the geosphere, the biosphere, and the anthrosphere — mentioned in Section 1.2 and shown from the viewpoint of their chemical interactions in Figure \(1\). In order to better understand the chemistry that occurs in these spheres, they are briefly described here.

The atmosphere is a very thin layer compared to the size of Earth, with most atmospheric gases lying within a few kilometers of sea level. In addition to providing oxygen for living organisms, the atmosphere provides carbon dioxide required for plant photosynthesis and nitrogen that organisms use to make proteins. The atmosphere serves a vital protective function in that it absorbs highly energetic ultraviolet radiation from the sun that would kill living organisms exposed to it. A particularly important part of the atmosphere in this respect is the stratospheric layer of ozone, an ultraviolet-absorbing form of elemental oxygen. Because of its ability to absorb infrared radiation, by which Earth loses the energy that it absorbs from the sun, the atmosphere stabilizes Earth's surface temperature. The atmosphere also serves as the medium by which the solar energy that falls with greatest intensity in equatorial regions is redistributed away from the Equator. It is the medium in which water vapor evaporated from oceans, the first step in the hydrologic cycle, is transported over land masses to fall as rain.

Earth's water is contained in the hydrosphere. Although frequent reports of torrential rainstorms and flooded rivers produced by massive storms might give the impression that a large fraction of Earth's water is fresh water, more than 97% of it is seawater in the oceans. Most of the remaining fresh water is present as ice in polar ice caps and glaciers, and a small fraction of the total water is present as vapor in the atmosphere. The remaining liquid fresh water is that available for growing plants and other organisms and for industrial uses. This water may be present on the surface in lakes, reservoirs, and streams, or it may be underground as groundwater.

The solid part of Earth, the geosphere, includes all rocks and minerals. A particularly important part of the geosphere is soil, which supports plant growth, the basis of food for all living organisms. The lithosphere is a relatively thin solid layer extending from Earth's surface to depths of 50-100 km. The even thinner outer skin of the lithosphere, known as the crust, is composed of relatively lighter silicate-based minerals. It is the part of the geosphere that is available to interact with the other environmental spheres and that is accessible to humans.

The biosphere is composed of all living organisms. For the most part, these organisms live on the surface of the geosphere on soil, or just below the soil surface. The oceans and other bodies of water support high populations of organisms, and some life forms exist at considerable depths on ocean floors.
In general, though, the biosphere is a very thin layer at the interface of the geosphere with the atmosphere. The biosphere is involved with the geosphere, hydrosphere, and atmosphere in biogeochemical cycles through which materials such as nitrogen and carbon are circulated.

Through human activities, the anthrosphere, that part of the environment made and operated by humans, has developed strong interactions with the other environmental spheres. Many examples of these interactions could be cited. By cultivating large areas of soil for domestic crops, humans modify the geosphere and influence the kinds of organisms in the biosphere. Humans divert water from its natural flow, use it, sometimes contaminate it, then return it to the hydrosphere. Emissions of particles to the atmosphere by human activities affect visibility and other characteristics of the atmosphere. The emission of large quantities of carbon dioxide to the atmosphere by combustion of fossil fuels may be modifying the heat-absorbing characteristics of the atmosphere to the extent that global warming is almost certainly taking place. The anthrosphere perturbs various biogeochemical cycles. The effect of the anthrosphere over the last two centuries in areas such as burning large quantities of fossil fuels is especially pronounced upon the atmosphere and has the potential to change the nature of Earth significantly. According to Nobel Laureate Paul J. Crutzen of the Max Planck Institute for Chemistry, Mainz, Germany, this impact is so great that it will lead to a new global epoch to replace the Holocene epoch that has been in effect for the 10,000 years since the last Ice Age. Dr. Crutzen has coined the term Anthropocene (from anthropogenic) to describe the new epoch that is upon us.
The practice of green chemistry must be based upon environmental chemistry. This important branch of chemical science is defined as the study of the sources, reactions, transport, effects, and fates of chemical species in water, soil, air, and living environments and the effects of technology thereon.1 Figure 2.2 illustrates this definition of environmental chemistry with an important type of environmental chemical species. In this example, two of the ingredients required for the formation of photochemical smog — nitric oxide and hydrocarbons — are emitted to the atmosphere from vehicles and transported through the atmosphere by wind and air currents. In the atmosphere, energy from sunlight brings about photochemical reactions that convert nitric oxide and hydrocarbons to ozone, noxious organic compounds, and particulate matter, all characteristic of photochemical smog. Various harmful effects are manifested, such as visibility-obscuring particles in the atmosphere and ozone, which is unhealthy when inhaled by humans and toxic to plants. Finally, the smog products end up on soil, deposited on plant surfaces, or in bodies of water.

Figure 2.2.1, showing the five environmental spheres, may provide an idea of the complexity of environmental chemistry as a discipline. Enormous quantities of materials and energy are continually exchanged among the five environmental spheres. In addition to variable flows of materials, there are variations in temperature, intensity of solar radiation, mixing, and other factors, all of which strongly influence chemical conditions and behavior. Throughout this book the role of environmental chemistry in the practice of green chemistry is emphasized. Green chemistry is practiced to minimize the impact of chemicals and chemical processes upon humans, other living organisms, and the environment as a whole. It is only within the framework of a knowledge of environmental chemistry that green chemistry can be successfully practiced.

There are several highly interconnected and overlapping categories of environmental chemistry. Aquatic chemistry deals with chemical phenomena and processes in water. Aquatic chemical processes are very strongly influenced by microorganisms in the water, so there is a strong connection between the hydrosphere and the biosphere insofar as such processes are concerned. Aquatic chemical processes occur largely in “natural waters” consisting of water in oceans, bodies of fresh water, streams, and underground aquifers. These are places in which the hydrosphere can interact with the geosphere, biosphere, and atmosphere and is often subjected to anthrospheric influences. Aspects of aquatic chemistry are considered in various parts of this book and are addressed specifically in Chapter 9, “Water, the Ultimate Green Substance.”

Atmospheric chemistry is the branch of environmental chemistry that considers chemical phenomena in the atmosphere. Two things that make this chemistry unique are the extreme dilution of important atmospheric chemicals and the influence of photochemistry. Photochemistry occurs when molecules absorb photons of high-energy visible light or ultraviolet radiation, become energized (“excited”), and undergo reactions that lead to a variety of products, such as photochemical smog. In addition to reactions that occur in the gas phase, many important atmospheric chemical phenomena take place on the surfaces of very small solid particles suspended in the atmosphere and in droplets of liquid in the atmosphere.
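As a minimal sketch of the photochemistry involved (two standard textbook reactions abstracted from a much larger mechanism), nitrogen dioxide formed from emitted nitric oxide absorbs sunlight and drives ozone formation:

\[ NO_2 + h\nu \rightarrow NO + O \]

\[ O + O_2 + M \rightarrow O_3 + M \]

Here \(h\nu\) represents a photon of solar radiation, and M is a third body, typically an N2 or O2 molecule, that carries away the excess energy of the combination reaction.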
Although no significant atmospheric chemical reactions are mediated by organisms in the atmosphere itself, microorganisms play a strong role in determining which species get into the atmosphere. As examples, bacteria growing in the absence of oxygen, such as in cows' stomachs and under water in rice paddies, are the single greatest source of hydrocarbons in the atmosphere because of the large amounts of methane that they emit, and the greatest source of organic sulfur compounds in the atmosphere is microorganisms in the oceans that emit dimethyl sulfide. Atmospheric chemistry is addressed specifically in Chapter 10, “Blue Skies for a Green Environment.”

Chemical processes that occur in the geosphere involving minerals and their interactions with water, air, and living organisms are addressed by the topic of geochemistry. A special branch of geochemistry, soil chemistry, deals with the chemical and biochemical processes that occur in soil. Aspects of geochemistry are explained in Chapter 11, “The Geosphere and a Green Earth,” and soil and agricultural chemistry are covered in Chapter 12, “The Biosphere and Feeding a Hungry World.”

Environmental biochemistry addresses biologically mediated processes that occur in the environment. Such processes include, as examples, the biodegradation of organic waste materials in soil or water and processes within biogeochemical cycles, such as denitrification, which returns chemically bound nitrogen to the atmosphere as nitrogen gas. The basics of biochemistry are presented in Chapter 7, “The Chemistry of Life and Green Chemistry,” and other aspects of biochemistry are presented in Chapter 12, “The Biosphere and Feeding a Hungry World.” Chapter 14, “Feeding the Anthrosphere: Utilizing Renewable and Biological Materials,” discusses how chemical processes carried out by organisms can produce material feedstocks needed for the practice of green chemistry. The toxic effects of chemicals are of utmost concern to chemists and the public. Chapter 16, “Terrorism, Toxicity, and Vulnerability: Green Chemistry and Technology in Defense of Human Welfare,” deals with aspects of these toxic effects and discusses toxicological chemistry.

Although there is no formally recognized area of chemistry known as “anthrospheric chemistry,” most of chemical science and engineering developed to date deals with chemistry carried out in the anthrosphere. Included is industrial chemistry, which is very closely tied to the practice of green chemistry. A good way to view “anthrospheric chemistry” from a green chemistry perspective is within the context of industrial ecology. Industrial ecology considers industrial systems in a manner analogous to natural ecosystems. In a system of industrial ecology, various manufacturing and processing operations carry out “industrial metabolism” on materials. A successful industrial ecosystem is well balanced and diverse, with various enterprises that generate products for each other and use each other's products and potential wastes. A well-functioning industrial ecosystem recycles materials to the maximum extent possible and produces little — ideally no — waste. Therefore, a good industrial ecosystem is a green chemical system. Industrial ecology and anthrospheric environmental chemistry are addressed in Chapter 13, “The Anthrosphere, Green Chemistry, and Industrial Ecology.”
Environmental chemistry has developed in response to problems and concerns regarding environmental pollution. Although awareness of chemical pollution had increased significantly in the two decades following World War II, the modern environmental movement dates from the 1962 publication of Rachel Carson's classic book Silent Spring. The main theme of this book was the accumulation of DDT and other persistent, mostly pesticidal, chemicals through the food chain, which caused birds at the end of the chain to lay soft-shelled eggs that failed to yield viable young. The implication was that substances harming bird populations might harm humans as well. Around the time of the publication of Silent Spring another tragedy caused great concern regarding the potential effects of chemicals: the occurrence of approximately 10,000 births of children with badly deformed or missing limbs as a result of their mothers having taken the pharmaceutical thalidomide to alleviate the effects of morning sickness at an early stage of pregnancy.

The 1960s were a decade of high concern and significant legislative action in the environmental arena, aimed particularly at the control of water and air pollutants. By around 1970, it had become evident that the improper disposal of chemicals to the geosphere was also a matter of significant concern. Although many incidents of such disposal were revealed, the one that really brought the problem into sharp focus was the Love Canal site in Niagara Falls, New York. This waste dump was constructed in an old abandoned canal in which large quantities of approximately 80 waste chemicals had been placed for about two decades starting in the 1930s. It had been sealed with a clay cap and given to the city. A school had been built on the site and housing constructed around it. By 1971 it became obvious that the discarded chemicals were leaking through the cap. This problem led eventually to the expenditure of many millions of dollars to remediate the site and to buy out and relocate approximately one thousand households. More than any other single incident, the Love Canal problem was responsible for the passage of legislation in the U.S., including Superfund, to clean up hazardous waste sites and to prevent their production in the future.

By about 1970 it was generally recognized that pollution of air, water, and land was reaching intolerable levels. As a result, various countries passed and implemented laws designed to reduce pollutants and to clean up waste chemical sites, at a cost that has easily exceeded one trillion dollars globally. In many respects, this investment has been strikingly successful. Streams that had deteriorated to little more than stinking waste drainage ditches (the Cuyahoga River in Cleveland, Ohio, once caught on fire from petroleum waste floating on its surface) have been restored to a healthy and productive condition. Despite a much increased population, the air quality in smog-prone Southern California has improved markedly. A number of dangerous waste disposal sites have been cleaned up. Human exposure to toxic substances in the workplace, in the environment, and in consumer products has been greatly reduced. The measures taken and regulations put in place have prevented devastating environmental problems from occurring.
Initially, serious efforts to control pollution were based on a command and control approach, which specifies maximum concentration guideline levels of substances that can be allowed in the atmosphere or water and places limits on the amounts or concentrations of pollutants that can be discharged in waste streams. Command and control efforts to diminish pollution have resulted in implementation of various technologies to remove or neutralize pollutants in potential waste streams and stack gases, so-called end-of-pipe measures. As a result, numerous techniques, such as chemical precipitation of water pollutants, neutralization of acidic pollutants, stack gas scrubbing, and waste immobilization, have been developed and refined to deal with pollutants after they are produced.

Release of chemicals to the environment is now tracked in the U.S. through the Toxics Release Inventory (TRI) under requirements of the Emergency Planning and Community Right to Know Act, which requires that information be provided regarding the release of more than 300 chemicals. The release of as much as one billion kilograms of these chemicals has been reported in the U.S. during a single year. Not surprisingly, the chemical industry releases the most such substances, followed by primary metals and paper manufacture. Significant amounts are emitted from transportation equipment, plastics, and fabricated metals, with smaller quantities from a variety of other enterprises. Although the quantities of chemicals released are high, they are decreasing, and the publicity resulting from the required publication of these releases has been a major factor in decreasing the amounts of chemicals released.

Although much maligned, the various pollution control measures implemented in response to command and control regulation have reduced wastes and improved environmental quality. Regulation-based pollution control has clearly been a success and well worth the expense and effort. However, it is much better to prevent the production of pollutants than to have to deal with them after they are made. This was recognized in the United States with the passage of the 1990 Pollution Prevention Act, which recognized that source reduction is fundamentally different from and more desirable than waste management and pollution control. This act explicitly states that, wherever possible, wastes are not to be generated and their quantities are to be minimized. The means for accomplishing this objective can range from very simple measures, such as careful inventory control and reduction of solvent losses due to evaporation, to much more sophisticated and drastic approaches, including complete redesign of manufacturing processes with waste minimization as a top priority. The means for preventing pollution are best implemented through the practice of green chemistry, which is discussed in detail in the following section.
The limitations of a command and control system for environmental protection have become more obvious even as the system has become more successful. In industrialized societies with good, well-enforced regulations, most of the easy and inexpensive measures that can be taken to reduce environmental pollution and exposure to harmful chemicals have been implemented. Therefore, small increases in environmental protection now require relatively large investments of money and effort. Is there a better way? There is, indeed: the practice of green chemistry.

Green chemistry can be defined as the practice of chemical science and manufacturing in a manner that is sustainable, safe, and non-polluting and that consumes minimum amounts of materials and energy while producing little or no waste material. This definition of green chemistry is illustrated in Figure \(1\). The practice of green chemistry begins with the recognition that the production, processing, use, and eventual disposal of chemical products may cause harm when performed incorrectly. In accomplishing its objectives, green chemistry and green chemical engineering may modify or totally redesign chemical products and processes with the objective of minimizing wastes and the use or generation of particularly dangerous materials. Those who practice green chemistry recognize that they are responsible for any effects on the world that their chemicals or chemical processes may have. Far from being economically regressive and a drag on profits, green chemistry is about increasing profits and promoting innovation while protecting human health and the environment.

To a degree, we are still finding out what green chemistry is, because it is a rapidly evolving and developing subdiscipline of chemistry, and it is a very exciting time for its practitioners. Basically, green chemistry harnesses a vast body of chemical knowledge and applies it to the production, use, and ultimate disposal of chemicals in a way that minimizes the consumption of materials, the exposure of living organisms, including humans, to toxic substances, and damage to the environment. And it does so in a manner that is economically feasible and cost effective. In one sense, green chemistry is the most efficient possible practice of chemistry and the least costly when all of the costs of doing chemistry, including hazards and potential environmental damage, are taken into account.

Green chemistry is sustainable chemistry. There are several important respects in which green chemistry is sustainable:

• Economic: At a high level of sophistication, green chemistry normally costs less in strictly economic terms (to say nothing of environmental costs) than chemistry as it is normally practiced.
• Materials: By efficiently using materials, maximizing recycling, and minimizing the use of virgin raw materials, green chemistry is sustainable with respect to materials.
• Waste: By reducing wastes insofar as possible, or even totally eliminating their production, green chemistry is sustainable with respect to wastes.

2.06: Green Chemistry and Synthetic Chemistry

Synthetic chemistry is the branch of chemical science involved with developing means of making new chemicals and developing improved ways of synthesizing existing chemicals. A key aspect of green chemistry is the involvement of synthetic chemists in the practice of environmental chemistry.
Synthetic chemists, whose major objective has always been to make new substances and to make them cheaper and better, have come relatively late to the practice of environmental chemistry. Other areas of chemistry have been involved much longer in pollution prevention and environmental protection. From the beginning, analytical chemistry has been a key to discovering and monitoring the severity of pollution problems. Physical chemistry has played a strong role in explaining and modeling environmental chemical phenomena; its application to atmospheric photochemical reactions has been especially useful in explaining and preventing harmful atmospheric chemical effects, including photochemical smog formation and stratospheric ozone depletion. Other branches of chemistry have been instrumental in studying various environmental chemical phenomena. Now the time has arrived for the synthetic chemists, those who make chemicals and whose activities drive chemical processes, to become intimately involved in making the manufacture, use, and ultimate disposal of chemicals as environmentally friendly as possible.

Before environmental, health, and safety issues gained their current prominence, the economic aspects of chemical manufacture and distribution were relatively simple and straightforward: costs of feedstock, energy requirements, and marketability of product. Now, however, costs must also include those arising from regulatory compliance, liability, end-of-pipe waste treatment, and waste disposal. By eliminating or greatly reducing the use of toxic or hazardous feedstocks and catalysts and the generation of dangerous intermediates and byproducts, green chemistry eliminates or greatly reduces the additional costs that have come to be associated with meeting the environmental and safety requirements of conventional chemical manufacture.

As illustrated in Figure \(1\), there are two general and often complementary approaches to the implementation of green chemistry in chemical synthesis, both of which challenge the imaginations and ingenuity of chemists and chemical engineers. The first is to keep using existing feedstocks but make them by more environmentally benign, “greener,” processes. The second is to substitute other feedstocks that are made by environmentally friendly means. In some cases, a combination of the two approaches is used.

Yield and Atom Economy

Traditionally, synthetic chemists have used yield, defined as the percentage of the theoretically possible quantity of product actually obtained, to measure the degree to which a chemical reaction or synthesis goes to completion. For example, if a chemical reaction shows that 100 grams of product should be produced, but only 85 grams is produced, the yield is 85%. A synthesis with a high yield may still generate significant quantities of useless byproducts if the reaction produces them as part of the synthesis process. Instead of yield, green chemistry emphasizes atom economy, the fraction of reactant material that actually ends up in the final product. With 100 percent atom economy, all of the material that goes into the synthesis process is incorporated into the product. For efficient utilization of raw materials, a 100% atom economy process is most desirable. Figure 2.7.1 illustrates the concepts of yield and atom economy.
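As a worked illustration (an example added here for clarity, not taken from the figure), atom economy can be estimated from molar masses. For the hydration of ethylene to ethanol, an addition reaction in which every reactant atom appears in the product:

\[ C_2H_4 + H_2O \rightarrow C_2H_5OH \]

\[ \text{atom economy} = \frac{46 \; \text{g/mol} \; (C_2H_5OH)}{28 \; \text{g/mol} \; (C_2H_4) + 18 \; \text{g/mol} \; (H_2O)} \times 100\% = 100\% \]

By contrast, a substitution or elimination route to the same product would leave part of the reactant mass in byproducts, so its atom economy would be below 100% even if the yield were high.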
2.07: Reduction of Risk - Hazard and Exposure
A major goal in the manufacture and use of commercial products, and, indeed, in practically all areas of human endeavor, is the reduction of risk. There are two major aspects of risk: the hazard presented by a product or process and the exposure of humans or other potential targets to that hazard.

\[\textrm{Risk} = f(\textrm{hazard} \times \textrm{exposure})\]

This relationship simply states that risk is a function of hazard times exposure. It shows that risk can be reduced by a reduction of hazard, a reduction of exposure, and various combinations of both.

The command and control approach to reducing risk has concentrated upon reduction of exposure. Such efforts have used various kinds of controls and protective measures to limit exposure. The most common example of such a measure in the academic chemistry laboratory is the wearing of goggles to protect the eyes. Goggles will not by themselves prevent acid from splashing into the face of a student, but they do prevent the acid from contacting fragile eye tissue. Explosion shields will not prevent explosions, but they do retain glass fragments that might harm the chemist or others in the vicinity.

Reduction of exposure is unquestionably effective in preventing injury and harm. However, it does require constant vigilance and even nagging of personnel, as any laboratory instructor charged with making laboratory students wear their safety goggles at all times will attest. It does not protect the unprotected, such as a visitor who may walk bare-faced into a chemical laboratory ignoring the warnings for required eye protection. On a larger scale, protective measures may be very effective for workers in a chemical manufacturing operation but useless to those outside the area or to the environment beyond the plant walls without such protection. Protective measures are most effective against acute effects, but less so against long-term chronic exposures that may cause toxic responses over a period of many years. Finally, protective equipment can fail, and there is always the possibility that humans will not use it properly.

Where feasible, hazard reduction is a much more certain way of reducing risk than is exposure reduction. The human factors that play so prominently in successfully limiting exposure and that require a conscious, constant effort are much less crucial when hazards have been reduced. Compare, for example, the use of a volatile, flammable, somewhat toxic organic solvent used for cleaning and degreasing machined metal parts with that of a water solution of a nontoxic cleaning agent used for the same purpose. To work safely around the solvent requires an unceasing effort and constant vigilance to avoid such hazards as formation of explosive mixtures with air, presence of ignition sources that could result in a fire, and excessive exposure by inhalation or absorption through skin that might cause peripheral neuropathy (a nerve disorder) in workers. Failure of protective measures can result in a bad accident or serious harm to worker health. The water-based cleaning solution, however, would not present any of these hazards, so failure of protective measures would not create a problem. Normally, measures taken to reduce risk by reducing exposure have an economic cost that cannot be reclaimed in lower production costs or enhanced product value. Of course, failure to reduce exposure can have direct, high economic costs in areas such as higher claims for worker compensation.
In contrast, hazard reduction often has the potential to substantially reduce operating costs. Safer feedstocks are often less costly as raw materials. The elimination of costly control measures can lower costs overall. Again, comparing an organic solvent with a water-based cleaning solution, the organic solvent is almost certain to cost more than the aqueous solution containing relatively low concentrations of detergents and other additives. Whereas the organic solvent will at least require purification for recycling and perhaps even expensive disposal as a hazardous waste, the water solution may be purified by relatively simple processes, perhaps even biological treatment, then safely discharged as wastewater to a municipal wastewater treatment facility. It should be kept in mind, however, that not all low-hazard materials are cheap; some may be significantly more expensive than their more hazardous alternatives. And, in some cases, nonhazardous alternatives simply do not exist.
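The hazard-versus-exposure comparison above can be put in toy numerical form. The multiplicative model and all of the numbers below are illustrative assumptions, not values from the text; the point is only that once hazard itself is reduced, a failure of exposure controls no longer matters much.

```python
def risk(hazard, exposure):
    """Toy model of Risk = f(hazard x exposure); arbitrary units."""
    return hazard * exposure

controlled_solvent = risk(hazard=10.0, exposure=2.0)   # hazardous solvent, controls working
failed_controls = risk(hazard=10.0, exposure=8.0)      # same solvent, controls fail
aqueous_cleaner = risk(hazard=1.0, exposure=8.0)       # low-hazard cleaner, no controls

print(controlled_solvent, failed_controls, aqueous_cleaner)   # 20.0 80.0 8.0
```

Even with completely failed exposure controls, the low-hazard alternative in this toy model presents less risk than the hazardous solvent does under working controls.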
2.08: The Risks of No Risks
There are limits to the reduction in risk beyond which further efforts become counterproductive. As in other areas of endeavor, there are circumstances in which there is no choice but to work with hazardous substances. Some things that are inherently dangerous are rendered safe by rigorous training, constant attention to potential hazards, and understanding of hazards and the best ways to deal with them. Consider the analogy of commercial flight. When a large passenger aircraft lands, typically 100 tons of aluminum, steel, flammable fuel, and fragile human flesh traveling at twice the legal interstate speed limit for automobiles come into sudden contact, through air-filled rubber tires, with an unforgiving concrete runway. That procedure is inherently dangerous! But it is carried out millions of times per year throughout the world with but few injuries and fatalities, a tribute to the generally superb design, construction, and maintenance of aircraft and the excellent skills and training of aircrew. The same principles that make commercial air flight generally safe also apply to the handling of hazardous chemicals by properly trained personnel under carefully controlled conditions.

So, although much of this book is about risk reduction as it relates to chemistry, we must always be mindful of the risks of not taking risks. If we become so timid in all of our enterprises that we refuse to take risks, scientific and economic progress will stagnate. The U.S. space program is an example of an area in which progress has been made only by a willingness to take risks. However, progress has probably been slowed because of risk aversion resulting from previous accidents, especially the 1986 Challenger space shuttle tragedy. If we get to the point that no chemical can be made if its synthesis involves the use of a potentially toxic or otherwise hazardous substance, the progress of chemical science and the development of such beneficial products as new life-saving drugs or innovative chemicals for treating water pollutants may be held back. It may be argued that thermonuclear fusion entails significant risks as an energy source and that research on controlled thermonuclear fusion must therefore be stopped. But when that potential risk is balanced against the virtually certain risk of continuing to use fossil fuels that produce greenhouse gases that cause global climate warming, it seems sensible to at least continue research on controlled thermonuclear fusion energy sources. Another example is the use of thermal processes for treating hazardous wastes, somewhat risky because of the potential for the release of toxic substances or air pollutants, but still the best way to convert many kinds of hazardous wastes to innocuous materials.

2.09: Waste Prevention

Waste prevention is better than having to treat or clean up wastes. In the earlier years of chemical manufacture, the direct costs associated with producing large quantities of wastes were very low because such wastes were simply discarded into waterways, onto the ground, or into the air as stack emissions. With the passage and enforcement of environmental laws after about 1970, costs for waste treatment increased steadily. For example, General Electric announced in April 2010 that it had spent \$561 million in the first phase of dredging and removing PCBs from deposits in Hudson River sediments produced decades earlier when the extremely persistent PCBs were discarded to the river as wastes from the company’s manufacture of electrical equipment.
The cost of the second phase of cleanup of these wastes was projected to be much more than the \$561 million already spent. The cleanup of pollutants including asbestos, dioxins, pesticide manufacture residues, perchlorate, and mercury is costing various concerns hundreds of millions of dollars. The cleanup costs from the 2010 BP Deepwater Horizon oil well blowout in the Gulf of Mexico may eventually exceed the \$20 billion that the company initially set aside for the project.

By the year 2000 in the United States, the costs of complying with environmental and occupational health regulations had grown to a magnitude similar to that of research and development for industry as a whole. From a purely economic standpoint, therefore, a green chemistry approach that avoids these costs is very attractive, in addition to its large environmental benefits. Although the costs of such things as engineering controls, regulatory compliance, personnel protection, wastewater treatment, and safe disposal of hazardous solid wastes have certainly been worthwhile for society and the environment, they have become a large fraction of the overall expense of doing business. Companies must now do full cost accounting, taking into account the total costs of emissions, waste disposal, cleanup, and protection of personnel and the environment, none of which adds value to the final product.
2.10: Basic Principles of Green Chemistry
From the preceding discussion, it should be obvious that there are certain basic principles of green chemistry. Some publications recognize “the twelve principles of green chemistry.”2 This section addresses the main ones.

As anyone who has ever spilled the contents of a food container onto the floor well knows, it is better not to make a mess than to clean it up once made. As applied to green chemistry, this basic rule means that waste prevention is much better than waste cleanup. Failure to follow this simple rule has resulted in most of the troublesome hazardous waste sites that are causing problems throughout the world today. One of the most effective ways to prevent the generation of wastes is to make sure that, insofar as possible, all materials involved in making a product are incorporated into the final product. Therefore, the practice of green chemistry is largely about incorporating all raw materials into the product, if at all possible. We would not likely favor a food recipe that generated a lot of inedible byproduct. The same idea applies to chemical processes. In that respect, the concept of atom economy discussed in Section 2.6 is a key component of green chemistry.

The use or generation of substances that pose hazards to humans and the environment should be avoided. Such substances include toxic chemicals that may be hazardous to workers. They include substances that are likely to become air or water pollutants and harm the environment or organisms in the environment. Here the connection between green chemistry and environmental chemistry is especially strong. Chemical products should be as effective as possible for their designated purpose, but with minimum toxicity. The practice of green chemistry is making substantial progress in designing chemicals and new approaches to the use of chemicals such that effectiveness is retained and even enhanced while toxicity is reduced.

Chemical synthesis, as well as many manufacturing operations, makes use of auxiliary substances that are not part of the final product. In chemical synthesis, such substances include the solvents in which chemical reactions are carried out. Other examples are separating agents that enable separation of the product from other materials. Since these kinds of materials may end up as wastes or (in the case of some toxic solvents) pose health hazards, the use of auxiliary substances should be minimized and preferably totally avoided.

Energy consumption poses economic and environmental costs in virtually all synthesis and manufacturing processes. In a broader sense, the extraction of energy, such as fossil fuels pumped from or dug out of the ground, has significant potential to damage the environment. Therefore, energy requirements should be minimized. One way in which this can be done is through the use of processes that occur near ambient conditions, rather than at elevated temperature or pressure. One successful approach has been the use of biological processes, which, because of the conditions under which organisms grow, must occur at moderate temperatures and in the absence of toxic substances. Such processes are discussed further in Chapters 13 and 14.

Raw materials extracted from Earth are depletable in that there is a finite supply that cannot be replenished after it is used. So, wherever possible, renewable raw materials should be used instead of depletable feedstocks.
As discussed further in Chapter 14, biomass feedstocks are highly favored in those applications for which they work. For depletable feedstocks, recycling should be practiced to the maximum extent possible.

In the synthesis of an organic compound (see Chapter 6), it is often necessary to modify or protect groups on the organic molecule during the course of the synthesis. This often results in the generation of byproducts not incorporated into the final product, such as occurs when a protecting group is bonded to a specific location on a molecule, then removed when protection of the group is no longer needed. Since these processes generate byproducts that may require disposal, the use of protecting groups in synthesizing chemicals should be avoided insofar as possible. Reagents should be as selective as possible for their specific function. In chemical language, this is sometimes expressed as a preference for selective catalytic reagents over nonselective stoichiometric reagents.

Products that must be dispersed into the environment should be designed to break down rapidly into innocuous products. One of the oldest, but still one of the best, examples of this is the modification of the surfactant in household detergents, 15 or 20 years after the detergents were introduced for widespread consumption, to yield a product that is biodegradable. The poorly biodegradable surfactant initially used caused severe problems of foaming in wastewater treatment plants and contamination of water supplies. Synthesis of a biodegradable substitute solved the problem.

Exacting “real-time” control of chemical processes is essential for efficient, safe operation with minimum production of wastes. This goal has been made much more attainable by modern computerized controls. However, it requires accurate knowledge of the concentrations of materials in the system measured on a continuous basis. Therefore, the successful practice of green chemistry requires real-time, in-process monitoring techniques coupled with process control.

Accidents, such as spills, explosions, and fires, are major hazards in the chemical industry. Not only are these incidents potentially dangerous in their own right, they tend to spread toxic substances into the environment and increase the exposure of humans and other organisms to these substances. For this reason, it is best to avoid the use or generation of substances that are likely to react violently, burn, build up excessive pressures, or otherwise cause unforeseen incidents in the manufacturing process.

The principles outlined above are developed to a greater degree in the remainder of the book. They should be kept in mind in covering later sections.
2.11: Some Things to Know About Chemistry before You Even Start
Chapters 3-7 explain the basic principles of chemistry as they relate to green chemistry. For even greater detail on the basics of chemistry, the reader is referred to a book on that subject.3 However, at this point it is useful to have a brief overview, in a sense a minicourse in chemistry, that provides basic definitions and concepts such as chemical compounds, chemical formulas, and chemical reactions before they are covered in detail in the later chapters.

All chemicals are composed of fewer than 100 naturally occurring fundamental kinds of matter called elements. Humans have succeeded in making about 30 artificial elements since the late 1930s, but the amounts of these are insignificant compared to the total of known chemicals. Elements, in turn, are composed of very small entities called atoms. Atoms of the same element may differ a bit in their masses, but all atoms of the same element behave the same chemically. So we can logically begin the study of chemistry with the atoms that make up the elements of which all matter is composed.

Each atom of a particular element is chemically identical to every other atom. Each element is given an atomic number specific to the element, ranging from 1 to more than 100. The atomic number of an element is equal to the number of extremely small, positively charged protons contained in the nucleus located in the center of each atom of the element. Each electrically neutral atom has the same number of electrons as it has protons. The electrons are negatively charged and are in rapid motion around the nucleus, constituting a cloud of negative charge that makes up most of the volume of the atom. In addition to its atomic number, each element has a name and a chemical symbol, such as carbon, C; potassium, K (for its Latin name kalium); or cadmium, Cd. Each element also has an atomic mass (atomic weight). The atomic mass of each element is the average mass of all atoms of the element, so it is generally not a whole number. Atoms of most elements consist of two or more isotopes. All isotopes of the same element have identical chemical properties but differ in mass because of the presence in their nuclei of differing numbers of electrically neutral neutrons (see Chapter 3).

2.12: Combining Atoms to Make Molecules and Compounds

About the only atoms that exist alone are those of the noble gases, a group of elements including helium, neon, argon, and radon located on the far right of the periodic table. Even the simple hydrogen atom in the elemental state is joined together with another hydrogen atom. Two or more uncharged atoms bonded together are called a molecule. As illustrated in Figure \(1\), the hydrogen molecule consists of 2 hydrogen atoms, as denoted by the chemical formula of elemental hydrogen, H2. This formula states that a molecule of elemental hydrogen consists of 2 atoms of hydrogen, shown by the subscript of 2. The atoms are joined together by a chemical bond. As explained in Chapter 3, a hydrogen atom consists of a very small positively charged nucleus surrounded by a much larger cloud of negative charge from a single, rapidly moving electron. But hydrogen atoms are more “content” with 2 electrons. So two hydrogen atoms share their two electrons, constituting the chemical bond in the hydrogen molecule. A bond composed of shared electrons is a covalent bond.
Chemical Compounds

The example just discussed was one in which atoms of the same element, hydrogen, join together to form a molecule. Most molecules consist of atoms of different elements joined together. An example of such a molecule is that of water, chemical formula H2O. This formula stands for the fact that the water molecule consists of two hydrogen atoms bonded to one oxygen atom, O, where the absence of a subscript number after the O indicates that there is 1 oxygen atom. The water molecule is shown in Figure \(2\). Each of the hydrogen atoms is held to the oxygen atom in the water molecule by two shared electrons in a covalent bond. A material such as water in which two or more elements are bonded together is called a chemical compound. It is because of the enormous number of combinations of two or more atoms of different elements that it is possible to make 20 million or more chemical compounds from fewer than 100 elements.

Ionic Bonds

Two different molecules have just been discussed in which atoms are joined together by covalent bonds consisting of shared electrons. Another way in which atoms can be joined together is by the transfer of electrons from one atom to another. A single neutral atom has a number of electrons surrounding its nucleus that is the same as the number of protons in the nucleus of the atom. But if the atom loses one or more negatively charged electrons, it ends up with a net positive electrical charge and becomes a positively charged cation. An atom that has gained one or more negatively charged electrons attains a net negative charge and is called an anion. Cations and anions are attracted together in an ionic compound because of their opposite electrical charges. The oppositely charged ions are joined by ionic bonds in a crystalline lattice. Figure \(3\) shows the best-known ionic compound, sodium chloride, NaCl (common table salt). The chemical formula of NaCl implies that there is 1 Na for each Cl. In this case these consist of Na+ cations and Cl- anions. For ionic compounds such as NaCl, the first part of the name is simply that of the metal forming the cation, in this case sodium. The second part of the name is based upon the anion, but has the ending -ide. So the ionic compound formed from sodium and chlorine is sodium chloride.

As shown by the preceding example, ionic compounds may consist of ions composed of atoms that have lost electrons (producing positively charged cations) and other atoms that have gained electrons (producing negatively charged anions). In addition to being charged atoms, ions may also consist of groups of several atoms with a net charge. Ammonium ion, NH4+, is such an ion. As shown below, the NH4+ cation consists of 4 H atoms covalently bonded (by 2 shared electrons each) to a central N atom, with the group of 5 total atoms having a net electrical charge of +1.
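Because an ionic compound is electrically neutral overall, its ions combine in the smallest whole-number ratio that cancels their charges; that is why NaCl contains 1 Na+ for each Cl-. The helper below is our own illustration of this bookkeeping (it handles only single-atom ion symbols), not anything from the text.

```python
from math import gcd

def ionic_formula(cation, cation_charge, anion, anion_charge):
    """Smallest whole-number ion ratio giving a net charge of zero."""
    divisor = gcd(abs(cation_charge), abs(anion_charge))
    n_cation = abs(anion_charge) // divisor
    n_anion = abs(cation_charge) // divisor
    part = lambda symbol, n: symbol if n == 1 else f"{symbol}{n}"
    return part(cation, n_cation) + part(anion, n_anion)

print(ionic_formula("Na", +1, "Cl", -1))  # NaCl
print(ionic_formula("Ca", +2, "Cl", -1))  # CaCl2
print(ionic_formula("Al", +3, "O", -2))   # Al2O3
```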
2.13: The Process of Making and Breaking Chemical Bonds - Chemical Reactions
The preceding section has discussed chemical compounds and the two major kinds of bonds, covalent bonds and ionic bonds, that hold them together. Next is discussed the process of making and taking apart chemical compounds: chemical reactions. A chemical reaction occurs when chemical bonds are broken and formed and atoms are exchanged to produce chemically different species.

First consider two very simple chemical reactions involving only one element, oxygen. In the very thin air high in the stratosphere, more than 10 kilometers above Earth’s surface (above the altitudes where jet airliners normally cruise), high-energy ultraviolet radiation from the sun, represented by the symbol $h \nu$, splits apart molecules of elemental oxygen, O2,

$O_{2} + h \nu \rightarrow 2O$

to produce oxygen atoms. As with most single atoms, the O atoms are reactive and combine with oxygen molecules to produce ozone, O3:

$O + O_{2} \rightarrow O_{3}$

Both of these processes are chemical reactions. In a chemical reaction, the substances on the left of the arrow (read as “yields”) are the reactants and those on the right of the arrow are products. The first of these reactions states that the chemical bond holding together a molecule of O2 reactant is split apart by the high energy of the ultraviolet radiation to produce two oxygen atom products. In the second reaction, an oxygen atom reactant, O, and an oxygen molecule reactant, O2, form a chemical bond to yield an ozone product, O3.

Are these very simple chemical reactions important to us? Emphatically yes. They produce a shield of ozone molecules in the stratosphere which in turn absorbs ultraviolet radiation that would otherwise reach Earth’s surface, destroying life and causing skin cancer and other maladies that would make our existence on Earth impossible. As discussed in Chapter 10, the use of chlorofluorocarbon refrigerants (Freons) has seriously threatened the stratospheric ozone layer. It is a triumph of environmental chemistry that this threat was realized in time to do something about it and an accomplishment of green chemistry to develop relatively safe substitutes for ozone-threatening chemicals.

Many chemical reactions are discussed in this book. At this point a very common chemical reaction can be considered, that of elemental hydrogen with elemental oxygen to produce water. A first approach to writing this reaction is

$H_{2} + O_{2} \rightarrow H_{2}O$

stating that elemental hydrogen and elemental oxygen react together to produce water. This is not yet a proper chemical equation because it is not balanced. A balanced chemical equation has the same number of each kind of atom on both sides of the equation. As shown above, there are 2 H atoms in the single H2 molecule on the left and 2 H atoms in the single H2O molecule product. That balances hydrogen, but leaves 2 O atoms in the O2 molecule on the left with only 1 O atom in the single H2O molecule product. But, writing the reaction as

$2H_{2} + O_{2} \rightarrow 2H_{2}O$

gives a balanced chemical equation with a total of 4 H atoms in 2 H2 molecules on the left, 4 H atoms in 2 H2O molecules on the right, and a total of 2 O atoms in the 2 H2O molecules on the right, which balances the 2 O atoms in the O2 molecule on the left. So the equation as now written is balanced. A balanced chemical equation always has the same number of each kind of atom on both sides of the equation.
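The atom counting just walked through can be automated. The sketch below is our illustration (it handles only simple formulas without parentheses); it tallies the atoms on each side of an equation and confirms that 2H2 + O2 → 2H2O is balanced while the first attempt, H2 + O2 → H2O, is not.

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple formula such as 'H2O' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(species):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in species:
        for element, n in atom_counts(formula).items():
            total[element] += coeff * n
    return total

print(side_counts([(2, "H2"), (1, "O2")]) == side_counts([(2, "H2O")]))  # True: balanced
print(side_counts([(1, "H2"), (1, "O2")]) == side_counts([(1, "H2O")]))  # False: O unbalanced
```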
2.14: The Nature of Matter and States of Matter
We are familiar with matter in different forms. We live in an atmosphere of gas that is mostly N2, with about 1/4 as much oxygen, O2, by volume. We only become aware of the gas in the atmosphere when something is wrong with it, such as contamination by irritating air pollutants. A person stepping into an atmosphere of pure N2 would not notice anything wrong immediately, but would die within a few minutes, not because N2 is toxic, but because the atmosphere lacks life-giving oxygen. The same atmosphere that we breathe contains water in the gas form as water vapor. And we are also familiar, of course, with liquid water and with solid ice.

The air that we breathe, like most substances, is a mixture consisting of two or more substances. Air is a homogeneous mixture, meaning that the molecules of air are mixed together at a molecular level. There is no way that we can take air apart by simple mechanical means, such as looking at it under a magnifying glass and picking out its individual constituents. Another common substance that is a homogeneous mixture is drinking water, which is mostly H2O molecules, but which also contains dissolved O2 and N2 from air, dissolved calcium ions (Ca2+), chlorine added for disinfection, and other materials. A heterogeneous mixture is one that contains discernible and distinct particles that, in principle at least, can be taken apart mechanically. Concrete is a heterogeneous mixture. Careful examination of a piece of broken concrete shows that it contains particles of sand and rock embedded in solidified Portland cement.

A material that consists of only one kind of substance is known as a pure substance. Absolutely pure substances are almost impossible to attain. Hyperpure water used in semiconductor manufacturing operations approaches absolute purity. Another example is 99.9995% pure helium gas used in a combination gas chromatograph/mass spectrometer instrument employed for the chemical analysis of air and water pollutants.

Mixtures are very important in the practice of green chemistry. Among other reasons, the separation of impurities from mixtures in the processing of raw materials and in recycling materials is often one of the most troublesome and expensive aspects of materials utilization and may generate large quantities of wastes. Impurities may make mixtures toxic. For example, toxic arsenic, which is directly below phosphorus in the periodic table and has chemical properties similar to those of phosphorus, occurs as an impurity in the phosphate ores from which elemental phosphorus is extracted. This is not a problem for phosphorus used as fertilizer because the small amount of arsenic added to the soil is negligible compared to the arsenic naturally present in the soil. But if the phosphorus is to be made into phosphoric acid and phosphate salts to be added to soft drinks or to food, impurity arsenic cannot be tolerated because of its toxicity, requiring removal of this element at considerable expense. Many byproducts of manufacturing operations are mixtures. For example, organochlorine solvents used to clean and degrease machined parts become mixtures that contain grease and other impurities. As part of the process for recycling these solvents, the impurities must be removed by expensive processes such as distillation. The separation of mixture constituents is often one of the most expensive aspects of the recycling of materials.

States of Matter

As shown in Figure \(1\), the three common states of matter are gases, liquids, and solids.
These are readily illustrated by water, the most familiar form of which is liquid water. Ice is a solid, and water vapor in the atmosphere or in a steam line is a gas. Gases, such as those composing the air around us, are composed mostly of empty space through which molecules of the matter composing the gas move constantly, bouncing off each other or the container walls millions of times per second. A quantity of gas expands to fill the container in which it is placed. Because they are mostly empty space, gases can be significantly compressed; squeeze a gas and it responds with a decreased volume. Gas temperature is basically an expression of how rapidly the gas molecules move; higher temperatures mean faster molecular movement and more molecules bouncing off each other or container walls per second. The constant impact of gas molecules on container walls is the cause of gas pressure. Because of the free movement of molecules relative to each other and the presence of mostly empty space, a quantity of gas takes on the volume and shape of the container in which it is placed. The physical behavior of gases is described by several gas laws relating volumes of gas to quantities of the gas, pressure, and temperature. Calculations involving these laws are covered at the beginning of Chapter 10. Molecules of liquids can move relative to each other, but cannot be squeezed together to a significant extent, so liquids are not compressible. Liquids do take on the shape of the part of a container that they occupy. Molecules of solids occupy fixed positions relative to each other. Therefore, solids cannot be significantly compressed, and a solid object retains its shape regardless of the container in which it is placed.
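Although the gas laws themselves are deferred to Chapter 10, one simple consequence of the ideal gas law PV = nRT can be previewed here: at a given temperature and pressure, the density of a gas is proportional to its molar mass, d = PM/(RT). The calculation below is our own illustration using standard constants and molar masses.

```python
R = 0.08206   # ideal gas constant, L*atm/(mol*K)
T = 298.15    # 25 degrees C, in kelvins
P = 1.0       # pressure, atm

def gas_density(molar_mass):
    """Ideal-gas density in g/L: d = P*M/(R*T), from PV = nRT with n = m/M."""
    return P * molar_mass / (R * T)

print(round(gas_density(4.003), 3))    # helium: ~0.164 g/L
print(round(gas_density(28.97), 3))    # average air: ~1.184 g/L
```

The helium value reproduces the 0.164 g/L density quoted in the discussion of helium in Chapter 3.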
References and Questions
LITERATURE CITED

1. Manahan, Stanley E., Environmental Chemistry, 9th ed., Taylor & Francis/CRC Press, Boca Raton, FL, 2010.
2. Anastas, Paul T., and John C. Warner, Green Chemistry: Theory and Practice, Oxford University Press, 1998.
3. Manahan, Stanley E., Fundamentals of Sustainable Chemical Science, Taylor & Francis/CRC Press, Boca Raton, FL, 2009.

SUPPLEMENTARY REFERENCES

Allen, David T., and David R. Shonnard, Green Engineering: Environmentally Conscious Design of Chemical Processes, Prentice Hall, Upper Saddle River, NJ, 2002.
Anastas, Paul, Ed., Handbook of Green Chemistry, Wiley-VCH, New York, 2010.
Brown, Lawrence S., and Thomas A. Holme, Chemistry for Engineering Students, 2nd ed., Brooks/Cole Cengage Learning, Belmont, CA, 2011.
Brown, Theodore L., H. Eugene LeMay, Bruce E. Bursten, Catherine J. Murphy, and Patrick Woodward, Chemistry: The Central Science, 11th ed., Prentice Hall, Upper Saddle River, NJ, 2008.
Clark, James, and Duncan MacQuarrie, Handbook of Green Chemistry and Technology, Blackwell Science, Malden, MA, 2002.
Denniston, Katherine J., Joseph J. Topping, and Robert L. Caret, General, Organic, and Biochemistry, 7th ed., McGraw-Hill, New York, 2011.
Doble, Mukesh, and Anil Kumar Kruthiventi, Green Chemistry and Processes, Elsevier, Amsterdam, 2007.
Frost, Laura D., S. Todd Deal, and Karen C. Timberlake, General, Organic, and Biological Chemistry: An Integrated Approach, Prentice Hall, Upper Saddle River, NJ, 2010.
Guinn, Denise, and Rebecca Brewer, Essentials of General, Organic, and Biochemistry: An Integrated Approach, W. H. Freeman and Co., New York, 2009.
Hill, John W., and Doris Kolb, Chemistry for Changing Times, 11th ed., Prentice Hall, Upper Saddle River, NJ, 2006.
Horvath, Istvan T., and Paul T. Anastas, “Innovations and Green Chemistry,” Chemical Reviews, 107, 2169-2173 (2007).
Lancaster, Mike, Green Chemistry, Royal Society of Chemistry, London, 2002.
Li, Chao-Jun, and Barry M. Trost, “Green Chemistry for Chemical Synthesis,” Proceedings of the National Academy of Sciences of the United States of America, 105, 13197-13202 (2008).
Matlack, Albert, Introduction to Green Chemistry, 2nd ed., CRC Press, Boca Raton, FL, 2010.
McMurry, John, David S. Ballantine, Carl A. Hoeger, Virginia E. Peterson, and Mary E. Castellion, Fundamentals of General, Organic, and Biological Chemistry, 6th ed., Prentice-Hall, Upper Saddle River, NJ, 2009.
Mickulecky, Peter J., Katherine Brutlag, Michelle Gilman, and Brian Peterson, Chemistry Workbook for Dummies, Wiley, Hoboken, NJ, 2008.
Middlecamp, Catherine, Steve Keller, Karen Anderson, Anne Bentley, and Michael Cann, Chemistry in Context, 7th ed., McGraw-Hill, Dubuque, IA, 2011.
Moore, John W., Conrad L. Stanitski, and Peter C. Jurs, Chemistry: The Molecular Science, 4th ed., Brooks/Cole Cengage Learning, Belmont, CA, 2011.
Roesky, Herbert W., Dietmar Kennepohl, and Jean-Marie Lehn, Experiments in Green and Sustainable Chemistry, Wiley-VCH, Weinheim, Germany, 2009.
Smith, Janice G., Principles of General, Organic, and Biochemistry, McGraw-Hill, New York, 2011.
Viegas, Jennifer, Ed., Critical Perspectives on Planet Earth, Rosen Publishing Group, New York, 2007.

QUESTIONS AND PROBLEMS

Access to and use of the internet are assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text. So in some cases there may be several “right” answers.
Therefore, if your answer reflects intellectual effort and a search for information from available sources, your answer can be considered to be “right.”

1. What is chemistry? Why is it impossible to avoid chemistry?
2. What is green chemistry?
3. Match the following pertaining to major areas of chemistry:
A. Analytical chemistry 1. Occurs in living organisms
B. Organic chemistry 2. Underlying theory and physical phenomena
C. Biochemistry 3. Chemistry of most elements other than carbon
D. Physical chemistry 4. Chemistry of most carbon-containing compounds
E. Inorganic chemistry 5. Measurement of kinds and quantities of chemicals
4. What are the five environmental spheres? Which of these did not exist before humans evolved on Earth?
5. Discuss why you think the very thin “skin” of Earth, ranging from perhaps two or three kilometers in depth below the surface to several kilometers (several miles) in altitude in the atmosphere, has particular environmental importance.
6. What is environmental chemistry?
7. Which event may be regarded as the beginning of the modern environmental movement?
8. What is the command and control approach to pollution control?
9. What is the Toxics Release Inventory (TRI)? How does it reduce pollution?
10. Why are incremental increases in regulations under the command and control approach to pollution control much less effective now than they were when pollution control laws were first enacted and enforced?
11. What is the special relationship of green chemistry to synthetic chemistry?
12. What does Figure 2.1 show with respect to environmental chemistry, green chemistry, and other topics discussed in this chapter?
13. In which important respects is green chemistry sustainable chemistry?
14. With respect to raw materials, what are two general and often complementary approaches to the practice of green chemistry?
15. What is the distinction between yield and atom economy?
16. For a chemical synthesis of a pharmaceutical compound, 100 kilograms (kg) of reactants mixed in the exact proportions that would give a 100% theoretical yield of product would give 65.2 kg of product, the rest being byproduct. In an actual chemical synthesis, an excess of 10 kg of one of the reactants was added to the 100 kg of mixture to help push the reaction to completion. After the total of 110 kg of reactants was put through the process, an actual yield of 59.5 kg of product was obtained. What was the percent yield? What was the percent atom economy?
17. What are two factors that go into assessing risk?
18. What are the risks of no risks?
19. What are the major basic principles of green chemistry?
20. What is shown by the formula O3? What about H2O2?
21. How does a covalent bond differ from an ionic bond?
22. What is the name given to a kind of material in which two or more different elements are bonded together?
23. Considering the compound shown in Figure 2.8, what is the name of the compound formed when a magnesium atom transfers two electrons to an oxygen atom, giving a compound consisting of Mg2+ cations and O2- anions?
24. Summarize the information given by 3H2 + O3 → 3H2O.
25. In addition to showing the correct reactants and products, a correct chemical equation must be ____________________.
26. Name three kinds of matter based upon purity. Which of these is extremely rare?
27. In terms of molecules, how are gases, liquids, and solids distinguished?
28. Describe gas pressure and temperature in terms of molecular motion.
29. What is the Presidential Green Chemistry Challenge?
What have been some of the winning ideas in this challenge?
30. Which elemental species mentioned in this chapter is present in photochemical smog?
“More than fifty million unique chemical substances have been identified and the number is growing at a rapid pace. Essentially all of these are composed of the 92 naturally-occurring elements and the vast majority are from twenty or fewer of the most abundant elements.”

03: The Elements - Basic Building Blocks of Green Chemicals

3.01: Elements, Atoms, and Atomic Theory

Chemistry is the science of matter. The fundamental building blocks of matter are the atoms of the various elements, which are composed of subatomic particles: the positively charged proton (+), the negatively charged electron (-), and the electrically neutral neutron (n). It is the properties of these atoms that determine matter’s chemical behavior. More specifically, it is the arrangement and energy levels of electrons in atoms that direct how they interact with each other, thus dictating all chemical behavior.

One of the most fundamental aspects of chemistry is that elemental behavior varies periodically with increasing atomic number. This has enabled placement of the elements in an orderly arrangement with increasing atomic number known as the periodic table. The periodic behavior of elements’ chemical properties is due to the fact that, as atomic number increases, electrons are added incrementally to atoms and occupy so-called shells, each filled with a specific number of electrons. After each shell is filled, a new shell is started, thus beginning a new period (row) of the periodic table. This sounds complicated, and indeed may be so, occupying the full-time computational activities of banks of computers to explain the behavior of electrons in matter. However, this behavior can be viewed in simplified models and is most easily understood for the first 20 elements using dots to represent electrons, enabling construction of an abbreviated 20-element periodic table. Although simple, this table helps to understand and explain most of the chemical phenomena discussed in this book.

The chapter also emphasizes some of the green aspects of the first 20 elements and how they relate to sustainability. Included among these elements are the nitrogen, oxygen, carbon (contained in carbon dioxide), and hydrogen and oxygen (in water vapor) that make up most of the air in the “green” atmosphere; the hydrogen and oxygen in water, arguably the greenest compound of all; the sodium and chlorine in common table salt; the silicon, calcium, and oxygen that compose most mineral matter, including the soil that grows plants supplying food to most organisms; and the hydrogen, oxygen, carbon, nitrogen, phosphorus, and sulfur that are the predominant elements in all living material.

Long Before Subatomic Particles Were Known, There Was Dalton’s Atomic Theory

Atomic theory describes atoms in relation to chemical behavior. With the sophisticated tools now available to chemists, the nature of atoms, largely based upon the subatomic particles of which they are composed, especially the negatively charged electrons, is well known. But long before these sophisticated tools were even dreamed about, more than two centuries ago in 1808, an English schoolteacher named John Dalton came up with the atomic theory that bears his name. To a large extent, this theory is the conceptual basis of modern chemistry. Key aspects of Dalton’s atomic theory are the following:

• The matter in each element is composed of extremely small particles called atoms. (Dalton regarded atoms as indivisible, unchanging bodies. We now know that they exchange and share electrons, which is the basis of chemical bonding.)
• Atoms of different elements have different chemical properties. (These differences may range from very slight, such as those between the noble gases neon and argon, to vastly different, such as those between highly metallic sodium and strongly nonmetallic chlorine.)
• Atoms cannot be created, destroyed, or changed to atoms of other elements. (In modern times, the provision is added that these things do not happen in ordinary chemical processes, since atoms can be changed to atoms of other elements by nuclear reactions, such as those that occur in nuclear reactors.)
• Chemical compounds are formed by the combination of atoms of different elements in definite, constant ratios that usually can be expressed as integers or simple fractions.
• Chemical reactions involve the separation and combination of atoms. (This phenomenon was surmised before anything was known about the nature of the chemical bonds that are broken and formed as part of the process of chemical reactions.)

Three Important Laws

Dalton’s atomic theory explains the three important laws listed below. Evidence for these laws had been found prior to the publishing of Dalton’s atomic theory, and the atomic theory is largely based upon them.

1. Law of Conservation of Mass: There is no detectable change in mass in an ordinary chemical reaction. (This law, first stated in 1789 by “the father of chemistry,” the Frenchman Antoine Lavoisier, follows from the fact that in ordinary chemical reactions no atoms are lost, gained, or changed; in chemical reactions, mass is conserved.)
2. Law of Constant Composition: A specific chemical compound always contains the same elements in the same proportions by mass.
3. Law of Multiple Proportions: When two elements combine to form two or more compounds, the masses of one combining with a fixed mass of the other are in ratios of small whole numbers. A common illustration of this law is provided by the simple hydrocarbon compounds of carbon and hydrogen, which include \(\ce{CH4}\), \(\ce{C2H2}\), \(\ce{C2H4}\), and \(\ce{C2H6}\). In these compounds the relative masses of \(\ce{C}\) and \(\ce{H}\) are in ratios of small whole numbers (a worked calculation follows this subsection).

The Nature of Atoms

At this point it is useful to note several characteristics of atoms, which were introduced in Section 2.11. Atoms are extremely small and extremely light. Their individual masses are expressed by the minuscule atomic mass unit, u. The sizes of atoms are commonly expressed in picometers, where a picometer is 0.000 000 001 millimeters (a millimeter is the smallest division on the metric side of a ruler). Atoms may be regarded as spheres with diameters between 100 and 300 picometers. As noted at the beginning of this chapter, atoms are composed of three basic subatomic particles: the positively charged proton, the electrically neutral neutron, and the much lighter, negatively charged electron. Each proton and neutron has a mass of essentially 1 atomic mass unit, whereas the mass of the electron is only about 1/2000 as much. The protons and neutrons are located in the nucleus at the center of the atom, and the electrons compose a “fuzzy cloud” of negative charge around the nucleus. Essentially all the mass of an atom is in the nucleus and virtually all the volume is in the cloud of electrons. Each atom of a specific element has the same number of protons in its nucleus. This is the atomic number of the element. Each element has a name and is represented by a chemical symbol consisting of one or two letters.
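As promised above, here is the worked calculation behind the Law of Multiple Proportions for CH4, C2H2, C2H4, and C2H6. It computes the mass of hydrogen that combines with a fixed mass of carbon in each compound and shows that these masses stand in the small whole-number ratios 4 : 1 : 2 : 3. The atomic masses are standard values; the calculation itself is our illustration.

```python
H, C = 1.008, 12.011   # standard atomic masses, u
# (number of C atoms, number of H atoms) per molecule
compounds = {"CH4": (1, 4), "C2H2": (2, 2), "C2H4": (2, 4), "C2H6": (2, 6)}

# Mass of H per unit mass of C in each compound
h_per_c = {name: (nH * H) / (nC * C) for name, (nC, nH) in compounds.items()}

smallest = min(h_per_c.values())
for name, ratio in h_per_c.items():
    print(name, round(ratio / smallest, 3))   # CH4 4.0, C2H2 1.0, C2H4 2.0, C2H6 3.0
```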
Atoms of the same element that have different numbers of neutrons and, therefore, different masses are called isotopes. Isotopes may be represented by symbols such as \(\ce{^{12}_6C}\), where the subscript is the atomic number and the superscript is the mass number, which is the sum of the numbers of protons and neutrons in an atom. The average mass of all the atoms of an element is the atomic mass. Atomic masses are expressed relative to the carbon-12 isotope, \(\ce{^{12}_6C}\), which contains 6 protons and 6 neutrons in its nucleus. The mass of this isotope is taken as exactly 12 u. Atomic masses normally are not integers, in part because atoms of most elements consist of two or more isotopes with different masses.

Electrons in Atoms

The behavior of electrons in the cloud of negative charge making up most of the volume of atoms, particularly their energy levels and orientations in space, is what determines chemical behavior. Arrangements of electrons are described by electron configuration. A detailed description of electron configuration is highly mathematical and sophisticated, but is represented in a very simplified fashion in this chapter. Because of their opposite charges, electrons are strongly attracted to positively charged nuclei, but they do not come to rest on them. The placement of electrons in atoms determines the configuration of the periodic table, a complete version of which is printed at the end of this chapter. Elements are listed across this table in periods such that elements located in the same vertical groups have generally similar chemical behavior. The derivation of the complete periodic table showing more than 100 elements is too complicated for this book. So, in the remainder of this chapter, the first 20 elements will be discussed in order, and the placement of electrons in the atoms of these elements will illustrate how these elements can be placed in the periodic table. From this information a brief 20-element periodic table will be constructed that should be very useful in explaining chemical behavior.
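To see why atomic masses come out as non-integers, consider the abundance-weighted average over isotopes. Chlorine is used here as a convenient example of our own choosing; its isotope masses and abundances are standard-table values, not figures from the text.

```python
# (isotope mass in u, fractional natural abundance) for 35Cl and 37Cl
chlorine_isotopes = [(34.969, 0.7576), (36.966, 0.2424)]

atomic_mass = sum(mass * abundance for mass, abundance in chlorine_isotopes)
print(round(atomic_mass, 2))   # ~35.45 u, the tabulated atomic mass of chlorine
```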
3.02: Hydrogen - The Simplest Atom
Hydrogen, H, is the element with atomic number 1. Most hydrogen atoms consist of a single proton forming the nucleus, with 1 electron per hydrogen atom. Recall from Section 2.12 and Figure 2.6 that elemental hydrogen exists as molecules with 2 H atoms, chemical formula H2, in which the 2 H atoms are joined together by a covalent bond consisting of 2 shared electrons. Molecules consisting of 2 atoms so joined are called diatomic molecules. As will be seen, several important elements among the first 20 elements are gases that consist of diatomic molecules in their elemental forms.

Showing Electrons in Atomic Symbols and Molecular Formulas

In discussing chemical behavior related to atomic structure, it is particularly useful to have a means of showing the electrons in the atoms (more specifically, the less strongly held outer shell electrons). This is done with Lewis symbols (named after G. N. Lewis), also called electron-dot symbols. The Lewis symbol of the hydrogen atom is simply the symbol H with a single dot beside it, representing the atom's one electron.

Properties and Uses of Elemental Hydrogen

Pure elemental H2 under normal conditions is a colorless, odorless gas that has the lowest density of any pure substance. Liquefied H2 boils at a very cold -253˚C and solidifies at -259˚C. Hydrogen gas is widely used in the chemical industry to react chemically with a large number of substances. It burns readily with a large release of energy, and mixtures of hydrogen with oxygen or air are extremely explosive. The chemical reaction for elemental hydrogen burning with oxygen (O2) in air is

$\ce{2H2 + O2 → 2H2O + energy} \label{3.2.1}$

The product of this reaction is water. Used as a fuel, elemental hydrogen is a very green element because when it is burned or otherwise reacted to provide energy, the reaction product is simply water, H2O. Furthermore, given a source of electrical energy, elemental hydrogen and elemental oxygen can be produced at two separate electrodes by passing a direct current through water in which an appropriate salt has been dissolved to make the water electrically conducting:

$\ce{2H2O → 2H2 + O2} \label{3.2.2}$

So elemental hydrogen generated by the application of electrical energy to water provides a source of energy that can be moved from one place to another and utilized to produce electricity in fuel cells (see below) or for other beneficial purposes such as the synthesis of ammonia, essential as a source of fertilizer nitrogen for plants. The production of elemental hydrogen by electrolysis may be regarded as a green process because it does not require any reagents other than water. Furthermore, the electrolysis byproduct oxygen is harmless and has many uses, whereas hydrogen made by the reaction of steam with carbon-containing compounds (see below) consumes fossil fuel and generates CO, which in some cases is burned, producing greenhouse gas carbon dioxide. The main disadvantage of the electrolysis process for H2 generation is the relatively low efficiency with which the electricity is used in the process, and improvements are needed in this area.

Elemental hydrogen is widely used for chemical synthesis and other industrial applications. Its preparation by electrolysis of water was mentioned above. It is now most commonly prepared from methane, CH4, the main ingredient of natural gas, by steam reforming at high temperatures and pressures:

$\ce{CH4 + H2O → CO + 3H2} \label{3.2.3}$

Hydrogen is used to manufacture a number of chemicals. Two of the most abundantly produced chemicals that require hydrogen for synthesis are ammonia, $\ce{NH3}$, and methanol (methyl alcohol, $\ce{CH3OH}$).
The latter is generated by the reaction between carbon monoxide and hydrogen:

$\ce{CO + 2H2 → CH3OH} \label{3.2.4}$

Methanol used to be made by heating wood in the absence of air and condensing methanol from the vapor given off, a process known as destructive distillation. Generation of so-called wood alcohol by this relatively green process from biomass has the potential to supply at least a fraction of the methanol now needed, thus reducing the consumption of natural gas. Methanol has some important fuel uses. During the 1930s it was used instead of gasoline to run internal combustion engines, powering a significant fraction of automobiles in France before Middle Eastern oil fields became such an abundant source of petroleum. At present it is blended with gasoline as an oxygenated additive; engines using this blended fuel produce less pollutant carbon monoxide. Now the most common use of methanol as a fuel is as a source of hydrogen for fuel cells, the methanol being broken down to elemental hydrogen and carbon dioxide.

In addition to its uses in making ammonia and methanol, hydrogen is added chemically to hydrocarbon molecules in some fractions of gasoline to upgrade their fuel value. Hydrogen can be added directly to coal or reacted with carbon monoxide to produce synthetic petroleum. It is also combined with unsaturated vegetable oils to make margarine and other hydrogenated fats and oils. This application is controversial and becoming less common because of suspected adverse long-term health effects of these products, commonly called trans fats.

Hydrogen in Fuel Cells

Fuel cells, discussed further in Chapter 15, are devices that enable hydrogen to “burn” at around room temperature and to produce electricity directly without going through some sort of internal combustion engine and electricity generator. A fuel cell (Figure 3.2) consists of two electrically conducting electrodes, an anode and a cathode, that are contacted with elemental H2 and O2, respectively. As shown in the diagram, at the anode H2 loses electrons (it is said to be oxidized) to produce H+ ions. At the cathode O2 gains electrons (it is said to be reduced) and reacts with H+ ions to produce water, H2O. The H+ ions required for the reaction at the cathode are those generated at the anode, and they migrate to the cathode through a solid membrane permeable to protons (the H+ ion is a proton). The net reaction is

$\ce{2H2 + O2 → 2H2O + electrical energy} \label{3.2.5}$
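The methanol synthesis of Reaction 3.2.4 is also a textbook case of the atom economy concept from Section 2.6: every atom of the CO and H2 fed appears in the CH3OH product. A minimal check with standard atomic masses (our calculation, not the text's):

```python
# Molar masses in g/mol from standard atomic masses
CO = 12.011 + 15.999                  # carbon monoxide, 28.010
H2 = 2 * 1.008                        # hydrogen, 2.016
CH3OH = 12.011 + 4 * 1.008 + 15.999   # methanol, 32.042

reactant_mass = CO + 2 * H2           # per mole of methanol produced
print(round(100 * CH3OH / reactant_mass, 1))   # 100.0% atom economy
```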
3.03: Helium - The First Noble Gas
The second element in the periodic table is helium, He, atomic number 2. All helium atoms have 2 protons in their nuclei and 2 electrons. There are two isotopes of helium, of which helium-4 (\(\ce{^{4}_2He}\)), containing 2 neutrons in the nucleus, is by far the predominant one, with much smaller numbers of the lighter isotope helium-3 (\(\ce{^{3}_2He}\)), which has 2 protons and 1 neutron in its nucleus and a mass number of 3. Helium is a noble gas, meaning that it exists only as single atoms that are never bonded to other atoms. Figure 3.3 is a representation of the helium atom showing its 2 electrons. The Lewis symbol of helium is simply He with 2 dots.

This shows a very important characteristic of atoms. As electrons are added to atoms with increasing atomic number, they are added at various levels known as electron shells. The one electron in hydrogen, H, goes into the first electron shell, the one with the lowest possible energy. The second electron added to make the helium atom also goes into the first electron shell. This lowest electron shell can contain a maximum of only 2 electrons, so helium has a filled electron shell. Atoms with filled electron shells have no tendency to lose, gain, or share electrons and, therefore, do not become involved with other atoms through chemical bonding. Such atoms exist alone in the gas phase, and the elements of which they consist are called noble gases. Helium is the first of the noble gases.

Helium gas has a very low density of only 0.164 g/L at 25˚C and 1 atm pressure. Elemental helium is the second least dense gas, next only to hydrogen. It is this low density that makes helium so useful in balloons, including weather balloons, which can stay aloft for days, reaching very high altitudes. Helium is pumped from the ground with some sources of natural gas, some of which contain up to 10% helium by volume. Helium was first observed in the sun by the specific wavelengths of light emitted by hot helium atoms. Underground sources of helium were discovered by drillers searching for natural gas in southwestern Kansas who tried to ignite gas from a new well and were disappointed to find that it would not burn, since it was virtually pure helium!

Chemically unreactive, helium has no chemical uses, except to provide a chemically inert atmosphere. A nontoxic, odorless, tasteless, colorless gas, helium is used because of its unique physical properties. Applications in weather balloons and airships were mentioned previously. Because of its low solubility in blood, helium is mixed with oxygen for breathing by deep-sea divers and persons with some respiratory ailments. Use of helium by divers avoids the very painful condition called “the bends” caused by bubbles of nitrogen forming from nitrogen gas dissolved in blood. The greatest use of helium is as the super-cold liquid, which boils at only 4.2 K above absolute zero (-269˚C), especially in the growing science of cryogenics, which deals with very low temperatures. Some metals are superconductors at such temperatures, so helium is used to cool electromagnets, enabling relatively small magnets to develop very powerful magnetic fields. Such magnets are components of the very useful chemical tool known as nuclear magnetic resonance (NMR). The same kind of instrument modified for clinical applications, called MRI, is used as a medical diagnostic tool for scanning sections of the body for evidence of tumors and other maladies.
Hydrogen Wants to be Like Helium Examination of the Lewis symbol of helium (right, Figure 3.3) and the Lewis formula of elemental hydrogen, H2, (Figure 3.1) shows that each of the two hydrogen atoms in the H2 molecule can lay claim to 2 electrons and thereby come to resemble the helium atom. Recall that helium is a noble gas that is very content with its 2 electrons. Each of the H atoms in H2 is satisfied with 2 electrons, though they are shared. This indicates a basic rule of chemical bonding that atoms of an element tend to acquire the same electron configuration as that of the nearest noble gas. In this case, hydrogen, which comes just before helium in the periodic table, gains the noble gas configuration of helium by sharing electrons.
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.03%3A_Helium_-_The_First_Noble_Gas.txt
The element with atomic number 3 is lithium (Li), atomic mass 6.941. The most abundant lithium isotope is lithium-7, which has 4 neutrons in its nucleus. A few percent of lithium atoms are the lighter isotope, lithium-6, which has only 3 neutrons. The third electron in lithium cannot fit in the lowest energy shell, which, as noted above, is full with only 2 electrons. Therefore, the third electron in lithium goes into a second shell, that is, an outer shell. As a consequence of its electronic structure, lithium is the lowest atomic number element that is a metal. In a general sense, metals are elements that normally have only 1–3 electrons in their outer shells. These electrons can be lost from metals to produce positively charged cations with charges of +1, +2, or +3. In the pure elemental state metals often have a characteristic luster (shine), they are malleable (can be flattened or pushed into various shapes without breaking), and they conduct electricity. Although some metals, notably lead and mercury, are very dense, lithium is the least dense metal at only 0.531 g/cm3.

Two of lithium's 3 electrons are inner electrons contained in an inner shell, as in the immediately preceding noble gas helium. Inner shell electrons such as these stay on average relatively close to the nucleus, are very tightly held, and are not exchanged or shared in chemical bonds. As mentioned above, the third electron in lithium is an outer electron farther from, and less strongly attracted to, the nucleus. The outer electron is said to be in the atom's outer shell. These concepts are illustrated in Figure 3.4. The Lewis symbol for atoms such as lithium that have both inner shell and outer shell electrons normally shows just the latter. (Inner shell electrons can be shown on symbols to illustrate a point, but normally this takes too much space and can be confusing.) Since lithium has only one outer shell electron, its Lewis symbol is Li with a single dot. Consider that the lithium atom has an inner shell of 2 electrons, just like helium. Being only 1 electron away from the helium noble gas structure, lithium has a tendency to lose its extra electron so it can be like helium, as shown by the following:

\(\ce{Li -> Li^{+} + e^{-}}\) (lost to another atom) (3.4.1)

Note that the product of this reaction is no longer a neutral atom, but is a positively charged \(\ce{Li^{+}}\) cation. In losing an electron to become a cation, the lithium atom is said to be oxidized. When lithium forms chemical compounds with other elements, it does so by losing an electron from each lithium atom to become \(\ce{Li^{+}}\) cations. These, then, are attracted to negatively charged anions in ionic compounds.

Lithium compounds have a variety of uses. Lithium carbonate, Li2CO3, is widely prescribed as a pharmaceutical to alleviate the symptoms of mania in manic-depressive and schizo-affective mental disorders. Lithium carbonate is the most common starting material for the preparation of other lithium compounds and is an ingredient of specialty glasses and enamels and of ceramic ware that expands only minimally when heated. Lithium hydroxide, \(\ce{LiOH}\), is used to formulate some kinds of lubricant greases. In combination with iodine, lithium has been used to make cells that are sources of electricity for cardiac pacemakers. Implanted in the patient's chest, some of these pacemakers and their batteries have lasted for 10 years before having to be replaced.
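The shell-filling pattern described here (2 electrons in the lowest shell, then 8, 8, and 2 for the elements covered in this chapter) is simple enough to express in a few lines. A minimal Python sketch; the function name and encoding are illustrative, not from the text:

# Sketch: assign electrons to shells for the first 20 elements using the
# simple 2, 8, 8, 2 filling pattern this chapter uses.
def electron_shells(atomic_number: int) -> list[int]:
    """Return electrons per shell for Z <= 20 (e.g., lithium -> [2, 1])."""
    capacities = [2, 8, 8, 2]   # shell capacities in the 20-element table
    shells, remaining = [], atomic_number
    for cap in capacities:
        if remaining <= 0:
            break
        filled = min(cap, remaining)
        shells.append(filled)
        remaining -= filled
    return shells

print(electron_shells(3))   # lithium: [2, 1] -> one outer-shell electron
print(electron_shells(2))   # helium:  [2]    -> a filled first shell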
Long an element with limited uses, lithium has become a rather "exciting" metal in the newly emerging sustainability economy because lithium-based storage batteries, in which the \(\ce{Li^{+}}\) ion is a charge carrier, have become the storage batteries of choice for computers, portable electronic devices, and, especially, electric and hybrid automobiles. Lithium storage batteries exhibit superior qualities with respect to charge held per unit mass, stability, and longevity. In addition, lithium dry cells, in which Li metal is irreversibly converted to \(\ce{Li^{+}}\) ion during discharge, have become attractive (though expensive) options in the throwaway dry cell market because they are very long lived and carry much more charge per unit mass than standard alkaline dry cells. The most common source of lithium is brines in which lithium has become concentrated by the leaching of rock and the evaporation of water from highland salt flats in South America and western China. Lithium salts are collected for processing by evaporating water from the brines in evaporation ponds. Bolivia holds some of the largest lithium resources, and there is significant potential for production in Chile, Argentina, Australia, China, and perhaps even the state of Nevada. As the use of lithium in batteries increases, recycled lithium from spent batteries will become an important source of the element.
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.04%3A_Lithium_The_First_Metal.txt
The first period of the periodic table is a short one consisting of only two elements, hydrogen and helium. Lithium, atomic number 3, begins the second period, which has 8 elements. Elements with atomic numbers 4–10, which complete this period, are discussed in this section.

Beryllium, Atomic Number 4

Like atoms of all the elements in the second period of the periodic table, beryllium, atomic number 4, atomic mass 9.012, has 2 inner shell electrons. Beryllium also has 2 outer shell electrons, so its Lewis symbol is Be with 2 dots. In addition to 4 protons, beryllium nuclei also contain 5 neutrons. When the beryllium atom is oxidized to form a beryllium cation, the reaction is

Be: → Be2+ + 2e- (lost to another atom) (3.5.1)

Since the beryllium atom needs to lose 2 electrons to reach the two-electron helium electron configuration, it produces doubly charged Be2+ cations. Beryllium has some important uses in metallurgy. Melted together with other metals, a process that produces alloys, beryllium yields metal products that are hard and corrosion-resistant. Beryllium alloys can be blended to give good electrical conductors that are nonsparking when struck, an important characteristic in applications around flammable vapors. Among the devices for which beryllium alloys are especially useful are various specialty springs, switches, and small electrical contacts. Beryllium has found widespread application in aircraft brake components, where its very high melting temperature (about 1290˚C) and good heat absorption and conduction properties are very advantageous.

In a sense, beryllium is somewhat the opposite of a green element. This is because of its adverse health effects, including berylliosis, a disease marked by lung deterioration. Because of the extreme inhalation hazard of Be, allowable atmospheric levels are very low. Many workers were occupationally exposed to beryllium as part of the nuclear reactor and weapons industry in the U.S. in the decades following World War II. In recognition of the adverse health effects of occupational exposure to beryllium, in the late 1990s the U.S. Government agreed to compensate workers suffering occupational exposure to this metal.

Boron, a Metalloid

Boron, B, atomic number 5, atomic mass 10.81, consists primarily of the isotope boron-11, with 6 neutrons in addition to 5 protons in its nucleus; a less common isotope, boron-10, has 5 neutrons. Two of boron's 5 electrons are in a helium core and 3 are outer electrons, as denoted by its Lewis symbol, B with 3 dots. Boron is the first example of an element with properties intermediate between those of metals and nonmetals, called metalloids. In addition to boron, the metalloids include silicon, germanium, arsenic, antimony, and tellurium, of which the most notable, silicon, is among the first 20 elements. In the elemental state, metalloids have a luster like metals, but they do not readily form simple cations. Unlike metals, which generally conduct electricity well, metalloids usually conduct electricity poorly, if at all, but can become conductors under certain conditions. Such materials are called semiconductors and are of crucial importance because they form the basis of the world's vast semiconductor industry, which has given us small, powerful computers and a huge array of other electronic products. Boron is a high-melting substance (2190˚C) that is alloyed with copper, aluminum, and steel metals to improve their properties.
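The atomic mass of boron quoted above (10.81) is simply the abundance-weighted average of its two isotopes. A minimal Python sketch; the isotopic masses and abundances below are standard reference values, not taken from the text:

# Sketch: atomic mass of boron as a weighted average of its isotopes.
isotopes = {
    "B-10": (10.013, 0.199),   # (isotopic mass in u, natural abundance)
    "B-11": (11.009, 0.801),
}

atomic_mass = sum(mass * abundance for mass, abundance in isotopes.values())
print(f"Boron atomic mass: {atomic_mass:.2f} u")   # ~10.81 u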
As a good absorber of neutrons, boron is used in nuclear reactor control rods and as a neutron-absorbing additive to the water that circulates through a reactor core as a heat transfer medium. Boron nitride, BN, is extraordinarily hard, as are some other compounds of boron. Boron oxide, B2O3, is an ingredient of heat-insulating fiber glass, and boric acid, H3BO3, is used as a flame retardant in cellulose insulation in houses.

The Element of Life, Carbon

Carbon, C, atomic number 6, brings us to the middle of the second period of the periodic table. In addition to its 2 inner electrons, the neutral carbon atom has 4 outer electrons, as shown by its Lewis symbol, C with 4 dots. The ability of carbon atoms to bond with each other determines the properties of the several important and useful forms of elemental carbon. (Different forms of the same element are called allotropes.) Very fine carbon powder composes carbon black, which is used in tires, inks, and printer toner. Carbon atoms bonded in large flat molecules compose graphite, so soft and slick that it is used as a lubricant. Carbon treated with steam or carbon dioxide at elevated temperatures develops pores that give the carbon an enormous surface area. This product is activated carbon, which is very useful in purifying foods, removing organic pollutants from water, and removing pollutant vapors from air. Elemental carbon fibers are bonded together with epoxy resins to produce light composites so strong that they are used for aircraft construction. Bonded together in a different way that gives a very hard and rigid structure, carbon atoms produce diamond.

A particularly interesting class of carbon allotropes is that of fullerenes, consisting of elemental carbon bonded in generally 5- and 6-membered rings to form spheres, ellipsoids, and tubes. The first of this class of elemental carbon, discovered only in 1985, consists of aggregates of 60 carbon atoms bonded together in 5- and 6-membered rings that compose the surface of a sphere. This structure resembles the geodesic domes designed as building structures by Buckminster Fuller, a visionary designer. Therefore, the discoverers of this form of carbon named it buckminsterfullerene, and the C60 balls, which resemble soccer balls in their structure, are commonly called "buckyballs." Since the discovery of the C60 fullerene, many related forms have been synthesized, of which the most interesting may be very narrow carbon tubes called carbon nanotubes. ("Nano" is a prefix commonly assigned to materials in which the individual units have dimensions around 1 nanometer, or 1 × 10⁻⁹ meters.) Carbon nanotubes have very interesting properties, including some forms with an extraordinarily great length-to-diameter ratio of up to 132,000,000:1. Because of their unique dimensions, extraordinary strength, electrical properties, and ability to efficiently conduct heat, carbon nanotubes are of intense interest in materials science, nanotechnology, electronics, optics, and other high technology applications. However, special attention needs to be given to their potential toxicity.

Green Carbon from the Air

Carbon is present in the air as gaseous carbon dioxide, CO2. Although air is only about 0.04% CO2 by volume, it serves as the source of carbon for the growth of green plants.
In so doing, the chlorophyll in plants captures solar energy in the form of visible light, represented as hν, and uses it to convert atmospheric carbon dioxide to high-energy glucose sugar, C6H12O6, as shown by the following reaction: $\ce{6CO2 + 6H2O ->[h\nu] C6H12O6 + 6O2} \label{3.5.2}$ (An atom-balance check of this reaction, and of the nitrogen-fixation reaction below, appears in the sketch at the end of this passage.) The carbon fixed in the form of C6H12O6 and related compounds provides the basis of the food chains that sustain all organisms. Organic carbon produced by photosynthesis in eons past also provided the raw material for the formation of petroleum, coal, and other fossil fuels. Now, as supplies of these scarce resources dwindle and as the environmental costs of their extraction, transport, and utilization mount, there is much renewed interest in photosynthetically produced carbon compounds as raw materials and even fuels. Despite the low levels of carbon dioxide in the atmosphere and the relatively low efficiency of photosynthesis, rapidly growing plants, such as some varieties of hybrid poplar trees, can produce enormous quantities of carbon compounds very rapidly and in a sustainable manner.

Nitrogen from the Air

Nitrogen, N, atomic number 7, atomic mass 14.01, composes 78% by volume of air in the form of diatomic N2 molecules. The nitrogen atom has 7 electrons, 2 contained in its inner shell and 5 in its outer shell, so its Lewis symbol is N with 5 dots. Nitrogen gas does not burn and is generally chemically unreactive. Advantage is taken of the extreme chemical stability of nitrogen gas in applications where a nonreactive gas is needed to prevent fires and explosions. Although almost 80% of the air that people breathe consists of elemental nitrogen gas, people have died of asphyxiation by entering areas filled with nitrogen gas in which oxygen is absent. Since nitrogen gas has no odor, it does not warn of its presence. Huge quantities of liquid nitrogen, which boils at a very cold -196˚C, are used in areas where cold temperatures are needed. This frigid liquid is employed to quick-freeze foods and for drying materials in freeze-drying processes. Biological materials, such as semen used in the artificial breeding of animals, can be preserved in liquid nitrogen.

The atmosphere is an inexhaustible reservoir of nitrogen. However, it is very difficult to get nitrogen into the chemically combined form in which it occurs in simple inorganic compounds or proteins. This is because of the extreme stability of the N2 molecule, mentioned above. The large-scale chemical fixation of atmospheric nitrogen over a catalytic surface at high temperatures and pressure, as represented by the reaction $\ce{N2 + 3H2 → 2NH3} \label{3.5.3}$ was a major accomplishment of the chemical industry about a century ago. It enabled the large-scale production of relatively cheap nitrogen fertilizers that resulted in highly increased crop production, as well as the manufacture of enormous quantities of nitrogen-based explosives that made possible the unprecedented carnage of World War I. Despite the extreme conditions required for the preparation of nitrogen compounds by humans in the anthrosphere, humble bacteria accomplish the same thing under ambient conditions of temperature and pressure, converting N2 from the air into organically bound nitrogen in biomass. Prominent among the bacteria that do this are Rhizobium bacteria that grow symbiotically on the roots of legume plants, fixing atmospheric nitrogen that the plants need and drawing nutrients from the plants.
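As promised above, both displayed reactions can be verified as balanced by counting atoms on each side. A minimal Python sketch; the formulas are entered as explicit atom-count dictionaries to keep the example short:

# Sketch: check atom balance for the photosynthesis and nitrogen-fixation
# reactions given in the text.
from collections import Counter

def side_atoms(species):   # species: list of (coefficient, {element: count})
    total = Counter()
    for coeff, atoms in species:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

CO2, H2O = {"C": 1, "O": 2}, {"H": 2, "O": 1}
glucose, O2 = {"C": 6, "H": 12, "O": 6}, {"O": 2}
N2, H2, NH3 = {"N": 2}, {"H": 2}, {"N": 1, "H": 3}

# 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
assert side_atoms([(6, CO2), (6, H2O)]) == side_atoms([(1, glucose), (6, O2)])
# N2 + 3 H2 -> 2 NH3
assert side_atoms([(1, N2), (3, H2)]) == side_atoms([(2, NH3)])
print("Both reactions are balanced.")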
Because of this nitrogen-fixing ability, legumes such as soybeans and clover grow well with less artificial nitrogen fertilizer than that required by other plants. One of the exciting possibilities with genetically modified plants is the potential to develop nitrogen-fixing varieties of corn, wheat, rice, and other crops that now lack the capability to fix nitrogen. Nitrogen is an essential life element that is present in all proteins, hemoglobin, chlorophyll, enzymes, and other life molecules. It circulates through nature in the nitrogen cycle, by which elemental nitrogen is incorporated from the atmosphere into biological material. Nitrogen-containing biomass is converted during biodegradation by bacteria to inorganic forms, which may be utilized as nutrient nitrogen by plants. Eventually, bacterial processes convert the nitrogen back to elemental N2, which is returned to the atmosphere to complete the cycle.

Oxygen, the Breath of Life

Oxygen, atomic number 8, atomic mass 16.00, is required by humans and many other living organisms. A diatomic nonmetal, elemental oxygen consists of O2 molecules and makes up 21% of the volume of air. Of its 8 electrons, the oxygen atom has 6 in the outer shell, as represented by its Lewis symbol, O with 6 dots. Oxygen can certainly be classified as a green element for a number of reasons, not the least of which is that O2 in the atmosphere is there for the taking. Elemental oxygen is transferred from the atmosphere to the anthrosphere by liquefying air and distilling the liquid air, the same process that enables isolation of pure nitrogen. These two gases are also separated from air by adsorption onto solid surfaces under pressure followed by removal under vacuum. Pure oxygen has a number of applications, including use as a gas to breathe by people with lung deficiencies, in chemical synthesis, and in oxyacetylene torches employed for welding and cutting metals.

Although the elemental oxygen molecule is rather stable, at altitudes of many kilometers in the stratosphere it is broken down to oxygen atoms by the absorption of ultraviolet radiation from the sun, as shown in Chapter 2, Reaction 2.13.1. As illustrated by Reaction 2.13.2, the oxygen atoms formed by the photochemical dissociation of O2 combine with O2 molecules to produce molecules of ozone, O3. The result is a layer of highly rarefied air containing some ozone over an altitude range of many kilometers located high in the stratosphere. There is not really much ozone in this layer. If it were pure ozone under the conditions of pressure and temperature that occur at ground level, the ozone layer would be only about 3 millimeters thick! This stratospheric ozone, sparse though it is, serves an essential function in protecting organisms on Earth's surface from the devastating effects of ultraviolet radiation from the sun. Were it not for stratospheric ozone, life as it is now known could not exist on Earth.

Ozone has a split personality as a green form of oxygen. As discussed above, ozone in the stratosphere is clearly beneficial and essential for life. But it is toxic to inhale even at levels of less than one part per million by volume. Ozone is probably the most harmful constituent of air polluted by the formation of photochemical smog in the atmosphere at ground levels. The most notable chemical characteristic of oxygen is its ability to combine with other materials in energy-yielding reactions.
One such reaction with which most people are familiar is the burning of gasoline in an automobile, $\ce{2C8H18 + 25O2 → 16CO2 + 18H2O + energy } \label{3.5.4}$ performed so efficiently that the combustion of only 1 gallon of gasoline can propel a full-size automobile more than 25 miles at highway speeds. Along with many other organisms, we use oxygen in our bodies to react with nutrient carbohydrates, $\ce{C6H12O6 (glucose) + 6O2 → 6CO2 + 6H2O + energy } \label{3.5.5}$ to provide energy that we use. Whereas the combustion of a fuel such as gasoline occurs at red-hot temperatures, the "burning" of carbohydrates in our body occurs through the action of enzymes at a body temperature of only 37˚C.

The Most Nonmetallic Element, Fluorine

Fluorine, F, atomic number 9, atomic mass 19.00, has 7 outer electrons, as shown by its Lewis symbol, F with 7 dots. Elemental fluorine consists of diatomic F2 molecules constituting a greenish-yellow gas. Fluorine is the most nonmetallic of all the elements. It reacts violently with metals, organic matter, and even glass! Elemental fluorine is a very corrosive poison that attacks flesh and forms wounds that heal very poorly. Because of its hazards, the practice of green chemistry seeks to minimize the generation or use of F2. Fluorine is used in chemical synthesis. It was once widely employed to make Freons, chlorofluorocarbon compounds such as dichlorodifluoromethane, Cl2CF2, that were used as refrigerant fluids, spray can propellants, and plastic foam blowing agents. As discussed in Chapter 10, these compounds were found to be a threat to the vital stratospheric ozone layer mentioned in the discussion of oxygen above. They have now been replaced with fluorine-containing substitutes such as HFC-134a, CH2FCF3, which either do not contain the chlorine (Cl) that destroys stratospheric ozone or undergo destruction by atmospheric chemical processes near Earth's surface, and thus never reach the stratosphere.
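The octane combustion equation above also supports a rough estimate of how much CO2 a gallon of gasoline produces. A minimal Python sketch; the gasoline density (about 0.74 kg/L) and the pure-octane simplification are assumptions made here for illustration, not figures from the text:

# Sketch: approximate CO2 from burning one gallon of gasoline,
# treating gasoline as pure octane (an assumption).
GALLON_L = 3.785    # liters per US gallon
DENSITY = 0.74      # kg/L, typical gasoline (assumed)
M_OCTANE = 114.23   # g/mol, C8H18
M_CO2 = 44.01       # g/mol

mass_octane_g = GALLON_L * DENSITY * 1000
mol_octane = mass_octane_g / M_OCTANE
mol_co2 = 8 * mol_octane   # 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
print(f"CO2 per gallon: {mol_co2 * M_CO2 / 1000:.1f} kg")   # roughly 8-9 kg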
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.05%3A_The_Second_Period_of_the_Periodic_Table.txt
Only one element remains to complete the second period of the abbreviated periodic table. This element is neon, atomic number 10, atomic mass 20.18. Although most neon atoms have 10 neutrons in addition to the 10 protons in their nuclei, some have 12 neutrons, and very few have 11. Neon is a gas consisting of individual Ne atoms that constitutes about 2 thousandths of a percent of the volume of air. Neon is recovered by distillation of liquid air. Its most common use is as a gas in tubes through which an electrical discharge is carried in glowing neon signs. The total of 10 electrons in the neon atom is contained in two shells, with 8 in the outer shell, so the Lewis symbol of neon is Ne surrounded by 8 dots. With 8 electrons, the second shell of electrons is a filled electron shell. (Recall that helium has a filled electron shell with only 2 electrons.) Because it has a filled outer electron shell, neon is a noble gas that exists as single gas-phase atoms and forms no chemical compounds.

The Special Significance of the Octet

In addition to helium and neon, there are four other noble gas elements. These are argon (atomic number 18), krypton (atomic number 36), xenon (atomic number 54), and radon (atomic number 86). Other than helium, these all share a common characteristic of 8 outer-shell electrons. Such an electron configuration, shown by a general Lewis symbol in which the chemical symbol X of the noble gas is surrounded by 8 dots, is known as an octet of electrons. Although only atoms of noble gases have octets as single atoms, many other atoms acquire octets by forming chemical bonds. This tendency of atoms to acquire stable octets through chemical bonding is the basis of the octet rule, which is used in this book to predict and explain the chemical bonding in a number of compounds, such as those discussed in Chapter 4. To see a simple application of the octet rule in chemical bonding, consider the bonds involved in molecular elemental nitrogen, N2. Recall the Lewis symbol of the N atom showing 5 dots to represent the 5 outer-shell electrons in each N atom. Figure 3.6 shows bond formation between 2 N atoms. Each of the two N atoms in the N2 molecule needs 8 outer-shell electrons, but only 10 electrons are available to provide them (the two inner-shell electrons in each N atom are not available to form bonds). This means that many of the electrons have to be shared to form a bond between the N atoms in the N2 molecule, and, in fact, 6 of the 10 outer-shell electrons available in two N atoms are shared between the N atoms to give a triple bond consisting of 3 pairs of shared electrons, as shown by the 3 pairs of dots between the 2 N atoms in the molecule of N2 in Figure 3.6.

3.07: Completing the 20-Element Periodic Table

So far in this chapter 10 elements have been defined and discussed. A total of 10 more are required to complete the 20-element abbreviated periodic table. Their names and properties are summarized briefly here. The periodic table is given in Figure 3.9. Sodium, Na, atomic number 11, atomic mass 22.99, comes directly below lithium in the periodic table and is very similar to lithium in being a soft, chemically very reactive metal. There is one major isotope of sodium, containing 12 neutrons in the atom's nucleus. Sodium has 10 inner-shell electrons contained in its first inner shell of 2 electrons and its second one of 8 electrons. The 11th electron in the sodium atom is in a third shell, which is an outer shell. This is shown as a single dot in the Lewis symbol of Na in Figure 3.7.
The electrons in sodium can be represented as shown in Figure 3.7. Magnesium, Mg, atomic number 12, atomic mass 24.31, has 12 electrons per atom, 2 of which are outer shell electrons. There are three isotopes of magnesium, containing 12, 13, and 14 neutrons. Magnesium is a relatively strong, very lightweight metal that is used in aircraft, extension ladders, portable tools, and other applications where light weight is particularly important.

Aluminum, Al, atomic number 13, atomic mass 26.98, has 3 outer-shell electrons in addition to its 10 inner electrons. Aluminum is a lightweight metal used in aircraft, automobiles, electrical transmission lines, building construction, and many other applications. Although it is chemically reactive, the oxide coating formed when aluminum at the surface of the metal reacts with oxygen in air is self-protecting and prevents further corrosion. In some important respects aluminum can be regarded as a green metal. This is because aluminum enables construction of strong lightweight components which, when used in aircraft and automobiles, require relatively less energy to move. So aluminum is important in energy conservation. Aluminum cables also provide an efficient way to transmit electricity. Although the ores from which aluminum is made are an extractive resource dug from the earth, aluminum is an abundant element. And there are alternative resources that can be developed, including aluminum in the fly ash left over from coal combustion. Furthermore, aluminum is one of the most recyclable metals, and scrap aluminum is readily melted down and cast into new aluminum goods.

If an "element of the century" were to be named for the 1900s, humble silicon, Si, atomic number 14, atomic mass 28.09, would be a likely candidate. This is because silicon is the most commonly used of the semiconductor elements and during the latter 1900s provided the basis for the explosion in electronics and computers based upon semiconductor devices composed primarily of silicon. Despite the value of these silicon-based products, silicon is abundant in soil and rocks, ranking second behind oxygen as a constituent of Earth's crust. The silicon atom has 4 outer-shell electrons, half an octet, and it is a metalloid, intermediate in behavior between the metals on the left of the periodic table and the nonmetals on the right. By vastly reducing the bulk of electronic components relative to performance, silicon has contributed to a huge saving of materials used in radios, televisions, communications equipment, and other electronic devices. Furthermore, the silicon-based semiconductor devices used in solid-state electronics consume only a fraction of the electricity once used by vacuum tube based devices. The bulky wires made of relatively scarce copper formerly employed for transmitting communications signals electrically have been largely replaced by fiber optic devices consisting of transparent silica, SiO2, which transfer information as pulses of light. A hair-like optical fiber can transmit many times the amount of information per unit time as can the thick copper wire it replaces. And the energy required for transmission of a unit of information by a fiber optic cable is minuscule compared to that required to send the same information by electrical impulse over copper wire. So silicon is truly a green element that, although cheap and abundant, performs electronic and communications functions much faster and better than the copper and other metals that it has replaced.
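The lose-or-gain rule running through these element sketches (metals shed their 1 to 3 outer electrons; near-octet nonmetals gain electrons) can be condensed into a tiny predictor of common ion charges. A minimal Python sketch; the outer-electron table and function name are illustrative, not from the text:

# Sketch: predict the common ion formed by a main-group atom among the
# first 20 elements from its outer-shell electron count.
OUTER = {"Na": 1, "Mg": 2, "Al": 3, "K": 1, "Ca": 2, "F": 7, "Cl": 7}

def common_ion(symbol: str) -> str:
    outer = OUTER[symbol]
    # Metals (1-3 outer electrons) lose them; halogens gain to reach an octet.
    n, sign = (outer, "+") if outer <= 3 else (8 - outer, "-")
    return f"{symbol}{n if n > 1 else ''}{sign}"

for element in ("Na", "Mg", "Al", "Cl"):
    print(element, "->", common_ion(element))   # Na+, Mg2+, Al3+, Cl-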
Phosphorus, P, atomic number 15, atomic mass 30.97, has 5 outer-shell electrons, so it is directly below nitrogen in the periodic table and resembles nitrogen in its chemical behavior. Pure elemental phosphorus occurs in several forms, the most abundant of which is white phosphorus. White phosphorus is a chemically very reactive nonmetal that may catch fire spontaneously in the atmosphere. It is toxic and causes deterioration of bone. The jawbone is especially susceptible to the effects of phosphorus and develops a condition known as "phossy jaw," in which the bone becomes porous and weak and may break from the strain of chewing. Chemically combined phosphorus is an essential life element, however, and is one of the components of DNA, the basic molecule that directs molecular life processes. Phosphorus is also an essential plant fertilizer and is an ingredient of many industrial chemicals, including some pesticides. Arsenic is in the same group of the periodic table as phosphorus and occurs as an impurity with phosphorus processed from ore. If this phosphorus is to be used for food, the arsenic has to be removed.

Sulfur, S, atomic number 16, atomic mass 32.06, has 6 outer-shell electrons. It is a brittle, generally yellow nonmetal. It is an essential nutrient for plants and animals, occurring in the amino acids that compose proteins. Sulfur is a common air pollutant emitted as sulfur dioxide, SO2, in the combustion of fossil fuels that contain sulfur. Much of the large quantity of sulfur required for the industrial production of sulfuric acid and other sulfur-containing chemicals is reclaimed from the hydrogen sulfide, H2S, that contaminates much of the natural gas (methane, CH4) that is such an important fuel and raw material in the world today. In keeping with the best practice of green chemistry, the hydrogen sulfide is separated from the raw natural gas and about 1/3 of it is burned, $\ce{2H2S + 3O2 → 2SO2 + 2H2O} \label{3.7.1}$ producing sulfur dioxide, SO2. The sulfur dioxide produced is then reacted with the remaining hydrogen sulfide through the Claus reaction, below, yielding an elemental sulfur product that is used to synthesize sulfuric acid and other sulfur chemicals. $\ce{2H2S + SO2 → 3S + 2H2O } \label{3.7.2}$ (A quick mole check of this two-step split appears in the sketch at the end of this passage.)

Chlorine, Cl, atomic number 17, atomic mass 35.453, has 7 outer-shell electrons, just 1 electron short of a full octet. Elemental chlorine is a greenish-yellow diatomic gas consisting of Cl2 molecules. In these molecules the Cl atoms attain stable octets of outer-shell electrons by sharing two electrons in a covalent bond, as illustrated in Figure 3.8. The chlorine atom can also accept an electron to attain a stable octet in the Cl- anion, as shown in the ionic compound sodium chloride, NaCl, in Figure 3.8. Elemental chlorine can be deadly when inhaled and was the first military poison gas used in World War I. Despite its toxic nature, chlorine gas has saved many lives because of its use for approximately the last 100 years as a drinking water disinfectant that has eradicated deadly water-borne diseases, such as cholera and typhoid. Chlorine is an important industrial chemical that is used to make plastics and solvents. There is no possibility of a shortage of chlorine, and it can even be made by passing an electrical current through seawater, which contains chlorine as dissolved sodium chloride. The green aspects of chlorine depend upon its application.
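Here is the mole check promised above for the Claus-process split: burning 1/3 of the H2S feed supplies exactly the SO2 needed to convert the remaining 2/3 to elemental sulfur. A minimal Python sketch; the 3.0 mol feed is an arbitrary illustration:

# Sketch: Claus-process mole bookkeeping for an assumed H2S feed.
h2s_feed = 3.0
burned = h2s_feed / 3              # 2 H2S + 3 O2 -> 2 SO2 + 2 H2O
so2 = burned                       # 1 mol SO2 per mol H2S burned
remaining_h2s = h2s_feed - burned
# Claus reaction: 2 H2S + SO2 -> 3 S + 2 H2O
sulfur = 3 * min(remaining_h2s / 2, so2)
print(f"{sulfur:.1f} mol S recovered from {h2s_feed:.1f} mol H2S")   # 3.0 mol S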
To return to chlorine: elemental chlorine is certainly a dangerous material whose manufacture and use are generally to be avoided where possible in the practice of green chemistry. But, as noted above, elemental chlorine has saved many lives because of its uses to disinfect water. A number of persistent pesticides, including DDT, are organic compounds composed of chlorine along with carbon and hydrogen. In addition to the ecological damage done by these pesticides, the waste byproducts of their manufacture and of the production of other organochlorine compounds are among the most abundant contaminants of troublesome hazardous waste chemical dumps. A common plastic, polyvinyl chloride (PVC), contains chlorine. This plastic is widely used in water pipe and drain pipe, in the former application replacing relatively scarce and expensive copper metal and toxic lead. But the material used to make PVC is volatile vinyl chloride. It is one of the few known human carcinogens, having caused documented cases of a rare form of liver cancer in workers formerly exposed to very high levels of vinyl chloride vapor in the workplace. Because of the dangers of elemental chlorine and the problems caused by organochlorine compounds, the practice of green chemistry certainly tries to minimize the production and use of elemental chlorine and generally attempts to minimize the production of organochlorine compounds and their dispersion in the environment.

Element number 18, argon, Ar, atomic mass 39.95, brings us to the end of the third period of the abbreviated periodic table. It has a complete octet of outer-shell electrons and is a noble gas. No true chemical compounds of argon have been isolated, and no chemical bonds involving this element were known until the formation of a very unstable transient bond involving Ar atoms was reported in September, 2000. Argon composes about 1% by volume of the atmosphere. Largely because of its chemically inert nature, argon has some uses. It is employed as a gas to fill incandescent light bulbs. In this respect it helps prevent evaporation of white-hot tungsten atoms from the glowing lamp filament, thus significantly extending bulb life. It is also used as a plasma medium in instruments employed for inductively coupled plasma atomic emission analysis of elemental pollutants. In this application a radiofrequency signal is used to convert the argon to a gaseous plasma that contains positively charged Ar+ ions and negatively charged electrons and is heated to an incredibly hot 10,000˚C.

Completing the Periodic Table

The next element to be added to the abbreviated periodic table is element number 19. This begins the fourth period of the periodic table. This period actually contains 18 elements, but we will take it only as far as the first two. That is because element number 21 is the first of the transition metals, and to explain their placement in the periodic table on the basis of the electrons in them gets a little more complicated and involved than is appropriate for this book. The reader needing more details is referred to other standard books on beginning chemistry.2,3 The element with atomic number 19 is potassium, K, having an atomic mass of 39.10. Most potassium consists of the isotope with 20 neutrons, potassium-39. However, a small fraction of naturally occurring K is in the form of potassium-40. This is a radioactive isotope of potassium, and since we all have potassium (an essential element for life) in our bodies, we all are naturally radioactive!
Muscle mass contains more potassium than adipose (fat) tissue, so more muscular people are more radioactive. But not to worry: the levels of radioactivity from potassium in the body are too low to cause concern and, under any circumstances, cannot be avoided. (One proponent of nuclear energy has pointed out that sleeping with a muscular person exposes one to more radioactivity than does living close to a nuclear power reactor.) The same things that can be said of sodium, element number 11, are generally true of potassium. In the pure elemental state, potassium is a very reactive alkali metal. As an essential element for life, it is a common fertilizer added to soil to make crops grow well. Chemically, potassium loses its single outer-shell electron to produce the K+ ion.

Calcium, Ca, atomic number 20, atomic mass 40.08, has 2 outer-shell electrons, two beyond a full octet. The calcium atom readily loses its 2 "extra" electrons to produce the Ca2+ cation. Like other elements in its group in the periodic table, calcium is an alkaline earth metal. Elemental calcium metal is chemically reactive, though not so much so as potassium. Calcium has chemical properties very similar to those of magnesium, the alkaline earth metal directly above calcium in the periodic table. Calcium is essential for life, although most soils contain sufficient calcium to support optimum crop growth. Calcium is very important in our own bodies because, as the hard mineral hydroxyapatite, Ca5OH(PO4)3, it is the hard material in teeth and bones. Calcium deficiency can cause formation of poor teeth and the development of disabling osteoporosis, a condition characterized by weak bones that is especially likely to afflict older women.
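The claim that bodily potassium makes people measurably radioactive can be put on an order-of-magnitude footing with the decay law A = λN. A minimal Python sketch; all numbers are standard reference or typical values assumed for illustration (about 140 g of potassium in an adult, K-40 natural abundance 0.0117%, half-life 1.25 billion years), not figures from the text:

# Sketch: rough activity from potassium-40 in an adult body.
import math

K_IN_BODY_G = 140.0               # typical potassium content (assumed)
K40_ABUNDANCE = 1.17e-4           # fraction of natural K that is K-40
HALF_LIFE_S = 1.25e9 * 3.156e7    # years converted to seconds
AVOGADRO = 6.022e23
M_K40 = 40.0                      # g/mol (approximate)

n_atoms = K_IN_BODY_G * K40_ABUNDANCE / M_K40 * AVOGADRO
decay_const = math.log(2) / HALF_LIFE_S
activity = decay_const * n_atoms  # decays per second (becquerels)
print(f"~{activity:.0f} Bq from K-40")   # a few thousand Bq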
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.06%3A_The_Magic_Octet_of_8_Outer-Shell_Electrons.txt
With element number 20, all of the elements required for the abbreviated periodic table have been described. As noted above, the placement of electrons in elements with atomic number 21 and higher is a little too complicated to explain here. However, these elements are important and they are all shown in the complete periodic table at the end of this chapter. Among the heavier elements in the complete periodic table are the transition metals, including the important metals chromium, manganese, iron, cobalt, nickel, and copper. Also included are the lanthanides and the actinides. Among these elements are thorium, uranium, and plutonium, which are important in nuclear energy and nuclear weaponry. The abbreviated periodic table with the first 20 elements is illustrated in Figure 3.9. In addition to atomic number and atomic mass, this table shows the Lewis symbol of each element. It is seen that the symbols of the elements in the same vertical columns have the same number of dots showing identical configurations for their outer-shell electrons. This very simple, brief table contains much useful information, and it is recommended that the reader become familiar with it and be able to reproduce the Lewis symbols for each of the 20 elements. As will be seen in later chapters, the chemistry of the first 20 elements tends to be straightforward and easily related to the atomic structures of these elements.

In examining the periodic table, hydrogen should be regarded as having unique properties and not belonging to a specific group. Otherwise, the elements in vertical columns belong to groups with similar chemical properties. Excluding hydrogen, the elements in the first group on the left of the table — lithium, sodium, and potassium — are alkali metals. In the elemental state alkali metals have a very low density and are so soft that they can be cut with a knife. Freshly cut, an alkali metal surface has a silvery-white color which almost instantaneously turns to a coating of gray metal oxide with exposure to air. The alkali metals (represented by M, below) react violently with water,

2M + 2H2O → 2MOH + H2 (3.5.4)

to produce the metal hydroxides, strongly basic substances that can be very destructive to flesh that they contact. The alkali metals react with elemental chlorine to produce the ionic chloride salts including, in addition to NaCl shown in Figure 3.8, LiCl and KCl.

The second group of the abbreviated periodic table contains beryllium, magnesium, and calcium, all known as alkaline earth metals. Freshly exposed surfaces of these metals have a grayish-white luster. These metals are highly reactive to form doubly charged cations (Be2+, Mg2+, Ca2+) by the loss of 2 electrons per atom.

The second group from the right, which in the abbreviated periodic table consists of fluorine and chlorine, is known as the halogens. These elemental halogens are diatomic gases in which the two atoms of F2 or Cl2 are held together by a single covalent bond consisting of two shared electrons. These elements are the most nonmetallic of the elements. Rather than losing electrons to produce positively charged cations, as is common with metals, the halogens readily gain electrons to complete their outer shell electron octets, producing F- and Cl- anions.

The far right group of the abbreviated periodic table is composed of the noble gases, helium, neon, and argon. These elements have complete outer shells, exhibit no tendency to enter into chemical bonds, and consist of individual gas-phase atoms.
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.08%3A_The_Brief_Periodic_Table_is_Complete.txt
Access to and use of the internet is assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text. So in some cases there may be several "right" answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, your answer can be considered to be "right."

1. Match the law or observation denoted by letters below with the portion of Dalton's atomic theory that explains it denoted by numbers:
A. Law of Conservation of Mass
B. Law of Constant Composition
C. The reaction of C with O2 does not produce SO2
D. Law of Multiple Proportions
1. Chemical compounds are formed by the combination of atoms of different elements in definite constant ratios that usually can be expressed as integers or simple fractions
2. During the course of ordinary chemical reactions, atoms are not created or destroyed
3. During the course of ordinary chemical reactions, atoms are not changed to atoms of other elements
4. Illustrated by groups of compounds such as CHCl3, CH2Cl2, or CH3Cl

2. Explain why it is incorrect to say that atomic mass is the mass of any atom of an element. How is atomic mass defined?

3. Define what is meant by the notation ? What do y, x, and A mean?

4. What is the Lewis symbol of hydrogen and what does it show? What is the Lewis formula of H2 and what does it show?

5. Why should hydrogen be considered in a separate category of the periodic table?

6. Consider the Lewis symbol of helium and explain how the helium atom illustrates the concepts of electron shell, filled electron shell, and noble gases.

7. What does helium have to do with cryogenics?

8. Use three dots to show all the electrons in the lithium atom, Li. What does this show about inner and outer electrons and why Li produces the Li+ cation?

9. In what respect may it be argued that beryllium is definitely not a green element?

10. What are two elemental oxygen species other than molecular O2 found at very high altitudes in the stratosphere? How do they get there?

11. In what respects may carbon be classified as the "element of life"?

12. How is a specific kind of fluorine compound related to stratospheric ozone? What does this have to do with green chemistry?

13. How does neon illustrate important points about the octet and the octet rule?

14. Of the following, the untrue statement pertaining to matter, atoms, and elements is
A. All matter is composed of only about a hundred fundamental kinds of matter called elements.
B. Each element is made up of very small entities called atoms
C. All atoms of the same element have the same number of protons and neutrons and the same mass
D. All atoms of the same element behave identically chemically
E. All atoms of the same element have the same number of protons

15. Given that in the periodic table elements with atomic numbers 2, 10, and 18 are gases that do not undergo chemical reactions and consist of individual molecules, of the following, the statement most likely to be true is
A. Elements with atomic numbers 3, 11, and 19 are also likely to be gases.
B. Elements with atomic numbers 3, 11, and 19 would not undergo chemical reactions.
C. Elements with atomic numbers 10 and 18 would be at opposite ends of the table.
D. The element with atomic number of 11 may well be a highly reactive metal.
E. Elements with atomic numbers 3, 11, and 19 would have chemical properties that are much different from each other.

16. The two atoms represented below
A. Are of different elements.
B. Are atoms of deuterium, a form of hydrogen, H.
C. Are of the same element.
D. Are not isotopes of the same element.
E. Are of two elements with atomic numbers 6 and 7.

17. Of the following, the statement that is untrue regarding chemical bonding and compounds is
A. Chemical bonds occur only in compounds, not in pure elements.
B. Molecules of H2 are held together by bonds consisting of shared electrons.
C. Ionic compounds consist of charged atoms or groups of atoms.
D. Both pure elemental hydrogen, H2, and the compound water, H2O, have covalent bonds.
E. An atom that has more electrons than protons is an anion.

18. Suggest a material that is a source of electrons in a fuel cell used to generate electricity. What may accept electrons?

19. What are semiconductors? What is the most important semiconductor discussed in this chapter?

20. What is the most notable chemical characteristic of elemental oxygen?

21. What are some reasons that aluminum can be regarded as a green metal?

22. What are some of the toxic elements or elemental forms among the first 20 elements?

23. What is a common air pollutant gas that contains sulfur?

24. Why does the abbreviated periodic table stop at atomic number 20?

25. Suggest why calcium might be particularly important in the diet of (A) children and (B) older people.

26. Which elements among the first 20 are commonly present in fertilizers used to enhance the growth of food crops?

27. What is the special significance of the carbon isotope with 6 neutrons in its nucleus?

28. What is the single exception to the rule that all atoms contain at least 1 neutron?

29. What is the single exception to the rule that noble gases contain stable octets of electrons?

30. What is the outer-shell electron configuration of metals? What does this have to do with their chemical behavior?

31. What is it about the carbon atom that enables millions of organic compounds to exist?

32. What are some of the forms of elemental carbon and their uses? Which of these was discovered only relatively recently?

33. What is the major chemical characteristic of elemental nitrogen? What is a major advantage afforded by this characteristic? In what respect is this a problem?

34. What are two applications that elemental magnesium and aluminum have in common?

35. How do copper and silica differ in the way that they transfer communications signals?

36. Using the octet rule, propose a Lewis formula for O2.

3.R: References

1. Manahan, Stanley E., Fundamentals of Sustainable Chemical Science, Taylor & Francis/CRC Press, Boca Raton, FL, 2009.
2. Brown, Theodore L., H. Eugene Lemay, Bruce E. Bursten, Catherine J. Murphy, and Patrick Woodward, Chemistry: The Central Science, 11th ed., Prentice Hall, Upper Saddle River, NJ, 2008.
3. Chang, Raymond, Chemistry, 10th ed., McGraw-Hill, New York, 2009.

SUPPLEMENTARY REFERENCES

Brown, Lawrence S., and Thomas A. Holme, Chemistry for Engineering Students, 2nd ed., Brooks/Cole Cengage Learning, Belmont, CA, 2011.
Denniston, Katherine J., Joseph J. Topping, and Robert L. Caret, General, Organic, and Biochemistry, 7th ed., McGraw-Hill, New York, 2011.
Guinn, Denise, and Rebecca Brewer, Essentials of General, Organic, and Biochemistry: An Integrated Approach, W. H. Freeman and Co., New York, 2009.
Hill, John W., and Doris Kolb, Chemistry for Changing Times, 11th ed., Prentice Hall, Upper Saddle River, NJ, 2006.
McMurry, John, David S. Ballantine, Carl A. Hoeger, Virginia E. Peterson, and Mary E. Castellion, Fundamentals of General, Organic, and Biological Chemistry, 6th ed., Prentice-Hall, Upper Saddle River, NJ, 2009.
Mickulecky, Peter J., Katherine Brutlag, Michelle Gilman, and Brian Peterson, Chemistry Workbook for Dummies, Wiley, Hoboken, NJ, 2008.
Middlecamp, Catherine, Steve Keller, Karen Anderson, Anne Bentley, Michael Cann, Chemistry in Context, 7th ed., McGraw-Hill, Dubuque, IA, 2011.
Moore, John W., Conrad L. Stanitski, and Peter C. Jurs, Chemistry: The Molecular Science, 4th ed., Brooks/Cole Cengage Learning, Belmont, CA, 2011.
Smith, Janice G., Principles of General, Organic, and Biochemistry, McGraw-Hill, New York, 2011.
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/03%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals/3.E%3A_The_Elements_-_Basic_Building_Blocks_of_Green_Chemicals_%28E.txt
“The millions of known chemical compounds vary tremendously in their properties. For example, some are so unstable that just touching them with a steel spatula will cause them to explode spontaneously. Others are so stable and persistent that they last practically forever in the environment”

04: Compounds- Safer Materials for a Safer World

Chemical compounds consist of molecules or aggregates of ions composed of two or more elements held together by chemical bonds. Several examples of chemical compounds, including water (H2O), ammonia (NH3), and sodium chloride (NaCl), were given in earlier chapters. This chapter addresses chemical compounds in more detail, including aspects of their green chemistry. A crucial aspect of chemical compounds is the kinds of bonds that hold them together. As noted earlier, these may be covalent bonds composed of shared electrons or ionic bonds consisting of positively charged cations and negatively charged anions. The strengths of these bonds vary and are important in determining compound behavior. For example, chlorofluorocarbons, such as dichlorodifluoromethane, Cl2CF2, are so stable that they persist in the atmosphere and do not break down until reaching very high altitudes in the stratosphere, where the release of chlorine atoms destroys stratospheric ozone. The extreme stabilities of the chlorofluorocarbons are due to the very high strengths of the C-Cl and C-F bonds by which chlorine and fluorine are bonded to a central carbon atom. The proper practice of green chemistry requires that substances that get released to the environment break down readily. Since Cl2CF2 is so stable when released to the atmosphere, it cannot be regarded as being a very good green chemical.

Another important aspect of the way in which chemical compounds are put together is molecular structure, which refers to the shape of molecules. Consider the Cl2CF2 compound just mentioned, in which the Cl and F atoms are bonded to a single carbon atom. To represent this molecule as a flat structure is not totally correct, because not all of the 5 atoms in the compound lie in the same plane. Instead, the F and Cl atoms can be visualized as being distributed as far apart as possible in three dimensions around a sphere, at the center of which is the C atom. This can be represented by a perspective drawing in which the two Cl atoms are visualized as being above the plane of the book page toward the reader and the two F atoms are represented as being below the plane of the page away from the reader. The shape of molecules is very important in determining the ways in which they interact with other molecules. For example, the molecules of enzymes that enable metabolism to occur in living organisms recognize the substrate molecules upon which they act by their complementary shapes.

What Are Green Chemical Compounds?

Chemical compounds vary markedly in the degree to which they are "green." Dichlorodifluoromethane, Cl2CF2, the chlorofluorocarbon discussed above, is definitely not green. That is not because it is toxic (it is one of the least toxic synthetic compounds known) but because it is so extremely stable and persistent in the atmosphere and can cause stratospheric ozone destruction. The compounds that have replaced it, the hydrofluorocarbons and hydrochlorofluorocarbons, are much more green because they do not last long when released in the atmosphere or do not contain ozone-damaging chlorine. There are several characteristics of compounds that meet the criteria of being green.
These are the following:
• Preparation from renewable or readily available resources by environmentally friendly processes
• Low tendency to undergo sudden, violent, unpredictable reactions such as explosions that may cause damage, injure personnel, or cause release of chemicals and byproducts to the environment
• Nonflammable or poorly flammable
• Low toxicity
• Absence of toxic or environmentally dangerous constituents, particularly heavy metals
• Facile degradability, especially biodegradability, in the environment
• Low tendency to undergo bioaccumulation in food chains in the environment

An example of a green compound is sodium stearate, common hand soap. This common substance is prepared by reacting byproduct animal fat with sodium hydroxide, which is prepared by passing an electrical current through saltwater. Flushed down the drain, sodium stearate reacts with calcium in water to form a solid, calcium stearate, the white solid that composes "bathtub ring." This removes the soap from the water, and the nontoxic calcium stearate readily undergoes biodegradation so that it does not persist in the environment.
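The criteria above can be read as a rough screening checklist. A minimal Python sketch of that idea; the criterion wording and the example answers are illustrative only, and a real green-chemistry assessment is far more involved:

# Sketch: a toy checklist based on the green-compound criteria listed above.
GREEN_CRITERIA = (
    "renewable or readily available feedstock",
    "low risk of violent or unpredictable reaction",
    "nonflammable or poorly flammable",
    "low toxicity",
    "no heavy metals or dangerous constituents",
    "readily (bio)degradable",
    "low bioaccumulation tendency",
)

def screen(name: str, answers: dict) -> None:
    met = [c for c in GREEN_CRITERIA if answers.get(c, False)]
    print(f"{name}: meets {len(met)}/{len(GREEN_CRITERIA)} criteria")

# Sodium stearate (hand soap), as sketched in the text, fares well:
screen("sodium stearate", {c: True for c in GREEN_CRITERIA})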
textbooks/chem/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/04%3A_Compounds-_Safer_Materials_for_a_Safer_World/4.01%3A_Chemical_Bonds_and_Compound_Formation.txt
The electrons in the outermost shell of atoms are those that become involved in chemical bonds. These are called valence electrons. Refer back to the Lewis symbols of the elements shown in Figure 3.9. Note that the three elements on the right of the table are noble gases that are chemically content with their filled outer electron shells containing 2 electrons in the case of helium and 8 each for neon and argon. As a basis for the understanding of chemical bonds consider that the other elements tend to attain the filled electron shells of their nearest-neighbor noble gases by sharing, losing, or gaining electrons. Of these elements, the only one that we will consider in detail that attains a helium-like electron configuration is hydrogen, H, each atom of which almost always has access to 2 electrons shared in covalent bonds. The other elements that we will consider, carbon and higher, attain 8 electrons in their outer shells by chemical bonding. This is the basis of the octet rule, the tendency of atoms to attain stable outer shells of 8 electrons by forming chemical bonds. The octet rule is immensely useful in explaining and predicting chemical bonds and the formulas and structures of chemical compounds and will be emphasized in this chapter. Some examples of the kinds of bonding arrangements discussed above have already been illustrated in Chapter 3. Figure 3.1 illustrates that, even in the elemental form, H2, hydrogen atoms have 2 valence electrons in the diatomic molecule. Examples of elements that have 8 valence electrons as the result of chemical bonding were also shown. Figure 3.6 illustrates the two N atoms in the N2 molecule sharing 6 electrons in a covalent bond so that each of the atoms may have an octet. Figure 3.8 shows that 2 Cl atoms, each with 7 valence electrons, share 2 electrons in the covalent bond of the Cl2 molecule to attain octets. The same figure shows that Na loses its single valence electron in forming ionic NaCl to produce the Na+ ion, which has an octet of electrons in its outer shell. In forming the same ionic compound, Cl gains an electron to become the Cl- anion, which also has a stable octet of outer-shell electrons. In the remainder of this chapter, the octet rule will be used in explaining the formation of chemical compounds consisting of two or more different elements bonded together. It was already used to show the bonding in ionic sodium chloride in Figure 3.8. One of the best compounds for showing the octet rule in covalent compounds is methane, CH4, shown in Figure 4.1. The molecule of CH4 is produced when an atom of carbon with 4 outer electrons (see Figure 3.9) attains an octet of 8 electrons by sharing with H atoms. Each H atom has 1 electron to donate to the sharing arrangement, so by each of 4 H atoms contributing an electron the carbon atom can gain an octet. Each of the H atoms has access to 2 electrons in the single covalent bond that connects it to the C atom. Examination of Figure 4.1 implies that the 4 H atoms and the C atom all lie in the same plane in a flat structure. But that is not the case because of the tendency for the 4 electron pairs composing the 4 covalent bonds to be oriented as far as possible from each other around the sphere of the carbon atom. The resulting structure is a little hard to illustrate on paper, but one way to approximate it is with a ball and stick model that represents the atoms as balls and the chemical bonds as sticks connecting the atoms. Figure 4.2 is an illustration of the ball and stick model of CH4.
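The bookkeeping in this explanation (each shared bond gives both bonded atoms access to 2 electrons, with any leftover valence electrons remaining as lone-pair electrons) can be captured in a few lines. A minimal Python sketch; the function name and the hand-encoded molecules are illustrative, not from the text:

# Sketch: count the electrons an atom "sees" in a simple Lewis structure
# and check the octet rule (a duet for hydrogen).
VALENCE = {"H": 1, "C": 4, "N": 5, "Cl": 7}

def electrons_seen(atom: str, bond_orders: list) -> int:
    lone = VALENCE[atom] - sum(bond_orders)   # electrons not used in bonds
    return lone + 2 * sum(bond_orders)        # each bond shares 2 electrons

# Methane: C with four single bonds; each H with one single bond
print(electrons_seen("C", [1, 1, 1, 1]))   # 8 -> octet satisfied
print(electrons_seen("H", [1]))            # 2 -> helium-like duet
# N2: each N holds one triple bond
print(electrons_seen("N", [3]))            # 8 -> octet via 3 shared pairs
# Cl2: each Cl holds one single bond
print(electrons_seen("Cl", [1]))           # 8 -> octet with 3 lone pairs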
4.03: Sodium Chloride and Ionic Bonds
Many atoms and groups of atoms in chemical compounds are ions that have an electrical charge because of their unequal numbers of protons and electrons. Cations are positively charged ions and anions are negatively charged ions. Compounds consisting of ions are ionic compounds and the bonds holding them together are ionic bonds. Ionic bonds depend upon the mutual attraction between positive cations and negative anions for their bond strength (oppositely charged bodies attract each other, whereas bodies with the same charge repel each other).

The formation of ions based upon the octet rule is readily seen for the well-known ionic compound sodium chloride, NaCl, as illustrated in Figure 4.3. By losing an electron to become the Na+ cation, sodium's underlying shell of 8 electrons becomes the ion's outer shell with a stable octet. Chlorine attains a stable octet of 8 outer-shell electrons by gaining 1 electron per atom to produce the Cl- ion.

Sodium chloride is a very stable compound because of the mutual attraction of oppositely charged ions. But the ions have to be arranged in an optimum manner for this attraction to be effective. Since oppositely charged ions attract each other, but ions with the same charge are mutually repulsive, the ions in an ionic compound such as sodium chloride have to be packed to maximize attraction and minimize repulsion. The arrangement that does this for NaCl is shown by a ball and stick model in Figure 4.4. Although it may be a little hard to imagine for a model represented on paper, the six nearest neighbors of each negatively charged Cl- anion are Na+ cations. And the six nearest neighbors of each positively charged Na+ cation are negatively charged Cl- anions.

In reality, ions are more accurately represented in an ionic structure as spheres that touch. The Na+ cation is significantly smaller than the Cl- anion, so a more accurate representation of NaCl than that shown in Figure 4.4 would show rather large Cl- spheres between which are nestled barely visible Na+ spheres. But the imperfect ball and stick model shown in Figure 4.4 shows several important points regarding ionic NaCl. It illustrates the relative positions of the ions. These positions, combined with ionic charge and size, determine the crystal structure of the solid crystal of which the ionic compound is composed.

Furthermore, examination of the figure shows that no single Cl- anion belongs to a specific Na+ cation, and no single Na+ cation belongs to a specific Cl- anion. So, although the chemical formula NaCl accurately represents the relative numbers of Na and Cl atoms in sodium chloride, it does not imply that there are discrete molecules consisting of 1 Na and 1 Cl. For this reason it is not correct to refer to a molecule of sodium chloride, because distinct molecules of ionic compounds do not exist as such. Instead, reference is made to formula units of ionic compounds, where a formula unit of NaCl consists of 1 Na+ ion and 1 Cl- ion, the smallest quantity of a substance that can exist and still be sodium chloride.

The stabilities of chemical compounds are all about energy. In general, the more energy released when a compound forms, the more stable the compound is. Sodium chloride could be formed by reacting elemental solid sodium with elemental Cl2 gas, $\ce{2Na(solid) + Cl2(gas) \rightarrow 2NaCl (solid)}$ to produce solid sodium chloride. This reaction releases a large amount of energy and elemental sodium burns explosively in chlorine gas.
The reaction can be viewed in terms of the following steps:
1. The atoms in solid Na are taken apart, which requires energy.
2. Each molecule of Cl2 is taken apart into Cl atoms, which requires energy.
3. An electron is taken from each Na atom to produce an Na+ ion, which requires energy.
4. An electron is added to each Cl atom to produce a Cl- ion, which releases energy.
5. The Na+ cations and Cl- anions are assembled in a 1/1 ratio in a crystal lattice to produce NaCl, which releases a very large quantity of energy.

The very large amount of energy involved in Step 5 is called the lattice energy and is primarily responsible for the high stability of ionic compounds. A general picture of the energy involved is shown in Figure 4.5.

The differences in ionic size noted above are represented in Figure 4.6 for several monatomic (1-atom) ions from elements close to each other in the periodic table. The figure shows that negative monatomic ions are generally larger than positive monatomic ions formed from elements that are nearby in the periodic table. Thus, the negative F- ion is larger than the positive Na+ ion, although both ions have the same number of electrons (10) and the atomic number of Na is higher than that of F. It is seen that for ions in the same group of elements that have the same charge, the ion from the element with higher atomic number is larger. Figure 4.6 shows that the Cl- ion is larger than the F- ion and that the K+ ion is larger than the Na+ ion. As electrons are removed from elements in the same period of the periodic table to produce progressively more highly charged cations, ion size shrinks notably, as shown by the order of ion sizes Na+ > Mg2+ > Al3+, each of which has 10 e-. This occurs because as the charge of the nucleus becomes larger relative to the charge of the negative electron cloud around it, the cloud is drawn closer to the nucleus, shrinking ion size. As electrons are added to atoms to produce more highly charged anions, the anion size increases because more electrons occupy more space. So the S2- ion is larger than the Cl- ion.

In order to further understand ion formation, several more examples can be considered. Calcium and chlorine react to form calcium chloride, CaCl2. This compound is a byproduct of some industrial processes, from which its disposal can be a problem. It is commonly used as road salt to melt ice and snow on streets and highways. Although calcium chloride is effective in this respect, it is corrosive to automobiles, and calcium chloride is a pollutant salt that can contribute to excess salt levels in bodies of water. A "greener," though more costly, substitute is calcium acetate, Ca(C2H3O2)2. This compound is composed of Ca2+ ions and acetate (C2H3O2-) anions. Its advantage is that bacteria in soil and in water readily cause biodegradation of the acetate anion as shown by the reaction, $\ce{Ca(C2H3O2)2 + 4O2} \: \: \: \underrightarrow{Bacteria} \: \: \: \ce{CaCO3 + 3CO2 + 3H2O}$ from which the calcium ends up as calcium carbonate, a common component of rock and soil.

Another example of the formation of an ionic compound is the following reaction of aluminum metal with elemental oxygen: $\ce{4Al(s) + 3O2(g) \rightarrow 2Al2O3(s)}$ This reaction produces aluminum oxide, for which the chemical formula is Al2O3. This compound is the source of aluminum in bauxite, the ore from which aluminum is produced, and is an important industrial chemical.
Called alumina, aluminum oxide itself has many applications including its use for abrasives and sandpaper, as a raw material for ceramics, and as an ingredient of antacids and antiperspirants.

Exercise: Show the cation, the anion, and the ionic compound produced by the reaction of (a)-(c) sodium with chlorine, (d)-(f) potassium with oxygen, and (g)-(i) calcium with chlorine.

Answers: (a) Na+, (b) Cl-, (c) NaCl, (d) K+, (e) O2-, (f) K2O, (g) Ca2+, (h) Cl-, (i) CaCl2

In addition to ions formed from single atoms losing or gaining electrons, many ions consist of groups of atoms covalently bound together, but having a net electrical charge because of an excess or a deficiency of electrons. An example of such an ion is the acetate ion shown above in the formula of calcium acetate, Ca(C2H3O2)2. In the structural formula of the acetate anion, C2H3O2-, the two carbon atoms are joined with a single covalent bond consisting of two shared electrons, each of the three H atoms is joined to one of the carbon atoms by a single covalent bond, and the other carbon atom is joined to one oxygen with a single covalent bond and to the other by a double covalent bond consisting of 4 shared electrons. The net charge on the ion is -1.

Ionic Liquids and Green Chemistry

Most common ionic compounds such as sodium chloride are hard solids because the ions of which they are composed are relatively small and packed tightly together in the crystalline lattice. These ionic compounds must be heated to very high temperatures before they melt, 801°C for NaCl, for example. In recent years, ionic compounds have been developed that are liquids under ordinary conditions. The ions in these ionic liquids are large organic ions composed of skeletons of numerous carbon atoms bonded to other atoms and having a net charge. The charges on the ions in such compounds are much less concentrated than in simple inorganic compounds like NaCl, the large ions move readily relative to each other, and the compound is liquid at low temperatures. A common example of an ionic liquid compound is decylmethylimidazolium hexafluorophosphate, which is a liquid at temperatures above 40°C, the temperature of a very hot summer's day.

There has been a lot of interest in the application of ionic liquids to green chemistry. This is because many chemical reactions, including those for preparing polymers used in synthetic fabrics or pharmaceutical compounds, are carried out in liquid solvents, which tend to evaporate and contaminate air and to pose disposal problems. Furthermore, although the solvent properties in chemical synthesis often play a strong role in determining the course of the synthesis, the number of available solvents is very limited. In the case of ionic liquids, however, there is a vast variety of both cations and anions which, combined together, could give as many as a trillion (!) different ionic liquids. This versatility enables fine-tuning the properties of the ionic liquids for specialized uses in synthesis and other applications. The ionic liquids are rather easy to recycle, adding to their green character.

In addition to their applications as solvents for chemical synthesis media, ionic liquids may be useful for isolating pollutants. For example, an appropriate ionic liquid may be shaken with water contaminated with toxic heavy metals, such as lead or cadmium, which bind with the ionic liquid. Since such a liquid is normally not soluble in water, it can be physically separated from the water, carrying the heavy metals with it and leaving purified water.
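The five-step energy bookkeeping described above for the formation of NaCl can be made concrete with a short calculation. The sketch below is illustrative only: the step energies are rounded literature values (in kJ per mole of NaCl), not figures taken from this text, and real thermochemical calculations use carefully measured data.

```python
# Energy bookkeeping for Na(s) + 1/2 Cl2(g) -> NaCl(s), per mole of NaCl,
# following the five steps described above. The kJ/mol values are rounded
# literature values used only for illustration; positive values mean energy
# is required, negative values mean energy is released.
steps = {
    "1. Take apart solid Na to Na(g)":   +107,  # sublimation of sodium
    "2. Take apart 1/2 Cl2(g) to Cl(g)": +122,  # half the Cl-Cl bond energy
    "3. Remove an electron from Na":     +496,  # ionization energy of Na
    "4. Add an electron to Cl":          -349,  # electron affinity of Cl
    "5. Assemble the crystal lattice":   -787,  # lattice energy (released)
}

total = sum(steps.values())
for name, energy in steps.items():
    print(f"{name:38s} {energy:+5d} kJ/mol")
print(f"{'Overall':38s} {total:+5d} kJ/mol")  # about -411 kJ/mol of NaCl
```

The overall total is strongly negative, and the single largest contribution is the lattice energy of step 5, consistent with the point made above that lattice energy is primarily responsible for the high stability of ionic compounds.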
4.04: Covalent Bonds in H2 and Other Molecules
Lewis symbols can be used to show how some atoms of elements on the left side of the table with only one or two outer-shell electrons can lose those electrons to form cations such as Na+ or Ca2+. It is also easily seen that atoms from groups near the right side of the periodic table can accept one or two electrons to gain stable octets and become anions such as Cl- or O2-. But it is difficult to impossible to take more than two electrons away from an atom to form cations with charges greater than +2, or to add 3 or more electrons to form anions with charges of -3 or even more negative, although ions such as Al3+ and N3- do exist. So atoms of elements in the middle of the periodic table and the nonmetals on the right have a tendency to share electrons in covalent bonds, rather than becoming ions.

It is readily visualized how mutually attracting ions of opposite charge are held together in a crystalline lattice. Shared electrons in covalent bonds act to reduce the forces of repulsion between the positively charged nuclei of the atoms that they join together. That is most easily seen for the case of the hydrogen molecule, H2. The nuclei of H atoms consist of single protons, and the two H atom nuclei in the H2 molecule repel each other. However, their 2 shared electrons compose a cloud of negative charge between the two H nuclei, shielding the nuclei from each other's repelling positive charge and enabling the molecule to exist as a covalently bound molecule (Figure 4.7).

4.05: Covalent Bonds in Compounds

Consider next some example covalent bonds between atoms of some of the lighter elements. These are best understood in reference to Figure 3.9, the abbreviated version of the periodic table showing the Lewis symbols (outer shell valence electrons) of the first 20 elements. As is the case with ions, atoms that are covalently bonded in molecules often have an arrangement of outer shell electrons like that of the noble gas with an atomic number closest to the element in question. It was just seen that covalently bonded H atoms in molecules of H2 have 2 outer shell electrons like the nearby noble gas helium. For atoms of many other elements, the tendency is to acquire 8 outer shell electrons, an octet, in sharing electrons through covalent bonds. This tendency forms the basis of the octet rule discussed in Section 4.2. In illustrating the application of the octet rule to covalent bonding, Section 4.7 considers first the bonding of atoms of hydrogen to atoms of elements with atomic numbers 6 through 9 in the second period of the periodic table. These elements are close to the noble gas neon and tend to attain a "neon-like" octet of outer shell electrons when they form covalently bonded molecules.

Covalent bonds are characterized according to several criteria. The first of these is the number of electrons involved. The most common type of covalent bond consists of 2 shared electrons and is a single bond. Four shared electrons, as shown for the bond joining an O atom to one of the C atoms in the structure of the acetate anion above, constitute a double covalent bond. And 6 shared electrons, as shown for the very stable covalent bond joining the two N atoms in the N2 molecule illustrated in Chapter 3, Figure 3.6, make up a triple covalent bond. These bonds are conventionally shown as lines in the structural formulas of molecules (large numbers of dots in a formula can get a little confusing).
So the single covalent bond in H2 is shown as a single line, H-H. The double bond consisting of 4 shared electrons holding the two carbon atoms together in C2H4 (ethylene, a hydrocarbon used to make polyethylene plastic) is shown by two lines, H2C=CH2. And the very strong triple bond joining the two N atoms in the N2 molecule is shown by three lines, N≡N.

Covalent bonds have a characteristic bond length. Bond lengths are of the general magnitude of the size of atoms, so they are measured in units of picometers (pm). The H-H bond in H2 is 75 pm long.

A third important characteristic of bonds is bond energy. Bond energy is normally expressed in kilojoules (kJ) required to break a mole (6.02×10²³) of bonds. (See Section 4.8 for a detailed definition of the mole.) The bond energy of the H-H bond in H2 is equal to 435 kJ/mole. This means that the amount of energy required to break all the H-H bonds in a mole of H2 (2.0 grams of H2, 6.02×10²³ molecules) is a very substantial 435 kJ.
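Because bond energy is quoted per mole of bonds, converting it to the energy needed to break the bonds in a given mass of a substance is a one-line mole calculation. The following sketch simply applies the H-H figures quoted above (435 kJ/mole, 2.0 g of H2 per mole); the function name is illustrative.

```python
# Energy needed to break all H-H bonds in a given mass of H2, using the
# bond energy quoted above: 435 kJ per mole of H-H bonds.
HH_BOND_ENERGY = 435.0  # kJ per mole of H-H bonds
H2_MOLAR_MASS = 2.0     # grams of H2 per mole

def hh_bond_breaking_energy(mass_g):
    """Return kJ required to break every H-H bond in mass_g grams of H2."""
    moles_of_h2 = mass_g / H2_MOLAR_MASS  # one H-H bond per H2 molecule
    return moles_of_h2 * HH_BOND_ENERGY

print(hh_bond_breaking_energy(2.0))   # 435.0 kJ for one mole of H2
print(hh_bond_breaking_energy(10.0))  # 2175.0 kJ for 10 grams of H2
```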
4.06: Covalent Bonds and Green Chemistry
The nature of covalent bonds is strongly related to green chemistry. Some reasons why this is so include the following:
• High-energy bonds in raw materials require a lot of energy and severe conditions, such as those of temperature and pressure, to take apart in synthesis.
• Especially stable bonds may make substances unduly persistent in the environment.
• Relatively weak bonds may allow molecules to come apart too readily, contributing to reactive species in the atmosphere or in biological systems.
• Unstable bonds or arrangements of bonds may lead to excessive reactivity in chemicals, making them prone to explosions and other hazards.
• Some arrangements of bonds contribute to chemical toxicity.

An example of a substance that has a very high bond stability making it an energy-intensive source of raw material is N2. As mentioned earlier, large amounts of energy and severe conditions are required to take this molecule apart in the synthesis of ammonia, NH3, the compound that is the basis for most synthetic nitrogen compounds. As discussed with nitrogen in Section 2.5, Rhizobium bacteria growing on the roots of leguminous plants such as soybeans convert N2 to chemically fixed nitrogen. The amount of nitrogen fixed by this totally green route is certainly a welcome addition to the pool of fertilizer nitrogen.

An example of a compound in which especially stable bonds contribute to persistence and ultimate environmental harm is provided by dichlorodifluoromethane, Cl2CF2, a chlorofluorocarbon implicated in the destruction of stratospheric ozone (see Chapter 3, Section 3.5, and Chapter 10). The chemical bonds in this compound are so strong that nothing attacks them until the molecule has penetrated many kilometers high into the stratosphere, where extremely energetic ultraviolet radiation breaks the C-Cl bond in the molecule. This produces Cl atoms that bring about the destruction of protective stratospheric ozone.

A somewhat opposite condition occurs in the case of atmospheric nitrogen dioxide, NO2, near ground level in the atmosphere. Here the N-O bond is relatively weak, so that the relatively low-energy ultraviolet radiation ($h \nu$) that is close to the wavelength of visible light and penetrates to the atmosphere at or near ground level can break apart NO2 molecules: $NO_{2} + h \nu \rightarrow NO + O$ The O atoms released are very reactive and interact with pollutant hydrocarbons, such as those from gasoline, in the atmosphere, resulting in the disagreeable condition of photochemical smog.

Some bonding arrangements are notable for instability. These include bonds in which two N atoms are adjacent or very close in a molecule and are bonded with double bonds. Also included are arrangements in which N and O atoms are adjacent and double bonds are also present.

The presence of some kinds of bonds in molecules can contribute to their biochemical reactivity and, therefore, to their toxicity. An organic compound with one or more C=C double bonds in the molecule is often more toxic than a similar molecule without such bonds. By avoiding generation, use, or release to the environment of compounds with the kinds of bonds described above as part of a program of green chemistry, the practice of chemistry and the chemical industry can be made much safer.

Green chemistry also considers bonds that may have to be protected in chemical synthesis. Often steps must be added to attach protecting groups to bonding groups to prevent their reacting during a step of a synthesis.
After the desired step is performed, the protecting groups must be removed to give the final product. Materials are consumed and byproducts are generated by these steps, so the practice of green chemistry attempts to avoid them whenever possible.
4.07: Predicting Formulas of Covalently Bound Compounds
It is often possible to predict the formulas of molecules using the Lewis symbols of the elements in the compound with the octet rule for chemical bonding. This is shown in Figure 4.8 for the hydrogen compounds of several elements in the second period of the periodic table. The prediction of chemical bonds in compounds in which H is bonded to another atom is very simple because each H atom has to be involved in sharing two electrons and the other kind of atom normally has to have a total of 8 electrons in its valence shell octet; these may be shared in bonds or present as unshared pairs.

As an example, consider a well-known compound of carbon, carbon dioxide, chemical formula CO2. The Lewis symbol of C and those of the two O atoms can be used to deduce the Lewis formula of CO2 as shown in Figure 4.9.

As another example of the application of the octet rule, consider hydrogen peroxide, H2O2. This compound's formula looks a lot like that of water, but it is a lot different from water. Hydrogen peroxide decomposes to release oxygen: $\ce{2H2O2 (liquid) \rightarrow O2(gas) + 2H2O(liquid)}$ As a liquid in the form of a concentrated aqueous solution, hydrogen peroxide provides a source of oxygen so potent that it has been used in rockets. It was the treacherous oxidant used along with hydrazine (N2H4) fuel in the German Luftwaffe's Messerschmitt 163 rocket plane at the end of World War II. Trailing an exhaust of lethal NO2 gas, this minuscule manned missile (on the rare occasions when it worked according to plan) was propelled rapidly into the lower stratosphere, then glided down through waves of Allied bombers, attempting to nick them with machine gunfire as it plummeted back to Earth. Few bombers were damaged but many Me-163 pilots died in the attempt, some as the result of explosions, fires, and spills from the hydrogen peroxide oxidant. Hydrogen peroxide decomposed over a catalyst was also used as a source of oxygen for the diesel engines on several German submarines near the end of World War II. Pre-dating nuclear submarines, these potentially deadly craft were the first true submersibles.

In assembling the structure of the hydrogen peroxide molecule, one has simply to work with two O atoms each contributing 6 valence electrons and two H atoms each with 1 valence electron. The Lewis formula of the H2O2 molecule, H-O-O-H with two unshared electron pairs on each O atom, shows that all of the 14 total valence electrons are accounted for in chemical bonds or unshared pairs and that both oxygens have octets of outer-shell electrons.

Despite the evil nature of concentrated solutions of hydrogen peroxide, it can be regarded as a green compound in more dilute solutions, such as the 3% hydrogen peroxide commonly used to kill bacteria in treating wounds. Among its green applications, dilute hydrogen peroxide makes an effective and safe bleaching agent that is much safer to handle than the elemental chlorine commonly used for bleaching and that does not produce the potentially toxic byproducts that chlorine generates. And even though it kills bacteria, hydrogen peroxide can be pumped underground to serve as an oxidant for acclimated bacteria that attack wastes that have been placed in or seeped into underground locations.

Molecules That Do Not Obey the Octet Rule

In some cases the octet rule is not obeyed. This occurs when a molecule has an uneven number of electrons so that it is impossible for each atom to have an octet (an even number) of electrons. A simple example of this is nitric oxide, NO, made from an atom of N with 5 valence electrons and one of O with 6 valence electrons.
The resulting molecule is shown in Figure 4.10. Since the uneven number of 11 valence electrons cannot provide complete octets of electrons around both the N and O atoms simultaneously, the NO molecule is shown in two forms in which one atom has 8 valence electrons and the other has 7. These are known as resonance structures.

Unequal Sharing of Electrons

The Lewis formula for water indicates that the molecule is not symmetrical; the two H atoms are located on one side of the molecule and the O atom on the other side. One might think that the electrons shared between H and O are shared equally. But such is not the case because the relatively larger O atom nucleus with its 8 protons has a stronger attraction for the electrons than do the two H atom nuclei, each with only 1 proton. So the shared electrons spend relatively more time around the O atom and less around the H atoms. This gives each H atom a partial positive charge and the O atom a partial negative charge. An unequal distribution of charge such as that makes a body polar, and the O-H bonds are polar covalent bonds. Because of this phenomenon, the whole water molecule is polar; it can be pictured as a large sphere representing the O atom with two small spheres representing the H atoms on one side. The polar nature of the water molecule has a lot to do with water as a solvent and how it behaves in the environment and in living systems. These aspects are discussed in more detail in Chapter 8.

When Only One Atom Contributes to a Covalent Bond

In some cases only one of the two atoms joined by a covalent bond contributes both the electrons in the bond. This occurs with ammonia, NH3, dissolved in water. Water contains dissolved H+ cations, and the more acidic the water is, the higher the concentration of H+. The H+ cation would be stabilized by two electrons, which it can get by binding with dissolved NH3: $\ce{NH3 + H^{+} \rightarrow NH4^{+}}$ Both of the electrons shared between N and the H+ cation now bound to it as part of a new species, the ammonium ion, NH4+, were contributed by the N atom. Such a covalent bond is called a coordinate covalent bond or a dative bond. In the case of NH4+, once the coordinate covalent N-H bond is formed, it is indistinguishable from all the other N-H bonds.

The formation of the coordinate covalent bond in NH4+ is very useful when soil is fertilized with nitrogen. The most economical way to apply nitrogen fertilizer is by injecting NH3 into the soil, but NH3 is a gas that would be expected to rapidly evaporate from soil. Instead, it becomes attached to H+ ion from the water in the soil and is bound to the soil as the NH4+ ion.

Another important example of a coordinate covalent bond occurs in water. As discussed in Section 4.9, acids, which are very important materials commonly dissolved in water, produce the hydrogen ion, H+, in water. This ion does not exist simply dispersed in water. Instead, it binds strongly to a water molecule to produce the hydronium ion, H3O+: $\ce{H^{+} + H2O \rightarrow H3O^{+}}$
4.08: Chemical Formulas, the Mole, and Percentage Composition
Chemical formulas represent the composition of chemical compounds. A number of chemical formulas have been shown so far, including H2O for water and NH3 for ammonia. A chemical formula of a compound contains a lot of significant information, as shown in Figure 4.11, including the following:
• The elements that compose the compound
• The relative numbers of each kind of atom in the compound
• How the atoms are grouped, such as in ions (for example, SO42-) present in the compound
• With a knowledge of atomic masses, the molar mass of the compound
• With a knowledge of atomic masses, the percentage composition of the compound

Where the symbols of the elements represent letters in the alphabet of chemical language, the formulas of compounds represent words composed of those letters. As discussed in Chapter 5, formulas are put together in chemical equations that act as sentences in the chemical language to describe what chemical substances do.

The Mole

With a knowledge of atomic masses, the percentage composition of a compound is readily calculated from its formula. Before doing such a calculation for ammonium sulfate, however, it is useful to introduce the concept of the mole. Chemists use the mole to express quantities of materials containing a specific number of specified entities, which may be atoms of elements, molecules of elements that exist as diatomic molecules, formula units of ionic compounds, or molecules of covalent compounds. A mole of a substance is the quantity whose mass in grams is numerically equal to the substance's atomic mass, molecular mass, or formula mass. This mass is called the molar mass. The masses of a mole of several typical substances are given below:

Atoms of neon, atomic mass 20.1: 20.1 g/mol
Molecules of H2, atomic mass 1.0, molecular mass 2.0: 2.0 g/mol
Molecules of CH4, molecular mass 16.0: 16.0 g/mol
Formula units of ionic CaO, formula mass 56.1: 56.1 g/mol

The number of specified entities in a mole of a substance is always the same regardless of the substance. This number is very large, 6.02×10²³, and is called Avogadro's number. As examples, a mole of neon contains 6.02×10²³ neon atoms, a mole of elemental hydrogen contains 6.02×10²³ H2 molecules (but 2×6.02×10²³ H atoms), and a mole of CaO contains 6.02×10²³ formula units (pairs of Ca2+ and O2- ions) of CaO.

The calculation of the percentage composition of (NH4)2SO4 is given below. Note that the molar mass of the compound is 132 g/mol and that each mole of the substance contains 2×1 = 2 mol of N, 2×4 = 8 mol of H, 1 mol of S, and 4 mol of O.

$\textrm{2 mol N} \times \textrm{14.0 g N/mol N} = 28.0 \textrm{ g N, % N} = \frac{28.0 g}{132 g} \times 100 = 21.2 \textrm{% N}$

$\textrm{8 mol H} \times \textrm{1.0 g H/mol H} = 8.0 \textrm{ g H, % H} = \frac{8.0 g}{132 g} \times 100 = 6.1 \textrm{% H}$

$\textrm{1 mol S} \times \textrm{32.0 g S/mol S} = 32.0 \textrm{ g S, % S} = \frac{32.0 g}{132 g} \times 100 = 24.2 \textrm{% S}$

$\textrm{4 mol O} \times \textrm{16.0 g O/mol O} = 64.0 \textrm{ g O, % O} = \frac{64.0 g}{132 g} \times 100 = 48.5 \textrm{% O}$

Example: Given the atomic masses Ca 40.0, C 12.0, and O 16.0, what is the percentage composition of calcium oxalate, CaC2O4? Answer: 31.3% Ca, 18.8% C, 50.0% O
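The percentage-composition arithmetic above is mechanical enough to hand to a short program. A minimal sketch is given below; element counts are typed in by hand rather than parsed from the formula, and the function name is just illustrative.

```python
# Percentage composition from a formula's element counts, reproducing the
# (NH4)2SO4 calculation and the CaC2O4 exercise above.
ATOMIC_MASS = {"H": 1.0, "C": 12.0, "N": 14.0, "O": 16.0, "S": 32.0, "Ca": 40.0}

def percent_composition(counts):
    """counts maps each element to its number of atoms per formula unit."""
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / molar_mass for el, n in counts.items()}

# (NH4)2SO4: 2 N, 8 H, 1 S, 4 O per formula unit; molar mass 132 g/mol
for el, pct in percent_composition({"N": 2, "H": 8, "S": 1, "O": 4}).items():
    print(f"{el}: {pct:.1f}%")  # N 21.2%, H 6.1%, S 24.2%, O 48.5%

# CaC2O4: prints 31.25% Ca, 18.75% C, 50.00% O (31.3%, 18.8%, 50.0% rounded)
for el, pct in percent_composition({"Ca": 1, "C": 2, "O": 4}).items():
    print(f"{el}: {pct:.2f}%")
```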
4.09: What Are Chemical Compounds Called?
The naming of chemical compounds can get a little complicated. This is particularly true of organic compounds, the names of which are discussed in Chapter 9. Some of the simpler aspects of naming inorganic compounds are discussed here. In naming compounds, prefixes are used to represent the relative numbers of atoms in the formula unit of the compound. These prefixes through number 10 are given below:

1 - mono, 2 - di, 3 - tri, 4 - tetra, 5 - penta, 6 - hexa, 7 - hepta, 8 - octa, 9 - nona, 10 - deca

The first class of inorganic compounds to be addressed here are binary molecular compounds. Binary molecular compounds are composed of only 2 kinds of elements and do not contain ions. For these compounds, the first part of the name is simply the name of the first element in the compound formula. The second part of the name is that of the second element in the compound formula modified to have the ending -ide. Prefixes are added to indicate how many of each kind of atom are present in the molecule. Consider as an example the name of N2O5. The name of the compound is dinitrogen pentoxide, where di indicates 2 N atoms, pent indicates 5 oxygen atoms, and the second element has the -ide ending. Other examples of this system of naming are SiCl4, silicon tetrachloride; Si2F6, disilicon hexafluoride; PCl5, phosphorus pentachloride; and SCl2, sulfur dichloride.

A number of compounds, including binary molecular compounds, have common names that have been used for so long that they are part of the chemical vocabulary. An especially common example is the name of water for H2O; its official name is dihydrogen monoxide. Another example is dinitrogen monoxide, N2O, usually called nitrous oxide. Aluminum oxide, Al2O3, is commonly called alumina, and silicon oxide, SiO2, is called silica, specific mineral forms of which are sand and quartz.

Recall that ionic compounds are those composed of ions that are held together by ionic bonds, rather than covalent bonds. As noted in the discussion of ionic sodium chloride in Section 4.3, ionic compounds do not consist of discrete molecules, but rather of aggregates of ions whose relative numbers make the compound electrically neutral overall. Therefore, it is not correct to refer to molecules of ionic compounds but rather to formula units equal to the smallest aggregate of ions that can compose the compound. Consider, for example, the ionic compound composed of Na+ and SO42- ions. Every ionic compound must be electrically neutral, with the same number of positive as negative charges. For the compound in question this requires 2 Na+ ions for each SO42- ion. Therefore, the formula of the compound is Na2SO4 and a formula unit contains 2 Na+ ions and 1 SO42- ion. Furthermore, a mole of Na2SO4 composed of 6.02×10²³ formula units of Na2SO4 contains 2×6.02×10²³ Na+ ions and 6.02×10²³ SO42- ions. Since the ionic charges determine the relative numbers of ions, prefixes need not be used in naming the compound, and it is called simply sodium sulfate.

Exercise: Give the formulas and names of compounds formed from each cation on the top row with each anion on the bottom row:

Cations: 1. NH4+, 2. Ca2+, 3. Al3+
Anions: (A) Cl-, (B) SO42-, (C) PO43-

Answers: 1(A) NH4Cl, ammonium chloride; 1(B) (NH4)2SO4, ammonium sulfate; 1(C) (NH4)3PO4, ammonium phosphate; 2(A) CaCl2, calcium chloride; 2(B) CaSO4, calcium sulfate; 2(C) Ca3(PO4)2, calcium phosphate; 3(A) AlCl3, aluminum chloride; 3(B) Al2(SO4)3, aluminum sulfate; 3(C) AlPO4, aluminum phosphate.
Prefixes are used in naming ionic compounds where more than one kind of cation or more than one kind of anion is present in the formula unit. For example, Na2HPO4, in which each formula unit is composed of 2 Na+ ions, 1 H+ ion, and 1 PO43- ion, is called disodium monohydrogen phosphate. And KH2PO4 is called monopotassium dihydrogen phosphate.
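Because the prefix rules above are mechanical, they can be captured in a few lines of code. The sketch below handles only simple binary molecular compounds; the small element tables and the vowel-elision rule (penta + oxide gives pentoxide, mono + oxide gives monoxide) are simplifications for illustration.

```python
# Naming binary molecular compounds with the Greek prefixes listed above.
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}
FIRST_NAMES = {"H": "hydrogen", "N": "nitrogen", "S": "sulfur",
               "P": "phosphorus", "Si": "silicon", "C": "carbon"}
IDE_NAMES = {"O": "oxide", "Cl": "chloride", "F": "fluoride", "S": "sulfide"}

def binary_name(el1, n1, el2, n2):
    first = FIRST_NAMES[el1]
    if n1 > 1:                    # "mono" is omitted for the first element
        first = PREFIXES[n1] + first
    prefix, anion = PREFIXES[n2], IDE_NAMES[el2]
    if prefix[-1] in "ao" and anion[0] in "aeiou":
        prefix = prefix[:-1]      # elide the vowel: penta + oxide -> pentoxide
    return f"{first} {prefix}{anion}"

print(binary_name("N", 2, "O", 5))   # dinitrogen pentoxide
print(binary_name("S", 1, "Cl", 2))  # sulfur dichloride
print(binary_name("H", 2, "O", 1))   # dihydrogen monoxide
```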
4.10: Acids, Bases, and Salts
Other than binary molecular compounds, most inorganic compounds can be classified as acids, bases, or salts. These three categories of compounds and their names are addressed briefly here.

Acids

Acids are characterized by the H+ ion, the presence of which in water makes the water acidic. An acid either contains this ion or produces it when it dissolves in water. Sulfuric acid, H2SO4, is an example of a compound that contains H+ ion. Dissolved in water, a molecule of sulfuric acid exists as 2 H+ ions and 1 SO42- ion. An example of a compound that is classified as acidic because it produces H+ ion when it dissolves in water is carbon dioxide, which undergoes the following reaction in water solution: $CO_{2} + H_{2}O \rightleftharpoons H^{+} + HCO_{3}^{-}$ In this case, only a small fraction of the CO2 molecules dissolved in water undergo the above reaction to produce H+, so water solutions of CO2 are weakly acidic and carbon dioxide is classified as a weak acid. It is the presence of dissolved CO2 from the carbon dioxide naturally present in air that makes rainfall coming from even nonpolluted atmospheres slightly acidic and, as discussed in Chapter 9, the weakly acidic properties of CO2 are very important in natural waters in the environment. Other acids, such as hydrochloric acid, HCl, are completely dissociated to H+ and an anion (in the case of HCl the Cl- anion) when they are dissolved in water; such acids are strong acids.

The naming of acids follows certain rules. In the case of an acid that contains only H and one other element, the acid is named as a "hydro-...-ic" acid, so HCl is called hydrochloric acid. Somewhat different rules apply when an acid contains oxygen. Some elements form acids in which the anion has different amounts of oxygen; examples are H2SO4 and H2SO3. The acid with more oxygen is an "-ic" acid, so H2SO4 is sulfuric acid. The acid with the lesser amount of oxygen is an "-ous" acid, so H2SO3 is sulfurous acid. A greater amount of oxygen than even the "-ic" acid is denoted by the prefix "per-", and a lesser amount of oxygen than even the "-ous" acid is denoted by the prefix "hypo-". These names are shown very well by the names of the oxyacids of chlorine. The names of HClO4, HClO3, HClO2, and HClO are, respectively, perchloric acid, chloric acid, chlorous acid, and hypochlorous acid.

Acids are extremely important as industrial chemicals, in the environment, and in respect to green chemistry. About 40 million metric tons (40 billion kilograms) of sulfuric acid are produced in the United States each year. It is the number 1 synthetic chemical, largely because of its application to treat phosphate minerals to make phosphate crop fertilizers. Sulfuric acid is also used in large quantities to remove corrosion from steel, a process called steel pickling. Other major uses include detergent synthesis, petroleum refining, lead storage battery manufacture, and alcohol synthesis. About 7-8 million tons of nitric acid, HNO3, are produced in the U.S. each year, giving it a rank of 10th, and hydrochloric acid ranks about 25th with annual production around 3 million metric tons.

Acids are important in the environment. Improperly disposed acid has caused major problems around hazardous waste sites. Sulfuric acid, along with smaller quantities of hydrochloric and nitric acid, is the major constituent of acid rain (see Chapter 10). Acids figure prominently in the practice of green chemistry. Reclamation and recycling of acids are commonly performed in the practice of industrial ecology.
As noted earlier, much of the sulfuric acid now manufactured uses a potential waste and pollutant, hydrogen sulfide, H2S, removed from sour natural gas sources, as a source of sulfur. In cases where a relatively weak acid can be used, acetic acid made by the fermentation of carbohydrates is an excellent green alternative to stronger acids, such as sulfuric acid. Yeasts can convert the carbohydrates to ethanol (ethyl alcohol, which is present in alcoholic beverages), and other microorganisms in the presence of air convert the ethanol to acetic acid by the same process by which vinegar, a dilute solution of acetic acid, is made from cider or wine. In the structural formula of acetic acid, CH3CO2H, only one of the 4 H atoms is ionizable to produce H+ ion. The production of acetic acid is a green process that uses biological reactions acting upon renewable biomass raw materials. As a weak acid, acetic acid is relatively safe to use, and contact with humans is not usually very dangerous (we ingest dilute acetic acid as vinegar, but pure acetic acid attacks flesh and is used to remove warts from skin). Another advantage of acetic acid is that it is biodegradable, so any of it released to the environment does not persist.

Bases

A base either contains hydroxide ion, OH-, or reacts with water to produce hydroxide. Most bases that contain hydroxide consist of metal cations and hydroxide; examples are sodium hydroxide, NaOH, and calcium hydroxide, Ca(OH)2. The most common basic substance that produces hydroxide in water is ammonia, NH3, which reacts with water as follows: $NH_{3} + H_{2}O \rightleftharpoons NH_{4}^{+} + OH^{-}$ Only a small fraction of the ammonia molecules undergo this reaction in water, so ammonia does not produce much OH- in water and is known as a weak base. The metal hydroxides, such as KOH, that completely dissociate in water are strong bases. Metal hydroxides are named by the metal followed by "hydroxide." Therefore, Mg(OH)2 is magnesium hydroxide.

Salts

Acids and bases react to form a salt, an ionic compound that has a cation other than H+ and an anion other than OH-. This kind of reaction always produces water and is known as a neutralization reaction. The best-known salt is sodium chloride, NaCl. Although it is commonly what one means in referring to "salt," there are many other salts as well. These include calcium chloride, CaCl2, used to melt road ice; sodium carbonate, Na2CO3, used in cleaning formulations; and potassium chloride, KCl, a source of potassium fertilizer for crops. A typical neutralization reaction is the one between NaOH and hydrochloric acid, HCl, to produce sodium chloride: $\underbrace{NaOH}_{\textrm{base}} + \underbrace{HCl}_{\textrm{acid}} \rightarrow \underbrace{NaCl}_{\textrm{a salt, sodium chloride}} + \underbrace{H_{2}O}_{\textrm{water}}$

Salts are named very simply with just the name of the cation followed by that of the anion. The charges of the ions determine the formulas of the salts, so it is not necessary to add prefixes to denote the relative numbers of each ion. Therefore, CaCl2 is simply calcium chloride, not calcium dichloride. As noted earlier in this chapter, prefixes are added in names of salts that contain more than 1 kind of cation or more than 1 kind of anion to show the relative numbers of ions. As an example, KH2PO4 is called potassium dihydrogen phosphate.
Questions and Problems
Access to and use of the internet is assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text. So in some cases there may be several "right" answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, your answer can be considered to be "right."

1. What distinguishes the molecules of chemical compounds from those of elements, such as N2?
2. Several "characteristics of compounds that meet the criteria of being green" were mentioned at the beginning of this chapter. Near the end of the chapter, acetic acid was mentioned as a "green acid." In what respects does it meet the criteria of green compounds?
3. What is sodium stearate? Why is it regarded as being green?
4. Which of the following is not usually regarded as a characteristic of green chemical compounds? Why is it not so regarded? A. Preparation from renewable resources B. Low tendency to undergo sudden, violent, unpredictable reactions C. Readily biodegradable D. Extremely high stability
5. What are valence electrons? Why are they particularly important?
6. What is the octet rule? Why is it particularly important in chemistry?
7. What does the structure representing CH4 below say about bonding and octets of electrons around the central C atom?
8. Considering that the central nitrogen atom in ammonia, NH3, has an unshared pair of valence electrons and 3 pairs shared between N and H, propose a structure for the ammonia molecule based upon the structure of the methane molecule in the preceding question. Use a pair of dots to represent the unshared pair of electrons.
9. What is an ionic bond? Why is it not regarded as being between one specific cation and a specific anion in an ionic compound?
10. Do ionic compounds such as NaCl obey the octet rule? Explain.
11. Why is NaCl referred to as a formula unit of the ionic compound rather than a molecule of sodium chloride?
12. Energy is involved in several steps of the process by which an elemental metal and an elemental nonmetal are converted to an ionic compound (salt). Of these, which has the largest energy?
13. Place the following ions in decreasing order of size: Na+, Cl-, Al3+, K+
14. What is a major disadvantage of calcium chloride as a road de-icing agent? Why is calcium acetate a good substitute?
15. List some important characteristics of a covalent bond.
16. What is the major characteristic of ions in ionic liquids that enables these materials to be liquid at around room temperature?
17. Can the atoms in NO2 obey the octet rule? Suggest the structural formula for this molecule in which the 2 O atoms are bonded to an N atom.
18. Covalent bonds are normally regarded as those in which each of two atoms contributes electrons to be shared in the bond. Are there any circumstances in which this is not true? If so, give an example.
19. What are three major ways in which covalent bonds are characterized?
20. What are some of the ways in which the characteristics of covalent bonds are related to green chemistry?
21. Why are elements in the middle of periods of the periodic table less likely to form ionic compounds and more likely to form covalent compounds than those near either end of each period?
22. Predict the formula of the compound formed when H reacts with P and explain.
23. Although hydrogen chloride, HCl, exists as a gas, the contest for the two shared electrons in the bond between H and Cl is unequal, with the Cl nucleus having the greater attraction. Suggest the nature of the H-Cl bond and suggest what may happen when HCl gas dissolves in water to produce hydrochloric acid.
24. Using Lewis formulas, show the bonding in the SO2 molecule in which two O atoms are bonded to a central S atom. Can another equivalent structure be drawn? Considering the bonding in NO discussed in this chapter, what are these structures called?
25. Summarize the information shown in the formula Ca3(PO4)2.
"The materials that we make and the ways that we make them have an enormous impact on Earth's environment. Much of green chemistry has to do with making materials safely and sustainably."

05: Chemical Reactions - Making Materials Safely and Sustainable

How far would you have to go to find a diverse chemical factory carrying out hundreds of complex chemical processes? Not far, because your own body is just such a remarkably sophisticated factory that could not be duplicated by the efforts of thousands of chemists and chemical engineers and the expenditure of billions of dollars. As an example of a process that our bodies carry out, consider the utilization of glucose sugar, chemical formula C6H12O6, which is present in our blood and generates energy that the human body uses by the following metabolic biochemical reaction: $C_{6}H_{12}O_{6} + 6O_{2} \rightarrow 6CO_{2} + 6H_{2}O$

This is a chemical equation that represents a chemical reaction, something that actually occurs with chemicals. It states that glucose reacts with molecular oxygen to produce carbon dioxide and water. The chemical reaction also produces energy, and that is why the body carries it out: to obtain the energy needed to move, work, and grow. The production of energy is sometimes denoted in the equation by adding "+ energy" to the right side.

Just as a chemical formula contains a lot of information about a chemical compound, a chemical equation contains much information about a chemical process. A chemical equation is divided into two parts by the arrow, which is read "yields." On the left of the arrow are the reactants and on the right are the products. A key aspect of a correctly written chemical equation is that it is balanced, with the same number of atoms of each element on the left as on the right.

Consider the chemical equation above. The single molecule of C6H12O6 contains 6 C atoms, 12 H atoms, and 6 O atoms. The 6 O2 molecules contain 12 O atoms, giving a total of 18 O atoms among the reactants. Adding up all the atoms on the left gives 6 C atoms, 12 H atoms, and 18 O atoms among the reactants. On the right, the products contain 6 C atoms in the 6 CO2 molecules, 12 H atoms in the 6 H2O molecules, and 12 O atoms in the 6 CO2 molecules, as well as 6 O atoms in the 6 H2O molecules, a total of 18 O atoms. So there are 6 C atoms, 12 H atoms, and 18 O atoms among the products, the same as in the reactants. Therefore, the equation is balanced.

An important exercise is the process of balancing a chemical equation. This consists of putting the correct numbers before each of the reactants and products so that equal numbers of each kind of atom are on both the left and right sides of the equation. The procedure for balancing a chemical equation is addressed in Section 5.2.

Learning chemistry is largely an exercise in learning chemical language. In the chemical language the symbols of the elements are the alphabet. The formulas of the compounds are the words. And chemical equations are the sentences that tell what actually happens.

It is often important to know the physical states of reactants and products in chemical reactions. Suppose, for example, that a geologist tested a sample of rock to see if it were limestone by adding some liquid hydrochloric acid to the rock and observing the CO2 gas coming off.
The equation for the chemical reaction that occurred is $\ce{CaCO3(s) + 2HCl(aq) \rightarrow CO2(g) + CaCl2(aq) + H2O(l)}$ Here abbreviations in parentheses are used to represent the physical state of each reaction participant: (s) for solid, (aq) for a substance in solution, (g) for gas, and (l) for liquid. The equation above states that solid calcium carbonate reacts with an aqueous solution of hydrochloric acid to produce carbon dioxide gas, a solution of calcium chloride, and liquid water.

Chemical reactions often are reversible, that is, they may go either forward or backward. A reversible reaction is shown with a double arrow, ⇌. As an example, consider the reaction of dissolved ammonia, NH3, with water to produce ammonium ion, NH4+, and hydroxide ion, OH-. $\ce{NH3(aq) + H2O(l) \rightleftharpoons NH4^{+} (aq) + OH^{-} (aq)}$ Actually, only a small fraction of NH3 molecules undergo this reaction at any given time, and those that are converted to NH4+ are rapidly converted back to NH3. The double arrow in the chemical equation shows that both the forward and reverse processes occur.

Another symbol that is sometimes used in chemical equations is ∆. This symbol denotes that heat is applied to make the chemical reaction occur at a more rapid pace. It is normally placed over the arrow in the chemical reaction.

Chemical equations are used to calculate the quantities of chemicals involved in a chemical reaction, either as reactants or as products. This is an important area of chemistry that is addressed by the topic of stoichiometry discussed later in this chapter in Section 5.8.
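The atom-by-atom bookkeeping used above to verify the glucose equation is easy to automate. The following sketch parses simple formulas (no parentheses, so a formula like Ca3(PO4)2 would need more work) and checks that each element balances; the function names are illustrative.

```python
# Check whether a chemical equation is balanced by counting atoms of each
# element on both sides, as done by hand for the glucose reaction above.
import re
from collections import Counter

def count_atoms(formula):
    """Element counts for a simple formula such as 'C6H12O6' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_total(side):
    """side is a list of (coefficient, formula) pairs."""
    total = Counter()
    for coefficient, formula in side:
        for element, n in count_atoms(formula).items():
            total[element] += coefficient * n
    return total

def is_balanced(reactants, products):
    return side_total(reactants) == side_total(products)

# C6H12O6 + 6O2 -> 6CO2 + 6H2O
print(is_balanced([(1, "C6H12O6"), (6, "O2")], [(6, "CO2"), (6, "H2O")]))  # True
# CaCO3 + 2HCl -> CO2 + CaCl2 + H2O
print(is_balanced([(1, "CaCO3"), (2, "HCl")],
                  [(1, "CO2"), (1, "CaCl2"), (1, "H2O")]))                 # True
```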
5.02: Balancing Chemical Equations
As noted earlier, a balanced chemical equation shows the same number of each kind of atom on both sides of the equation. The process of balancing chemical equations is an important exercise in chemistry and is addressed here.

Consider a simple example of balancing a chemical equation, the reaction of methane, CH4, with elemental chlorine, Cl2, to produce dichloromethane, CH2Cl2, an important laboratory solvent, and byproduct hydrogen chloride, HCl. The unbalanced chemical equation is $\ce{CH4 + Cl2 \rightarrow CH2Cl2 + HCl}$ Inspection of this equation as it is written shows that it is not balanced because it has 4 H on the left but just 3 on the right, and 2 Cl on the left but 3 Cl on the right. In order to balance such an equation, consider one element at a time. Carbon is already balanced, so it is best to avoid changing any of the numbers in front of the C-containing compounds. The equation can be balanced for H by putting a 2 in front of HCl: $\ce{CH4 + Cl2 \rightarrow CH2Cl2 + 2HCl}$ Now everything is balanced except for Cl, of which there are 4 on the right but just 2 on the left. Placing a 2 in front of Cl2 gives the required 4 Cl atoms on the left: $\ce{CH4 + 2Cl2 \rightarrow CH2Cl2 + 2HCl}$ This equation is now balanced with 1 C, 4 Hs, and 4 Cls on both the left and the right. A crucial thing to remember in balancing a chemical equation is that the chemical formulas must not be altered. Only the relative numbers of reactant and product species may be changed.

Next consider the reaction of methane, CH4, with iron oxide, Fe2O3, to give iron metal, Fe, carbon dioxide, CO2, and water, H2O. The unbalanced equation is $\ce{CH4 + Fe2O3 \rightarrow Fe + CO2 + H2O}$ In this case it is helpful to note that CH4 is the only source of both C and H and that 4 times as many H atoms as C atoms must appear in the products. That means that for each CO2 there must be 2 H2Os. Both C and H are balanced in the following: $\ce{CH4 + Fe2O3 \rightarrow Fe + CO2 + 2H2O}$ But now O is not balanced. Furthermore, the 3 Os in Fe2O3 mean that the number of O atoms must be divisible by 3, so try multiplying the three species balanced so far (CH4, CO2, and 2H2O) by 3: $\ce{3CH4 + Fe2O3 \rightarrow Fe + 3CO2 + 6H2O}$ That gives a total of 12 O atoms on the right, 6 each in 3 CO2 and 6 H2O. Taking 4 times Fe2O3 gives 12 Os on the left: $\ce{3CH4 + 4Fe2O3 \rightarrow Fe + 3CO2 + 6H2O}$ The only species remaining to be balanced is Fe, which can be balanced by putting 8 in front of Fe on the right. The balanced equation is $\ce{3CH4 + 4Fe2O3 \rightarrow 8Fe + 3CO2 + 6H2O}$ Checking the answer shows on both left and right 3 C, 8 Fe, 12 H, and 12 O, demonstrating that the equation is in fact balanced.

Exercise: Balance the following:
1. $\ce{Fe2O3 + CO \rightarrow Fe + CO2}$
2. $\ce{FeSO4 + O2 + H2O \rightarrow Fe(OH)3 + H2SO4}$
3. $\ce{C2H2 + O2 \rightarrow CO2 + H2O}$
4. $\ce{Mg3N2 + H2O \rightarrow Mg(OH)2 + NH3}$
5. $\ce{NaAlH4 + H2O \rightarrow H2 + NaOH + Al(OH)3}$
6. $\ce{Zn(C2H5)2 + O2 \rightarrow ZnO + CO2 + H2O}$

Answers: (1) $\ce{Fe2O3 + 3CO \rightarrow 2Fe + 3CO2}$, (2) $\ce{4FeSO4 + O2 + 10H2O \rightarrow 4Fe(OH)3 + 4H2SO4}$, (3) $\ce{2C2H2 + 5O2 \rightarrow 4CO2 + 2H2O}$, (4) $\ce{Mg3N2 + 6H2O \rightarrow 3Mg(OH)2 + 2NH3}$, (5) $\ce{NaAlH4 + 4H2O \rightarrow 4H2 + NaOH + Al(OH)3}$, (6) $\ce{Zn(C2H5)2 + 7O2 \rightarrow ZnO + 4CO2 + 5H2O}$
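Balancing by inspection, as above, becomes tedious for larger equations. The coefficients can also be found systematically, because atom conservation defines a system of linear equations: one row per element, one column per species, with products counted as negative. A sketch using sympy's nullspace, applied to the CH4/Fe2O3 example worked above, is shown below; the element-count matrix is entered by hand.

```python
# Balance CH4 + Fe2O3 -> Fe + CO2 + H2O by linear algebra.
# Rows are elements (C, H, Fe, O); columns are the species
# CH4, Fe2O3, Fe, CO2, H2O, with product atom counts entered as negative.
from math import lcm
from sympy import Matrix

A = Matrix([
    [1, 0,  0, -1,  0],  # C
    [4, 0,  0,  0, -2],  # H
    [0, 2, -1,  0,  0],  # Fe
    [0, 3,  0, -2, -1],  # O
])

v = A.nullspace()[0]                  # the null space is one-dimensional here
scale = lcm(*(term.q for term in v))  # clear the rational denominators
coefficients = [int(term * scale) for term in v]
if coefficients[0] < 0:               # fix the overall sign if necessary
    coefficients = [-c for c in coefficients]
print(coefficients)  # [3, 4, 8, 3, 6], i.e. 3CH4 + 4Fe2O3 -> 8Fe + 3CO2 + 6H2O
```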
5.03: Will a Chemical Reaction Occur?
The fact that a balanced chemical equation can be written does not necessarily mean that the chemical reaction that it represents will occur. As an example, it is known that a number of metals will react with acid to release elemental hydrogen gas and produce a metal salt. For example, if iron wire, Fe, is placed into a solution of sulfuric acid, H2SO4, H2 gas is evolved, $\ce{Fe(s) + H2SO4(aq) \rightarrow H2(g) + FeSO4(aq)}$ leaving FeSO4 salt in solution.

The copper salt, CuSO4, is also known to exist. So one might believe that it could be prepared by reacting copper metal with H2SO4: $\ce{Cu(s) + H2SO4 (aq) \rightarrow H2(g) + CuSO4 (aq)}$ This equation is balanced and it looks like it could occur. But placing copper metal into a solution of H2SO4 in the laboratory results in nothing; the reaction simply does not occur. The lesson here is that a balanced chemical equation is not sufficient reason to conclude that a reaction will take place.

Since CuSO4 is known to exist, there has to be a way to prepare it. There are, in fact, several ways. One pathway to the preparation of this salt starting with copper metal is to first react the copper with oxygen at a relatively high temperature to produce copper oxide: $\ce{2Cu(s) + O2(g) \rightarrow 2CuO(s)}$ The CuO product reacts with sulfuric acid to give CuSO4 salt: $\ce{CuO(s) + H2SO4(aq) \rightarrow CuSO4(aq) + H2O(l)}$

Alternate Reaction Pathways in Green Chemistry

Much of the science of green chemistry involves making decisions about alternative chemical reactions to choose a reaction or reaction sequence that provides maximum safety, produces minimum byproduct, and utilizes readily available materials. Consider two possible ways of preparing iron sulfate, FeSO4. This chemical is commonly used to treat (clarify) water because when it is added to water and air is bubbled through the water, it produces Fe(OH)3, a gelatinous solid that settles in the water and carries suspended mud and other particles with it. The first way of making FeSO4 was shown earlier and consists of the reaction of iron metal with sulfuric acid: $\ce{Fe(s) + H2SO4(aq) \rightarrow H2(g) + FeSO4(aq)}$
Consider two ways of preparing iron sulfate, FeSO4. This chemical is commonly used to treat (clarify) water because, when it is added to water and air is bubbled through the water, it produces Fe(OH)3, a gelatinous solid that settles in the water and carries suspended mud and other particles with it. The first route was shown earlier and consists of the reaction of iron metal with sulfuric acid:

$\ce{Fe(s) + H2SO4(aq) \rightarrow H2(g) + FeSO4(aq)}$

A second pathway would be to react iron oxide, FeO, with sulfuric acid:

$\ce{FeO(s) + H2SO4(aq) \rightarrow FeSO4(aq) + H2O(l)}$

Which of these reactions would be the better choice? Both would work. The first reaction generates elemental H2 gas as a byproduct. That has a potential downside because elemental hydrogen is highly explosive and flammable and could cause an explosion or fire hazard. But in a contained reaction vessel that allowed for capture of H2, the elemental hydrogen could be put to use as a fuel or reacted directly in a fuel cell to produce electricity (Section 3.2 and Figure 3.2). Furthermore, scrap iron metal and waste sulfuric acid are common materials that should be recycled, and the synthesis of FeSO4 by the direct reaction of the two prepares a useful material from two recyclable substances. The second reaction (5.3.6) also gives the desired product. Its only byproduct is innocuous water, and there is no hazard from elemental hydrogen. In principle, the FeO required could be made by reacting scrap iron metal with oxygen from the air,

$\ce{2Fe + O2 \rightarrow 2FeO}$

but in practice the reaction tends to produce other oxides of iron, particularly Fe2O3 and Fe3O4.
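A quick way to quantify this comparison is to compute, for each route, the mass of byproduct formed per mole of FeSO4 and the fraction of reactant mass that ends up in the desired product (the atom economy idea defined formally in the next section). The short Python sketch below is an illustrative addition, not part of the original text; the molar masses are built from the rounded atomic masses used throughout this chapter.

```python
# Illustrative sketch (not from the original text): compare the two FeSO4
# routes by byproduct mass and by the fraction of reactant mass that ends
# up in the desired product. Molar masses (g/mol) from rounded atomic masses.

molar_mass = {"Fe": 55.8, "FeO": 71.8, "H2SO4": 98.0,
              "FeSO4": 151.8, "H2": 2.0, "H2O": 18.0}

# Route 1: Fe  + H2SO4 -> FeSO4 + H2   (byproduct: hydrogen gas)
# Route 2: FeO + H2SO4 -> FeSO4 + H2O  (byproduct: water)
routes = [("Fe + H2SO4", ["Fe", "H2SO4"], "H2"),
          ("FeO + H2SO4", ["FeO", "H2SO4"], "H2O")]

for name, reactants, byproduct in routes:
    reactant_mass = sum(molar_mass[r] for r in reactants)  # g per mol FeSO4
    fraction = 100.0 * molar_mass["FeSO4"] / reactant_mass
    print(f"{name}: byproduct {molar_mass[byproduct]:.1f} g {byproduct} "
          f"per mol FeSO4; {fraction:.1f}% of reactant mass -> FeSO4")
```

Run as written, the sketch shows that the FeO route actually sends a somewhat smaller fraction of its reactant mass into FeSO4 (water is a heavier byproduct than hydrogen), so it is the hazard of the H2 byproduct, not mass efficiency alone, that can favor the second route.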
A fundamental concept basic to green chemistry that can be illustrated by chemical reactions is the distinction between yield and atom economy. In Chapter 2 yield was defined as a percentage of the degree to which a chemical reaction or synthesis goes to completion, and atom economy was defined as the fraction of reactants that go into final products. Those two ideas are illustrated here for the preparation of HCl gas, which, dissolved in water, produces hydrochloric acid. There are several ways in which HCl can be prepared. One of these, commonly used in the laboratory, is the reaction of concentrated sulfuric acid, H2SO4, with common table salt, NaCl, accompanied by heating to drive off the volatile HCl vapor:

$\ce{2NaCl(s) + H2SO4(l) \rightarrow 2HCl(g) + Na2SO4(s)}$

This reaction can be performed so that all of the NaCl and H2SO4 react, which gives a 100% yield. But it produces Na2SO4 byproduct, so the atom economy is less than 100%. The percent atom economy is calculated very simply by the relationship

$\textrm{Percent atom economy} = \frac{\textrm{Mass of desired product}}{\textrm{Total mass of product}} \times 100$

(We could just as well divide by the total mass of reactants since in a chemical reaction it is equal to the total mass of products.) In this case, the mass of the desired product is that of 2HCl and the total mass of product is that of 2HCl + Na2SO4. Given the atomic masses H 1.0, Cl 35.5, Na 23.0, S 32.0, and O 16.0, the masses are the following:

$\textrm{Mass of desired product} = 2 \times (1.0 + 35.5) = 73.0$

$\textrm{Total mass of product} = 2 \times (1.0 + 35.5) + (2 \times 23.0 + 32.0 + 4 \times 16.0) = 215$

$\textrm{Percent atom economy} = \frac{73.0}{215} \times 100 = 34.0\%$

This result shows that even with 100% yield the reaction is only 34.0% atom economical, and if it were used as a means to prepare HCl, large quantities of Na2SO4, a material with only limited value, would be produced. In contrast, the direct reaction of hydrogen gas with chlorine gas to give HCl gas,

$\ce{H2(g) + Cl2(g) \rightarrow 2HCl(g)}$

can be carried out with 100% atom economy if all of the H2 reacts with Cl2. There is no waste byproduct.

Catalysts

Carbon monoxide will certainly burn in the presence of oxygen from air as shown by the reaction

$\ce{2CO + O2 \rightarrow 2CO2}$

Carbon monoxide is a product of automobile exhausts and an undesirable, toxic air pollutant. One way of ridding automobile exhaust gases of this pollutant is to pump air into the exhaust and convert the carbon monoxide to carbon dioxide as shown by the reaction above. However, even in the presence of oxygen, this reaction does not proceed to completion in an ordinary automobile exhaust system. It is enabled to occur by passing the exhaust, mixed with air, over a solid honeycomb-like ceramic surface coated with a metal that enables the reaction to occur but is not itself consumed in the reaction. Such a substance is called a catalyst. Most people who have an automobile are vaguely aware that they have an automotive exhaust catalyst. They become much more acutely aware of this fact if the automobile's exhaust system fails an emissions test and the catalytic converter in it has to be replaced at a cost of several hundred dollars!

We do not have to go any farther than our own bodies to find catalysts. That is because all living organisms have biological catalysts that enable reactions to occur. Such living catalysts consist of specialized proteins called enzymes. Enzymes are discussed in Chapter 7.
A common enzyme-catalyzed process is the reaction of glucose (blood sugar, C6H12O6) with molecular oxygen to produce energy, mentioned at the beginning of this chapter:

$\ce{C6H12O6 + 6O2 \rightarrow 6CO2 + 6H2O + energy}$

This is the important process of oxic respiration carried out by all organisms that live in contact with air and utilize oxygen from air to react with food materials. Although the overall reaction for oxic respiration can be written very simply, the actual process requires many steps and several catalytic enzymes. Other enzymes are used for various life processes, such as protein synthesis. There are enzymes that detoxify toxic substances, and in some cases they inadvertently make toxic substances out of nontoxic ones. Some of the more common cancer-causing substances are actually synthesized from other molecules by enzyme action. Obviously, enzymes are very important in life processes.

Catalysts speed up reactions. Depending upon the conditions, the rate of a reaction can vary significantly. Rates of chemical reactions are addressed by the area of chemical kinetics. Catalysts are very important in green chemistry. One reason is that catalysts enable reactions to be carried out very specifically. Also, the right catalyst can enable reactions to occur with relatively less energy consumption and at relatively lower temperatures.
It is useful to place chemical reactions in various categories, and the important categories are addressed here. The simplest kind of chemical reaction to visualize is a combination reaction in which two substances come together to form a new substance. The substances may be two elements, two compounds, or an element and a compound. An example of a combination reaction occurs when elemental carbon burns,

$\ce{C + O2 \rightarrow CO2}$

to produce carbon dioxide. Since this reaction generates only one product, it occurs with 100% atom economy. Another combination reaction occurs when calcium oxide, CaO, present in a bed of solid material in a fluidized bed furnace used to burn coal, reacts with sulfur dioxide:

$\ce{CaO + SO2 \rightarrow CaSO3}$

The sulfur dioxide is a potential air pollutant produced from the burning of sulfur present in the coal. By injecting pulverized coal into a bed of CaO and other minerals kept in a fluid-like state by the injection of air, the sulfur dioxide produced has the opportunity to react with CaO and is not emitted as a pollutant with the stack gas. In addition to being a combination reaction, the reaction above could also be called an addition reaction because the SO2 adds to the CaO. Addition reactions are very desirable in the practice of green chemistry because they are 100% atom economical.

The opposite of a combination reaction is a decomposition reaction. An example of such a reaction occurs when a direct electrical current is passed between two electrodes through water to which a salt such as Na2SO4 has been added to make the solution electrically conducting:

$\ce{2H2O(l) \: \: \: \underrightarrow{Electrolysis} \: \: \: 2H2(g) + O2(g)}$

Reactions such as this that occur by the action of electricity passed through a solution are called electrolysis reactions. As written, the reaction is 100% atom economical. However, some side reactions may occur that reduce the efficiency. For example, impurity chloride ion, Cl-, must be avoided in solution because it can produce some Cl2 gas, a toxic, undesirable byproduct. Another inefficiency occurs because not all of the electricity passed through the solution is utilized to decompose water.

An example of a useful decomposition reaction is the high-temperature decomposition of methane,

$\ce{CH4(g) \: \: \: \underrightarrow{\Delta} \: \: \: C(s) + 2H2(g)}$

to produce elemental C and H2 gas (the triangle over the arrow shows that heat is applied, in this case at a temperature of 1260–1425°C, to make the reaction occur). The elemental carbon from this reaction is generated as a fine powder called carbon black. Carbon black is an ingredient of the paste in dry cells (such as those used in portable electronic devices); it is used as a filler in tires and to make electrodes for electrolysis processes such as the one by which aluminum metal is prepared. Decomposition reactions do not always produce elements. For example, sodium bicarbonate mineral, NaHCO3, may be heated,

$\ce{2NaHCO3(s) \: \: \underrightarrow{\Delta} \: \: Na2CO3(s) + CO2(g) + H2O(g)}$

to produce sodium carbonate, Na2CO3, commonly used as an industrial chemical to treat water, in cleaning solutions, and as an ingredient of glass.

Example: Using atomic masses Na 23.0, H 1.0, C 12.0, and O 16.0, calculate the percent atom economy of the above reaction for the production of Na2CO3.

Answer: When 2 formula units of NaHCO3 react, 1 formula unit of Na2CO3 is produced.
The masses involved, in atomic mass units, u, are the following:

$\textrm{Mass of 2NaHCO}_{3} = 2 \times (23.0 + 1.0 + 12.0 + 3 \times 16.0) = 168 \textrm{ u}$

$\textrm{Mass of Na}_{2}\textrm{CO}_{3} = 2 \times 23.0 + 12.0 + 3 \times 16.0 = 106 \textrm{ u}$

$\textrm{Percent atom economy} = \frac{106 \textrm{ u}}{168 \textrm{ u}} \times 100 = 63.1 \%$

(These figures are checked in the sketch at the end of this section.)

A substitution or replacement reaction is one such as the reaction of iron and sulfuric acid,

$\ce{Fe(s) + H2SO4(aq) \rightarrow H2(g) + FeSO4(aq)}$

in which Fe replaces H in H2SO4, a reaction shown earlier for the preparation of FeSO4. This reaction also falls under the classification of reactions involving evolution of a gas, in this case evolution of hydrogen gas. A double replacement reaction, also called a metathesis reaction, is one in which two compounds trade ions or other groups. When dissolved calcium chloride reacts with dissolved sodium carbonate,

$\ce{CaCl2(aq) + Na2CO3(aq) \rightarrow CaCO3(s) + 2NaCl(aq)}$

the Ca2+ ion in calcium chloride simply switches places with the Na+ ions in the sodium carbonate to produce solid calcium carbonate and NaCl in solution. This is also a precipitation reaction in which a solid material forms from two substances dissolved in water; the solid formed is a precipitate. The removal of calcium from water as shown by this reaction is a common water treatment process called water softening. It is done because excessive levels of calcium cause formation of scale that can clog water pipes and damage plumbing apparatus. Whenever an acid and a base react, as shown here for the reaction of hydrochloric acid with sodium hydroxide,

$\ce{HCl(aq) + NaOH(aq) \rightarrow H2O(l) + NaCl(aq)}$

water and a salt are formed. Such a reaction is a neutralization reaction or simply an acid-base reaction.

Exercise: Classify each of the following reactions as combination, decomposition, substitution, metathesis, neutralization, precipitation, or evolution of a gas. In some cases, a reaction will fit into more than one category.

(a) $\ce{2Ca(s) + O2(g) \rightarrow 2CaO(s)}$
(b) $\ce{2KClO3(s) \underrightarrow{\Delta} 2KCl(s) + 3O2(g)}$
(c) $\ce{SO3(g) + H2O(l) \rightarrow H2SO4(aq)}$
(d) $\ce{MgCO3(s) + 2HCl(aq) \rightarrow MgCl2(aq) + H2O(l) + CO2(g)}$
(e) $\ce{Zn(s) + CuCl2(aq) \rightarrow Cu(s) + ZnCl2(aq)}$
(f) $\ce{KOH(aq) + HCl(aq) \rightarrow KCl(aq) + H2O(l)}$
(g) $\ce{MgSO4(aq) + 2KOH(aq) \rightarrow Mg(OH)2(s) + K2SO4(aq)}$

Answers: (a) combination, (b) decomposition, evolution of a gas, (c) combination, (d) metathesis, evolution of a gas, (e) substitution, (f) neutralization, metathesis, (g) precipitation, metathesis
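The two atom economy calculations presented so far (the HCl preparation in the previous section and the Na2CO3 example above) both reduce to a single formula, which the short Python sketch below, an illustrative addition to the text, applies to check the 34.0% and 63.1% results.

```python
# Illustrative sketch (not from the original text): the percent atom
# economy formula applied to the two worked examples in this chapter.
# Atomic masses as given in the text: H 1.0, C 12.0, O 16.0, Na 23.0,
# S 32.0, Cl 35.5.

def percent_atom_economy(desired_product_mass, total_product_mass):
    """Percent atom economy = mass of desired product / total mass of products * 100."""
    return 100.0 * desired_product_mass / total_product_mass

# 2NaCl + H2SO4 -> 2HCl + Na2SO4 (desired product: HCl)
m_2HCl = 2 * (1.0 + 35.5)                       # 73.0
m_Na2SO4 = 2 * 23.0 + 32.0 + 4 * 16.0           # 142.0
print(round(percent_atom_economy(m_2HCl, m_2HCl + m_Na2SO4), 1))   # 34.0

# 2NaHCO3 -> Na2CO3 + CO2 + H2O (desired product: Na2CO3)
m_Na2CO3 = 2 * 23.0 + 12.0 + 3 * 16.0           # 106.0
m_2NaHCO3 = 2 * (23.0 + 1.0 + 12.0 + 3 * 16.0)  # 168.0 = total product mass
print(round(percent_atom_economy(m_Na2CO3, m_2NaHCO3), 1))         # 63.1
```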
Many reactions, including some of those given in the preceding section, are oxidation-reduction reactions, frequently called redox reactions. This name derives from the long-standing use of oxidation to describe the reaction of a substance with oxygen. Consider the following reaction of elemental calcium with elemental oxygen:

$\ce{2Ca + O2 \rightarrow 2CaO}$

Combining with oxygen, Ca is oxidized. Whenever something is oxidized, something else has to be reduced. In this case, elemental oxygen is reduced to produce the oxide ion, O2-, in CaO. It is seen from this reaction that the calcium atoms lose electrons when they are oxidized and the oxygen atoms gain electrons. This leads to another definition of oxidation-reduction reactions: when a chemical species loses electrons in a chemical reaction it is oxidized, and when a species gains electrons it is reduced.

Elemental hydrogen is commonly involved in oxidation-reduction. Whenever a chemical species reacts with elemental hydrogen, it is reduced. As an example, iron(II) oxide, FeO, can be reacted with elemental hydrogen:

$\ce{FeO + H2 \rightarrow Fe + H2O}$

In this case the Fe in FeO is reduced to iron metal and the hydrogen in elemental H2 is oxidized to H2O. When elemental oxygen reacts to produce chemically combined oxygen, it is acting as an oxidizing agent and is reduced. And when elemental hydrogen reacts to produce chemically combined hydrogen, it acts as a reducing agent and is oxidized. Consider what happens when the opposite reactions occur. When chemically combined oxygen is released as elemental oxygen from a chemical reaction, the oxygen is oxidized. And when elemental hydrogen is released as the result of a chemical reaction, hydrogen is reduced.

A good illustration of these definitions may be seen when a direct electrical current is passed between two metal electrodes through water made electrically conducting by dissolving in it a salt, such as Na2SO4, as shown in Figure 5.1. At the left electrode, electrons are pumped into the system, reducing the chemically bound H in H2O to elemental H2. An electrode at which reduction occurs is called the cathode. At the other electrode, electrons are removed from the system, elemental O2 is released, and the oxygen in H2O is oxidized. An electrode at which oxidation occurs is called the anode.

Figure 5.1. Electrolysis of water containing some dissolved salt to make it electrically conducting. At the left electrode (cathode), H in H2O is reduced by adding electrons, releasing H2 gas. At the right electrode (anode), electrons are removed from chemically bound O in H2O, releasing elemental O2; the oxygen is oxidized.

The reaction shown above is an electrolysis reaction. It is very significant in the practice of green chemistry because it is a means of getting pure hydrogen and pure oxygen from water without the use of any other chemical reagents. For example, using a nonpolluting source of energy, such as wind power, elemental hydrogen can be generated for use in nonpolluting fuel cells (see Figure 3.2 and Chapter 16).

Oxidation-reduction reactions are very significant in energy conversion processes. An important example is photosynthesis,

$\ce{6CO2 + 6H2O + h \nu \rightarrow C6H12O6 + 6O2}$

in which solar energy ($h \nu$) from sunlight is used by plants to produce glucose sugar, C6H12O6, a high-energy compound that is used by organisms to provide energy for their metabolic needs. Since elemental oxygen is produced, oxygen is oxidized.
Although it is not obvious from the discussion of oxidation-reduction so far, carbon is also reduced in photosynthesis; the carbon in the C6H12O6 product is reduced compared to the carbon in the CO2 reactant. The reverse of this reaction, shown at the beginning of this chapter, is

$\ce{C6H12O6 + 6O2 \rightarrow 6CO2 + 6H2O + energy}$

which occurs when organisms, including humans, utilize glucose sugar to produce energy. In this case, oxygen reacts, an obvious oxidation process. The oxygen is reduced and carbon is oxidized by the action of the elemental oxygen. A very common oxidation-reduction reaction occurs when fossil fuels are burned to produce energy. One such reaction occurs when natural gas (methane, CH4) burns,

$\ce{CH4 + 2O2 \rightarrow CO2 + 2H2O + energy}$

to produce carbon dioxide and water, releasing energy. The burning of gasoline, diesel fuel, coal, wood, and even hydrogen gas are oxidation-reduction reactions in which carbon or hydrogen is oxidized by the action of oxygen, yielding usable energy.

Oxidation-reduction reactions are the most important kinds of reactions considered in green chemistry. That is true in part because of the central role currently played by the oxidation of fossil fuels and other materials in producing the energy needed for chemical processes. Furthermore, the most common raw material currently used for making plastics, synthetic fabrics, and other manufactured materials is petroleum hydrocarbon. There are many hydrocarbon compounds, all containing chemically bound carbon and hydrogen. A typical such compound is ethane, C2H6. The hydrogen and carbon in a hydrocarbon are in the most chemically reduced form, but required raw materials often are partially oxidized hydrocarbons in which O atoms are bonded to the hydrocarbon (complete oxidation of a hydrocarbon yields CO2 and H2O). Ethanol, C2H6O, used in chemical synthesis and as an oxygenated additive to make gasoline burn more smoothly with emission of fewer air pollutants, is a partially oxidized hydrocarbon. Large quantities of materials and energy are expended in converting petroleum hydrocarbons to partially oxidized compounds used as raw materials. For example, ethanol can be made from ethane taken from petroleum and natural gas by a series of chemical reactions for which the net process is the following:

$\ce{2C2H6 + O2 \rightarrow 2C2H6O}$

This transformation requires relatively severe conditions and a net loss of energy. A greener alternative is to use glucose sugar produced by photosynthesis (Reaction 5.7.3) to grow yeasts that produce ethanol,

$\ce{C6H12O6 \rightarrow 2C2H6O + 2CO2}$

a process that occurs under room temperature conditions. In addition to making ethanol, this fermentation process yields carbon dioxide in a concentrated form that can be used for carbonated beverages, used as supercritical carbon dioxide solvent, or pumped underground for tertiary petroleum recovery. The protein-rich yeast biomass produced in fermentation makes a good animal feed additive.
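The electron-transfer definition of redox amounts to simple bookkeeping: the electrons lost by atoms that are oxidized must equal the electrons gained by atoms that are reduced. The following Python sketch, an illustrative addition with oxidation states assigned by hand, verifies this balance for the combustion of methane shown above.

```python
# Illustrative sketch (not from the original text): electron bookkeeping
# for CH4 + 2O2 -> CO2 + 2H2O. Oxidation states are assigned by hand
# using the usual rules (H is +1 in compounds, O is -2 in compounds and
# 0 in O2); the script only checks that electrons lost equal electrons gained.

# (element, oxidation state in reactants, in products, atoms per equation)
changes = [
    ("C", -4, +4, 1),   # C in CH4 becomes C in CO2: oxidized
    ("O",  0, -2, 4),   # O in 2 O2 becomes O in CO2 and H2O: reduced
    ("H", +1, +1, 4),   # H stays +1 throughout: neither
]

lost = sum((after - before) * n
           for _, before, after, n in changes if after > before)
gained = sum((before - after) * n
             for _, before, after, n in changes if after < before)

print(f"electrons lost = {lost}, electrons gained = {gained}")
assert lost == gained  # 8 e-: carbon loses 8, four oxygen atoms gain 2 each
```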
Much of green chemistry is involved with calculations of the quantities of materials involved in chemical reactions. It is essential to do such calculations in order to deal with the important concepts of percent yield and atom economy. Fortunately, it is easy to calculate quantities of materials if a balanced chemical equation is known along with the pertinent atomic and formula masses. Both energy (Section 5.9) and mass (Section 5.10) in chemical reactions can be calculated. To this point, we have been viewing chemical reactions in terms of individual atoms and molecules and have been thinking of masses in atomic mass units, u, used to express the masses of individual atoms and molecules. But that is much too small a scale to use in the laboratory. The chemist conveniently deals with grams and moles, where a mole of a substance (see Section 4.8) typically has a mass of several to several hundred grams. Much of the remainder of this chapter deals with quantitative information from chemical reactions. Energy changes in chemical reactions are addressed first, followed by consideration of the masses of reactants and products.

Energy in Chemical Reactions

In addition to the changes in the distribution of mass among various chemical species that occur with chemical reactions, another important participant in chemical reactions is energy. Chapter 15 defines and discusses energy in more detail. Here energy is considered in the form of heat, manifested by the movement of atoms and molecules, and as chemical energy, which is energy stored in chemical bonds in matter. The standard unit of energy is the joule, abbreviated J. A total of 4.184 J of heat energy will raise the temperature of 1 g of liquid water by 1 °C.

As an example of energy involved with chemical reactions, consider what happens in a burner on a kitchen range fueled by natural gas. The flame is obviously hot; something is going on that is releasing heat energy. The flame is also giving off light energy, probably as a light blue glow. A chemical reaction is taking place as the methane in the natural gas combines with oxygen in the air,

$\ce{CH4 + 2O2 \rightarrow CO2 + 2H2O}$

to produce carbon dioxide and water. Most of the energy released during this chemical reaction is released as heat, and a little bit as light. It is reasonable to assume that the methane and oxygen contain stored energy as chemical potential energy and that it is released in producing carbon dioxide and water. Common sense tells us that it would be hard to get heat energy out of either of the products. They certainly won't burn! Water is used to put out fires, and carbon dioxide is even used in fire extinguishers.

The potential energy contained in chemical species is contained in the chemical bonds of the molecules that are involved in the chemical reaction. Figure 5.2 shows the kinds of bonds involved in methane, elemental oxygen, carbon dioxide, and water and the energy contained in each. The bond energies are in units of the number of kilojoules (thousands of joules, kJ) required to break a mole ($6.02 \times 10^{23}$, see Section 4.8) of the bonds (kJ/mol). The same amount of energy is released when a mole of a bond is formed. By convention, energy put into a system is given a positive sign and energy released is denoted by a negative sign. To calculate the energy change when a mole of methane reacts with oxygen as shown in Reaction 5.9.1, the difference is taken between the sum of the energies of the bonds in the products and the sum of the energies of the bonds in the reactants.
Examination of Reaction 5.9.1 and Figure 5.2 shows the following total bond energies in the products:

$\textrm{1 mol CO}_{2} \times \frac{\textrm{2 mol C=O}}{\textrm{mol CO}_{2}} \times \frac{\textrm{799 kJ}}{\textrm{mol C=O}} = \textrm{1598 kJ}$

$\textrm{2 mol H}_{2}\textrm{O} \times \frac{\textrm{2 mol O-H}}{\textrm{mol H}_{2}\textrm{O}} \times \frac{\textrm{459 kJ}}{\textrm{mol O-H}} = \textrm{1836 kJ}$

Total bond energy in products = 1598 kJ + 1836 kJ = 3434 kJ

A similar calculation gives the total bond energies in the reactants:

$\textrm{1 mol CH}_{4} \times \frac{\textrm{4 mol C-H}}{\textrm{1 mol CH}_{4}} \times \frac{\textrm{411 kJ}}{\textrm{mol C-H}} = \textrm{1644 kJ}$

$\textrm{2 mol O}_{2} \times \frac{\textrm{1 mol O=O}}{\textrm{mol O}_{2}} \times \frac{\textrm{494 kJ}}{\textrm{mol O=O}} = \textrm{988 kJ}$

Total bond energy in reactants = 1644 kJ + 988 kJ = 2632 kJ

The difference in bond energies between products and reactants is 3434 kJ - 2632 kJ = 802 kJ. This calculation states that, based upon considerations of bond energy alone, the energy released when 1 mole of CH4 reacts with 2 moles of O2 to produce 1 mole of CO2 and 2 moles of H2O is 802 kJ. This is an exothermic reaction in which heat energy is released, so it is denoted as -802 kJ. This value is close to the value that would be obtained by experimentally measuring the heat energy released by the reaction, assuming all the reactants and products were in the gas phase. (A significant amount of heat energy is released when vapor-phase water condenses to liquid. Highly efficient condensing gas furnaces capture this heat in a heat exchanger where the water vapor in the furnace exhaust condenses to the liquid phase.) To a good approximation, therefore, the amount of heat energy released in a chemical reaction, and thus the difference in potential chemical energy between reactants and products, is equal to the difference between the total bond energies of the products and those of the reactants.
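The tabulated arithmetic lends itself to a quick computational check. The sketch below is an illustrative addition that recomputes the bond-energy balance; the bond energies are the values quoted in this section from Figure 5.2.

```python
# Illustrative sketch (not from the original text): the bond-energy
# estimate for CH4 + 2O2 -> CO2 + 2H2O, using the bond energies quoted
# above (kJ per mole of bonds).

bond_energy = {"C-H": 411, "O=O": 494, "C=O": 799, "O-H": 459}

reactant_bonds = [("C-H", 4), ("O=O", 2)]  # 1 CH4 has 4 C-H; 2 O2 have 2 O=O
product_bonds = [("C=O", 2), ("O-H", 4)]   # 1 CO2 has 2 C=O; 2 H2O have 4 O-H

energy_in = sum(bond_energy[b] * n for b, n in reactant_bonds)   # 2632 kJ
energy_out = sum(bond_energy[b] * n for b, n in product_bonds)   # 3434 kJ

# Negative sign: energy is released (exothermic reaction)
print(f"Estimated reaction energy: {energy_in - energy_out} kJ/mol CH4")  # -802
```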
Stoichiometry by the Mole Ratio Method

The calculation of quantities of materials involved in chemical reactions is called stoichiometry. To illustrate stoichiometric calculations, consider a typical chemical reaction, in this case the heat-producing combustion of ethane, a hydrocarbon fuel with the chemical formula C2H6:

$\ce{2C2H6 + 7O2 \rightarrow 4CO2 + 6H2O}$

Rather than viewing this reaction in terms of individual molecules, it is possible to scale up to moles. Recall from Section 4.8 that the mole is a fundamental unit for quantity of material and that each mole contains Avogadro's number ($6.022 \times 10^{23}$) of formula units (molecules of covalently bound compounds). This equation simply says that 2 moles of C2H6 react with 7 moles of O2 to yield 4 moles of CO2 and 6 moles of H2O. Now the equation can be examined in more detail to do some quantitative calculations. Before doing that, however, review the following two terms:

Formula mass: The sum of the atomic masses of all the atoms in a formula unit of a compound. Although the average masses of atoms and molecules may be expressed in atomic mass units (amu or u), formula mass is generally viewed as being relative and without units.

Molar mass: Where X is the formula mass, the molar mass is X grams of an element or compound, that is, the mass in grams of 1 mole of the element or compound.

Given the atomic masses H 1.0, C 12.0, and O 16.0, the molar mass of C2H6 is 2×12.0 + 6×1.0 = 30.0 g/mol, that of O2 is 2×16.0 = 32.0 g/mol, that of CO2 is 12.0 + 2×16.0 = 44.0 g/mol, and that of H2O is 2×1.0 + 16.0 = 18.0 g/mol. Now consider the chemical equation

$\ce{2C2H6 + 7O2 \rightarrow 4CO2 + 6H2O}$

in terms of the minimum whole number of moles reacting and produced and the masses in grams of these quantities. The equation states that 2 moles of C2H6 with a mass of 2×30.0 g = 60.0 g of C2H6 react with 7 moles of O2 with a mass of 7×32.0 g = 224 g of O2 to produce 4 moles of CO2 with a mass of 4×44.0 g = 176 g of CO2 and 6 moles of H2O with a mass of 6×18.0 g = 108 g of H2O. The total mass of reactants is

$\textrm{60.0 g of C}_{2}\textrm{H}_{6} + \textrm{224 g of O}_{2} = \textrm{284 g of reactants}$

and the total mass of products is

$\textrm{176 g of CO}_{2} + \textrm{108 g of H}_{2}\textrm{O} = \textrm{284 g of products}$

Note that, as in all chemical reactions, the total mass of products equals the total mass of reactants. Stoichiometry, the calculation of quantities of materials involved in chemical reactions, is based upon the law of conservation of mass, which states that the total mass of reactants in a chemical reaction equals the total mass of products, because matter is neither created nor destroyed in chemical reactions. The basic premise of the mole ratio method of stoichiometric calculations is that the relative numbers of moles of reactants and products remain the same regardless of the total quantity of reaction.

To illustrate stoichiometric calculations, consider again the following reaction:

$\ce{2C2H6 + 7O2 \rightarrow 4CO2 + 6H2O}$

As noted above, this equation states that 2 moles of C2H6 react with 7 moles of O2 to produce 4 moles of CO2 and 6 moles of H2O. The same ratios hold true regardless of how much material reacts. So for 10 times as much material, 20 moles of C2H6 react with 70 moles of O2 to produce 40 moles of CO2 and 60 moles of H2O. Suppose that it is given that 18.0 g of C2H6 react. What is the mass of O2 that will react with this amount of C2H6? What mass of CO2 is produced? What mass of H2O is produced? This problem can be solved by the mole ratio method.
Mole ratios are, as the name implies, simply the ratios of the moles of various reactants and products to each other as shown by a chemical equation. Mole ratios are obtained by simply examining the chemical equation in question; the three that will be used in solving the problem posed are the following:

$\frac{\textrm{7 mol O}_{2}}{\textrm{2 mol C}_{2} \textrm{H}_{6}} \: \: \: \frac{\textrm{4 mol CO}_{2}}{\textrm{2 mol C}_{2} \textrm{H}_{6}} \: \: \: \frac{\textrm{6 mol H}_{2} \textrm{O}}{\textrm{2 mol C}_{2} \textrm{H}_{6}}$

To solve for the mass of O2 reacting, the following steps are involved:

A. Start with the mass of C2H6 reacting
B. Convert to moles of C2H6
C. Convert to moles of O2
D. Convert to mass of O2

In order to perform the calculation, it will be necessary to have the molar mass of C2H6, stated earlier as 30.0 g/mol, the molar mass of O2 (32.0 g/mol), and the mole ratio relating moles of O2 reactant to moles of C2H6, 7 mol O2/2 mol C2H6. The calculation becomes the following:

$\textrm{Mass of O}_{2} = \textrm{18.0 g C}_{2} \textrm{H}_{6} \times \frac{\textrm{1 mol C}_{2} \textrm{H}_{6}}{\textrm{30.0 g C}_{2} \textrm{H}_{6}} \times \frac{\textrm{7 mol O}_{2}}{\textrm{2 mol C}_{2} \textrm{H}_{6}} \times \frac{\textrm{32.0 g O}_{2}}{\textrm{1 mol O}_{2}} = \textrm{67.2 g O}_{2}$

Note that in this calculation units cancel above and below the line, starting with units of g C2H6. Now that the mass of O2 reacting has been calculated, it is possible, using the appropriate mole ratios and molar masses, to calculate the masses of CO2 and of H2O produced as follows:

$\textrm{Mass of CO}_{2} = \textrm{18.0 g C}_{2} \textrm{H}_{6} \times \frac{\textrm{1 mol C}_{2}\textrm{H}_{6}}{\textrm{30.0 g C}_{2} \textrm{H}_{6}} \times \frac{\textrm{4 mol CO}_{2}}{\textrm{2 mol C}_{2} \textrm{H}_{6}} \times \frac{\textrm{44.0 g CO}_{2}}{\textrm{1 mol CO}_{2}} = \textrm{52.8 g CO}_{2}$

$\textrm{Mass of H}_{2} \textrm{O} = \textrm{18.0 g C}_{2} \textrm{H}_{6} \times \frac{\textrm{1 mol C}_{2} \textrm{H}_{6}}{\textrm{30.0 g C}_{2} \textrm{H}_{6}} \times \frac{\textrm{6 mol H}_{2}\textrm{O}}{\textrm{2 mol C}_{2} \textrm{H}_{6}} \times \frac{\textrm{18.0 g H}_{2} \textrm{O}}{\textrm{1 mol H}_{2} \textrm{O}} = \textrm{32.4 g H}_{2} \textrm{O}$

Are the masses calculated above correct? A good check is to compare the total mass of reactants, 18.0 g C2H6 + 67.2 g O2 = 85.2 g of reactants, with the total mass of products, 52.8 g CO2 + 32.4 g H2O = 85.2 g of products. The fact that the total mass of reactants is equal to the total mass of products gives confidence that the calculations are correct.

As one more example, consider the reaction of 15.0 g of Al with Cl2 to give AlCl3:

$\ce{2Al + 3Cl2 \rightarrow 2AlCl3}$

What mass of Cl2 reacts and what is the mass of AlCl3 produced? The atomic mass of Al is 27.0 and that of Cl is 35.5. Therefore, the molar mass of Cl2 is 71.0 g/mol and the molar mass of AlCl3 is 133.5 g/mol.
The mass of Cl2 reacting is

$\textrm{Mass of Cl}_{2} = \textrm{15.0 g Al} \times \frac{\textrm{1 mol Al}}{\textrm{27.0 g Al}} \times \frac{\textrm{3 mol Cl}_{2}}{\textrm{2 mol Al}} \times \frac{\textrm{71.0 g Cl}_{2}}{\textrm{1 mol Cl}_{2}} = \textrm{59.2 g Cl}_{2}$

$\textrm{Mass of AlCl}_{3} = \textrm{15.0 g Al} \times \frac{\textrm{1 mol Al}}{\textrm{27.0 g Al}} \times \frac{\textrm{2 mol AlCl}_{3}}{\textrm{2 mol Al}} \times \frac{\textrm{133.5 g AlCl}_{3}}{\textrm{1 mol AlCl}_{3}} = \textrm{74.2 g AlCl}_{3}$

As a check, 15.0 g Al + 59.2 g Cl2 gives a total of 74.2 g of reactants, equal to the mass of the AlCl3 product.

Exercise: Calculate the mass of CH4 that reacts and the masses of the products when 25.0 g of Fe2O3 undergo the reaction below. The atomic masses involved are H 1.0, C 12.0, O 16.0, and Fe 55.8.

$\ce{4Fe2O3 + 3CH4 \rightarrow 8Fe + 3CO2 + 6H2O}$

Answer: 1.88 g CH4, 17.5 g Fe, 5.2 g CO2, 4.2 g H2O

Exercise: Calculate the mass of O2 that reacts and the masses of the products when 100 g of benzoic acid, C7H6O2, undergo the reaction below. The atomic masses involved are H 1.0, C 12.0, and O 16.0.

$\ce{2C7H6O2 + 15O2 \rightarrow 14CO2 + 6H2O}$

Answer: 197 g O2, 252 g CO2, 44.3 g H2O
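Because the mole ratio method is the same three-step conversion every time (grams to moles, mole ratio, moles back to grams), it is easy to capture in a few lines of code. The Python sketch below is an illustrative addition that reproduces the 18.0 g C2H6 example above.

```python
# Illustrative sketch (not from the original text): the mole ratio method
# for 2C2H6 + 7O2 -> 4CO2 + 6H2O, reproducing the 18.0 g C2H6 example.

molar_mass = {"C2H6": 30.0, "O2": 32.0, "CO2": 44.0, "H2O": 18.0}  # g/mol
coeff = {"C2H6": 2, "O2": 7, "CO2": 4, "H2O": 6}  # balanced-equation coefficients

def mass_of(target, known, known_mass_g):
    """Mass of `target` from a known mass of `known`:
    grams -> moles -> mole ratio -> grams."""
    moles_known = known_mass_g / molar_mass[known]
    moles_target = moles_known * coeff[target] / coeff[known]
    return moles_target * molar_mass[target]

for species in ("O2", "CO2", "H2O"):
    print(f"{species}: {mass_of(species, 'C2H6', 18.0):.1f} g")
# O2: 67.2 g, CO2: 52.8 g, H2O: 32.4 g
# Mass balance check: 18.0 + 67.2 = 85.2 = 52.8 + 32.4
```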
Limiting Reactant and Percent Yield

Mixing of exact amounts of reactants such that all are consumed and none left over in a chemical reaction almost never occurs. Instead, one of the reactants is usually a limiting reactant. Suppose, for example, that 100 g of elemental zinc (atomic mass 65.4) and 80 g of elemental sulfur (atomic mass 32.0) are mixed and heated, undergoing the following reaction:

$\ce{Zn + S \rightarrow ZnS}$

What mass of ZnS, formula mass 97.4 g/mol, is produced? If 100 g of zinc react completely, the mass of S reacting and the mass of ZnS produced would be given by the following calculations:

$\textrm{Mass S} = \textrm{100.0 g Zn} \times \frac{\textrm{1 mol Zn}}{\textrm{65.4 g Zn}} \times \frac{\textrm{1 mol S}}{\textrm{1 mol Zn}} \times \frac{\textrm{32.0 g S}}{\textrm{1 mol S}} = \textrm{48.9 g S}$

$\textrm{Mass ZnS} = \textrm{100.0 g Zn} \times \frac{\textrm{1 mol Zn}}{\textrm{65.4 g Zn}} \times \frac{\textrm{1 mol ZnS}}{\textrm{1 mol Zn}} \times \frac{\textrm{97.4 g ZnS}}{\textrm{1 mol ZnS}} = \textrm{149 g ZnS}$

Only 48.9 g of the available S react, so sulfur is in excess and zinc is the limiting reactant. A similar calculation for the amount of Zn required to react with 80 g of sulfur would show that 164 g of Zn would be required, but only 100 g is available.

Exercise: A solution containing 10.0 g of HCl dissolved in water (a solution of hydrochloric acid) was mixed with 8.0 g of Al metal, undergoing the reaction

$\ce{2Al + 6HCl \rightarrow 2AlCl3 + 3H2}$

Given atomic masses H 1.0, Al 27.0, and Cl 35.5, which reactant was left over? How much? What mass of AlCl3 was produced?

Answer: HCl was the limiting reactant. Only 2.47 g of Al were consumed, leaving 5.53 g of Al unreacted. The mass of AlCl3 produced was 12.2 g.

Percent Yield

The mass of product calculated from the mass of limiting reactant in a chemical reaction is called the stoichiometric yield of the reaction. By measuring the actual mass of a product produced in a chemical reaction and comparing it to the mass predicted from the stoichiometric yield, it is possible to calculate the percent yield. This concept is illustrated by the following example. Suppose that a water solution containing 25.0 g of CaCl2 was mixed with a solution of excess sodium sulfate,

$\ce{CaCl2(aq) + Na2SO4(aq) \rightarrow CaSO4(s) + 2NaCl(aq)}$

to produce a solid precipitate of CaSO4, the desired product of the reaction. (Recall that a precipitate is a solid formed by the reaction of species in solution; such a solid is said to precipitate from the solution.) Removed by filtration and dried, the precipitate was found to have a mass of 28.3 g, the measured yield. What was the percent yield? Using atomic masses Ca 40.0, Cl 35.5, Na 23.0, S 32.0, and O 16.0 gives molar masses of 111 g/mol for CaCl2 and 136 g/mol for CaSO4. Furthermore, 1 mole of CaCl2 yields 1 mole of CaSO4. The stoichiometric yield of CaSO4 is given by the following calculation:

$\textrm{Mass CaSO}_{4} = \textrm{25.0 g CaCl}_{2} \times \frac{\textrm{1 mol CaCl}_{2}}{\textrm{111 g CaCl}_{2}} \times \frac{\textrm{1 mol CaSO}_{4}}{\textrm{1 mol CaCl}_{2}} \times \frac{\textrm{136 g CaSO}_{4}}{\textrm{1 mol CaSO}_{4}} = \textrm{30.6 g CaSO}_{4}$

The percent yield is calculated by the following:

$\textrm{Percent yield} = \frac{\textrm{Measured yield}}{\textrm{Stoichiometric yield}} \times 100$

$\textrm{Percent yield} = \frac{\textrm{28.3 g}}{\textrm{30.6 g}} \times 100 = 92.5 \%$
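The limiting-reactant test and the percent-yield formula can likewise be checked computationally. The following Python sketch is an illustrative addition that reproduces the Zn/S and CaSO4 examples from this section.

```python
# Illustrative sketch (not from the original text): limiting-reactant and
# percent-yield arithmetic for the examples in this section.

molar_mass = {"Zn": 65.4, "S": 32.0, "ZnS": 97.4}  # g/mol

# Zn + S -> ZnS with 100 g Zn and 80 g S; all coefficients are 1,
# so the reactant supplying fewer moles is the limiting reactant.
moles = {"Zn": 100.0 / molar_mass["Zn"], "S": 80.0 / molar_mass["S"]}
limiting = min(moles, key=moles.get)
stoich_yield = moles[limiting] * molar_mass["ZnS"]
print(f"limiting reactant: {limiting}; stoichiometric yield: {stoich_yield:.0f} g ZnS")

def percent_yield(measured_g, stoichiometric_g):
    return 100.0 * measured_g / stoichiometric_g

# CaSO4 example: 30.6 g predicted, 28.3 g actually recovered
print(f"percent yield: {percent_yield(28.3, 30.6):.1f}%")   # 92.5%
```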
Titrations: Measuring Moles by Volume of Solution

Masses are commonly measured with a laboratory balance that registers in grams. Masses of industrial chemicals are measured with much larger industrial scales that commonly give masses in kilograms or tons. In doing laboratory stoichiometric measurements with species in solution, it is often convenient to measure volumes of solution rather than masses of reactants. Solutions can be prepared that contain known numbers of moles per unit volume of solution. The volume of the reagent that must be added to another reagent to undergo a particular reaction can be measured with a device called a buret, shown in Figure 5.3. By measuring the volume of a solution of known solute concentration required to react with another reactant, the number of moles of solute reacting can be calculated and stoichiometric calculations can be performed based upon the reaction. This procedure is commonly used in chemical analysis and is called titration.

It is especially easy to relate volumes of solutions stoichiometrically when the solution concentrations are expressed as molar concentration, M. This concentration unit is defined as

$\textrm{M} = \frac{\textrm{moles of solute}}{\textrm{number of liters of solution}}$

The number of moles of a substance, in this case the moles of solute, is related to the mass of the substance by

$\textrm{Moles of solute} = \frac{\textrm{mass of solute}}{\textrm{molar mass of solute, g/mol}}$

These two relationships can be combined to give the following useful equation:

$\textrm{M} = \frac{\textrm{mass of solute}}{\textrm{(molar mass of solute)} \times \textrm{(number of liters of solution)}}$

A solution of known concentration that is added to a reaction mixture during the procedure of titration is a standard solution. One of the most common of these is a standard base solution of sodium hydroxide, NaOH. Typically, the concentration of sodium hydroxide in such a standard solution is 0.100 mol/L. Suppose that it is desired to make exactly 2 liters of a 0.100 mol/L sodium hydroxide solution. What mass of NaOH, molar mass 40.0 g/mol, is dissolved in this solution? To do this calculation, use Equation 5.12.3 rearranged to solve for the mass of solute:

$\textrm{Mass NaOH} = \textrm{M} \times \textrm{(molar mass NaOH)} \times \textrm{(liters of solution)}$

$\textrm{Mass NaOH} = 0.100 \textrm{ mol/L} \times 40.0 \textrm{ g/mol} \times 2.00 \textrm{ L} = 8.00 \textrm{ g NaOH}$

A common titration procedure is to use a standard solution of base to titrate an unknown solution of acid, or to use standard acid to determine base. As an example, consider an analysis for acid in a sample of water used to scrub exhaust gas from a hospital incinerator. The water is acidic because of the presence of hydrochloric acid produced by the scrubbing of HCl gas from the incinerator stack gas, where the HCl was produced in the burning of polyvinyl chloride in the incinerator. Suppose that a 100 mL sample of the scrubber water was taken for titration with 0.125 mol/L standard NaOH and that the volume of standard NaOH consumed was 11.7 mL. What was the molar concentration of HCl in the stack gas scrubber water? To solve this problem it is necessary to know that the reaction between NaOH and HCl is

$\ce{NaOH + HCl \rightarrow NaCl + H2O}$

a neutralization reaction in which water and a salt, NaCl, are produced. Examination of the reaction shows that 1 mole of HCl reacts for each mole of NaOH.
Equation 5.12.1 applies to both the standard NaOH solution and the HCl solution being titrated, leading to the following equations:

$\textrm{M}_{\textrm{HCl}} = \frac{\textrm{moles}_{\textrm{HCl}}}{\textrm{liters}_{\textrm{HCl}}} \: \: \: \textrm{and} \: \: \: \textrm{M}_{\textrm{NaOH}} = \frac{\textrm{moles}_{\textrm{NaOH}}}{\textrm{liters}_{\textrm{NaOH}}}$

When exactly enough NaOH has been added to react with all the HCl present, the reaction is complete with no excess of either HCl or NaOH. In a titration this end point is normally shown by the change of color of a dye called an indicator dissolved in the solution being titrated. At the end point, moles HCl = moles NaOH, and the two equations above can be combined to give

$\textrm{M}_{\textrm{HCl}} \times \textrm{liters}_{\textrm{HCl}} = \textrm{M}_{\textrm{NaOH}} \times \textrm{liters}_{\textrm{NaOH}}$

which can be used to give the molar concentration of HCl:

$\textrm{M}_{\textrm{HCl}} = \frac{\textrm{M}_{\textrm{NaOH}} \times \textrm{liters}_{\textrm{NaOH}}}{\textrm{liters}_{\textrm{HCl}}}$

Converting the volumes given from mL to liters and substituting into this equation gives the molar concentration of HCl in the incinerator scrubber water:

$\textrm{M}_{\textrm{HCl}} = \frac{0.125 \textrm{ mol/L} \times 0.0117 \textrm{ L}}{0.100 \textrm{ L}} = 0.0146 \textrm{ mol/L}$

Determining Percentage Composition by Titration

A useful application of titration, or titrimetric analysis as it is called, is to determine the percentage of a substance in a solid sample that will react with the titrant. To see how this is done, consider a sample consisting of basic lime, Ca(OH)2, and dirt, with a mass of 1.26 g. Using titration with a standard acid solution it is possible to determine the mass of basic Ca(OH)2 in the sample and from that calculate the percentage of Ca(OH)2 in the sample. Assume that the solid sample is placed in water and titrated with 0.112 mol/L standard HCl, a volume of 42.2 mL (0.0422 L) of the acid being required to reach the end point. The Ca(OH)2 reacts with the HCl,

$\ce{Ca(OH)2 + 2HCl \rightarrow CaCl2 + 2H2O}$

whereas the dirt does not react. Examination of this reaction shows that at the end point the mole ratio

$\frac{\textrm{1 mol Ca(OH)}_{2}}{\textrm{2 mol HCl}}$

applies.
At the end point, the number of moles of HCl can be calculated from

$\textrm{Mol}_{\textrm{HCl}} = \textrm{liters}_{\textrm{HCl}} \times \textrm{M}_{\textrm{HCl}}$

and, since the molar mass of Ca(OH)2 is 74.1 g/mol (given atomic masses 40.1, 16.0, and 1.0 for Ca, O, and H, respectively), the mass of Ca(OH)2 is given by

$\textrm{Mass}_{\textrm{Ca(OH)}_{2}} = \textrm{moles}_{\textrm{Ca(OH)}_{2}} \times \textrm{molar mass}_{\textrm{Ca(OH)}_{2}}$

With this information it is now possible to calculate the mass of Ca(OH)2:

$\textrm{Mass}_{\textrm{Ca(OH)}_{2}} = \textrm{mol}_{\textrm{Ca(OH)}_{2}} \times \frac{\textrm{74.1 g Ca(OH)}_{2}}{\textrm{1 mol Ca(OH)}_{2}}$

$\textrm{Mass}_{\textrm{Ca(OH)}_{2}} = \underbrace{\textrm{Liters}_{\textrm{HCl}} \times \textrm{M}_{\textrm{HCl}}}_{\textrm{Moles HCl reacting}} \times \underbrace{\frac{\textrm{1 mol Ca(OH)}_{2}}{\textrm{2 mol HCl}}}_{\textrm{Converts from moles HCl to moles Ca(OH)}_{2}} \times \underbrace{\frac{\textrm{74.1 g Ca(OH)}_{2}}{\textrm{1 mol Ca(OH)}_{2}}}_{\textrm{Gives mass Ca(OH)}_{2} \textrm{ from moles Ca(OH)}_{2}}$

$\textrm{Mass}_{\textrm{Ca(OH)}_{2}} = 0.0422 \textrm{ L HCl} \times \frac{\textrm{0.112 mol HCl}}{\textrm{1 L HCl}} \times \frac{\textrm{1 mol Ca(OH)}_{2}}{\textrm{2 mol HCl}} \times \frac{\textrm{74.1 g Ca(OH)}_{2}}{\textrm{1 mol Ca(OH)}_{2}}$

$\textrm{Mass Ca(OH)}_{2} = 0.175 \textrm{ g}$

$\textrm{Percent}_{\textrm{Ca(OH)}_{2}} = \frac{\textrm{mass Ca(OH)}_{2}}{\textrm{mass sample}} \times 100 = \frac{0.175 \textrm{ g}}{1.26 \textrm{ g}} \times 100 = 13.9 \%$

Exercise: A 0.638 g sample consisting of oxalic acid, H2C2O4, and sodium oxalate, Na2C2O4, was dissolved and titrated with 0.116 mol/L sodium hydroxide, of which 47.6 mL (0.0476 L) was required. Each molecule of H2C2O4 releases 2 H+ ions. Calculate the percentage of oxalic acid in the sample.

Answer: 38.9%
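Both titration calculations in this section follow the same pattern: moles of titrant, a mole ratio from the balanced equation, then the concentration or mass of the analyte. The Python sketch below is an illustrative addition that reproduces the two worked results.

```python
# Illustrative sketch (not from the original text): the two titration
# calculations from this section.

def analyte_molarity(M_titrant, V_titrant_L, V_sample_L, mol_ratio=1.0):
    """Molarity of the analyte; mol_ratio = mol analyte per mol titrant."""
    return M_titrant * V_titrant_L * mol_ratio / V_sample_L

# Scrubber water: 100 mL HCl sample, 11.7 mL of 0.125 mol/L NaOH, 1:1 ratio
print(f"{analyte_molarity(0.125, 0.0117, 0.100):.4f} mol/L HCl")  # 0.0146

# Lime sample: 1.26 g titrated with 42.2 mL of 0.112 mol/L HCl;
# Ca(OH)2 + 2HCl -> CaCl2 + 2H2O, so 0.5 mol Ca(OH)2 per mol HCl
moles_HCl = 0.112 * 0.0422
mass_CaOH2 = moles_HCl * 0.5 * 74.1        # molar mass of Ca(OH)2 = 74.1 g/mol
print(f"{100.0 * mass_CaOH2 / 1.26:.1f}% Ca(OH)2")  # 13.9%
```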
Industrial Chemical Reactions: The Solvay Process

Literally thousands of chemical reactions are used to make important industrial products. Most of these involve organic chemicals, which are addressed in Chapter 9 and later chapters of this book. Some are used to make inorganic chemicals in large quantities. One such synthesis operation is the Solvay process, long used to make sodium bicarbonate and sodium carbonate, industrial chemicals required for glass making, cleaning formulations, and many other applications. The Solvay process is examined in some detail in this section because it illustrates some important inorganic chemical reactions and can be used for the discussion of green chemistry in industry. The key reaction in Solvay synthesis is

$\ce{NaCl + NH3 + CO2 + H2O \rightarrow NaHCO3(s) + NH4Cl}$

in which a sodium chloride solution (brine) is saturated with ammonia gas (NH3), then with carbon dioxide, and finally cooled. This is a precipitation reaction in which solid sodium bicarbonate, NaHCO3, comes out of solution. When heated, the solid NaHCO3 yields solid sodium carbonate, Na2CO3, water vapor, and carbon dioxide gas:

$\ce{2NaHCO3 + heat \rightarrow Na2CO3 + H2O(g) + CO2(g)}$

In keeping with the practice of green chemistry (although Solvay developed the process long before anyone ever thought of green chemistry), the CO2 from Reaction 5.13.2 is recycled back into Reaction 5.13.1.

The raw materials for the Solvay process are cheap. The NaCl solution can be pumped from the ground from brine deposits in some locations, or fresh water can be pumped into a salt formation to dissolve NaCl and the resulting brine pumped to the surface. The most expensive raw material is ammonia, which is made by the reaction of elemental hydrogen and nitrogen over an iron-based catalyst,

$\ce{3H2 + N2 \rightarrow 2NH3}$

a means of making ammonia developed by Haber and Bosch in Germany in 1913. However, as shown below, the ammonia is recycled, so only relatively small quantities of additional makeup NH3 are required. In addition to NaCl, the major consumable raw material in the Solvay process is calcium carbonate, CaCO3, which is abundantly available from deposits of limestone. It is heated (calcined),

$\ce{CaCO3 + heat \rightarrow CaO + CO2}$

to produce calcium oxide and carbon dioxide gas. The carbon dioxide gas is used in Reaction 5.13.1, another green chemical aspect of the process. The calcium oxide is reacted with water (it is said to be slaked),

$\ce{CaO + H2O \rightarrow Ca(OH)2}$

to produce basic calcium hydroxide. This base is then reacted with the solution from which solid NaHCO3 has been precipitated (Reaction 5.13.1) and that contains dissolved ammonium chloride,

$\ce{Ca(OH)2(s) + 2NH4Cl(aq) \rightarrow 2NH3(g) + CaCl2(aq) + 2H2O(l)}$

releasing ammonia gas that is recycled back into Reaction 5.13.1 for NaHCO3 synthesis. This recycling of ammonia is essential for the process to be economical. It has the disadvantage of generating a solution of calcium chloride, CaCl2. The commercial demand for this salt is limited, although concentrated solutions of it are used for de-icing ice-covered roads. It has such a voracious appetite for water that it cannot be dried economically for storage in a dry form.

Does the Solvay process meet the criteria for a green chemical synthesis? There is not a simple answer to that question. There are two respects in which it does meet green chemical criteria:

1. It uses inexpensive, abundantly available raw materials in the form of NaCl brine and limestone (CaCO3).
A significant amount of NH3 is required to initiate the process, but only relatively small quantities are needed to keep it going.

2. It maximizes recycling of two major reactants, ammonia and carbon dioxide. The calcination of limestone (Reaction 5.13.4) provides ample carbon dioxide to make up for inevitable losses from the process, but some additional ammonia has to be added to compensate for any leakage.

What about the percent yield and atom economy of the Solvay process? The percent yield of the reaction generating the product, Reaction 5.13.1, can be expected to be significantly less than 100%, in large part because the stoichiometric amount of NaHCO3 cannot be expected to precipitate from the reaction mixture. To calculate the maximum atom economy for Na2CO3 production, it must be assumed that all reactions go to completion without any losses. In such an ideal case, the overall reaction for the process is

$\ce{CaCO3 + 2NaCl \rightarrow Na2CO3 + CaCl2}$

Using the atomic masses Na 23.0, Ca 40.0, C 12.0, O 16.0, and Cl 35.5 gives the molar masses of CaCO3, 100 g/mol; NaCl, 58.5 g/mol; Na2CO3, 106 g/mol; and CaCl2, 111 g/mol. If the minimum whole number of moles of reactants were to react, 100 g of CaCO3 would react with 2×58.5 = 117 g of NaCl to produce 106 g of Na2CO3 and 111 g of CaCl2. Note that the mass of NaCl reacting is 2 times the molar mass because 2 moles of NaCl are reacting. So, for these amounts of materials in the reaction, a total mass of 100 + 117 = 217 g of reactants produces 106 g of the Na2CO3 product. Therefore, the percent atom economy is

$\textrm{Percent atom economy} = \frac{\textrm{Mass of desired product}}{\textrm{Total mass of reactants}} \times 100 = \frac{106 \textrm{ g}}{217 \textrm{ g}} \times 100 = 48.8 \%$

This is the maximum possible value assuming complete reactions and no losses (the arithmetic is checked in the sketch at the end of this section). If the CaCl2 byproduct is considered to be a useful product, the atom economy can be regarded as being higher.

Is the Solvay process green with respect to environmental impact? Again, the answer to this question is mixed. Extraction of the two major raw materials, limestone and NaCl, normally can be accomplished with minimal adverse effects on the environment. Quarrying of limestone in open pits produces dust, and blasting of the rock, usually carried out with an explosive mixture of fuel oil and ammonium nitrate, NH4NO3, causes some disturbance. Open-pit limestone quarries can be unsightly, but can also serve as artificial lakes. In some places the underground spaces left from the underground quarrying of limestone have found excellent commercial use as low-cost warehouses that largely provide their own climate control. Truck transport of quarried limestone definitely has negative environmental impacts. Extraction of liquid NaCl brine usually has minimal environmental impact. The Solvay process itself releases significant quantities of greenhouse gas CO2 and some gaseous ammonia to the atmosphere, and Solvay production of sodium carbonate requires significant amounts of energy.

There are numerous natural deposits of sodium bicarbonate and sodium carbonate. The most common source of these salts is a mineral called trona, for which the chemical formula is Na2CO3·NaHCO3·2H2O. (This formula shows that a formula unit of trona mineral consists of 1 formula unit of ionic Na2CO3, 1 formula unit of ionic NaHCO3, and 2 molecules of H2O.)
The development of huge deposits of trona in the state of Wyoming and elsewhere in the world has lowered dependence on the Solvay process for sources of sodium bicarbonate and sodium carbonate, and the process is no longer used in the United States.
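For completeness, the maximum atom economy computed above for the Solvay process can be verified with the same kind of short calculation used earlier in the chapter. This sketch is an illustrative addition, not part of the original text.

```python
# Illustrative sketch (not from the original text): maximum atom economy
# of the Solvay process from the idealized overall reaction
# CaCO3 + 2NaCl -> Na2CO3 + CaCl2, with the molar masses used in the text.

molar_mass = {"CaCO3": 100.0, "NaCl": 58.5, "Na2CO3": 106.0, "CaCl2": 111.0}

total_reactants = molar_mass["CaCO3"] + 2 * molar_mass["NaCl"]   # 217 g
desired = molar_mass["Na2CO3"]                                   # 106 g

print(f"maximum atom economy = {100.0 * desired / total_reactants:.1f}%")  # 48.8%
# If the CaCl2 byproduct is counted as a useful product as well, the
# effective figure rises accordingly.
```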
Questions and Problems

1. How do chemical equations relate to chemical reactions?

2. Summarize the information contained in the chemical equation below. How would this reaction be classified? CaCl2(aq) + Na2CO3(aq) → CaCO3(s) + 2NaCl(aq)

3. What are the meanings of (s), (l), (g), and (aq) after formulas in a chemical equation? What are the meanings of Δ and ←→?

4. What is wrong with balancing the chemical equation S + O2 → SO3 as S + O2 → SO2?

5. From your knowledge of chemistry and chemical formulas, write the balanced equation for heating magnesium carbonate to give magnesium oxide and carbon dioxide, indicating the physical states of the reactants and products.

6. Balance the equation FeSO4 + H2SO4 + O2 → Fe2(SO4)3 + H2O, which is for a reaction involved in the formation of pollutant acid mine water.

7. Balance each of the following: (a) C2H4 + O2 → CO2 + H2O, (b) KClO4 → KClO + O2, (c) FeS2 + O2 + H2O → FeSO4 + H2SO4, (d) Fe2O3 + CO → Fe + CO2, (e) H3PO4 + H2 → PH3 + H2O, (f) P + Cl2 → PCl5

8. Explain how chemical equations fit in with the general scheme of chemistry as a language.

9. A chemical equation that describes the action of hydrogen sulfide, H2S, dissolved in water is H2S ←→ H+ + HS-. What does this equation say, and how is it consistent with the fact that dissolved hydrogen sulfide is a weak acid?

10. From the discussion of reactions of metals with sulfuric acid in Section 5.3 and your knowledge of the properties of silver jewelry, explain what is likely to happen when silver metal is placed in sulfuric acid.

11. Zinc is a very reactive metal. Explain with chemical equations what you would expect to happen if zinc metal were placed in sulfuric acid and what would happen if zinc oxide, ZnO, were placed in sulfuric acid.

12. Finely divided steel wool heated red hot and quickly placed into a bottle of oxygen burns vigorously, undergoing the reaction 4Fe + 3O2 → 2Fe2O3. Why is there no concern that a steel beam used in construction will burn in air? However, such a beam can be cut with an oxyacetylene torch by first heating a small portion of it with the torch, then turning off the acetylene and slowly running the torch across the beam. What is happening in this case?

13. A water solution of hydrogen peroxide, H2O2, is relatively stable. But if a small quantity of solid manganese oxide is placed in the solution of hydrogen peroxide, bubbles are given off near the surface of the manganese oxide, although the solid appears to remain intact. Explain what happens and the role of the manganese oxide.

14. The following reactions were given in connection with the Solvay process used to make sodium bicarbonate and sodium carbonate: (A) NaCl + NH3 + CO2 + H2O → NaHCO3(s) + NH4Cl, (B) 2NaHCO3 + heat → Na2CO3 + H2O(g) + CO2(g), (C) 3H2 + N2 → 2NH3, (D) CaCO3 + heat → CaO + CO2, (E) CaO + H2O → Ca(OH)2. Classify each of these reactions in the categories given in Section 5.6.

15. Given the chemical reaction 4CH4 + 6NO2 → 4CO + 3N2 + 8H2O, write all the possible mole ratios relating N2 to each of the other reaction participants.

16. Given the atomic masses N 14.0, H 1.0, and Cl 35.5 and the reaction below, calculate the mass of HCl produced when 12.7 g of NH3 react. 2NH3 + 3Cl2 → N2 + 6HCl

17. Given the atomic masses C 12.0, H 1.0, and O 16.0 and the reaction below, calculate the mass of H2O produced when 15.6 g of O2 react. C2H4 + 3O2 → 2CO2 + 2H2O

18. Match the reaction type from the list on the left with the example reaction from the right, below. PbSO4 is insoluble in water.
A. Decomposition 1. HCl + NaOH → H2O + NaCl
B. Neutralization 2. Pb(NO3)2 + Na2SO4 → PbSO4 + 2NaNO3
C. Substitution 3. 2H2O2 → 2H2O + O2
D. Double displacement 4. CuSO4(aq) + Fe(s) → FeSO4(aq) + Cu(s)

19. Of the following, the untrue statement is
A. The symbol ←→ is used to show that a reaction goes both ways.
B. The notation (l) is used to show that a reactant or product is dissolved in water.
C. A catalyst changes the rate of a reaction but is not itself consumed.
D. The symbol Δ is used to show application of heat to a reaction.
E. Simply because a chemical equation may be written and balanced does not indicate for certain that the chemical reaction it represents will occur.

20. Given the reaction 2S + 3O2 → 2SO3 and atomic masses of 32.0 and 16.0 for S and O, respectively, calculate the mass of O2 reacting with 15.0 g of S.

21. Given the reaction CH4 + 2H2O → CO2 + 4H2 and atomic masses of C 12.0, H 1.0, and O 16.0, calculate the total mass of products formed when 24.0 g of CH4 react.

22. Given the reaction 3CH4 + 4Fe2O3 → 3CO2 + 6H2O + 8Fe and atomic masses of C 12.0, H 1.0, Fe 55.8, and O 16.0, what is the mass of CO2 produced by the reaction of 36.0 g of Fe2O3?

23. What is the basis of stoichiometry with respect to relative amounts of materials in reactions, and what are the major steps in doing a stoichiometric calculation?

24. What is a limiting reactant?

25. A solution of FeSO4 was prepared by mixing 100 g of pure H2SO4 with water and putting the solution in contact with 50.0 g of iron metal. What reaction occurred? What masses of reaction products were generated, and what masses of reactants, if any, were left over? The atomic masses needed are H 1.0, Fe 55.8, S 32.0, and O 16.0.

26. What is the difference between the stoichiometric yield and the measured yield in a chemical reaction? How are they used to calculate percent yield?

27. How are titrations and stoichiometry related?

28. A solid mineral sample consisting of calcium carbonate, CaCO3, and nonreactive mineral matter weighing 0.485 g was stirred in some water to which 0.115 mol/L standard hydrochloric acid, HCl, was added from a buret. The reaction was CaCO3 + 2HCl → CaCl2 + CO2 + H2O. If 48.6 milliliters (0.0486 L) of HCl was required to react with all the CaCO3 in the sample, what was the percentage of CaCO3 in the sample, given that the molar mass of CaCO3 is 100 g/mol?

29. A 250 mL sample of incinerator exhaust gas scrubber water contaminated with HCl was titrated with 0.104 mol/L standard NaOH, of which 11.3 mL were required to reach the end point. What was the molar concentration of HCl in the water sample?

30. What is made by the Solvay process? What is the overall chemical reaction that describes the Solvay process? What are the two major raw materials consumed, and what are two major species that are recycled through the process?

31. What are the major green aspects of the Solvay process? What are some aspects that are less green?

32. What is a major alternative to use of the Solvay process?

33. Calculate the number of moles of AlCl3 in 38.6 g of the compound and the number of moles of CH4 in 217 g of methane. Use 27.0, 35.5, 12.0, and 1.0 for the atomic masses of Al, Cl, C, and H, respectively.

34. Why might you expect stoichiometric ratios of reactants to be used in industrial chemical reactions? If one of two reactants used in an industrial process is much more expensive than the other, suggest why and in which direction a stoichiometric ratio might not be used. Also, suppose that one of two reactants is quite toxic whereas the other reactant is not.
Why might the practice of green chemistry suggest using a nonstoichometric ratio of reactants in such a case? 35. Given the reaction 2H2+ O2→2H2O, identify which species is oxidized, which is reduced, which is the oxidizing agent, and which is the reducing agent. 36. Given the reaction that occurs when a direct electrical current is passed through liquid ionic NaCl, 2Na++ 2Cl-→2Na + Cl2, identify which species is oxidized and which is reduced. Justify the answer. 37. Identify which reactions given in Section 5.6 are oxidation-reduction reactions
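The mole-ratio reasoning behind mass-to-mass problems such as 16, 17, and 20 through 22 above can be captured in a few lines of code. The following Python sketch is an added illustration, not part of the original problem set; it works problem 16 (2NH3 + 3Cl2 → N2 + 6HCl) using the atomic masses given there.

    # Illustrative sketch of the general mass-to-mass stoichiometry method:
    # mass of known -> moles of known -> mole ratio -> moles of unknown -> mass.
    ATOMIC_MASS = {"N": 14.0, "H": 1.0, "Cl": 35.5}

    def molar_mass(formula):
        """Molar mass (g/mol) of a compound given as an {element: count} dict."""
        return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

    nh3 = {"N": 1, "H": 3}    # NH3, 17.0 g/mol
    hcl = {"H": 1, "Cl": 1}   # HCl, 36.5 g/mol

    mass_nh3 = 12.7                          # g of NH3, as given in problem 16
    mol_nh3 = mass_nh3 / molar_mass(nh3)     # mass -> moles
    mol_hcl = mol_nh3 * 6 / 2                # mole ratio from 2NH3 + 3Cl2 -> N2 + 6HCl
    mass_hcl = mol_hcl * molar_mass(hcl)     # moles -> mass

    print(f"{mass_hcl:.1f} g HCl")           # prints: 81.8 g HCl

The same three-step pattern (mass to moles, mole ratio from the balanced equation, moles back to mass) applies to each of the mass-calculation problems above.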
“The first few of what are now known to be organic chemicals to be discovered were produced by living organisms and were therefore called ‘organic.’ One such compound is urea, which occurs in urine. In 1828 Friedrich Wöhler disproved the idea that all organic compounds must come from living organisms when he accidentally discovered that urea could be made by the reaction of cyanic acid (HOCN) and ammonia, both simple inorganic compounds. This discovery established the science of organic chemistry based upon the unique bonding capabilities of the carbon atom, leading to the synthesis and discovery of tens of millions of unique organic compounds. In 2009 the American Chemical Society Chemical Abstracts Service registered the 60 millionth compound (most of which are organic compounds) only 9 months after the registration of the 40 millionth, a pace of discovery of more than one new compound per minute.”

06: The Wonderful World of Carbon - Organic Chemistry and Biochemicals

Most of the molecules of chemical compounds studied so far have been clusters of only a few atoms. Thus molecules of water, H2O, exist as individual clusters of 2 H atoms bonded to 1 O atom, and molecules of ammonia, NH3, each consist of an atom of N to which are bonded 3 H atoms. In cases where atoms of a particular element in chemical compounds have a tendency to bond with atoms of the same element, the number of possible compounds is increased tremendously. This is the case with carbon, C. Groups of carbon atoms can bond together to form straight chains, branched chains, and rings, leading to a virtually limitless number of chemical compounds. Such carbon-containing compounds are organic chemicals, the study of which is organic chemistry. Adding to the enormous diversity of organic chemistry is the fact that two carbon atoms may be connected by single bonds consisting of 2 shared electrons, double bonds composed of 4 shared electrons, and even triple bonds that contain 6 shared electrons.

Organic chemicals comprise most of the substances with which chemists are involved. Petroleum, which serves as the raw material for vast polymer, plastics, rubber, and other industries, consists of hundreds of compounds composed of hydrogen and carbon called hydrocarbons. Among organic chemicals are included the majority of important industrial compounds, synthetic polymers, agricultural chemicals, and most substances that are of concern because of their toxicities and other hazards. The carbohydrates, proteins, lipids (fats and oils), and nucleic acids (DNA) that make up the biomass of living organisms are organic chemicals made by biological processes. The feedstock chemicals needed to manufacture a wide range of chemical products are mostly organic chemicals, and their acquisition and processing are of great concern in the practice of green chemistry.

The largest fraction of organic chemicals acquired from petroleum and natural gas sources are burned to fuel vehicles, airplanes, home furnaces, and power plants. Prior to burning, these substances may be processed to give them desired properties. This is particularly true of the constituents of gasoline, the molecules of which are processed and modified to give gasoline desired properties of smooth burning (good antiknock properties) and low air pollution potential. Pollution of the water, air, and soil environments by organic chemicals is an area of significant concern.
Much of the effort put into green chemistry has involved the safe manufacture, recycling, and disposal of organic compounds. A number of organic compounds are made by very sophisticated techniques to possess precisely tailored properties. This is especially true of pharmaceuticals, which must be customized to deliver the desired effects with minimum undesirable side effects. A single organic compound that is effective against one of the major health problems — usually one out of hundreds or even thousands tested — has the potential for hundreds of millions of dollars per year in profits.

Organic chemicals differ widely in their toxicities. Some compounds are made and used because of their toxicities to undesirable organisms. These are the pesticides, including, especially, insecticides used to kill unwanted insects and herbicides used to eradicate weeds that compete with desired crops. Green chemistry is very much involved with these kinds of applications. One of the more widely applied uses of genetically modified crops has been the development of crops that produce their own insecticides in the form of insecticidal proteins normally made by certain kinds of bacteria whose genes have been spliced into field crops. Another application of green chemistry through genetic engineering is the development of crops that resist the effects of specific organic molecules commonly used as herbicides. These herbicides may be applied directly to target crops, leaving them unscathed while competing weeds are killed.

It should be obvious from this brief discussion that organic chemistry is a vast, diverse, highly useful discipline based upon the unique bonding properties of the carbon atom. The remainder of this chapter discusses major aspects of organic chemistry. Many of the most interesting and important organic chemicals are made by biological processes. Indeed, until 1828, it was generally believed that only organisms could synthesize organic chemicals. In that year, Friedrich Wöhler succeeded in making urea, an organic chemical that is found in urine, from ammonium cyanate, an inorganic material. Because of the important role of organisms in making organic chemicals, several of the most significant kinds of these chemicals made biologically are also discussed in this chapter. Additional details regarding the ways in which living organisms make and process chemicals are given in Chapters 9 and 13.
The tremendous variety and diversity of organic chemistry is due to the ability of carbon atoms to bond with each other in a variety of straight chains, branched chains, and rings and of adjacent carbon atoms to be joined by single, double, or triple bonds. This bonding ability can be illustrated with the simplest class of organic chemicals, the hydrocarbons, consisting only of hydrogen and carbon. Figure 6.1 shows some hydrocarbons in various configurations.

Hydrocarbons are the major ingredients of petroleum and are pumped from the ground as crude oil or extracted as natural gas. They have two major uses. The first of these is combustion as a source of fuel. The most abundant hydrocarbon in natural gas, methane, CH4, is burned in home furnaces, electrical power plants, and even in vehicle engines, $\ce{CH4 + 2O2 \rightarrow CO2 + 2H2O + heat energy}$ to provide energy. The second major use of hydrocarbons is as a raw material for making rubber, plastics, polymers, and many other kinds of materials. Given the value of hydrocarbons as a material, it is unfortunate that so much of hydrocarbon production is simply burned to provide energy, which could be generated by other means.

There are several major classes of hydrocarbons, all consisting of only hydrogen and carbon. Alkanes have only single bonds between carbon atoms. Cyclohexane, n-heptane, and 3-ethyl-2,5-dimethylhexane in Figure 6.1 are alkanes; cyclohexane is a cyclic hydrocarbon. Alkenes, such as propene shown in Figure 6.1, have at least one double bond consisting of 4 shared electrons between two of the carbon atoms in the molecule. Alkynes have at least one triple bond between carbon atoms in the molecule, as shown for acetylene in Figure 6.1. Acetylene is an important fuel for welding and cutting torches; otherwise, the alkynes are of relatively little importance and will not be addressed further. A fourth class of hydrocarbon consists of aromatic compounds, which have rings of carbon atoms with special bonding properties as discussed later in this chapter.

Alkanes

The molecular formulas of non-cyclic alkanes are CnH2n+2. By counting the numbers of carbon and hydrogen atoms in the molecules of alkanes shown in Figure 6.1, it is seen that the molecular formula of n-heptane is C7H16 and that of 3-ethyl-2,5-dimethylhexane is C10H22, both of which fit the general formula given above. The general formula of cyclic alkanes is CnH2n; that of cyclohexane, the most common cyclic alkane, is C6H12. These formulas are molecular formulas, which give the number of carbon and hydrogen atoms in each molecule but do not tell anything about the structure of the molecule. The formulas given in Figure 6.1 are structural formulas, which show how the molecule is assembled. The structure of n-heptane is that of a straight chain of carbon atoms; each carbon atom in the middle of the chain is bound to 2 H atoms and the 2 carbon atoms at the ends of the chain are each bound to 3 H atoms. The prefix hept in the name denotes 7 carbon atoms and the n- indicates that the compound consists of a single straight chain. This compound can be represented by a condensed structural formula, CH3(CH2)5CH3, representing 7 carbon atoms in a straight chain.
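The CnH2n+2 rule and the condensed CH3(CH2)n-2CH3 chain pattern just described are simple enough to generate programmatically. The short Python sketch below is an added illustration, not material from the original text; the function name is arbitrary.

    # Molecular and condensed structural formulas of straight-chain alkanes,
    # following the CnH2n+2 rule and the CH3(CH2)n-2CH3 chain pattern.
    def straight_chain_alkane(n):
        """Return (molecular formula, condensed formula) for n >= 3 carbon atoms."""
        molecular = f"C{n}H{2 * n + 2}"
        condensed = f"CH3(CH2){n - 2}CH3"
        return molecular, condensed

    for n in (5, 7, 10):
        print(n, *straight_chain_alkane(n))
    # n = 7 gives C7H16 and CH3(CH2)5CH3, matching n-heptane as described above.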
In addition to methane mentioned previously, the lower alkanes include the following:

Ethane: CH3CH3
Propane: CH3CH2CH3
Butane: CH3(CH2)2CH3
n-Pentane: CH3(CH2)3CH3

For alkanes with 5 or more carbon atoms, the prefix (pent for 5, hex for 6, hept for 7, oct for 8, non for 9) shows the total number of carbon atoms in the compound, and n- may be used to denote a straight-chain alkane. Condensed structural formulas may be used to represent branched-chain alkanes as well. The condensed structural formula of 3-ethyl-2,5-dimethylhexane is CH3CH(CH3)CH(C2H5)CH2CH(CH3)CH3. In this formula, the C atoms and their attached H atoms that are not in parentheses show carbons that are part of the main hydrocarbon chain. The (CH3) after the second C in the chain shows a methyl group attached to it, the (C2H5) after the third carbon atom in the chain shows an ethyl group attached to it, and the (CH3) after the fifth carbon atom in the chain shows a methyl group attached to it.

Compounds that have the same molecular formulas but different structural formulas are structural isomers. For example, the straight-chain alkane with the molecular formula C10H22 is n-decane, which is a structural isomer of 3-ethyl-2,5-dimethylhexane.

The names of organic compounds are commonly based upon the structure of the hydrocarbon from which they are derived, using the longest continuous chain of carbon atoms in the compound as the basis for the name. For example, the longest continuous chain of carbon atoms in 3-ethyl-2,5-dimethylhexane shown in Figure 6.1 is 6 carbon atoms, so the name is based upon hexane. The names of the chain branches are also based upon the alkanes from which they are derived. As shown below, the two shortest-chain alkanes are methane with 1 carbon atom and ethane with 2 carbon atoms. Removal of 1 of the H atoms from methane gives the methyl group and removal of 1 of the H atoms from ethane gives the ethyl group. These terms are used in the name 3-ethyl-2,5-dimethylhexane to show groups attached to the basic hexane chain. The carbon atoms in this chain are numbered sequentially from left to right. An ethyl group is attached to the 3rd carbon atom, yielding the “3-ethyl” part of the name, and methyl groups are attached to the 2nd and 5th carbon atoms, which gives the “2,5-dimethyl” part of the name.

The names discussed above are systematic names, which are based upon the actual structural formulas of the molecules. In addition, there are common names of organic compounds that do not indicate the structural formulas. Naming organic compounds is a complex topic, and no attempt is made here to teach it to the reader. However, from the names of compounds given in this and later chapters, some appreciation of the rationale for organic compound names should be obtained.

Other than burning them for energy, the major kind of reaction with alkanes consists of substitution reactions such as $\ce{C2H6 + 2Cl2 \rightarrow C2H4Cl2 + 2HCl}$ in which one or more H atoms are displaced by another kind of atom. This is normally the first step in converting alkanes to compounds containing elements other than carbon or hydrogen for use in synthesizing a wide variety of organic compounds.

Alkenes

Four common alkenes are shown in Figure 6.2. Alkenes have at least one C=C double bond per molecule and may have more. The first of the alkenes in Figure 6.2, ethylene, is a very widely produced hydrocarbon used to synthesize polyethylene plastic and other organic compounds. About 25 billion kilograms (kg) of ethylene are processed in the U.S.
each year. About 14.5 billion kg of propylene are used in the U.S. each year to produce polypropylene plastic and other chemicals. The two 2-butene compounds illustrate an important aspect of alkenes, the possibility of cis-trans isomerism. Whereas carbon atoms and the groups substituted onto them joined by single bonds can freely rotate relative to each other as though they were joined by a single shaft, carbon atoms connected by a double bond behave as though they were attached by two parallel shafts and are not free to rotate. So cis-2-butene, in which the two end methyl (-CH3) groups are on the same side of the molecule, is a different compound from trans-2-butene, in which they are on opposite sides. These two compounds are cis-trans isomers.

Alkenes are chemically much more active than alkanes. This is because the double bond is unsaturated and has electrons available to form additional bonds with other atoms. This leads to addition reactions in which a molecule is added across a double bond. For example, the addition of H2O to ethylene yields ethanol, the same kind of alcohol that is in alcoholic beverages. In addition to adding immensely to the chemical versatility of alkenes, addition reactions make them quite reactive in the atmosphere during the formation of photochemical smog. The presence of double bonds also adds to the biochemical and toxicological activity of compounds in organisms.

Because of their double bonds, alkenes can undergo polymerization reactions in which large numbers of individual molecules add to each other to produce big molecules called polymers (see Section 6.5). For example, 3 ethylene molecules can add together, a process that can continue, forming longer and longer chains and resulting in the formation of the huge molecules of polyethylene.

Aromatic Hydrocarbons

A special class of hydrocarbons consists of rings of carbon atoms, almost always containing 6 C atoms, which can be viewed as having alternating single and double bonds as shown below: These structures show the simplest aromatic hydrocarbon, benzene, C6H6. Although the benzene molecule is represented with 3 double bonds, chemically it differs greatly from alkenes, for example undergoing substitution reactions rather than addition reactions. The special properties of aromatic compounds are collectively called aromaticity. The two structures shown above are equivalent resonance structures, which can be viewed as having atoms that stay in the same places, but in which the bonds joining the atoms can shift positions with the movement of electrons composing the bonds. Since benzene has different chemical properties from those implied by either of the above structures, it is commonly represented as a hexagon with a circle in the middle.

Many aromatic hydrocarbons have two or more rings. The simplest of these is naphthalene, a two-ringed compound in which two benzene rings share the carbon atoms at which they are joined; these two carbon atoms do not have any H attached, and each of the other 8 C atoms in the compound has 1 H attached. Aromatic hydrocarbons with multiple rings, called polycyclic aromatic hydrocarbons, PAH, are common and are often produced as byproducts of combustion. One of the most studied of these is benzo(a)pyrene, found in tobacco smoke, diesel exhaust, and charbroiled meat. This compound is toxicologically significant because it is partially oxidized by enzymes in the body to produce a cancer-causing metabolite.
The presence of hydrocarbon groups and of elements other than carbon and hydrogen bonded to an aromatic hydrocarbon ring gives a variety of aromatic compounds. Three examples of common aromatic compounds are given below. Toluene is widely used for chemical synthesis and as a solvent. The practice of green chemistry now calls for substituting toluene for benzene wherever possible because benzene is suspected of causing leukemia, whereas the body is capable of metabolizing toluene to harmless metabolites (see Chapter 7). About 850 million kg of aniline are made in the U.S. each year as an intermediate in the synthesis of dyes and other organic chemicals. Phenol is a relatively toxic oxygen-containing aromatic compound which, despite its toxicity to humans, was the first antiseptic used in the 1800s.
The aromatic structures shown above use a hexagon with a circle in it to denote an aromatic benzene ring. Organic chemistry uses lines to show other kinds of structural formulas as well. The reader who may have occasion to look up organic formulas will probably run into this kind of notation, so it is important to be able to interpret these kinds of formulas. Some line formulas are shown in Figure 6.3.

In using lines to represent organic structural formulas, the corners where lines intersect and the ends of lines represent C atoms, and each line stands for a covalent bond (2 shared electrons). It is understood that each C atom at the end of a single line has 3 H atoms attached, each C atom at the intersection of 2 lines has 2 H atoms attached, each C at the intersection of 3 lines has 1 H attached, and the intersection of 4 lines denotes a C atom with no H atoms attached. Multiple lines represent multiple bonds, as shown for the double bonds in 1,3-butadiene. Substituent groups are shown by their symbols (for individual atoms) or by the formulas of functional groups consisting of groups of atoms; it is understood that each such group substitutes for a hydrogen atom, as shown in the formula of 2,3-dichlorobutane in Figure 6.3. The 6-carbon-atom aromatic ring is denoted by a hexagon with a circle in it.

Exercise

What is the structural formula of the compound represented on the left, below?

Answer

6.04: New Page

Numerous elements in addition to carbon and hydrogen occur in organic compounds. These are contained in functional groups, which define various classes of organic compounds. The -NH2 group in aniline and the -OH groups in phenol mentioned above are examples of functional groups. The same organic compound may contain two or more functional groups. Among the elements common in functional groups are O, N, Cl, S, and P. There is not space here to discuss all the possible functional groups and the classes of organic compounds that they define. Some important examples are given to provide an idea of the variety of organic compounds with various functional groups. Other examples are encountered later in the text.

Organooxygen Compounds

Figure 6.4 shows several important classes of organic compounds that contain oxygen. Ethylene oxide is a sweet-smelling, colorless, flammable, explosive gas. It is an epoxide, characterized by an oxygen atom bridging two carbon atoms that are also bonded with each other. Ethylene oxide is toxic and is used as a sterilant and fumigant as well as a chemical intermediate. Because of the toxicity and flammability of this compound, the practice of green chemistry tries to avoid its generation and use. Ethanol, which occurs in alcoholic beverages, is an alcohol, a class of compound in which the -OH group is bonded to an alkane or alkene (attachment of the -OH group to an aromatic hydrocarbon molecule gives a phenolic compound). Acetone is a ketone, a class of compounds that has the C=O functional group in the middle of a hydrocarbon chain.
Acetone is an excellent organic solvent and is relatively safe. Butyric acid, which occurs in butter, is a carboxylic acid; the carboxylic acids contain the -CO2H functional group, which can release the H+ ion characteristic of acids. Methyl tertiary-butyl ether, MTBE, is an example of an ether, in which an O atom connects 2 C atoms. When highly toxic tetraethyllead was phased out of gasoline as an octane booster, MTBE was chosen as a substitute. It was subsequently found to be a particularly noxious water pollutant, and its use has been largely banned.

The C=O group in the middle of an organic molecule is characteristic of ketones. When this group is located at the end of a molecule and the carbon is also bonded to H, the compound is an aldehyde. The two lowest aldehydes are formaldehyde and acetaldehyde, of which formaldehyde is the most widely produced. Despite its many uses, formaldehyde lacks characteristics of green chemicals because it is a volatile, toxic, noxious substance. Formaldehyde tends to induce hypersensitivity (allergies) in people who inhale the vapor or whose skin is exposed to it.

The reaction of an alcohol and an organic acid produces an important kind of organic compound called an ester. The linkage characteristic of esters is outlined by the dashed box in the structure of propyl acetate above. A large number of the naturally occurring esters made by plants are noted for their pleasant odors. Propyl acetate, for example, gives pears their pleasant odor. Other fruit odors due to esters include methyl butyrate, apple; ethyl butyrate, pineapple; and methyl benzoate, ripe kiwi fruit.

Organonitrogen Compounds

Methylamine is the simplest of the amines, compounds in which an N atom is bonded to a hydrocarbon group. In an amine, the N atom may be bonded to 2 H atoms, or one or both of these H atoms may be substituted by hydrocarbon groups as well. Although it is widely used in chemical synthesis because no suitable substitutes are available, methylamine is definitely not compatible with the practice of green chemistry. That is because it is highly flammable and toxic. It is a severe irritant to skin, eyes, and mucous membranes of the respiratory tract. It has a noxious odor and is a significant contributor to the odor of rotten fish. In keeping with the reputation of amines as generally unpleasant compounds, another amine, putrescine, gives decayed flesh its characteristic odor.

Many organonitrogen compounds contain oxygen as well. One such compound is nitromethane, used in chemical synthesis and as a fuel in some race cars. As seen in the structural formula above, the nitro group, -NO2, is the functional group in this compound and related nitro compounds. Another class of organonitrogen compounds also containing oxygen consists of the nitrosamines, or N-nitroso compounds, which figured prominently in the history of green chemistry before it was defined as such. These are compounds that have the N-N=O functional group and are of concern because several are known carcinogens (cancer-causing agents). The most well known of these is dimethylnitrosamine, shown below. This compound used to be employed as an industrial solvent and was used in cutting oils. However, workers exposed to it suffered liver damage and developed jaundice, and the compound, as well as other nitrosamines, was found to be a carcinogen. A number of other nitrosamines were later found in industrial materials and as byproducts of food processing and preservation.
Because of their potential as carcinogens, nitrosamines are avoided in the practice of green chemistry.

Organohalide Compounds

Organohalides, exemplified by those shown in Figure 6.5, are organic compounds that contain halogens — F, Cl, Br, or I — but usually chlorine, on alkane, alkene, or aromatic molecules. Organohalides have been widely produced and distributed for a variety of applications, including industrial solvents, chemical intermediates, coolant fluids, pesticides, and other applications. They are for the most part environmentally persistent and, because of their tendency to accumulate in adipose (fat) tissue, they tend to undergo bioaccumulation and biomagnification in organisms.

Carbon tetrachloride is produced when all four H atoms on methane, CH4, are substituted by Cl. This compound was once widely used and was even sold to the public as a solvent to remove stains and in fire extinguishers, where the heavy CCl4 vapor smothers fires. It was subsequently found to be very toxic, causing severe liver damage, and its uses are severely restricted. Dichlorodifluoromethane is a prominent member of the chlorofluorocarbon class of compounds, popularly known as Freons. Developed as refrigerant fluids, these compounds are notably unreactive and nontoxic. However, as discussed in Chapter 10, they were found to be indestructible in the lower atmosphere, persisting to very high altitudes in the stratosphere where chlorine split from them by ultraviolet radiation destroys stratospheric ozone. So the manufacture of chlorofluorocarbons is now prohibited. Vinyl chloride, an alkene-based organohalide compound, is widely used to make polyvinylchloride polymers and pipe. Unfortunately, it is a known human carcinogen, so human exposure to it is severely limited. Trichloroethylene is an excellent organic solvent that is nonflammable. It is used as a dry cleaning solvent and for degreasing manufactured parts, and was formerly used for food extraction, particularly to decaffeinate coffee. Chlorobenzene is the simplest aromatic organochloride. In addition to its uses in making other chemicals, it serves as a solvent and as a fluid for heat transfer. It is extremely stable, and its destruction is a common test for the effectiveness of hazardous waste incinerators.

The polychlorinated biphenyl (PCB) compound shown is one of 209 PCB compounds that can be formed by substituting from 1 to 10 Cl atoms onto the basic biphenyl (two-benzene-ring) carbon skeleton. These compounds are notably stable and persistent, leading to their uses in electrical equipment, particularly as coolants in transformers and in industrial capacitors, as hydraulic fluids, and in other applications. Their extreme environmental persistence has led to their being banned. Sediments in New York's Hudson River are badly contaminated with PCBs that were (at the time, legally) dumped or leaked into the river from electrical equipment manufacture from the 1950s into the 1970s.

From the discussion above, it is obvious that many organohalide compounds are definitely not green because of their persistence and biological effects. A lot of the effort in the development of green chemistry has been devoted to finding substitutes for organohalide compounds. A 2001 United Nations treaty formulated by approximately 90 nations in Stockholm, Sweden, designated a “dirty dozen” of 12 organohalide compounds of special concern as persistent organic pollutants (POPs); other compounds have subsequently been added to this list.
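The figure of 209 PCB congeners quoted above can be checked by brute-force enumeration: place a Cl or an H atom at each of the ten substitutable ring positions and count the patterns that remain distinct when either ring is flipped end-for-end or the two rings are swapped. The Python sketch below is an added illustration, not part of the original text; it is a simplified combinatorial model that ignores three-dimensional conformation.

    # Count distinct PCB chlorination patterns on the biphenyl skeleton.
    # Each ring has 5 substitutable positions (2,3,4,5,6); a pattern is a pair
    # of 5-element tuples. Symmetries: flip either ring, swap the two rings.
    from itertools import product

    def flip(ring):
        return ring[::-1]          # positions 2,3,4,5,6 -> 6,5,4,3,2

    def canonical(a, b):
        """Lexicographically smallest image of the pattern under all symmetries."""
        images = []
        for x, y in ((a, b), (b, a)):              # optional ring swap
            for xf in (x, flip(x)):                # optional flip of one ring
                for yf in (y, flip(y)):            # optional flip of the other
                    images.append((xf, yf))
        return min(images)

    congeners = set()
    for bits in product((0, 1), repeat=10):        # 1 = Cl, 0 = H at each position
        if any(bits):                              # require at least one Cl atom
            congeners.add(canonical(bits[:5], bits[5:]))

    print(len(congeners))                          # prints: 209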
Organosulfur and Organophosphorus Compounds

A number of organosulfur and organophosphorus compounds have been synthesized for various purposes, including pesticidal applications. A common class of organosulfur compounds consists of the thiols, the simplest of which is methanethiol. As with other thiols, which contain the -SH group, this compound is noted for its foul odor. Thiols are added to natural gas so that their odor can warn of gas leaks. Dimethyl sulfide, also shown above, is a volatile compound released by ocean-dwelling microorganisms in such quantities that it constitutes the largest flux of sulfur-containing vapors from Earth to the atmosphere.

Among the most prominent organophosphorus compounds are the organophosphates, as shown by methyl parathion and malathion (below). These compounds are both insecticides and contain sulfur as well as phosphorus. Parathion was developed during the 1940s and was once widely used as an insecticide in place of DDT because parathion is very biodegradable, whereas DDT is not and undergoes bioaccumulation and biomagnification in ecosystems. Unfortunately, parathion has a high toxicity to humans and other animals, and some human fatalities have resulted from exposure to it. Like other organophosphates, it inhibits acetylcholinesterase, an enzyme essential for nerve function (the same mode of action as its deadly cousins, the “nerve gas” military poisons, such as Sarin). Because of its toxicity, parathion is now banned from general use. Malathion is used in its place and is only about 1/100 as toxic as parathion to mammals because they — though not insects — have enzyme systems that can break it down.
Reaction 6.2.4 showed the bonding together of molecules of ethylene to form larger molecules. This process, widely practiced in the chemical and petrochemical industries, is called polymerization, and the products are polymers. Many other unsaturated molecules, usually based upon alkenes, undergo polymerization to produce synthetic polymers used as plastics, rubber, and fabrics. As an example, tetrafluoroethylene polymerizes as shown in Figure 6.6 to produce a polymer (Teflon) that is exceptionally heat- and chemical-resistant and that can be used to form coatings to which other materials will not stick (for example, frying pan surfaces).

Polyethylene and polytetrafluoroethylene are both addition polymers in that they are formed by the chemical addition together of the monomers making up the large polymer molecules. Other polymers are condensation polymers, in which the monomers join together with the elimination of a molecule of water for each monomer unit joined. A common condensation polymer is nylon, which is formed by the bonding together of two different kinds of molecules. There are several forms of nylon, the original form of which is nylon 66, discovered by Wallace Carothers, a DuPont chemist, in 1937 and made by the polymerization of adipic acid and 1,6-hexanediamine.

There are many different kinds of synthetic polymers that are used for a variety of purposes. Some examples in addition to the ones already discussed in this chapter are given in Table 6.1. Polymers and the industries upon which they are based are of particular concern in the practice of green chemistry for a number of reasons. The foremost of these is the huge quantities of materials consumed in the manufacture of polymers. In addition to the enormous quantities of ethylene and propylene previously cited in this chapter, the U.S. processes about 1.5 billion kg of acrylonitrile, 5.4 billion kg of styrene, 2.0 billion kg of butadiene, and 1.9 billion kg of adipic acid (for nylon 66) each year to make polymers containing these monomers. These and similarly large quantities of monomers used to make other polymers place significant demands upon petroleum resources and the energy, materials, and facilities required to make the monomers.

Table 6.1. Some Typical Polymers and the Monomers from Which They Are Formed

Monomer (Polymer): Applications
Propylene (polypropylene): Applications requiring harder plastic, luggage, bottles, outdoor carpet
Vinyl chloride (polyvinyl chloride): Thin plastic wrap, hose, flooring, PVC pipe
Styrene (polystyrene): Plastic furniture, plastic cups and dishes, blown to produce styrofoam plastic products
Acrylonitrile (polyacrylonitrile): Synthetic fabrics (Orlon, Acrilan, Creslan), acrylic paints
Isoprene (polyisoprene): Natural rubber

There is a significant potential for the production of pollutants and wastes from monomer processing and polymer manufacture. Some of the materials contained in documented hazardous waste sites are byproducts of polymer manufacture. Monomers are generally volatile organic compounds with a tendency to evaporate into the atmosphere, and this characteristic, combined with the presence of reactive C=C bonds, tends to make monomer emissions active in the formation of photochemical smog (see Chapter 10). Polymers, including plastics and rubber, pose problems for waste disposal, as well as opportunities and challenges for recycling.
On the positive side, improved polymers can provide long-lasting materials that reduce material use and have special applications, such as liners in waste disposal sites that prevent waste leachate migration and liners in lagoons and ditches that prevent water loss. Strong, lightweight polymers are key components of the blades and other structural components of huge wind generators that are making an increased contribution to renewable energy supplies around the world (see Chapter 16).

Some of the environmental and toxicological problems with polymers have arisen from the use of additives to improve polymer performance and durability. The most notable of these are plasticizers, normally blended with plastics to improve flexibility, such as to give polyvinylchloride the flexible characteristics of leather. The plasticizers are not chemically bound as part of the polymer, and they leach from the polymer over a period of time, which can result in human exposure and environmental contamination. The most widely used plasticizers are phthalates, esters of phthalic acid, as shown by the example of di(2-ethylhexyl) phthalate below. Though not particularly toxic, these compounds are environmentally persistent, resistant to treatment processes, and prone to undergo bioaccumulation. They are found throughout the environment and have been implicated by some toxicologists as possible estrogenic agents that mimic the action of female sex hormones and cause premature sexual development in young female children.
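One way to see the difference between the addition and condensation polymerization described in this section is through simple mass bookkeeping: an addition polymer's repeat unit has the same mass as its monomer, while a condensation polymer loses one H2O for each linkage formed. The following Python sketch is an added illustration using ordinary handbook molar masses, not figures from the original text.

    # Repeat-unit masses (g/mol) for addition vs. condensation polymers.
    M_H2O = 18.0

    def addition_repeat_unit(monomer_mass):
        # Monomers add across C=C double bonds; no atoms are lost.
        return monomer_mass

    def condensation_repeat_unit(diacid_mass, diamine_mass):
        # One diacid and one diamine join, losing two H2O per repeat unit.
        return diacid_mass + diamine_mass - 2 * M_H2O

    print(addition_repeat_unit(28.1))                # ethylene -> polyethylene: 28.1
    print(condensation_repeat_unit(146.1, 116.2))    # adipic acid + 1,6-hexanediamine
                                                     # -> nylon 66 repeat unit: ~226.3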
Access to and use of the internet is assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text. So in some cases there may be several “right” answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, your answer can be considered to be “right.”

1. What are two major reactions of alkanes?
2. What is the difference between molecular formulas and structural formulas of organic compounds?
3. What is the difference between ethane and the ethyl group?
4. What is the structural formula of 3-ethyl-2,3-dimethylpentane?
5. What is a type of reaction that is possible with alkenes, but not with alkanes?
6. What is represented by the structure below?
7. Suggest a name for the compound below, which is derived from the hydrocarbon toluene.
8. What is a health concern with the aromatic compound below?
9. What do the groups of atoms outlined by dashed lines represent in the structure below?
11. What are 3 separate kinds of groups characteristic of organonitrogen compounds?
12. What is a class of organochlorine compounds consisting of many different kinds of molecules that is noted for environmental persistence?
13. What is a notable characteristic of organosulfur thiols?
14. What is a particularly toxic organophosphorus compound? What is a biochemical molecule containing phosphorus?
15. What are polymers and why are they important?
16. Examination of the formulas of many of the monomers used to make polymers reveals a common characteristic. What is this characteristic and how does it enable polymer formation? Does nylon illustrate a different pathway to polymer formation? Explain.
17. Write the complete structural formulas corresponding to each of the line structures below:
18. Some of the most troublesome organic pollutant compounds are organochlorine compounds, including the “dirty dozen” persistent organic pollutants mentioned in this chapter. Organohalides involving a halogen other than chlorine are emerging as significant pollutants. Doing some research on the internet, find which class of compounds these are and why they are significant pollutants.
“A microscopic cell of photosynthetic cyanobacteria constitutes a complex of chemical factories that carry out a multitude of biochemical processes. Powered by solar energy and operating under ambient conditions, these organisms take carbon dioxide and nitrogen from air and simple inorganic ions dissolved in water and make all the life molecules they need for their metabolism and reproduction. In eons past these kinds of organisms generated all of the oxygen that is in Earth’s atmosphere. For all their knowledge of chemistry, it would be impossible for humans to reproduce the chemical processes of these remarkable bacteria.”

07: Chemistry of Life and Green Chemistry

Biochemistry is the science of chemical processes that occur in living organisms.1 By its nature biochemistry is a green chemical and biological science. This is because over eons of evolution organisms have evolved that carry out biochemical processes sustainably. Because the enzymes that carry out biochemical processes can only function under mild conditions, particularly of temperature, biochemical processes take place under safe conditions, avoiding the high temperatures, high pressures, and corrosive and reactive chemicals that often characterize synthetic chemical operations. Therefore, it is appropriate to refer to green biochemistry.

The ability of organisms to carry out chemical processes is truly amazing, even more so when one considers that many of them occur in single-celled organisms. Photosynthetic cyanobacteria consisting of individual cells less than a micrometer (μm) in size can make all the complex biochemicals they need to exist and reproduce using sunlight for energy and simple inorganic substances such as CO2, K+ ion, NO3- ion, and HPO42- ion for raw materials. Beginning soon after conditions on Earth became hospitable to life, these photosynthetic bacteria produced the oxygen that now composes about 20% of Earth's atmosphere. Fossilized stromatolites (bodies of sedimentary materials bound together by films produced by microorganisms) produced by cyanobacteria have been found dating back 2.8 billion years, and this remarkable microorganism that converts atmospheric carbon dioxide to biomass and atmospheric N2 to chemically fixed N may have been on Earth as long as 3.5 billion years ago.

It is fascinating to view single live cells of animal-like protozoa through an optical microscope. An ameba appears as a body of cellular protoplasm and moves by oozing about like a living blob of jelly. Examination of Euglena protozoa may show a cell several μm in size with many features, including a cell nucleus that serves to direct metabolism and reproduction, green chloroplasts for photosynthetic production of biomass, a red eye-spot sensitive to light, a contractile vacuole by which the cell expels excess water, and a thin tail-like structure (a flagellum) that moves rapidly and propels the cell. More detailed examination by electron microscope of such cells and those that make up more complex organisms reveals many more cell parts that are involved with biochemical function.

At least a rudimentary knowledge of biochemistry is needed to understand green chemistry, environmental chemistry, and sustainability science and technology. One reason is the ability of organisms to synthesize a vast variety of substances. The most obvious of these is biomass, made by the photosynthetic fixation of carbon dioxide, which forms the basis of nature's food webs. Organisms make many of the materials upon which humans rely.
In addition to food, one such material is the lignocellulose that composes most of plant biomass, such as wood used for construction, paper-making, and fuel. Very complex molecules are made by organisms, for example, human insulin produced by genetically engineered organisms. Organisms make materials under very mild conditions compared to those used in the anthrosphere. An important example is chemically fixed nitrogen from the atmosphere, which is produced synthetically in the anthrosphere as ammonia (NH3) at high temperatures and pressures, whereas Rhizobium bacteria attached to the roots of soybeans and other legume plants fix nitrogen in the mild conditions of the soil environment. Increasingly, as supplies of petroleum and other non-renewable raw materials become more scarce, humans are turning to microorganisms and plants to make essential materials.

Another major reason for considering biochemistry as part of green chemistry and sustainability is the protection of organisms from products and processes in the anthrosphere. It is essential to know the potential toxic effects of various materials, a subject addressed by toxicological chemistry.2 One of the fundamental goals of green chemistry is to minimize the production and use of products that may have adverse environmental effects. Sustainability of the entire planet requires that humans not disperse into the environment substances that may undergo bioaccumulation and be toxic to humans and other organisms. Biochemical processes not only are profoundly influenced by chemical species in the environment, they largely determine the nature of these species, their degradation, and even their syntheses, particularly in the aquatic and soil environments. The study of such phenomena forms the basis of environmental biochemistry.

This chapter is designed to give an overview of biochemistry and how it relates to green chemistry and sustainability science and technology. A glance at the structural formulas of some of the biochemicals shown in this chapter gives a hint of the complexity of biochemistry. The chapter provides a basic understanding of this complex science with enough detail for it to be meaningful but without overwhelming the reader. It begins with an overview of the four major classes of biochemicals: proteins, carbohydrates, lipids, and nucleic acids. Many of the compounds in these classes are polymers with molecular masses of the order of a million or even larger. Proteins and nucleic acids consist of macromolecules, lipids are usually relatively small molecules, and carbohydrates range from small sugar molecules to high-molar-mass macromolecules such as those in cellulose.

The behavior of a substance in a biological system depends to a large extent upon whether the substance is hydrophilic (“water-loving”) or hydrophobic (“water-hating”). Some important toxic substances are hydrophobic, a characteristic that enables them to traverse cell membranes readily and to bioaccumulate in lipid (fat) tissue. Many hydrocarbons and organohalide compounds synthesized from hydrocarbons are hydrophobic. Part of the detoxification process carried on by living organisms is to render such molecules hydrophilic, therefore water-soluble and readily eliminated from the body.
For the most part, biochemical processes occur within cells, the very small units of which living organisms are composed.3 Cells are discussed in more detail as basic units of life in Chapter 12, Section 12.3; here they are regarded as what chemical engineers would call “unit operations” for carrying out biochemical processes. Many organisms consist of single cells or individual cells growing together in colonies. Bacteria, yeasts, protozoa, and some algae consist of single cells. Other than these microorganisms, organisms are composed of many cells that have different functions. Liver cells, muscle cells, brain cells, and skin cells in the human body are quite different from each other and do different things. Two major kinds of cells are eukaryotic cells, which have a nucleus, and prokaryotic cells, which do not. Prokaryotic cells are found predominantly in single-celled bacteria. Eukaryotic cells occur in multicellular plants and animals — higher life forms.

Cell structure has an important influence on determining the nature of biomaterials. Muscle cells consist largely of strong structural proteins capable of contracting and movement. Bone cells secrete a protein mixture that then mineralizes with calcium and phosphate to produce solid bone. The walls of cells in plants are largely composed of strong cellulose, which makes up the sturdy structure of wood.

7.03: New Page

Carbohydrates are biomolecules consisting of carbon, hydrogen, and oxygen having the approximate simple formula CH2O. One of the most common carbohydrates is the simple sugar glucose shown in Figure 7.2. Units of glucose and other simple sugars called monosaccharides join together in chains, with the loss of a water molecule for each linkage, to produce macromolecular polysaccharides. These include starch and cellulose in plants and starch-like glycogen in animals. Glucose carbohydrate is the biological material generated from water and carbon dioxide when solar energy in sunlight is utilized in photosynthesis. The overall reaction is $\ce{6CO2 + 6H2O \rightarrow C6H12O6 + 6O2}$ This is obviously an extremely important reaction because it is the one by which inorganic molecules are used to synthesize high-energy carbohydrate molecules that are in turn converted to the vast number of biomolecules that comprise living systems. There are other simple sugars, including fructose, mannose, and galactose, that have the same simple formula as glucose, C6H12O6, but which must be converted to glucose before being utilized by organisms for energy. Common table sugar, sucrose, C12H22O11, consists of a molecule of glucose and one of fructose linked together (with the loss of a water molecule); because it is composed of two simple sugars, sucrose is called a disaccharide.

Starch molecules, which may consist of several hundred glucose units joined together, are readily broken down by organisms to produce simple sugars used for energy and to produce biomass. For example, humans readily digest starch in potatoes or bread to produce glucose used for energy (or to make fat tissue). The chemical formula of starch is (C6H10O5)n, where n may represent a number as high as several hundred. What this means is that the very large starch molecule consists of as many as several hundred units of C6H10O5 from glucose joined together. For example, if n is 100, there are 6 times 100 carbon atoms, 10 times 100 hydrogen atoms, and 5 times 100 oxygen atoms in the molecule. Its chemical formula is C600H1000O500.
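The arithmetic just worked out for the (C6H10O5)n formula of starch is easy to mechanize. The Python sketch below is an added illustration; the atomic masses are the usual rounded values, not figures from the text.

    # Formula and molar mass of a polysaccharide built from n C6H10O5 units.
    ATOMIC_MASS = {"C": 12.0, "H": 1.0, "O": 16.0}

    def polysaccharide_formula(n):
        return f"C{6 * n}H{10 * n}O{5 * n}"

    def polysaccharide_molar_mass(n):
        # Each repeating unit contributes 6 C, 10 H, and 5 O atoms (162 g/mol).
        unit = 6 * ATOMIC_MASS["C"] + 10 * ATOMIC_MASS["H"] + 5 * ATOMIC_MASS["O"]
        return n * unit

    print(polysaccharide_formula(100))        # C600H1000O500, as stated above
    print(polysaccharide_molar_mass(100))     # 16200.0 g/mol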
The atoms in a starch molecule are actually present as linked rings represented by the structure shown in Figure 7.2. Starch occurs in many foods, such as bread, potatoes, and cereals. It is readily digested by animals, including humans.

Cellulose is a polysaccharide which is also made up of C6H10O5 units. Molecules of cellulose are huge, with molecular masses of around 400,000. The cellulose structure (Figure 7.3) is similar to that of starch. Cellulose is produced by plants and forms the structural material of plant cell walls. Wood is about 60% cellulose, and cotton contains over 90% of this material. Fibers of cellulose are extracted from wood and pressed together to make paper. Humans and most other animals cannot digest cellulose because they lack the enzyme needed to hydrolyze the oxygen linkages between the glucose molecules. Ruminant animals (cattle, sheep, goats, moose) have bacteria in their stomachs that break down cellulose into products which can be used by the animal. Fungi and termites existing synergistically with cellulose-degrading bacteria biodegrade huge quantities of cellulose. Chemical processes are available to convert cellulose to simple sugars by the reaction $\underbrace{\ce{(C6H10O5)_{n}}}_{\textbf{cellulose}} \ce{ + nH2O} \rightarrow \underbrace{\ce{nC6H12O6}}_{\textbf{glucose}}$ where n may be 2000-3000. This involves breaking the linkages between units of C6H10O5 by adding a molecule of H2O at each linkage, a hydrolysis reaction. Large amounts of cellulose from wood, sugar cane, and agricultural products go to waste each year. The hydrolysis of cellulose enables these products to be converted to sugars, which can be fed to animals. The potential for producing very large quantities of glucose from cellulose has led to intense efforts to hydrolyze cellulose to glucose with enzymes (biological catalysts), an important effort in green chemistry.

Carbohydrates are potentially very important in green chemistry. For one thing, they are a concentrated form of organic energy synthesized and stored by plants as part of the process by which plants capture solar energy through photosynthesis. Carbohydrates can be utilized directly for energy or fermented to produce ethanol, C2H6O, a combustible alcohol that is added to gasoline or can even be used in place of gasoline. Secondly, carbohydrates are a source of organic raw material that can be converted to other organic molecules to make plastics and other useful materials.

7.04: New Page

Proteins are macromolecules that are composed of nitrogen, carbon, hydrogen, and oxygen along with smaller quantities of sulfur. The small molecules of which proteins are made are the 20 naturally occurring amino acids. The simplest of these, glycine, is shown in the first structure in Figure 7.4, along with two other amino acids. As shown in Figure 7.4, amino acids join together with the loss of a molecule of H2O for each linkage formed. The three amino acids in Figure 7.4 are shown linked together as they would be in a protein in the bottom structure in the figure. Many hundreds of amino acid units may be present in a protein molecule. The three-dimensional structures of protein molecules are of the utmost importance and largely determine what the proteins do in living systems and how they are recognized by other biomolecules.
Enzymes, special proteins that act as catalysts to enable biochemical reactions to occur, recognize the substrates upon which they act by the complementary shapes of the enzyme and substrate molecules. There are several levels of protein structure. The first of these is determined by the order of amino acids in the protein macromolecule. Folding of protein molecules and pairing of two different protein molecules further determine structure. The loss of protein structure, called denaturation, can be very damaging to proteins and to the organism in which they are contained.

Two major kinds of proteins are tough fibrous proteins that compose hair, tendons, muscles, feathers, and silk, and spherical or oblong-shaped globular proteins, such as hemoglobin in blood or the proteins that comprise enzymes. Proteins serve many functions. These include nutrient proteins, such as casein in milk; structural proteins, such as collagen in tendons; contractile proteins, such as those in muscle; and regulatory proteins, such as insulin, that regulate biochemical processes. Proteins with carbohydrate groups attached constitute an important kind of biomolecule called glycoproteins. Collagen is a crucial glycoprotein that provides structural integrity to body parts. It is a major constituent of skin, bones, tendons, and cartilage.

Some proteins are very valuable biomaterials for pharmaceutical, nutritional, and other applications, and their synthesis is an important aspect of green chemistry. The production of specific proteins has been greatly facilitated in recent years by the application of genetic engineering to transfer to bacteria the genes that direct the synthesis of specific proteins. The best example is insulin, a protein injected into diabetics to control blood sugar. Insulin injected for blood glucose control used to be isolated from the pancreases of slaughtered cattle and hogs. Although this enabled many diabetics to live normal lives, the process of getting the insulin was cumbersome, supply was limited, and the insulin from this source was not exactly the same as that made in the human body, which often caused the body to have an allergic response to it as a foreign protein. The transfer through recombinant DNA technology of the human gene for insulin production into prolific Escherichia coli bacteria has enabled large-scale production of human insulin by the bacteria.
Lipids differ from most other kinds of biomolecules in that they are repelled by water. Lipids can be extracted from biological matter by organic solvents, such as diethyl ether or toluene. Recall that proteins and carbohydrates are each distinguished largely by chemically similar characteristics and structures. However, lipids have a variety of chemical structures that share the common physical characteristic of solubility in organic solvents.

Many of the commonly encountered lipid fats and oils are esters of glycerol alcohol, CH2(OH)CH(OH)CH2(OH), and long-chain carboxylic acids (fatty acids), such as stearic acid, CH3(CH2)16CO2H. The glycerol molecule has three -OH groups, to each of which a fatty acid molecule may be joined through the carboxylic acid group with the loss of a water molecule for each linkage that is formed. Figure 7.5 shows a fat molecule formed from three stearic acid molecules and a glycerol molecule. Such a molecule is one of many possible triglycerides. Also shown in this figure is cetyl palmitate, the major ingredient of spermaceti wax extracted from sperm whale blubber and used in some cosmetics and pharmaceutical preparations. Cholesterol, shown in Figure 7.5, is one of several important lipid steroids, which share the ring structure composed of rings of 5 and 6 carbon atoms shown in the figure for cholesterol. Although the structures shown in Figure 7.5 are diverse, they all share a common characteristic. This similarity is the preponderance of hydrocarbon chains and rings, so that lipid molecules largely resemble hydrocarbons. Their hydrocarbon-like molecules make lipids soluble in organic solvents.

Some of the steroid lipids are particularly important because they act as hormones, chemical messengers that convey information from one part of an organism to another. Major examples of steroid hormones, for which cholesterol is the biosynthetic starting material, are testosterone (male sex hormone) and estrogens (female sex hormones). Steroid lipids readily penetrate the membranes that enclose cells, which are especially permeable to more hydrophobic lipid materials. Hormones start and stop a number of body functions and regulate the expression of many genes. In addition to steroid lipids, many hormones, including insulin and human growth hormone, are proteins. Hormones are given off by ductless glands in the body called endocrine glands. The locations of important endocrine glands are shown in Figure 7.6.

Lipids are important in green chemistry for several reasons. Lipids are very much involved with toxic substances, the generation and use of which are always important in green chemistry. Poorly biodegradable substances, particularly organochlorine compounds, always an essential consideration in green chemistry, tend to accumulate in lipids in living organisms, a process called bioaccumulation. Lipids can be valuable raw materials and fuels. A major kind of renewable fuel is made by hydrolyzing the long-chain fatty acids from triglycerides and attaching methyl groups to produce esters. This liquid product, commonly called biodiesel fuel, serves as a substitute for petroleum-derived liquids in diesel engines. The development and cultivation of plants that produce oils and other lipids is a major possible route to the production of renewable resources.

7.06: New Page

Nucleic acids (Figure 7.7) are biological macromolecules that store and pass on the genetic information that organisms need to reproduce and synthesize proteins.
The two major kinds of nucleic acids are deoxyribonucleic acid, DNA, which basically stays in place in the cell nucleus of an organism, and ribonucleic acid, RNA, which is spun off from DNA and functions throughout a cell. Molecules of nucleic acids contain three basic kinds of materials. The first of these is a simple sugar, 2-deoxy-β-D-ribofuranose (deoxyribose) contained in DNA and β-D-ribofuranose (ribose) contained in RNA. The second major kind of ingredient consists of nitrogen-containing bases: cytosine, adenine, and guanine, which occur in both DNA and RNA; thymine, which occurs only in DNA; and uracil, which occurs only in RNA. The third constituent of both DNA and RNA is inorganic phosphate, PO43-. These three kinds of substances occur as repeating units called nucleotides joined together in astoundingly long chains in the nucleic acid polymer as shown in Figure 7.7. The remarkable way in which DNA operates to pass on genetic information and perform other functions essential for life is the result of the structure of the DNA molecule. In 1953, James D. Watson and Francis Crick deduced that DNA consists of two strands of material counterwound around each other in a structure known as a double helix (Figure 7.8), a remarkable bit of insight that earned Watson and Crick the Nobel Prize in 1962. These strands are held together by hydrogen bonds between complementary nitrogenous bases (adenine pairing with thymine and guanine with cytosine). Taken apart, the two strands resynthesize complementary strands, a process that occurs during reproduction of cells in living organisms. In directing protein synthesis, DNA becomes partially unravelled and generates a complementary strand of material in the form of RNA, which in turn directs protein synthesis in the cell. Consideration of nucleic acids and their function is very important in the development of green chemistry. One aspect of this relationship is that the toxicity hazards of many chemical substances result from potential effects of these substances upon DNA. Of most concern is the ability of some substances to alter DNA and cause the uncontrolled cell replication characteristic of cancer. Also of concern is the ability of some chemical substances called mutagens to alter DNA such that undesirable characteristics are passed on to offspring. Another important consideration with DNA as it relates to green chemistry is the ability that humans now have to transfer DNA between organisms, popularly called genetic engineering. An important example is the development of bacteria that carry DNA transferred from humans and use it to make human insulin. This technology of recombinant DNA is discussed in more detail in Chapter 12.
Recall from Chapter 5, Section 5.5, that catalysts are substances that speed up a chemical reaction without themselves being consumed in the reaction. Catalysis is one of the most important aspects of green chemistry because the ability to make reactions go faster as well as more efficiently, safely, and specifically means that less energy and raw materials are used and less waste is produced. Biochemical catalysts called enzymes include some of the most sophisticated of catalysts. Enzymes speed up biochemical reactions by as much as ten-million- to a hundred-million-fold. They often enable reactions to take place that otherwise would not occur, and they tend to be very selective in the reactions they promote. One of the greatest advantages of enzymes as catalysts is that they have evolved to function under the benign conditions under which organisms exist. This optimum temperature range is generally from about the freezing point of water (0 °C) to slightly above body temperature (up to about 40 °C). Chemical reactions go faster at higher temperatures, so there is considerable interest in enzymes isolated from microorganisms that thrive at temperatures near the boiling point of water (100 °C) in hot water pools heated by underground thermal activity, such as are found in Yellowstone National Park. Enzymes are proteinaceous substances. Their structures are highly specific, so that each enzyme binds with the substance it acts upon, called its substrate. The basic mechanism of enzyme action is shown in Figure 7.9. As indicated by the figure, an enzyme recognizes a substrate by its shape, bonds with the substrate to produce an enzyme-substrate complex, causes a change such as splitting the substrate in two with addition of water (hydrolysis), then emerges unchanged to do the same thing again. The basic process can be represented as follows: $\textrm{enzyme + substrate} \leftrightarrows \textrm{ enzyme-substrate complex} \leftrightarrows \textrm{ enzyme + product}$ Note that the arrows in the formula for enzyme reaction point both ways. This means that the reaction is reversible. An enzyme-substrate complex can simply go back to the enzyme and the substrate. The products of an enzymatic reaction can react with the enzyme to form the enzyme-substrate complex again. It, in turn, may again form the enzyme and the substrate. Therefore, the same enzyme may act to cause a reaction to go either way. In order for some enzymes to work, they must first be attached to coenzymes. Coenzymes normally are not protein materials. Some of the vitamins are important coenzymes. The names of enzymes are based upon what they do and where they occur. For example, gastric protease, commonly called pepsin, is an enzyme released by the stomach (gastric), which splits protein molecules as part of the digestion process (protease). Similarly, the enzyme produced by the pancreas that breaks down fats (lipids) is called pancreatic lipase. Its common name is steapsin. In general, lipase enzymes cause lipid triglycerides to dissociate and form glycerol and fatty acids. Lipase and protease enzymes are hydrolyzing enzymes, which break down high-molecular-mass biological compounds by the addition of water, one of the most important types of reactions involved in the digestion of food carbohydrates, proteins, and fats. Recall that the higher carbohydrates humans eat are largely disaccharides (sucrose, or table sugar) and polysaccharides (starch).
These are formed by the joining together of units of simple sugars, C6H12O6, with the elimination of an H2O molecule at the linkage where they join. Proteins are formed by the condensation of amino acids, again with the elimination of a water molecule at each linkage. Fats are esters produced when glycerol and fatty acids link together. A water molecule is lost for each of these linkages when a protein, fat, or carbohydrate is synthesized. In order for these substances to be used as a food source, the reverse process must be catalyzed by hydrolyzing enzymes to break down the large, complicated molecules of protein, fat, or carbohydrate to simple, soluble substances that can penetrate a cell membrane and take part in chemical processes in the cell. An important biochemical process is the shortening of carbon atom chains, such as those in fatty acids, commonly by the elimination of CO2 from carboxylic acids. For example, pyruvate decarboxylase enzyme removes CO2 from pyruvic acid to produce acetaldehyde, a compound with one less carbon atom. It is by such carbon-by-carbon breakdown reactions that long-chain compounds are eventually degraded to CO2 in the body. Another important consequence of this kind of reaction is the biodegradation of long-chain hydrocarbons by the action of microorganisms in the water and soil environments. Energy is exchanged in living systems largely by oxidation and reduction mediated by oxidoreductase enzymes. Cellular respiration is an oxidation reaction in which a carbohydrate, C6H12O6, is broken down to carbon dioxide and water with the release of energy: $\ce{C6H12O6 + 6O2 \rightarrow 6CO2 + 6H2O + energy}$ Actually, such an overall reaction occurs in living systems by a complicated series of individual steps, including oxidation steps. The enzymes that bring about oxidation in the presence of free O2 are called oxidases. In addition to the major types of enzymes discussed above there are numerous other enzymes that perform various functions. Isomerases form isomers of particular compounds. For example, isomerases convert several simple sugars with the formula C6H12O6 to glucose, the only sugar that can be used directly for cell processes. Transferase enzymes move chemical groups from one molecule to another, lyase enzymes remove chemical groups without hydrolysis and participate in the formation of C=C bonds or addition of species to such bonds, and ligase enzymes work in conjunction with ATP (adenosine triphosphate, a high-energy molecule that plays a crucial role in energy-yielding, glucose-oxidizing metabolic processes) to link molecules together with the formation of bonds such as carbon-carbon or carbon-sulfur bonds. Enzymes are affected by the conditions and media in which they operate. Among these conditions is the hydrogen ion concentration (pH). An interesting example is gastric protease, which requires the acid environment of the stomach to work well but stops working when it passes into the much more alkaline medium of the small intestine. This prevents damage to the intestine walls, which would occur if the enzyme tried to digest them. Part of the damage to the esophagus from reflux esophagitis (acid reflux) is due to the action of gastric protease enzyme that flows back into the esophagus from the stomach with the acidic stomach juices. Temperature is critical for enzyme function. Not surprisingly, the enzymes in the human body work best at around 37 °C (98.6 °F), which is the normal body temperature. Heating these enzymes to around 60 °C permanently destroys them.
Some bacteria that thrive in hot springs have enzymes that work best at temperatures as high as that of boiling water. Other “cold-seeking” bacteria have enzymes adapted to near the freezing point of water.

Immobilized Enzymes in Green Chemistry

As noted above, enzymes in organisms have a variety of existing and potential uses in the practice of green chemistry. In many cases it is advantageous to isolate the enzyme used for a particular process from cells and use it outside the cellular environment. In a batch synthesis this can be done by mixing the enzyme with the reactants and allowing it to catalyze the desired reaction, making sure that optimum conditions of temperature and pH are maintained. This approach has several disadvantages. Enzymes are expensive, and isolation of the enzyme from the reaction mixture is usually very costly and often not possible. Enzyme contamination of the product can cause difficulties. The solution to the problem outlined above is often to employ enzyme immobilization, which uses a two-phase system in which the enzyme is in one phase and the reaction occurs in another. Sequestration of the enzyme in a separate phase enables its re-use or continuous use in flow-through systems and prevents enzyme contamination of the products. Several major techniques have been employed for enzyme immobilization including (1) adsorption onto a solid, (2) covalent binding onto a separate phase, (3) entrapment in a separate phase, and (4) confinement with a membrane that allows transport of reactants and products but retains the enzyme. Ideally, the matrix holding the enzyme should hold it firmly while being inert, physically strong, chemically stable, and capable of being regenerated. The most common materials used to hold enzymes have been porous carbon, ion-exchange matrices, clays, polymeric resins, hydrous metal oxides, and glasses. The procedure for immobilization of an enzyme begins with mixing the enzyme and the solid materials under suitable conditions of pH and ionic strength, sometimes along with binding agents. The support holding the immobilized enzyme is then incubated for some time. Finally, excess enzyme and, where used, binding agents are washed off the support. Rather than isolating enzymes and immobilizing them on a support, living microorganism cells, usually those of bacteria, are often used. Two main categories of immobilization are employed: attachment and entrapment. The simplest kind of immobilization is aggregation cross-linking, in which the microbial cells form networks that compose their own support. This approach is usually confined to batch processes. Otherwise, the major kinds of attachment immobilization are covalent binding, binding on ion-exchangers, adsorption binding, and biofilm formation. Probably the most common form of biofilm reactor is the trickling filter used for wastewater treatment (see Chapter 9), in which growing bacteria and protozoa form a film over solid support material (usually rock) and the wastewater is sprayed over the biofilm. This enables contact of the immobilized microorganisms with both the biodegradable material in the wastewater and atmospheric oxygen. Entrapment of microorganisms may be on organic polymer, on inorganic polymer, or behind a semi-permeable membrane. Use of living organisms as sources of immobilized enzymes offers the advantage of not having to isolate the enzyme and, in cases where the organism is reproducing, of continuously replenishing the enzyme.
One disadvantage is having to maintain conditions under which the organism remains viable. Also, living cells harbor numerous enzymes, so side-reactions and unwanted products can be a problem.

Effects of Toxic Substances on Enzymes

Toxic substances may destroy enzymes or alter them so that they function improperly or not at all. Among the many toxic substances that act adversely on enzymes are heavy metals, cyanide, and various organic compounds such as the insecticide parathion. Many enzyme active sites through which an enzyme recognizes and bonds with a substrate contain -SH groups. Toxic heavy metal ions such as Pb2+ or Hg2+ are “sulfur seekers” that bind to the sulfur in the enzyme active site, preventing the enzyme from functioning. A particularly potent class of toxic substances consists of the organophosphate “nerve gases” such as Sarin that inhibit the acetylcholinesterase enzyme required to stop nerve impulses. Very small doses of Sarin stop respiration by binding with acetylcholinesterase and blocking its action. Discussed further in Section 7.9 under the topic of toxicological chemistry, toxicity to enzymes is a major consideration in the practice of green chemistry.
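The rate behavior just described, including the slowing of an enzyme by a toxicant that competes with its substrate, is conventionally summarized by the Michaelis-Menten rate law. The following is a minimal sketch not developed in this text: $V_{max}$ (the limiting rate at saturating substrate), $K_m$ (the substrate concentration giving half that rate), and $K_i$ (the inhibitor binding constant) are the standard kinetic parameters, and competitive inhibition is only one of several inhibition modes.

$v = \dfrac{V_{max}[\textrm{S}]}{K_m + [\textrm{S}]} \qquad \textrm{and, with a competitive inhibitor I,} \qquad v = \dfrac{V_{max}[\textrm{S}]}{K_m\left(1 + [\textrm{I}]/K_i\right) + [\textrm{S}]}$

A competitive inhibitor raises the apparent $K_m$ without changing $V_{max}$, whereas a toxicant that binds essentially irreversibly, as heavy metals do at active-site -SH groups, effectively removes enzyme from action and lowers $V_{max}$ itself.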
So far, this chapter has discussed the cells in which biochemical processes occur, the major categories of biochemicals, and the enzymes that catalyze biochemical reactions. Biochemical processes involve the alteration of biomolecules, their synthesis, and their breakdown to provide the raw materials for new biomolecules, processes that fall under the category of metabolism. Metabolic processes may be divided into the two major categories of anabolism (synthesis) and catabolism (degradation of substances). An organism may use metabolic processes to yield energy or to modify the constituents of biomolecules. Metabolism is discussed here as it affects biochemicals and in Chapter 12, Section 12.4, as it applies to the function of organisms in the biosphere. Metabolism is a very important consideration in green chemistry and sustainability. Toxic substances that impair metabolism pose a danger to humans and other organisms, and attempts are made to avoid such substances in the practice of green chemistry. Exposures to environmental pollutants that impair metabolism endanger humans and other organisms; the control of such pollutants is an important aspect of environmental chemistry. Metabolic processes are used to make renewable raw materials and to modify substances to give desired materials. The complex metabolic process of photosynthesis provides the food that forms the base of essentially all food webs and is increasingly being called upon to provide renewable raw materials for manufacturing.

Energy-Yielding Processes

The processing of energy is obviously one of the most important metabolic functions of organisms. The metabolic processes by which organisms acquire and utilize energy are complex, generally involving numerous steps and various enzymes. Organisms can process and utilize energy by one of three major processes:
• Respiration, in which organic compounds undergo catabolism
• Fermentation, which differs from respiration in not having an electron transport chain
• Photosynthesis, in which light energy captured by plant and algal chloroplasts is used to synthesize sugars from carbon dioxide and water
There are two major pathways in respiration. Oxic respiration (called aerobic respiration in the older literature) requires molecular oxygen, whereas anoxic respiration (anaerobic respiration) occurs in the absence of molecular oxygen. Oxic respiration uses the Krebs cycle to obtain energy from the following reaction: $\ce{C6H12O6 + 6O2 \rightarrow 6CO2 + 6H2O + energy}$ About half of the energy released is converted to short-term stored chemical energy, particularly through the synthesis of adenosine triphosphate (ATP) shown in Figure 7.10. The highly energized ATP molecule is sometimes described as the “molecular unit of currency” for the transfer of energy within cells during metabolism. It releases its energy when it loses a phosphate group and reverts to adenosine diphosphate and other precursors. ATP is used by enzymes and proteins for cell processes including biosynthetic reactions (anabolism), cell division, and motility (such as occurs in moving protozoa cells). In so doing, ATP is continually being produced and reconverted back to its precursor species in an organism. Some studies have suggested that the human body processes its own mass in ATP during a single day!
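The scale of that last claim can be checked with rough arithmetic. This is an illustrative estimate only: the 70 kg body mass and the assumption of a standing ATP pool on the order of 0.1 kg are stated assumptions, not figures from this text; the molar mass of ATP is about 507 g/mol.

$\dfrac{70\ \textrm{kg ATP/day}}{0.507\ \textrm{kg/mol}} \approx 140\ \textrm{mol ATP/day}$

Since a pool of roughly 0.1 kg corresponds to only about 0.2 mol of ATP, each ATP molecule would have to be hydrolyzed to ADP and re-synthesized on the order of several hundred times per day, which is why ATP serves as a rapid-turnover energy currency rather than a long-term energy store.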
Whereas ATP is used for very short-term energy storage and processing, for longer-term energy storage glycogen or starch polysaccharides are synthesized, and for still longer-term energy storage, lipids (fats) are generated and retained by the organism. As noted above, fermentation differs from respiration in not having an electron transport chain; organic compounds, rather than O2, are the final electron acceptors in the energy-yielding process. Many biochemical processes, including some used to make commercial products, are fermentations. A common example of fermentation is the production of ethanol from sugars by yeasts growing in the absence of molecular oxygen: $\ce{C6H12O6 \rightarrow 2CO2 + 2C2H5OH}$ Photosynthesis is an energy-capture process in which light energy captured by plant and algal chloroplasts is used to synthesize sugars from carbon dioxide and water: $\ce{6CO2 + 6H2O + h \nu \rightarrow C6H12O6 + 6O2}$ When it is dark, plants cannot get the energy that they need from sunlight but still must carry on basic metabolic processes using stored food. Plant cells, like animal cells, contain mitochondria in which stored food is converted to energy by cellular respiration. Plant cells, which use sunlight as a source of energy and CO2 as a source of carbon, are classified as autotrophic. Nonphotosynthetic organisms, including animals, depend upon organic matter produced by plants for their food and are said to be heterotrophic. They act as “middlemen” in the chemical reaction between oxygen and food material, using the energy from the reaction to carry out their life processes. Biochemical conversions involving energy are very important in the practice of green chemistry and sustainability. The most obvious connection is the capture of solar energy as chemical energy by photosynthesis. As discussed in Chapter 15, photosynthetically produced biomass can serve as a source of chemically fixed carbon for the synthesis of chemical fuels including synthetic natural gas, gasoline, diesel fuel, and ethanol. A tantalizing possibility is to use recombinant DNA techniques to increase by several-fold the very low efficiency of photosynthesis by most plants. Fermentation has a strong role to play in sustainable energy development. As shown in Reaction 7.8.2, fermentation of glucose produces ethanol, which can be used as fuel. Anoxic fermentation of biomass (abbreviated {CH2O}) from sources such as sewage sludge or food wastes yields methane (natural gas), the cleanest-burning of all hydrocarbon fuels: $\ce{2(CH2O) \rightarrow CH4 + CO2}$
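The stoichiometry of these fermentation reactions fixes the maximum fuel yields, and simple molar-mass arithmetic, added here as a check rather than taken from the text, shows what fraction of the feedstock mass can end up as fuel (glucose 180 g/mol, ethanol 46 g/mol, CH4 16 g/mol, CO2 44 g/mol, {CH2O} 30 g/mol):

$\textrm{ethanol yield} = \dfrac{2 \times 46}{180} \approx 0.51 \qquad \textrm{methane yield} = \dfrac{16}{2 \times 30} \approx 0.27$

Thus at most about 51% of the mass of fermented glucose can appear as ethanol, and about 27% of the mass of anoxically fermented biomass as methane, with the balance of the carbon released as CO2 in each case.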
One of the aspects of biochemistry most important in green chemistry is the way that organisms deal biochemically with toxic substances. There are two major aspects of this. One is the biochemical changes wrought upon toxic substances in an organism’s system, including the detoxification of such substances and, in some cases, the conversion of nontoxic compounds to toxic compounds. The second is the biochemical mode of action of toxic substances through which they exert a toxic effect. Toxicological chemistry is the science that deals with the chemical nature and reactions of toxic substances, including their origins, uses, and chemical aspects of exposure, fates, and disposal.2 Toxicological chemistry addresses the relationships between the chemical properties and molecular structures of molecules and their toxicological effects. Figure 7.11 illustrates the definition of toxicological chemistry. This section discusses the biochemical aspects of toxicological chemistry. Toxicology itself and other details regarding toxicological chemistry are covered in Chapter 17.

Biochemistry of Toxicants and Protoxicants

In some cases a toxic substance that enters into the system of a living organism is unchanged until it reacts to cause a toxic effect. This is the case with carbon monoxide, CO, which enters the bloodstream through the lungs and binds with blood hemoglobin to prevent oxygen transfer to tissues. In other cases toxicants or their metabolic precursors (protoxicants) react in ways that may make them more toxic or that detoxify them and facilitate their elimination from the organism. Xenobiotic compounds are those that are normally foreign to living organisms. Some natural products, such as the toxin produced by botulinum bacteria or the venom of the deadly Australian inland taipan snake, are among the most toxic substances known. “Toxicant” is used here as a term to refer to toxic substances and their precursors, including both xenobiotic materials and those produced naturally by organisms. “The body” is used to refer to the human body, but the discussion applies to other organisms as well. Of particular importance in the metabolism of toxicants is intermediary xenobiotic metabolism, which results in the formation of somewhat transient species that are different from both those ingested and the ultimate product that is excreted. Intermediary metabolites may have significant toxicological effects. Toxicants and protoxicants in general are acted upon by enzymes that normally act upon an endogenous substrate, a material that is in the body naturally. For example, flavin-containing monooxygenase enzyme acts upon endogenous cysteamine to convert it to cystamine, but also functions to oxidize xenobiotic nitrogen and sulfur compounds. Toxicants undergo biotransformation as a result of enzyme action, usually the Phase I and Phase II reactions defined below. Some nonenzymatic transformations are also important, including bonding of compounds with endogenous biochemical species without an enzyme catalyst, hydrolysis in body fluid media, and oxidation/reduction processes. The likelihood of enzymatic metabolism in the body depends upon the physical and chemical properties of the species. Highly polar compounds, such as those that form ions readily, are less likely to enter the body system and generally are quickly excreted. Therefore, such compounds are unavailable, or only available for a short time, for enzymatic metabolism.
Volatile compounds, such as dichloromethane or diethyl ether, are expelled so quickly from the lungs that enzymatic metabolism is less likely. This leaves nonpolar lipophilic compounds, those that are relatively insoluble in aqueous biological fluids and more attracted to lipid species, as the most likely candidates for enzymatic metabolic reactions. Of these, the ones that are resistant to enzymatic attack (PCBs, for example) tend to bioaccumulate in lipid tissue. Xenobiotic species may be metabolized in many body tissues and organs. The liver is of particular significance because materials entering systemic circulation from the gastrointestinal tract must first traverse the liver. As part of the body’s defense against the entry of xenobiotic species, the most prominent sites of xenobiotic metabolism are those associated with entry into the body, such as the skin and lungs. The gut wall, through which xenobiotic species enter the body from the gastrointestinal tract, is also a site of significant xenobiotic compound metabolism.

Phase I and Phase II Reactions

The metabolism of toxic substances may be divided into two phases. Phase I reactions normally consist of the attachment of a functional group, usually accompanied by oxidation. For example, benzene, C6H6 (see Chapter 6, Section 6.2), is oxidized in the body by the action of the cytochrome P-450 enzyme system as shown in Figure 7.12. The Phase I oxidation product of benzene is phenol, a toxic substance. A reactive intermediate in the process is benzene epoxide, which interacts with biomolecules to cause toxic effects. The phenol Phase I oxidation product of benzene may undergo a second reaction, a Phase II reaction, in which it is bound with a conjugating agent that is endogenous to (produced naturally by) the body, such as glucuronide (Figure 7.13). Although Phase I and Phase II reactions generally act to make xenobiotic substances more water-soluble, more readily eliminated from the body, and less toxic, in some cases the opposite occurs and metabolic processes make substances more toxic. Most known human carcinogens (cancer-causing agents) are actually produced by biochemical processes in the body from non-carcinogenic precursor substances.

Biochemical and Toxic Effects of Toxicants

Toxic substances, which, as noted above, are often produced by metabolic processes from nontoxic precursors, produce a toxic response by acting upon a receptor in the body. Typically, a receptor is an enzyme that is essential for some function in the body. As a consequence of the binding of the receptor to the toxicant there is a biochemical effect. A common example of a biochemical effect occurs when a toxicant binds to an enzyme such that the bound enzyme is inhibited from carrying out its normal function. As a result of a biochemical effect, there is a response, such as a behavioral or physiological response, which constitutes the actual observed toxic effect. Acetylcholinesterase enzyme inhibited by binding to the nerve gas Sarin may fail to stop nerve impulses in breathing processes, leading to asphyxiation. The phenomena just described occur in the dynamic phase of toxicant action as summarized in Figure 7.14.

Toxicological Chemistry and the Endocrine System

The endocrine glands and the hormones they produce (see Section 7.5 and Figure 7.6) are important in the consideration of toxicological chemistry, green chemistry, and sustainability.
Of particular importance are substances to which organisms are exposed through their environment, food, and drinking water and that have the potential to disrupt the crucial endocrine gland activities regulating the metabolism and reproductive functions of organisms. Hormonally active agents exhibit hormone-like activity that may be detrimental. Most commonly these are estrogenic substances that act like the female sex hormone estrogen. Some of these survive water treatment processes. Discharged to receiving waters, they affect aquatic organisms and potentially can get into water that humans drink. Among such substances are estrogen, an endogenous sex hormone; 17α-ethinylestradiol, an ingredient of oral contraceptives; and chemicals from industrial and consumer sources that mimic estrogen. Estrogenic substances from artificial sources are called xenoestrogens and include antioxidants, bisphenol A, dioxins, PCBs, phytoestrogens (from plants), some pesticides (chlordecone, dieldrin, DDT and its metabolites, methoxychlor, toxaphene), preservatives, and phthalic acid esters (such as butylbenzyl phthalate). The practice of green chemistry minimizes the production and use of xenoestrogens and attempts to prevent their introduction into the environment. A particular need exists for the development of alternatives to xenoestrogenic bisphenol A and phthalate plasticizers. These substances improve the properties of plastics but, as molecules much smaller than those in the plastic polymer, tend to get into the environment and the food chain. Modification of the plastic polymer formulation to give desired properties without the need for plasticizer additives would be especially desirable.
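The benzene example of Figures 7.12 and 7.13 can be summarized in equation form. The sketch below is a simplification added here for clarity: the epoxide intermediate rearranges spontaneously to phenol, and the Phase II conjugating agent is glucuronic acid delivered in its activated uridine diphosphate (UDP) form.

$\ce{C6H6 ->[\text{cytochrome P-450}] C6H6O ->[\text{rearrangement}] C6H5OH}$

$\textrm{phenol + activated glucuronic acid} \rightarrow \textrm{phenyl glucuronide + UDP}$

The glucuronide conjugate is much more water-soluble than benzene or phenol and is readily excreted in urine, illustrating the general detoxification role of Phase II reactions.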
1. Voet, Donald, Judith G. Voet, and Charlotte W. Pratt, Fundamentals of Biochemistry, Wiley, New York, 2008. 2. Manahan, Stanley E., Toxicological Chemistry and Biochemistry, 3rd ed., Taylor & Francis/CRC Press, Boca Raton, Florida, 2002. 3. Rogers, Kara, Ed., The Cell, Rosen Educational Services, New York, 2011.

Questions and Problems

Access to and use of the internet is assumed in answering all questions, including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text, so in some cases there may be several “right” answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, it can be considered to be “right.”
1. What are the basic building blocks of proteins, and how do they determine the primary structure of proteins?
2. What is meant by denaturation of proteins? Is it bad?
3. What are some major kinds of proteins?
4. What is the approximate simple formula of carbohydrates?
5. Fill in the blanks of the following pertaining to carbohydrates: Glucose is an example of a ______________, sucrose is a ___________________________, and starch and cellulose are both ___________________________.
6. How are lipids defined and how does this definition differ from that of other biomolecules?
7. What does DNA stand for? What are 6 specific ingredients of DNA?
8. Although lipids are defined by a physical property that they all share, what is a common characteristic of lipid structure?
9. From the structures given in Figure 7.5, what kind of functional group seems to be common in lipids?
10. How does the compatibility of lipids with organic substances, such as organochlorine compounds, influence the environmental behavior of such compounds?
11. What distinguishes RNA from DNA? How are they similar?
12. What are the three constituents of all basic units of nucleic acids?
13. Exposure of a person to toxic benzene can be estimated by measuring phenol in blood. Explain the rationale for such an analysis. Why is benzene epoxide not commonly determined to estimate benzene exposure?
14. Consider the toxicity of inhaled carbon monoxide in the context of Figure 7.14. Identify for carbon monoxide the receptor, the abnormal biochemical effect, and the physiological response manifesting toxicity.
15. What is the toxicological importance of lipids? How are lipids related to hydrophobic pollutants and toxicants?
16. What is the function of a hydrolase enzyme?
17. Match the cell structure on the left with its function on the right, below:
A. Mitochondria 1. Toxicant metabolism
B. Endoplasmic reticulum 2. Fills the cell
C. Cell membrane 3. Deoxyribonucleic acid
D. Cytoplasm 4. Mediate energy conversion and utilization
E. Cell nucleus 5. Encloses the cell and regulates the passage of materials into and out of the cell interior
18. The formula of simple sugars is C6H12O6. The simple formula of higher carbohydrates is C6H10O5. Of course, many of these units are required to make a molecule of starch or cellulose. If higher carbohydrates are formed by joining together molecules of simple sugars, why is there a difference in the ratios of C, H, and O atoms in the higher carbohydrates as compared to the simple sugars? What do these formulas suggest about the kind of enzyme that would be required to produce glucose from a higher carbohydrate such as cellulose?
19.
What would be the chemical formula of a trisaccharide made by the bonding together of three simple sugar molecules?
20. The general formula of cellulose may be represented as (C6H10O5)x. If the molar mass of a molecule of cellulose is 400,000, what is the estimated value of x?
21. Glycine and phenylalanine can join together to form two different dipeptides. What are the structures of these two dipeptides?
22. Fungi, which break down wood, straw, and other plant material, have what are called “exoenzymes.” Fungi have no teeth and cannot break up plant material physically by force. Knowing this, what do you suppose an exoenzyme is? Explain how you think it might operate in the process by which fungi break down something as tough as wood.
23. The straight-chain alcohol with 10 carbons is called decanol. What do you think would be the formula of decyl stearate? To what class of compounds would it belong?
24. In what respect is an enzyme and its substrate like two opposite strands of DNA?

Supplementary References

Tymoczko, John L., Jeremy Mark Berg, and Lubert Stryer, Biochemistry: A Short Course, 6th ed., W.H. Freeman, New York, 2009.
Bettelheim, Frederick A., William H. Brown, and Jerry March, Introduction to Organic and Biochemistry, 7th ed., Brooks/Cole, Belmont, CA, 2009.
Chesworth, J. M., T. Stuchbury, and J. R. Scaife, An Introduction to Agricultural Biochemistry, Chapman and Hall, London, 1998.
Elliott, William H., and Daphne C. Elliott, Biochemistry and Molecular Biology, 4th ed., Oxford University Press, New York, 2009.
Garrett, Reginald H., and Charles M. Grisham, Biochemistry, 4th ed., Brooks/Cole, Belmont, CA, 2009.
Kuchel, Philip W., Simon Easterbrook-Smith, Vanessa Gysbers, J. Mitchell Guss, Dale P. Hancock, Jill M. Johnston, Alan Jones, and Jacqui M. Matthews, Schaum’s Outline of Biochemistry, McGraw-Hill, Boston, 2010.
Bowsher, Caroline, Martin Steer, and Alyson Tobin, Plant Biochemistry, Garland Science, New York, 2008.
Markandey, D. K., and N. Rajvaida, Environmental Biochemistry, APH Publishing, New Delhi, 2005.
McKee, Trudy, and James R. McKee, Biochemistry: The Molecular Basis of Life, 4th ed., Oxford University Press, New York, 2009.
Moore, John T., and Richard Langley, Biochemistry for Dummies, Wiley Publishing, Inc., Indianapolis, IN, 2008.
Pratt, Charlotte W., and Kathleen Cornely, Essential Biochemistry, Wiley, Hoboken, NJ, 2011.
Swanson, Todd A., Sandra I. Kim, and Marc J. Glucksman, Biochemistry and Molecular Biology, 4th ed., Lippincott Williams & Wilkins, Philadelphia, 2007.
Voet, Donald, and Judith G. Voet, Fundamentals of Biochemistry: Life at the Molecular Level, 3rd ed., Wiley, Hoboken, NJ, 2008.
“In addition to water, air, solid earth, and life, a fifth sphere of the environment must be considered, the anthrosphere, consisting of the things made and used by humans. The anthrosphere has such a profound influence that we are now entering a new epoch, the Anthropocene, in which Earth’s environment will be largely determined by human activities.”

08: The Five Environmental Spheres and Biogeochemical Cycles

In Section 1.2 it was noted that Earth’s environment may be regarded as consisting of five spheres: (1) the hydrosphere, (2) the atmosphere, (3) the geosphere, (4) the biosphere, and (5) the anthrosphere (that part of the environment constructed and operated by humans). All of these spheres are closely interconnected and interactive, continuously exchanging matter and energy and influencing each other. Much of this interaction is through biogeochemical cycles in which biological, geochemical, aquatic, atmospheric, and, increasingly, anthrospheric processes act to exchange matter and energy among the five environmental spheres and determine Earth’s environment. The next several chapters address these five environmental spheres and the cycles in which they are involved. Consideration of the five environmental spheres and their interactions through biogeochemical cycles is very important in green chemistry, science, and technology, as well as in sustainability as a whole. Earth’s natural capital (see Section 1.4) resides in the four “natural” spheres and is utilized (sometimes shamelessly exploited) and managed in the anthrosphere. The interfaces between spheres are particularly important because, among other reasons, they are where materials and energy are exchanged. As discussed in Section 1.2, one such interface is the one between the geosphere and the atmosphere where most plants grow that support life on Earth. Like most such interfaces, it is very thin compared to the spheres themselves, generally extending into the geosphere for only the meter or less penetrated by plant roots and into the atmosphere only to the height of the plants. The boundary between the anthrosphere and the atmosphere is where air pollutants from automobile engines and other sources enter the atmosphere. Many other examples of important interfaces could be given.

8.02: New Page

Water, chemical formula H2O, comprises the hydrosphere (Figure 8.1). As discussed in more detail in Chapter 9, although it has a simple chemical formula, water is actually a very complex substance, largely because of the water molecule’s polarity and ability to form hydrogen bonds. Water participates in one of the great natural cycles of matter, the hydrologic cycle illustrated in Figure 8.1. Basically, the hydrologic cycle is powered by solar energy, which evaporates water from the oceans and bodies of fresh water as atmospheric water vapor, from where it may be carried by wind currents through the atmosphere to fall as rain, snow, or other forms of precipitation in areas far from the source. In addition to carrying water, the hydrologic cycle conveys energy, absorbed as latent heat when water is evaporated by solar energy and released as heat when the water condenses to form precipitation. The hydrosphere is shown in Figure 8.1, which also illustrates its relationship to the other environmental spheres. A remarkable 97.5% of Earth’s water is saltwater in the oceans. Of the remaining fresh water, 1.7% of Earth’s total water is in polar ice and the Greenland icecap.
This leaves only about 0.77% of Earth’s water as fresh water available for other than ocean-dwelling organisms and for use by humans. This fresh water occurs in natural lakes, rivers, impoundments made by humans, and underground as groundwater. At any given time a minuscule, but very important, fraction is contained in the anthrosphere, such as in water distribution systems. Water is the substance most widely used by humans in the anthrosphere. Humans use water in their households for drinking, food preparation, cleaning, and disposal of wastes, drawing water from rivers, from impoundments made by damming rivers, and by pumping from underground aquifers. Moving water is one of the oldest forms of power harnessed by humans. Water wheels date back more than 2000 years, and hydroelectric power is still the leading source of renewable energy. Hot water vapor (steam) is widely used for heat transfer in industry and in buildings and provides the largest share of electrical power generation through steam turbines coupled to electrical generators. Sustainability demands consideration of the water resource, shortages of which, resulting from climate-induced droughts, have caused severe problems for many organisms and declines of major civilizations. Devastating floods displace and even kill large numbers of people throughout the world and destroy homes and other structures. Severe droughts curtail plant productivity, resulting in food shortages for humans and for animals in natural ecosystems, often necessitating slaughter of farm animals. It is feared that both drought and the severity of occasional flooding will become much worse as the result of global warming brought on by rising carbon dioxide levels in the atmosphere (see Chapter 10, Section 10.6). The maintenance of healthy and prosperous human populations requires consideration of both water quality and quantity. Waterborne diseases including cholera and typhoid have killed millions of people, and these and others, especially dysentery, remain problems in less developed areas lacking proper sanitation. The prevention of water pollution has been a major objective of the environmental movement, and avoiding discharge of harmful water pollutant chemicals is one of the main objectives of the practice of green technology. Water supplies are a concern with respect to terrorism because of their potential for deliberate contamination with biological or chemical agents. Since ancient times humans have built reservoirs for water storage and dikes and dams for flood control. Such water management measures have enabled development of large populations in arid regions and in areas vulnerable to flooding. However, unusually severe, prolonged droughts do occur, and in past times entire civilizations have been wiped out as a result. The effects of severe droughts are exacerbated by the fact that control of water supplies has enabled excessive growth in water-deficient areas. The Las Vegas metropolitan region of the U.S. and Mexico City in Mexico are examples of metropolitan regions that have outgrown the natural water capital available to them. Floods cause hundreds of millions of dollars in damage to communities where construction of river dikes and impoundments has enabled agricultural and other developments in flood-prone areas that become overwhelmed by “hundred-year” flood events. Such an incident took place along the Missouri River in 1993 when a 500-year flood overwhelmed most of the protective structures.
Failure of the protective systems caused much greater devastation than would otherwise have been the case when Hurricane Katrina destroyed much of New Orleans in 2005. Following the 1993 Missouri River flood, sensible actions were taken in some areas where farm property along the Missouri River was purchased by government agencies and allowed to revert to wildlife habitat in its natural state, which included periodic flooding. It would have been sensible for the future of New Orleans to relocate districts below sea level that were flooded by levee failure in 2005 to higher ground, rather than trying to thwart the natural tendency of water to seek the lower levels where humans may try to live. Problems with water supply are discussed in Chapter 9. Figure 9.2, showing rainfall patterns in the continental U.S., reveals that the eastern continental U.S. receives generally adequate rainfall, although damaging droughts may still occur in that region. However, except for northern coastal regions, the western half of the continental U.S. is water-deficient. Water-deficient areas of the U.S., including southern California, Arizona, Nevada, and Colorado, have exhibited some of the most rapid population growth in recent decades, putting increasing pressure on limited water supplies and making the region vulnerable to prolonged severe droughts. Much more severe water supply problems exist in other parts of the world, such as sections of Africa and the Middle East, including the area of Palestine and Israel.

Damage to the Hydrosphere

Earth could have lost most of its water by now except for one very fortunate atmospheric feature, the very cold tropopause boundary at the upper part of the atmospheric troposphere. At a temperature well below the freezing point of water, this region converts water vapor to ice that remains in the troposphere and participates in the hydrologic cycle. Were this not the case, water vapor would infiltrate the next higher atmospheric layer, the stratosphere, where highly energetic solar ultraviolet radiation would split H atoms off the H2O molecules. These very light atoms, and the H2 molecules formed from them, would have diffused into space, leaving Earth with an arid, Martian-like landscape. In fact, there is probably a net influx of water into Earth’s atmosphere from meteorites that are largely composed of water. Although water is not destroyed on Earth, the hydrosphere certainly can suffer damage from human activities. Chief among these is excessive utilization of water in arid regions. Withdrawal of irrigation water from rivers in arid regions has reduced some once mighty rivers to trickles by the time they reach the ocean. The water is not destroyed, but it evaporates and in some cases infiltrates to below ground. Mexico City manifests one of the most serious problems of water over-use, the depletion of groundwater. This highly populous city was built on an old lake bed, and excessive pumping of groundwater has caused land subsidence and damage to surface structures. In the U.S., wasteful use of groundwater is illustrated by the depletion of the High Plains Aquifer (Figure 8.2), commonly called the Ogallala aquifer. The Ogallala aquifer lies beneath much of Nebraska, western Kansas, the Oklahoma and Texas panhandles, and small sections of eastern Wyoming, Colorado, and New Mexico. Although it is recharged from surface water in parts of Nebraska, it is largely composed of fossil water remaining from the last Ice Age.
It contains an astounding amount of water, enough to cover the entire United States to a depth of around 1.5 meters! Since the 1940s, huge quantities of water pumped from the Ogallala aquifer have been used to irrigate corn and other crops not normally adapted to the High Plains region. As a result, the water table (the level reached by water in a well drilled into an aquifer, Figure 8.3) has dropped dramatically, exceeding 50 meters of decline in some areas. In the single decade beginning in 1995, the water level dropped by 6 meters in the middle of Kansas’ irrigated corn belt around Ulysses, Kansas. Such unsustainable water depletion will force a shift from thirsty crops, such as corn, to those that require less water, such as milo. The depletion of water supplies has numerous implications for sustainability. Clearly, the U.S. can meet its food demands without exploiting the Ogallala aquifer and other diminishing sources of water. A major thrust of the U.S. energy plan (such as it is) has been increased production of biofuels: ethanol from fermentation of glucose sugar derived from corn, and biodiesel fuel from soybeans. The voracious consumption of irrigation water required to grow enough of these crops to make a significant difference in fuel resources is not sustainable. Some underground water supplies should be regarded as depletable resources and reserved for municipal water supplies and manufacturing, which require only a fraction as much water as does irrigation.

Sustaining the Hydrosphere

Fortunately, the solar-powered hydrologic cycle and the inexhaustible supply of water in Earth’s oceans make water one of nature’s most renewable resources. Water is never really consumed or destroyed, although it may become so polluted or dispersed that its reclamation is impractical. Even water infiltrating into the ground may be regarded as recycled water because it renews groundwater sources. Although public aversion to the idea has largely prevented efforts to completely recycle water that has passed through domestic sewage systems, recycling of this water following purification will have to be practiced in some water-deficient areas in the future. In fact, recycling sewage water has long been the practice where municipalities take their water supplies from rivers into which other municipalities discharge treated wastewater. A very favorable development during the last 30 years is that the trend toward ever-increasing water use that prevailed in the U.S. until about 1980 has leveled off, due to more efficient irrigation and industrial processes, and it appears that total utilization of water will remain relatively constant in the U.S. into the future. As exemplified by the exploitation of the Ogallala aquifer discussed above, human manipulation of the hydrosphere to provide water has often had adverse effects. However, application of the principles of green science and technology can enable supply of water to water-deficient areas without damaging the environment, and even with enhancement of water quality. Unlike the over-utilized Colorado and Rio Grande rivers of the southwestern U.S., the enormous flow of the Mississippi River could be tapped for water-deficient areas. One scheme would be to divert a fraction of this flow near the mouth of the Mississippi where it discharges to the Gulf of Mexico and pump the water, using abundant wind power, to arid regions of the U.S. Southwest and northern Mexico.
Mississippi River water retained for some time in constructed wetlands near the point of the diversion could undergo self-purification, collecting sediment that builds up land mass and removing nutrients that are now harmful to water quality in the Gulf of Mexico. Aquatic plants growing in the wetlands and thriving on fertilizer runoff from the upstream Mississippi watershed could carry out much of this nutrient removal. Groundwater recharge is another key to water sustainability that is being practiced in parts of the world. Most groundwater recharge occurs naturally, although it has been reduced by paving and surface modifications in the anthrosphere. Anthrospheric constructs on the surface can be designed to maximize recharge. For example, some paving surfaces in China have been made of porous materials that allow water to penetrate into the ground below. Two of the more active approaches to groundwater recharge are shown in Figure 8.3. In one, water is pumped into a shaft that extends underground, even to the aquifer itself. Surfaces of these conduits can become clogged with silt, bacterial growths, and other materials suspended in the recharge water and may have to be cleaned periodically. A spreading basin consists of a reservoir of water excavated into porous geospheric material from which water flows into the aquifer. An advantage is the purification of water that occurs through contact with mineral matter, but the process does not work well if aquifers are overlain by poorly pervious layers.
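The Ogallala comparison quoted earlier can be turned into a rough volume figure. This is an order-of-magnitude illustration only; the U.S. surface area of about 9.8 × 10⁶ km² is a standard value, not a figure from this text.

$V \approx 9.8 \times 10^{6}\ \textrm{km}^2 \times 1.5 \times 10^{-3}\ \textrm{km} \approx 1.5 \times 10^{4}\ \textrm{km}^3$

A stored volume on the order of ten thousand cubic kilometers, set against water-table declines of tens of meters over large irrigated areas, makes clear why decades of heavy pumping can deplete within a human lifetime a store of fossil water that accumulated over thousands of years.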
Illustrated in Figure 8.4, the atmosphere, having a total mass of about 5.15 × 10¹⁵ tons, is a layer of gases blanketing Earth that diminishes rapidly in density with increasing altitude. More than 99% of the atmosphere’s mass is within 40 kilometers (km) of Earth’s surface, with the majority of the air below 10 km altitude (compared to Earth’s diameter of almost 13,000 km). A person exposed to air at the approximately 13,000 m altitude at which commercial jet aircraft fly could remain conscious for only about 15 seconds without supplementary oxygen. There is no clearly defined upper limit to the atmosphere, which keeps getting thinner with increasing altitude. A practical upper limit may be considered to be an altitude of about 1000 km, above which air molecules can be lost to space (a region called the exosphere). The atmosphere nurtures life on Earth in many important respects. Some of the main ones are the following:
• The atmosphere constitutes much of Earth’s natural capital because of its attributes listed below.
• The atmosphere is a source of molecular O2 for all organisms that require it, including humans and all other animals. In addition, pure oxygen, argon, and neon are extracted from the atmosphere for industrial uses.
• At approximately 0.039% carbon dioxide, CO2, the atmosphere is the source of carbon that plants and other photosynthetic organisms use to synthesize biomass.
• Consisting mostly of molecular N2, the atmosphere serves as a source of nitrogen, an essential component of proteins and other biochemicals as well as a constituent of a variety of synthetic chemicals. Organisms “fix” this nitrogen in the biosphere chemically by the action of bacteria such as Rhizobium, and it is fixed synthetically in the anthrosphere under much more severe conditions of temperature and pressure.
• The atmosphere acts as a blanket to keep Earth’s surface at an average temperature of about 15 °C at sea level and within a temperature range that enables life to exist (the “good” greenhouse effect).
• Earth’s atmosphere absorbs very short wavelength ultraviolet radiation from the sun and space, which, if it reached organisms on Earth’s surface, would tear apart the complex biomolecules essential for life. In this respect, the stratospheric ozone layer is of particular importance.
• The atmosphere contains and carries water vapor evaporated from the oceans that forms rain and other kinds of precipitation over land in the hydrologic cycle (Figure 8.1).
The atmosphere is 1–3% by volume water vapor, a level that is somewhat higher in tropical regions and lower in desert areas and at higher altitudes, where condensation to liquid droplets and ice crystals removes water vapor. Exclusive of water vapor, on a dry basis air is 78.1% by volume nitrogen gas (N2), 21.0% O2, 0.9% the noble gas argon, and almost 0.039% CO2, a level that keeps increasing by roughly 0.0002 percentage points (about 2 parts per million) per year. In addition, there are numerous trace gases in the atmosphere at levels below 0.002%, including ammonia, carbon monoxide, helium, hydrogen, krypton, methane, neon, nitrogen dioxide, nitrous oxide, ozone, sulfur dioxide, and xenon. Figure 8.4 shows some of the main features and aspects of the atmosphere and its relationship to other environmental spheres. Except for a few aviators who fly briefly into the stratosphere, living organisms experience only the lowest layer, called the troposphere, characterized by decreasing temperature and density with increasing altitude.
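The decrease of pressure and density with altitude just mentioned is roughly exponential, which is also the basis of the earlier statement that more than 99% of the atmosphere’s mass lies within 40 km of the surface. The following is a back-of-the-envelope check; the ~8 km scale height is a standard value, not a figure given in this text.

$\dfrac{P(z)}{P_0} \approx e^{-z/H}, \quad H \approx 8\ \textrm{km} \quad \Rightarrow \quad \dfrac{P(40\ \textrm{km})}{P_0} \approx e^{-5} \approx 0.007$

Since the pressure at any altitude is proportional to the mass of air above it, only about 0.7% of the atmosphere’s mass lies above 40 km, consistent with the figure quoted above.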
The troposphere extends from surface level, where the average temperature is 15 °C, to about 11 km (the approximate cruising altitude of commercial jet aircraft), where the average temperature is –56 °C (the tropopause discussed in Section 8.2). Above the troposphere is the stratosphere, in which the average temperature increases from about –56 °C at its lower boundary to about –2 °C at its upper limit. The stratosphere is warmed by the energy of intense solar radiation impinging on air molecules. Because this radiation can break the bonds holding O2 molecules together, at higher altitudes the stratosphere maintains a significant level of O atoms and of ozone (O3) molecules formed by combination of O atoms with O2 molecules. Stratospheric ozone molecules are essential for the ability of humans and other organisms to exist on Earth’s surface because of their ability to filter out damaging ultraviolet radiation before it can penetrate to Earth’s surface. Although the stratosphere’s ozone is spread over many km in altitude, it is commonly called the ozone layer. If all this ozone that is so essential for life were in a single layer of pure ozone at conditions near Earth’s surface, it would be only about 3 millimeters thick! Some classes of chemical species, especially the chlorofluorocarbons or Freons formerly used as refrigerants, are known to react in ways that destroy stratospheric ozone, and their elimination from commerce has been one of the major objectives of efforts to achieve sustainability. Above the stratosphere are the atmospheric mesosphere and thermosphere, which are relatively less important in the discussion of the atmosphere. Radiation energetic enough to tear electrons away from atmospheric molecules and atoms reaches these regions, giving rise to a region containing ions called the ionosphere. Earth’s atmosphere is crucial in absorbing, distributing, and radiating the enormous amount of energy that comes from the sun. A square meter of surface directly exposed to sunlight unfiltered by air would receive energy from the sun at a power level of 1,340 watts. Called the solar flux, this level of power impinging on just one square meter could power an electric iron or thirteen 100-watt light bulbs plus a 40-watt bulb! If one considers Earth’s cross-sectional area, the rate of total incoming solar energy is huge. Incoming solar radiation is in the form of electromagnetic radiation, which has a wavelike character in which shorter wavelengths are more energetic. The incoming electromagnetic radiation, centered in the visible wavelength region with a maximum intensity at a wavelength of 500 nanometers (1 nm = 10⁻⁹ m), is largely absorbed and converted to heat in the atmosphere and at Earth’s surface. On average, the incoming solar energy must be balanced by heat energy radiated back into space; otherwise Earth would have melted and vaporized long ago. The outbound heat energy radiates into space as infrared radiation between about 2 and 40 micrometers (1 μm = 10⁻⁶ m), with a maximum intensity at about 10 μm. This energy is delayed on its outward path by being re-absorbed by water molecules, carbon dioxide, methane, and other minor species in the atmosphere. This has a warming (greenhouse) effect that is very important in sustaining life on Earth. As discussed in Chapter 10, anthrospheric discharges of greenhouse gases, especially carbon dioxide and methane, are likely causing an excessive greenhouse effect, which will have harmful effects on global climate.
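The energy balance just described can be made quantitative with the Stefan-Boltzmann law. This is a standard radiative-balance sketch, not a calculation from this text: the planetary albedo of about 0.3 and the Stefan-Boltzmann constant σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ are assumed standard values, and the factor of 4 arises because Earth intercepts sunlight over its cross-section but radiates from its whole spherical surface.

$\dfrac{S(1-A)}{4} = \sigma T^4 \quad \Rightarrow \quad T = \left[\dfrac{1340 \times 0.7}{4 \times 5.67 \times 10^{-8}}\right]^{1/4} \approx 254\ \textrm{K} \approx -19\ \textrm{°C}$

The roughly 34 °C difference between this bare radiative temperature and the observed 15 °C average surface temperature is the “good” greenhouse warming described above.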
Climate

Largely determined by conditions in the atmosphere, climate is crucial to the well-being of humans and other organisms on Earth. Weather refers to such factors as rain, wind, cloud cover, atmospheric pressure, and temperature, whereas climate involves these conditions over a long period of time, such as the warm, low-humidity weather that prevails in southern California or the cool, generally rainy conditions of Ireland. Meteorology is the science of the atmosphere and weather; climate is addressed by climatology. Much of the driving force behind weather and climate comes from the fact that the incoming flux of solar energy is very intense in regions around the equator and very low in polar regions because of the angles at which the solar flux impinges on Earth in these regions. Heated equatorial air tends to expand and flow away from the equatorial regions, creating winds and carrying with it large quantities of energy and water vapor evaporated from the oceans. As the air cools, water vapor condenses, forming precipitation and warming the air with the heat released when water goes from vapor to liquid. This process is the driving force behind hurricanes and typhoons, which can result in torrential rainfall and damaging winds. Meteorological phenomena have an important influence on air pollution. An important example occurs with temperature inversions, in which a layer of relatively warm air confines a surface layer of somewhat cooler, more dense, stagnant air close to the ground. Hydrocarbons and nitrogen oxides confined in the stagnant air mass are acted upon by solar energy to cook up the noxious mixture of ozone, oxidants, aldehydes, and other unpleasant materials that constitute photochemical smog. Winds are important in air pollution. The lack of substantial wind is a requirement for the formation of photochemical smog. Sulfur dioxide emitted to the atmosphere may be relatively innocuous near the point of discharge but may be transformed into harmful acid rain as it is carried some distance by wind. That is why New England and parts of Canada can be afflicted with acid rain from sulfur dioxide given off by coal-fired power plants some distance away in the Ohio River Valley. Climate and sustainability go hand in hand. Favorable climate conditions and a relatively unpolluted atmosphere are important parts of natural capital. A favorable climate is required to maintain food productivity. One of the greatest concerns with the emission of excessive amounts of greenhouse gases to the atmosphere is warming that could result in catastrophic drought in formerly productive agricultural regions, leading to much reduced food production and even widespread starvation.

Anthrospheric Influences on the Atmosphere

Human activities in the anthrosphere strongly influence the atmosphere and have enormous potential to affect Earth's environment as a whole. Photochemical smog, with its destructive ozone and other oxidants, along with other pollutants and visibility-obscuring atmospheric particulate matter, results when nitrogen oxides and reactive hydrocarbons, largely from internal combustion vehicle engine exhausts, are emitted to the atmosphere from the anthrosphere. The sulfur dioxide and nitrogen oxide precursors to the strong sulfuric and nitric acids in acid rain come from anthrospheric activities associated with fossil fuel combustion and other sources such as the roasting of sulfide metal ores.
Because of factors such as these, one of the most important aspects of sustainability is the construction and operation of the anthrosphere in ways that preserve the quality of the atmosphere. A central goal of green chemistry is to use products and processes that do not damage the atmosphere.
As illustrated in Figure 8.5, the geosphere is the solid Earth (which is sometimes not so solid, as when earthquakes or volcanic eruptions occur). The geosphere is an enormous source of natural capital. It provides the platform upon which most food is grown and is the source of plant fertilizers, construction materials, and fossil fuels that humans use. As part of its natural capital, the geosphere receives large quantities of consumer and industrial wastes. Past and current practices of using the geosphere as the anthrosphere's waste dump are ultimately unsustainable. As shown in Figure 8.5, the geosphere interacts strongly with the hydrosphere, atmosphere, biosphere, and anthrosphere. Managing and preserving Earth's natural capital are of utmost importance to sustainability. Earth is in the shape of a geoid, defined by the levels of the oceans and a hypothetical sea level beneath the continents. It is slightly pear-shaped rather than a perfect sphere because of differences in gravitational attraction in different parts of Earth. Although humans have flown hundreds of thousands of km into space, they have been unable to penetrate more than a few km into Earth's crust. The geosphere is a layered structure, most of which is hot enough to melt rock. Earth's core is a huge ball of iron at a temperature above the normal melting point of iron, but solid because of the enormous pressure that it is under. Above this core is the mantle, composed of rock and ranging in depth between about 300 km and 2,900 km. The deeper inner mantle, though hot enough for its rock to be liquid under ordinary pressures, is solid because of the enormous pressure to which it is subjected. On top of the inner mantle is the outer mantle, at depths between about 10 km and 300 km, composed of hot molten rock called magma. Floating on the magma is the solid lithosphere, composed of relatively strong rock and varying in thickness from just a few km to as much as 400 km, averaging about 100 km. The transition layer between the molten magma and the lithosphere is the asthenosphere, composed of hot rock that is relatively weak and plastic. Earth's crust is the outer layer of the lithosphere, which is only 5–40 km thick. Introduced amid much controversy in the mid-1900s, the theory of plate tectonics views Earth's surface as consisting of huge lithospheric plates, upon which the continents and the Pacific Ocean rest, that behave as units. Earth's crust is a dynamic system in which the lithospheric plates move relative to each other by, typically, a few centimeters per year. When abrupt plate movement occurs, an earthquake results. Magma coming to the surface along plate boundaries results in emissions of hot and molten rock, ash, and gases in the form of volcanoes. There are three kinds of boundaries between tectonic plates:
• Divergent boundaries on ocean floors between tectonic plates that are moving apart, where hot magma undergoes upwelling and cooling to form new solid lithospheric rock, creating ocean ridges.
• Convergent boundaries, where plates move toward each other, forcing matter downward into the asthenosphere in subduction zones, eventually to form new molten magma, and in some cases forcing matter upward to produce mountain ranges.
• Transform fault boundaries, where two plates move laterally relative to each other, creating fault lines along which earthquakes occur.
The conditions outlined above drive the tectonic cycle. In this cycle, there is upwelling of molten rock magma at the boundaries of divergent plates.
This magma cools and forms new solid lithospheric material. At convergent boundaries, solid rock is forced downward and melts from the enormous pressures and contact with hot magma at great depths, re-forming magma. This cycle and the science of plate tectonics explain once-puzzling observations of geospheric phenomena, including the opening and spreading of ocean floors that create and enlarge oceans, the movement of continents relative to each other, the formation of mountain ranges, volcanic activity, and earthquakes.

Earth's Composition

Characterized by definite chemical compositions and crystal structures, about 2,000 known minerals compose Earth. Most rocks in the crust are composed of only about 25 common minerals. With a composition of 49.5% oxygen and 25.7% silicon, Earth's crust is largely composed of chemical compounds of these two elements, with smaller amounts of aluminum, iron, carbon, sulfur, and other elements. Other than aluminum and iron, only about 1.6% of Earth's crust consists of the kinds of rock that must serve as important resources of metals, of phosphorus required for plant growth, and of other essential minerals. Careful management of this resource of essential minerals is one of the primary requirements for sustainability. As molten magma penetrates to near the top of Earth's crust and then cools and solidifies, it forms igneous rock. Exposed to water and the atmosphere, igneous rock undergoes physical and chemical changes in a process called weathering. Weathered rock material carried by water and deposited in sediment layers may be compressed to produce secondary minerals, of which clays are an important example. Molded and heated to high temperatures to make pottery, brick, and other materials, clays were one of the first raw materials used by humans and are still widely used today. A part of the crust crucial for the existence of humans and most other non-aquatic life forms is the thin layer of weathered rock, partially decayed organic matter, air spaces, and water composing soil (see Chapter 11) that supports plant life. Were Earth the size of a geography classroom globe, the average thickness of the soil layer on it would be only about the size of a human cell! The top layer of soil that is most productive of plants is topsoil, which is often only a few centimeters thick in many locations, or even non-existent where poor cultivation practices and adverse climatic conditions have led to its loss by wind and water erosion. The conservation of soil and enhancement of soil productivity are key aspects of sustainability (see Chapter 11).

The Geosphere in Relation to Other Environmental Spheres

Virtually all things and creatures commonly regarded as parts of Earth's environment are located on, in, or just above the geosphere. Major segments of the hydrosphere, including the oceans, rivers, and lakes, rest on the geosphere, and groundwater exists in aquifers underground. Water dissolves minerals from the geosphere that nourish aquatic life. These minerals and rock particles eroded from the geosphere by moving water are deposited in layers and transformed into rock again. The atmosphere exchanges gases with the geosphere. For example, organic carbon produced by photosynthetic plants from atmospheric carbon dioxide may end up as soil organic matter in the geosphere, and the photosynthetic processes of plants growing on the geosphere put elemental oxygen back into the atmosphere. The majority of the biomass of organisms in the biosphere is located on or just below the surface of the geosphere.
Most structures that are parts of the anthrosphere are located on the geosphere, and a variety of wastes from human activities are discarded to the geosphere. Modifications of the geosphere can have important effects on the other environmental spheres and vice versa.
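A plate speed of a few centimeters per year seems tiny, but over geologic time it is enough to account for the opening and spreading of ocean floors described above. A minimal arithmetic sketch in Python, assuming a representative speed of 3 cm per year (within the "few centimeters per year" quoted above) and an elapsed time of 150 million years, roughly the age of the oldest Atlantic seafloor (an assumed reference figure, not from this chapter):

# Distance covered by a tectonic plate over geologic time
PLATE_SPEED_CM_PER_YEAR = 3.0  # "a few centimeters per year," per the text
YEARS = 150e6                  # assumed ~150 million years of spreading

distance_km = PLATE_SPEED_CM_PER_YEAR * YEARS / 1e5  # convert cm to km
print(f"Distance traveled: about {distance_km:,.0f} km")
# about 4,500 km

The result, on the order of 4,500 km, is comparable to the present width of the North Atlantic, which is consistent with an entire ocean basin opening at centimeter-per-year rates over geologic time.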
Composed of living organisms and the materials and structures that they produce, the biosphere is one of the five major environmental spheres. The nature of the biosphere and its essential role in sustainability are the topics of Chapter 13. Biochemistry, the chemistry that occurs in the biosphere, is discussed in Chapter 7. The biosphere is of obvious importance to the discussion of soil and agriculture in Chapter 12, and aspects of the biosphere are important to the sustainability topics covered in other chapters of the book. The living organisms that compose the biosphere are the subject of the science of biology. The classes of biomolecules that make up these organisms are outlined in Chapter 7. They include proteins, which are the basic building blocks of organisms; carbohydrates, made by photosynthesis and metabolized by organisms for energy; lipids (fats and oils); and the all-important nucleic acids (DNA, RNA), the genetic materials that define the essence of each individual organism and act as codes to direct protein biosynthesis and reproduction. Hierarchical organization is a characteristic of living organisms in the progression biomolecules < living cells < organs < organisms < the biosphere itself. Organisms carry out metabolic processes in which they chemically alter substances to obtain energy and to synthesize new biomass. An essential function of organisms is reproduction, and their young undergo various stages of development. Through their DNA organisms express heredity, and modifications of DNA cause mutations. Although any system of classification is imperfect, organisms are regarded as belonging to several kingdoms. Three of these consist of organisms capable of existing as single cells, though often occurring in colonies of undifferentiated cells clumped together: (1) archaebacteria, without defined cell nuclei, that do not require oxygen or light and often exist in extreme environments such as hot springs; (2) eubacteria, without defined cell nuclei, including heterotrophs that metabolize organic material, cyanobacteria that obtain energy through photosynthesis, and members that obtain their energy by mediating reactions of inorganic matter; and (3) protists, single-celled organisms that have defined cell nuclei enclosed by a nuclear membrane and often have animal-like features, such as moveable hair-like flagella that enable the organisms to move in water. At more complex levels are the generally multicelled plantae (plants) and animalia (animals), as well as fungi, including yeasts, molds, and mushrooms. Several terms are used to describe organisms and their place in ecosystems. A population of organisms consists of a group of the same species. Groups coexisting in the same location make up a community. Interacting communities and their physical environment make up an ecosystem, and all ecosystems grouped together constitute the entire biosphere. The basis of any ecosystem is its productivity, the ability to produce biomass, usually through photosynthesis, in which organisms remove carbon dioxide from the atmosphere and fix it in the form of organic matter that is further converted by biochemical processes to proteins, fats, DNA, and other life molecules. These biomolecules constitute the basis of the whole ecosystem food chain upon which the remainder of the organisms in the food chain depend for their existence. In aquatic systems, most of the biomass at the base of the food chain is produced by algae, photosynthetic phytoplankton growing suspended in the water.
Some protozoa and some bacteria also have photosynthetic capabilities. The major photosynthetic organisms in terrestrial ecosystems are plants growing in soil. The main features of the biosphere are outlined in Figure 8.6. For the most part the biosphere is anchored by dominant plant species that are the major producers of biomass. The dominant plant species may also modify the physical environment in ways that facilitate the existence of other species. The big trees dominant in a rain forest obviously form an environment to which other organisms adapt, for example by providing safe nesting places for birds. The shade provided by the trees creates a microclimate and a degree of shelter at ground level in which certain kinds of organisms can thrive. Lichens, synergistic combinations of fungi and photosynthetic algae growing on the surfaces of rocks, weather the rocks to eventually produce soil in which other plants can grow. The biosphere has undergone massive evolutionary and climate-related changes over millions of years. Much more rapid and sometimes dramatic changes have been caused by human influences. Arguably the most notable of these took place after Columbus reached the Americas in 1492. Separated by vast oceans, the biospheres of the Eastern and Western Hemispheres had evolved largely independently of each other. As humans introduced organisms from one hemisphere to the other, an often spectacular phenomenon called ecological release took place, as populations of some species exploded when they were introduced into regions free of their natural predators. The Bluegrass State of Kentucky got its name from the prolific growth of this grass introduced from Europe; clover from Europe also grew rapidly in the New World. Newly introduced peach trees grew so well in the Carolinas and Georgia that fears were expressed over the potential development of a "peach tree wilderness." Some regions of Peru were inundated with newly introduced mint. It is possible that productive corn from the Western Hemisphere enabled a human population explosion in parts of Africa, which in turn made possible the removal of millions of people from the continent into slavery without depleting Africa's population. Originating in the Andes Mountains of South America, the potato became a staple of the diet in Ireland. When the catastrophic Phytophthora infestans potato blight decimated the crop in the 1840s, Ireland lost half its population to starvation and emigration. Tragically, smallpox introduced to North America by Europeans devastated Native American populations, reducing some tribes by 90%. The biosphere has significant effects on the other environmental spheres and vice versa. Materials generated in the biosphere are used in the anthrosphere; wood is one such material. Biological productivity is largely determined by the condition of geospheric soil. The availability of the hydrosphere's water largely determines the kinds and populations of organisms. Fertilizers and pesticides produced in the anthrosphere have a strong influence on biological productivity in agriculture. Shielding organisms from pollutants, wastes, and toxic substances generated in the anthrosphere is a high priority in environmental protection. The design and operation of the anthrosphere strongly influence the nature of the biosphere and its productivity, especially in the agricultural sector. The biosphere largely determines Earth's environment. The atmosphere's oxygen was generated by photosynthetic bacteria eons ago.
Lichen communities of synergistically growing algae and fungi act upon geospheric rock, forming soil that supports plant life. The anthrosphere of less industrialized societies has been largely a product of the biosphere in which the human residents have existed. Massive herds of bison provided food, robes, and the material for the teepee dwellings of North American Plains Native Americans. The availability and kinds of wood have largely determined the nature of dwellings constructed in many societies. More than 2,000 years ago, domesticated animals harnessed to carts, wagons, and plows provided humans with mobility and the power used to cultivate soil. Many societies, including the Amish farmers in the U.S., still use horses, donkeys, mules, oxen, and water buffalo for land cultivation and the transport of goods. One of the basic tenets of green chemistry and sustainability is the use of materials from the biosphere to replace those produced from scarce, expensive petroleum. In achieving sustainability, humans have much to learn from the organisms of the biosphere, which over millions of years have developed essential tools of survivability. An important aspect is the ability of organisms to thrive and be productive under mild, safe conditions, the very conditions that are most desirable for green chemical synthesis and other sustainable activities. Even the conditions under which some thermophilic bacteria thrive in the thermal hot springs of Yellowstone National Park are mild compared to the much higher-temperature, high-pressure conditions required in many chemical syntheses. The intolerance of living organisms to toxic substances provides valuable lessons regarding which substances should be avoided in the practice of green chemistry. Important lessons in the development of sustainable systems of industrial ecology (Chapter 13) are provided by the biosphere. Over hundreds of millions of years of evolution, organisms in the biosphere have had to evolve sustainable ecosystems for their own survival, completely recycling materials and enhancing their environment. In contrast, humans have behaved in ways that are unsustainable with respect to their own existence, exploiting nonrenewable resources and fouling the environment upon which they depend for their own survival. The complex, sustainable ecosystems in which organisms live in relationships with each other and their surroundings serve as models for anthropogenic systems. By taking lessons from the biosphere and its long-established ecosystems, humans can develop much more sustainable systems of industry and commerce. A crucial respect in which the biosphere is a key to achieving sustainability is its ability to perform photosynthesis and the synthesis of specialized materials. Using carbon dioxide from the atmosphere and energy from the sun, organisms produce biogenic materials in a much greener, safer, and more sustainable manner than the manner in which materials are produced in the anthrosphere. Furthermore, organisms are particularly well adapted to make a variety of complex and specialized materials that are very difficult or impossible to make by purely chemical means.