17.10: The Ninth Commandment
IT MUST BE ACKNOWLEDGED THAT THERE ARE RISKS IN TAKING NO RISKS
Some things for which there are no suitable substitutes are inherently dangerous. We must avoid becoming so risk averse that we do not allow dangerous, but necessary activities (some would put sex in this category) to occur. A prime example is nuclear energy. The idea of using a “controlled atom bomb” to generate energy is a very serious one. But the alternative of continuing to burn large amounts of greenhouse-gas-generating fossil fuels, with the climate changes that almost certainly will result, or of severely curtailing energy use, with the poverty and other ill effects that would almost certainly ensue, indicates that the nuclear option is a good one.
So it is necessary to manage risk and to use risky technologies in a safe way. As discussed in Chapter 15, with proper design and operation, nuclear power plants can be operated safely and spent nuclear fuel can be processed safely. Modern technology and applications of computers can be powerful tools in reducing risks. Computerized design of devices and systems can enable designers to foresee risks and plan safer alternatives. Computerized control can enable safe operation of processes such as those in chemical manufacture. Redundancy can be built into computerized systems to compensate for failures that may occur. The attention of computers does not wander; they do not do drugs, become psychotic, or do malicious things (although people who use them are not so sure). Furthermore, as computerized robotics advance, it is increasingly possible for expendable robots to do dangerous things in dangerous areas where in the past humans would have been called upon to take risks.
Although the goal of risk avoidance in green chemistry and green technology as a whole is a laudable one, it should be kept in mind that without a willingness to take some risks, many useful things would never get done. Without risk-takers in the early days of aviation, we would not have the generally safe and reliable commercial aviation systems that exist today. Without the risks involved in testing experimental pharmaceuticals, many life-saving drugs would never make it to the market. Although risks must be taken judiciously, a total unwillingness to take risks will result in stagnation and a lack of progress in important areas required for sustainability.
17.11: The Tenth Commandment
EDUCATION IN SUSTAINABILITY IS ESSENTIAL; IT MUST EXTEND TO ALL AGES AND STRATA OF SOCIETY, IT MUST BE PROMULGATED THROUGH ALL MEDIA, AND IT IS THE RESPONSIBILITY OF ALL WHO HAVE EXPERTISE IN SUSTAINABILITY
Although the achievement of sustainability is the central challenge facing humanity, most people know pathetically little about it. The reader of this chapter belongs to a small fraction of the populace who have been exposed to the idea of sustainability. If asked, a distressingly large number of people would probably say that they know little about sustainability (and some would say that they do not even care or that they are even hostile to the concept!). Therefore, education is essential and a key to achieving sustainability.
Education in sustainability must begin early with children in primary school and should be integrated into curricula from kindergarten through graduate school. By providing containers for recyclables in grade schools, there is some small benefit from the waste paper, plastics, and aluminum cans collected, but a much greater benefit in the lessons of sustainability that those containers illustrate. Green chemistry should be part of the background of every student graduating with a university degree in chemistry and the principles of green engineering should be part of the knowledge base of every engineering graduate. But of equal—often greater—importance is the education of people in nontechnical areas in the principles of sustainability. Lawyers, political scientists, economists, and medical professionals should all graduate with education in sustainability.
A particular challenge is that of informing the general public of the principles of sustainability and of its importance. The general public has more choice in its sources of information than does the captive audience of a student body, so the challenge of informing them about sustainability is greater. In this respect the media and the internet have key roles to play. Unfortunately, relative to the large amounts of media time devoted to the salacious antics of some attention-seeking fools—matters that have virtually no relevance to the lives of everyday citizens—almost no air time is devoted to sustainability, which is highly relevant to the lives of all. Therefore, those who have an interest in, and knowledge of, sustainability have an obligation to get the message out through the media and the internet.
17.12: Some Sensible Measures for Sustainability
Some ideas for measures to enhance sustainability are discussed here. Some of these are simple and readily implemented. Others are grandiose and even “far out.” Most have probably been suggested in one form or another by others or have been implemented in some places. Few require the invention of anything new. They are presented here with the idea of stimulating thought and discussion. The reader can probably suggest other ideas for sustainability. Although most of these ideas are presented from the viewpoint of their potential implementation in the United States, most of them, or closely related measures, could be implemented in other countries as well.
The Methane Energy Economy
The basic idea of the methane energy economy is to greatly increase the role of natural gas as a fuel, particularly for transportation. Methane, CH4, is the cleanest burning of the fossil fuels and produces the least greenhouse gas per unit energy generated. Significantly more energy can be stored in a pressurized tank of methane than is the case for elemental hydrogen. Largely regarded as a rapidly depleting source of fuel as recently as the turn of the century (2000), natural gas has emerged as an abundant energy source. There are two major reasons for this abundance. One is a revolutionary development of methods for gas extraction from hydraulically-fractured tight shale formations. These formations, previously inaccessible for natural gas extraction, are widely distributed in many countries including, in the U.S., wide areas of Pennsylvania, Arkansas, Louisiana, and Texas (one of the richest deposits lies beneath the city of Fort Worth, Texas). A second factor is the construction of large depots for the export of liquefied natural gas from countries that formerly burned it in flares as a waste byproduct of crude oil production. As of 2010, it was projected that the U.S. had 100 years of natural gas reserves and might become an exporter of this fuel.
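To put the claim about energy storage in rough numbers, the sketch below compares the chemical energy held by equal tanks of methane and hydrogen at the same pressure and temperature. It assumes ideal-gas behavior and uses standard enthalpies of combustion (about 890 kJ/mol for CH4 and 286 kJ/mol for H2); the tank size and pressure are illustrative assumptions, and real compressed gases deviate from ideal behavior at high pressure, so this is an order-of-magnitude comparison rather than a tank specification.

```python
# Rough comparison of chemical energy stored in a pressurized tank of
# methane vs. hydrogen, assuming ideal-gas behavior (an approximation
# that weakens at high pressure).

R = 8.314          # J/(mol*K), universal gas constant
T = 298.15         # K, ambient temperature
P = 250e5          # Pa, a representative 250-bar storage pressure (assumed)
V = 0.1            # m^3, a representative 100-L tank volume (assumed)

# Standard enthalpies of combustion (higher heating values), kJ/mol
dH_combustion = {"CH4": 890.0, "H2": 286.0}

moles = P * V / (R * T)   # same for both gases under the ideal-gas assumption

for gas, dH in dH_combustion.items():
    energy_MJ = moles * dH / 1000.0
    print(f"{gas}: ~{moles:.0f} mol stored, ~{energy_MJ:.0f} MJ of combustion energy")

ratio = dH_combustion["CH4"] / dH_combustion["H2"]
print(f"At equal P, V, and T, methane holds ~{ratio:.1f}x the energy of hydrogen.")
```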
As depletable underground sources of methane become exhausted, methane can be made by gasification of coal and biomass, and the byproduct carbon dioxide from solids gasification can be converted to methane by reacting CO2 with hydrogen made from water electrolysis using electricity from renewable sources such as wind. Methane is readily distributed by pipeline, and a system for so doing is already in place in the U.S. and many other nations. Once properly installed underground, a pipeline is a less disruptive, highly reliable means of transporting energy than is a high-voltage electrical line or transport of liquid fuels by train or barge. An important consideration is that the pipeline infrastructure must be very well designed, constructed, and maintained to avoid destructive blowouts. A very bad natural gas pipeline failure happened on September 9, 2010, with the rupture of a 30-inch natural gas pipeline in San Bruno, California. This devastating incident resulted in an explosion and massive fire that killed 8 people, destroyed 38 houses, damaged many more homes, and left a crater 12 meters deep, 51 m long, and 8 m wide.
Conversion to Highly Efficient Hybrid Vehicles
As noted above, methane is a very sustainable energy source that can replace most existing applications of fossil fuels. One of the most attractive options is to use methane to power locomotives, trucks, and other vehicles. The Clean Air Power Company has designed highly energy-efficient truck engines that get 90% of their energy from methane fed into the engine with the intake air, ignited by a small amount of diesel fuel injected at the peak of the compression stroke.5 This is an application of the stratified-charge ignition concept, which enables internal combustion engines to run on a fuel/air mixture that is too lean (fuel-deficient) to ignite with a spark but can be ignited by a small fuel-rich ignition zone created by injecting fuel directly into a small region of the combustion chamber (in a spark-ignited engine, directly onto the spark plug).
The ultimate in automobile fuel economy would be a plug-in hybrid vehicle with a natural gas fueled stratified-charge ignition engine (see above). The battery on such a vehicle could be charged by plugging into a source of electricity and, with a full charge, the vehicle could be driven several tens of miles before the internal combustion engine would need to be engaged. When needed to charge the battery, the internal combustion engine could be run at optimum speeds for comparatively long periods of time for maximum efficiency and minimum pollution. One problem with hybrid vehicles is the need for the internal combustion engine to start immediately in cold weather to provide heat for the vehicle heater. This problem may be overcome with a phase-change material held as a heated liquid in an insulated container that releases latent heat as it solidifies.6
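To illustrate the phase-change idea, the sketch below estimates how long a modest mass of stored phase-change material (PCM) could substitute for a cabin heater while a cold engine warms up. The PCM mass, latent heat, and heater power are illustrative assumptions (paraffin-type PCMs reviewed in sources such as reference 6 typically store on the order of 150–250 kJ/kg), not values taken from the cited study.

```python
# How long can latent heat from a phase-change material (PCM) run a cabin heater?
# All numbers below are illustrative assumptions, not design values.

pcm_mass_kg = 10.0           # assumed mass of PCM carried in the insulated container
latent_heat_kJ_per_kg = 200  # assumed latent heat, typical of paraffin-type PCMs
heater_power_kW = 4.0        # assumed cabin-heater demand in cold weather

stored_heat_kJ = pcm_mass_kg * latent_heat_kJ_per_kg
bridge_time_min = stored_heat_kJ / heater_power_kW / 60.0

print(f"Stored latent heat: {stored_heat_kJ:.0f} kJ")
print(f"Heating bridged for roughly {bridge_time_min:.0f} minutes before the engine must start")
```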
Diverting the Mississippi
In Chapter 8 a scheme was presented to utilize the vast resource of the Mississippi River by diverting part of its flow from the Gulf of Mexico to water-deficient regions of the southwestern U.S. and northern Mexico. The scheme would impound a fraction of the river discharge into a contained wetlands area at the mouth of the river, allow natural processes to purify the water, and use wind energy to pump the purified water as much as 2000 miles, as far as southern California, for municipal water supply and irrigation. Major sustainability aspects of this scheme are the following:
• Plant growth in the constructed wetlands impoundment would remove nutrients from the water thereby reducing their discharge into the Gulf of Mexico where they cause a eutrophied “dead zone.”
• Plant biomass harvested from the constructed wetlands could be used for synthetic fuels production.
• River sediment collected in the impoundment would aid in coastal restoration.
• Water transported to arid regions would reduce current excessive demand for water from the Colorado and Rio Grande rivers.
• Fish and freshwater shrimp grown in the waterway and impoundments along its course could provide a significant source of protein.
Three Day Per Week Mail Delivery
In the modern electronic age, communication by conventional mail is relatively less important than it was in the pre-computer age. Significant cost and energy savings could be achieved by reducing mail delivery to three days per week, Monday, Wednesday, Friday on some routes and Tuesday, Thursday, Saturday on alternate routes. Such a system would need to be implemented gradually to avoid layoffs of personnel.
Rail Connections to and within All Major Airports
Ground transportation systems to many major airports are cumbersome, slow, and wasteful of time and energy. Each large airport should have an integrated rail system linking terminals to the rail system serving the city and, within the airport area itself, to parking areas, car rental locations, hotels, and other facilities. Such a totally integrated system would save energy and time and significantly reduce pollution in the airport complex.
Fertilization of Algae Beds with Exhaust Carbon Dioxide
This proposal is to use combustion exhaust gas from power plant furnaces to enrich algae cultures with carbon dioxide. The efficiency with which algal media sequester carbon dioxide is enhanced by the ability of algae to make the media basic as they produce biomass ({CH2O}) by the following photosynthesis reaction:
$\ce{HCO3^-} + \ce{H2O} + h\nu \rightarrow \{\ce{CH2O}\} + \ce{O2} + \ce{OH^-}$
Such a system would further purify the exhaust gas by sequestering acid gases, particularly SO2, and particulate matter.
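A quick stoichiometric check of the reaction above shows how much flue-gas CO2 is tied up per unit of algal biomass, assuming that every carbon atom fixed as {CH2O} entered the culture as CO2 absorbed from the exhaust gas.

```python
# Stoichiometry of CO2 sequestration by algal biomass {CH2O}.
# Assumes every carbon atom fixed as biomass entered the medium as flue-gas CO2.

M_CO2 = 44.01    # g/mol
M_CH2O = 30.03   # g/mol (one "CH2O" unit of generic biomass)

co2_per_kg_biomass = M_CO2 / M_CH2O   # 1 mol CO2 fixed per mol of CH2O produced
print(f"~{co2_per_kg_biomass:.2f} kg of CO2 sequestered per kg of dry biomass produced")
```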
Nuclear Fuel Reprocessing
Burying spent nuclear fuel in a permanent disposal repository is a wasteful practice inconsistent with the principles of sustainability and green chemistry. A much better practice is to reprocess the spent fuel as explained in the discussion of nuclear energy in Section 15.10. The relatively short-lived fission products can be isolated for burial and will decay to harmless levels within a few centuries. The longer-lived transuranic elements, several of which are useful fissionable fuels can be treated by neutron bombardment, a process called transmutation. As an interim solution, relatively long term storage of spent fuel elements at reactor sites (up to 100 years, for example) enables decay of a very large fraction of the initial radioactivity making it easier to reprocess the fuels. Such long term storage also allows time for the development of improved recycling technologies and breeder reactors that can use spent fuels.
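The benefit of interim storage can be illustrated with simple first-order decay. The sketch below uses the well-known half-lives of two dominant medium-term fission products, cesium-137 (about 30.2 years) and strontium-90 (about 28.8 years), to estimate the fraction remaining after 100 years of on-site storage; it is a simplified calculation, not a full decay-heat analysis.

```python
# Fraction of a radionuclide remaining after t years of storage,
# using first-order (exponential) decay: N/N0 = 0.5 ** (t / half_life).

half_lives_years = {"Cs-137": 30.2, "Sr-90": 28.8}  # approximate literature values
storage_years = 100

for nuclide, t_half in half_lives_years.items():
    remaining = 0.5 ** (storage_years / t_half)
    print(f"{nuclide}: ~{remaining * 100:.0f}% remains after {storage_years} years "
          f"(~{(1 - remaining) * 100:.0f}% has decayed)")
```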
Extraction of Heat from Wastewater
Wastewater is a significant source of heat that can be extracted with a heat pump for heating homes and commercial buildings. The concept involves collecting filtered sewage in a storage tank and extracting heat from the water with a heat pump. Subsequently, the water, cooled by several degrees, can be discharged to the sewer. The same system can be used for cooling, in which case heat can be pumped into the water before it is discharged.
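The recoverable heat can be estimated from the sensible heat of the stored wastewater and the heat pump's coefficient of performance (COP). The tank volume, temperature drop, and COP in the sketch below are illustrative assumptions; only the specific heat of water is a physical constant.

```python
# Heat recoverable from a tank of filtered wastewater via a heat pump.
# Volume, temperature drop, and COP are illustrative assumptions.

volume_m3 = 1.0        # assumed tank volume (1 m^3 is about 1000 kg of water)
delta_T = 4.0          # assumed cooling of the wastewater, in kelvin
cp = 4.18              # kJ/(kg*K), specific heat of water
cop_heating = 4.0      # assumed heat-pump coefficient of performance

mass_kg = volume_m3 * 1000.0
heat_extracted_kJ = mass_kg * cp * delta_T
# A heat pump delivers the extracted heat plus the electrical work it consumes:
heat_delivered_kJ = heat_extracted_kJ * cop_heating / (cop_heating - 1)

print(f"Heat extracted from wastewater: {heat_extracted_kJ / 3600:.1f} kWh")
print(f"Heat delivered to the building: {heat_delivered_kJ / 3600:.1f} kWh")
```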
17.9: The Eighth Commandment
THE PRODUCTION AND USE OF TOXIC, DANGEROUS, PERSISTENT SUBSTANCES SHOULD BE MINIMIZED AND SUCH SUBSTANCES SHOULD NOT BE RELEASED TO THE ENVIRONMENT; ANY WASTES DISPOSED TO DISPOSAL SITES SHOULD BE CONVERTED TO NONHAZARDOUS FORM
The most fundamental tenet of green chemistry is to avoid the production and use of toxic, dangerous, persistent substances and to prevent their release to the environment. With the caveat that it is not always possible to totally avoid such substances (see the Ninth Commandment), significant progress has been made in this aspect of green chemistry. Much research is ongoing in the field of chemical synthesis to minimize toxic and dangerous substances. In cases where such substances must be used because no substitutes are available, it is often possible to make minimum amounts of the materials on demand so that large stocks of dangerous materials need not be maintained.
Many of the environmental problems of recent decades have been the result of improperly disposed hazardous wastes. Current practice calls for placing hazardous waste materials in secure chemical landfills. There are two problems with this approach. One is that, without inordinate expenditures, landfills are not truly “secure” and the second is that, unlike radioactive materials that do eventually decay to nonradioactive substances, some refractory chemical wastes never truly degrade to nonhazardous substances. Part of the solution is to install monitoring facilities around hazardous waste disposal facilities and watch for leakage and emissions. But problems may show up hundreds of years later, not a good legacy to leave to future generations.
Therefore, any wastes that are disposed should first be converted to nonhazardous forms. This means destruction of organics and conversion of any hazardous elements to forms that will not leach into water or evaporate. A good approach toward this goal is to co-fire hazardous wastes with fuel in cement kilns; the organics are destroyed and the alkaline cement sequesters acid gas emissions and heavy metals. Ideally, hazardous elements, such as lead, can be reclaimed and recycled for useful purposes. Conversion of hazardous wastes to nonhazardous forms may require expenditure of large amounts of energy (see the Fourth Commandment, above).
Literature Cited
1. Ehrlich, Paul R., The Population Bomb, Ballantine Books, New York, 1968.
2. Simon, Julian, Hoodwinking the Nation, Transaction Publishers, Somerset, NJ, 1999.
3. Diamond, Jared, Collapse: How Societies Choose to Fail or Succeed, Viking, New York, 2005.
4. Kolstad, Charles D., Environmental Economics, Oxford University Press, Oxford, UK, 2010.
5. Davy, M., R. L. Evans, and A. Mezo, “The Ultra Lean Burn Partially Stratified Charge Natural Gas Engine,” presented at the 9th International Conference on Engines and Vehicles, Naples, Italy, September 2009, see http://papers.sae.org/2009-24-0115.
6. Kenisarin, Murat and Khamid Mahkamov, “Solar Energy Storage Using Phase Change Materials,” Renewable and Sustainable Energy Reviews, 11, 1913-1965 (2007).
Questions and Problems
Access to and use of the internet is assumed in answering all questions including general information, statistics, constants, and mathematical formulas required to solve problems. These questions are designed to promote inquiry and thought rather than just finding material in the text. So in some cases there may be several “right” answers. Therefore, if your answer reflects intellectual effort and a search for information from available sources, your answer can be considered to be “right.”
1. The U.S. Geological Survey posts prices of metals corrected for inflation from 1959 through 1998 at http://minerals.usgs.gov/minerals/.../metal_prices/. Other internet sources give current prices of metals. Using these resources, look up past and current prices of copper, chromium, nickel, tin and tungsten, the metals in the historic Ehrlich/Simon wager, to see how the wager would have turned out in recent years. Explain your observations and suggest future trends.
2. At the beginning of the chapter were stated six “great challenges to sustainability.” Using a diagram, if appropriate, suggest how these challenges are very much interrelated.
3. Much of Earth’s population is dependent upon seafood for its protein. Suggest how provision of adequate food by this route might be affected by the challenges of adequate energy supply as related to contamination of Earth’s environment with toxic and persistent substances.
4. Suggest scenarios by which the people of your country might be adversely affected and its population might even decline because of declines in key environmental support systems. Suggest how these problems might be avoided in the future by actions taken now.
5. Look up telecommuting on the internet and explain how it might fit with the ten commandments of sustainability.
6. In a quote attributed to the British sociologist Martin Albrow, sociospheres consist of “distinct patterns of social activities belonging to networks of social relations of very different intensity, spanning widely different territorial extents, from a few to many thousands of miles.” Suggest how sociospheres relate to the anthrosphere. What is your sociosphere? How might sociospheres be important in sustainability?
7. It has been estimated that U.S. coal resources can provide for up to three centuries of U.S. energy needs and could provide energy for much of the world. Is this a sustainable source of energy? Explain.
8. Explain how unsustainability may be the result of depletion of resources, environmental pollution, or a combination. Give an example in which environmental pollution reduces resource availability.
9. From news events of the last five years, cite evidence that global climate change is in fact occurring. Cite evidence to the contrary.
10. Currently, population growth tends to occur in coastal areas. Suggest how global warming might reverse that trend.
11. Prior to European settlement, vast areas of the U.S. Great Plains supported huge herds of bison that provided the base for a viable Native American population. Given the erratic climate of that region, depleted groundwater for irrigation, and the aversion of many people to dwell on the “lone prairie,” some authorities have suggested that these areas revert to a bison-based system, sometimes called the “buffalo commons.” Suggest how such a system might be viable and sustainable and give arguments against it.
12. Is the term “secure chemical landfill” an oxymoron? What are the alternatives to such landfills?
13. Among the ideas for sustainability discussed in Section 17.12 might be added a suggestion for greatly increasing the numbers of telecommuters. What are telecommuters? How might increasing their numbers enhance sustainability?
14. Some countries in Asia have constructed new airports with exemplary rail connections. Cite and discuss several examples.
Supplementary References
Assadourian, Erik and Muhammed Yunus, State of the World 2010: Transforming Cultures from Consumerism to Sustainability, W.W. Norton and Company, New York, 2010.
Ayres, Robert U., and Benjamin Warr, The Economic Growth Engine: How Energy and Work Drive Material Prosperity, Edward Elgar, Northampton, MA, 2009.
Brand, Stewart, Whole Earth Discipline: Why Dense Cities, Nuclear Power, Transgenic Crops, Restored Wildlands, and Geoengineering are Necessary, Penguin, New York, 2010.
Dresner, Simon, The Principles of Sustainability, 2nd ed., Earthscan Publications, London, 2008.
Hawken, Paul, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution, Back Bay Books, Boston, 2008.
Lankey, Rebecca L., and Paul T. Anastas, Advancing Sustainability through Green Chemistry and Engineering, American Chemical Society, Washington, DC, 2002.
Owen, David, Green Metropolis: Why Living Smaller, Living Closer, and Driving Less are the Keys to Sustainability, Riverhead Hardcover, New York, 2009.
Vallero, Daniel A., Sustainable Design: The Science of Sustainability and Green Engineering, Wiley, Hoboken, NJ, 2008.
Introduction
The rise of the industrial complex and society's strong need to increase its quality of life, security, wealth, and future productivity have helped to elevate the discipline of chemistry and its work to a very lofty position of prominence. Our progress toward achieving a Utopia of human intellectual growth, freedom of choices, equality within classes, and wealth acquisition has been supplemented and encouraged by the fruits of chemical research & development efforts since the dawn of the last century. For example, we have witnessed significant strides in the following facets of our lives:
• Water: the introduction of reagents such as chlorine and fluoride to reduce or eliminate microbial contamination and to improve dental health, respectively;
• Food: we have seen the ability to use chlorofluorocarbons (CFCs, “Freon”) to help refrigerate foods for much longer “shelf life”, packaging that has specific mechanical and antimicrobial properties, and preservatives/food processing aids for eliminating spoilage;
• Clothing: synthetic fibers such as rayon, nylon, polyester, and Kevlar have been indispensable to not only normal wear, but to military, geologic, agricultural, and packaging applications;
• Safety: materials chemistry has spurred the manufacture of lightweight, but tough materials such as helmets, resins, plastics, etc., that have helped save lives in a number of military and industrial sectors.
The chemical industry has played a vital role in the emergence of high quality life within our human societies in these latter areas as well as many others. In fact, the industry is among the top ten industrial sectors in terms of gross output and sales. One of the most adverse realities and perceptions of the industry is its impact on the environment. Although it is a very safe industry as a whole, well-publicized disasters have contributed to a poor public perception. For example, issues such as eutrophication, persistent organic pollutants, BOD (biochemical oxygen demand), and the famous burning Cuyahoga River are typical of the calamities that damage the reputation and importance of our industry.
In fact, the morning of June 22, 1969 was witness to a fire from the burning of oil and debris that had collected on the surface of the Cuyahoga River (Cleveland). The event roused a media storm, captured national attention, and was featured in a commentary on the nation’s environmental problems (Time magazine, Aug. 1). It was a clarion call to how calamitous the nation’s environmental problems had become. The then EPA Administrator Lisa Jackson commented years later (2011) that the fire was evidence of “the almost unimaginable health and environmental threats” from water pollution of the time. Clearly, as one environmentalist stated, “when rivers are on fire, you know things are bad.” The image was seared into the nation’s emerging environmental consciousness and fueled a demand for greater regulation. In 1972, Congress passed the federal Clean Water Act (CWA). Today, the nation’s waters are far cleaner, and many credit the CWA with preventing other rivers from befalling a similar fate.
The reality is that the fire was a symbol of how bad river conditions in the US had once been. That fire was not the first time an industrial river had caught on fire, but it was the last. For example, the late 19th and early 20th century saw many river fires. At least 13 occurred on the Cuyahoga, and there were others on rivers in Detroit, Baltimore, Philadelphia, and elsewhere. Although industries, chemists, and governments never intentionally wish to cause or contribute to harm, the law espoused by “Murphy” holds. The contemporary form of the law can be traced back to 1952 as an epigraph for a mountaineering book by Sack, who described it as an “ancient mountaineering adage”:
Anything that can possibly go wrong, does (Sack, 1952).
Thus, the challenge for contemporary chemists and allied workers in view of the law is to ensure that any and all reactions, processes, and designs keep this general philosophy in mind. Our goal in the twenty-first century is to mitigate potential disasters that can occur from our chemistries and engineering.
Sustainable Development
The concept of “sustainable development” was articulated from a UN Commission on Environment and Development in 1987 (Brundtland Commission). It is simply stated:
‘...meeting the needs of the present without compromising the ability of future generations to meet their own needs.’ Since the Brundtland Commission, a number of governments, NGOs, and societies have considered deeply what “sustainable” really means. Several of the key drivers heating up this discussion include:
1. What is an acceptable rate of depletion of fossil fuels? Is there one?
2. Is there an acceptable level of pollution to release into the atmosphere and water?
Obviously, waste is a given in any industrial process, but the primary question is not its generation, but how it is handled. If we produce a waste stream, can it be recovered, recycled, or reused? If carbon dioxide is the waste stream, can we conclude that nature will sequester it or will it contribute to global warming (climate change)? These are questions that warrant intense analysis and follow-up. The overall fate of the planet (humanity, ecosystems, and our way of life) hangs in the balance. Methods to help ensure the continuation of our planet and its resources include:
• Not allowing for the accretion of toxics (e.g., heavy metals) from the earth's crust and other beds;
• Not continuously creating non-degradable/perishable compounds (e.g., CFCs, benzodioxocins) that can cause significant damage to the ozone layer and aquatic life, respectively;
• Ensuring that the natural processes in place on earth are not disrupted (e.g., ravaging rainforests, polluting watersheds);
• Not depleting or hoarding natural resources (e.g., water).
We as an educated community (students, teachers, government workers, industry workers, etc.) recognize and appreciate that the Earth is equipped to deal with hiccups and disturbances in its processes, but continuously pushing such boundaries of recovery will only lead to disasters. Indeed, a paradigm for ensuring that our chemical processes do not exceed the capacity of the Earth to engage in buffering and recovery is the following:
To promote innovative chemical technologies that reduce or eliminate the use or generation of hazardous substances in the design, manufacture, and use of chemical products.
The above statement defines the concept of “Green Chemistry.” The subject has now become a very well accepted and welcomed part of many chemistry curricula and industry philosophies. In fact, the concept of a triple bottom line, i.e., financial, social, and environmental, owes its existence to green chemistry. Green chemistry has become an incredibly indispensable field within the panoply of chemistry subject matter. Its content and mission, however, are unique in comparison to its sister courses because it embodies a social and environmental focus.
Agricultural Support
The growing of crops in different farming regions to feed the masses owes its potential greatly to harnessing the power of chemicals such as herbicides, pesticides, fungicides, etc. Although in general we tend to think of the benefits of pesticides, there are a number of issues associated with their use that go well beyond their ability to promote the cultivation and availability of food. For example, DDT (Figure \(1\)) is a chemical that has seen much use in the US. Over the period of 1950 to 1980, it was used in agriculture at the rate of more than 40,000 tons each year worldwide and it has been estimated that a total of 1.8 million tons have been produced around the world since the 1940s. In the United States, it was manufactured by numerous companies including Monsanto, Ciba, Montrose Chemical Company, and Pennwalt. More than 600,000 tons (1.35 billion pounds) were applied in the US before it was banned in 1972.
Nature’s Best
The idea that nature produces chemicals that are “green” is a fallacy or a myth. The concept of green deserves some clarification at this point. What is green and what isn’t is actually a matter of nature, quantity, human safety, long term effects, and acute toxicity. Green is generally a term associated with a sustainable (renewable) product or non-toxic process whose employment in society has no acute toxicity and a generally favorable life cycle analysis. There are a number of documented chemicals in nature that are extremely toxic even at small doses, thus invalidating the idea that nature is “green”. For example, aflatoxin B1, shown below in Figure \(2\), is a toxin produced by Aspergillus flavus and A. parasiticus that is one of the most potent carcinogens known. It is a common contaminant of a variety of foods including peanuts, cottonseed meal, corn, and other grains as well as animal feeds. According to the Food and Agriculture Organization, the worldwide maximum levels of aflatoxin B1 are in the range of 1–20 µg/kg in food, and 5–50 µg/kg in dietary cattle feed.
Shown in Figure \(3\) below is a derivative of lysergic acid. Lysergic acid, also known as D-lysergic acid and (+)-lysergic acid, is a precursor to a diverse array of ergoline alkaloids produced by the ergot fungus and found in the seeds of Turbina corymbosa (ololiuhqui), Argyreia nervosa (Hawaiian Baby Woodrose), and Ipomoea tricolor (morning glories, tlitliltzin). Amides of lysergic acid, lysergamides (see Fig. 3), are widely used as pharmaceuticals and as psychedelic drugs (LSD). Lysergic acid received its name from the lysis of various ergot alkaloids.
Figure \(3\): Shown above is the diethylamide version of lysergic acid.
https://www.wikiwand.com/en/Lysergic_acid
Again, as already stipulated, nature’s best does not necessarily imply that her products are “green”. A product found in nature (e.g., cocaine) can certainly pose serious, if not lethal, consequences when consumed by humans. We all know that sodium chloride is ubiquitous within nature. It is a common salt that manifests a cubic crystal lattice structure and readily dissolves in water, where it is mostly found. Humans consume it in great quantities as a flavorant, seasoning, and preservative. Nevertheless, its mass consumption, especially in the US, has left more than 65 million people afflicted with high blood pressure, while numerous others suffer from other maladies associated with its overconsumption.
Accidents & Waste Minimization
Waste is a natural consequence of living processes. Within the schema of living cell frameworks, the ability to generate waste is considered a clear indication of a “living” system. Waste is a necessary by-product of any work function, even if it is heat, light, or some other form of energy, by virtue of the intrinsic inefficiencies associated with the utilization of “fuel” or raw materials. For example, even a newborn that is fed on breast milk, an ideal natural food source for babies, does not simply produce a non-material waste product such as dissipated heat or energy. The problems associated with waste, including the inherent inefficiencies of resource use, are how it builds up and affects the quality of the environment. In general, we cannot prevent waste because everything is subject to entropy.
The crux of the problem associated with waste boils down to safety. For example, rust can cause material fatigue or loss of integrity that can compromise functionality. Additionally, chemical processes that produce waste need to adhere to the three general “R”s of Green Chemistry:
• Reduce
• Reuse
• Recycle
The LeBlanc Case
Industrial factories using the Leblanc process were very damaging to the local environment. The Leblanc process was an early industrial method for producing soda ash (sodium carbonate), named after its inventor. It required two stages: sodium sulfate (salt cake) produced from sodium chloride (salt, we just discussed this!) followed by reaction with coal and calcium carbonate to produce sodium carbonate. It eventually became obsolete after development of the Solvay process. The process of generating salt cake involved reacting salt with sulfuric acid, which released hydrochloric acid gas, an acid that was industrially useless in the early 19th century and therefore vented into the atmosphere. Also, an insoluble, smelly solid waste (CaS, galligu) was produced. The inefficiency of the process was horrific! For each ton of soda ash, the process produced ~1.5 tons of hydrogen chloride and more than 1 ton of calcium sulfide, the useless waste product! Galligu had no economic value, so it was piled in heaps and spread on fields where it weathered to release hydrogen sulfide (what a smell!).
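The waste figures quoted above translate directly into the modern E-factor metric (kilograms of waste per kilogram of product), a calculation revisited in the review questions for this chapter. The sketch below counts only the two named wastes, hydrogen chloride and calcium sulfide, and ignores the CO2 released from the carbonate, so it understates the true wastefulness.

```python
# E-factor of the Leblanc process, using the figures quoted in the text:
# per ton of soda ash, ~1.5 tons of HCl and ~1 ton of CaS (galligu) were produced.
# CO2 released from the carbonate is ignored, so this is a lower bound.

product_tons = 1.0
waste_tons = {"HCl": 1.5, "CaS (galligu)": 1.0}

e_factor = sum(waste_tons.values()) / product_tons
mass_efficiency = product_tons / (product_tons + sum(waste_tons.values()))

print(f"E-factor: {e_factor:.1f} kg waste per kg soda ash")
print(f"Mass efficiency: {mass_efficiency * 100:.0f}% of total output mass is product")
```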
Leblanc soda works then became targets of lawsuits and legislation. A lawsuit from 1839 alleged that, “the gas from these manufactories is of such a deleterious nature as to blight everything within its influence and is alike baneful to health and property. The herbage of the fields in their vicinity is scorched, the gardens neither yield fruit nor vegetables; many flourishing trees have lately become rotten naked sticks. Cattle and poultry droop and pine away. It tarnishes the furniture in our houses, and when we are exposed to it, which is of frequent occurrence, we are afflicted with coughs and pains in the head ... all of which we attribute to the Alkali works.”
Therefore, in 1863, the British Parliament passed the first Alkali Act, a precursor to the first modern air pollution legislation. This Act dictated that no more than 5% of the hydrochloric acid produced by alkali plants could be vented. To comply, soda works passed it up a tower packed with charcoal where it was absorbed by water flowing in the other direction. Unfortunately, the chemical works usually dumped the resulting solution into nearby bodies of water where it promptly ended up killing fish and other aquatic life.
1.04: Conclusions and Review
The greatest blight affecting modern times is the apathy associated with ignorance. We tend to adhere strongly to the wrongful belief that all manufacturing and industrial processes are failsafe in protecting humans and the environment. The truth is that Murphy’s Law more often than not is RIGHT. Thus, the need for pre-emptive measures as embodied by the principles of green chemistry is of paramount significance. The principles as listed below will constitute a future chapter in this book:
Prevention
It is better to prevent waste than to treat or clean up waste after it has been created.
Atom Economy
Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product.
Less Hazardous Chemical Syntheses
Wherever practicable, synthetic methods should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
Designing Safer Chemicals
Chemical products should be designed to affect their desired function while minimizing their toxicity.
Safer Solvents and Auxiliaries
The use of auxiliary substances (e.g., solvents, separation agents, etc.) should be made unnecessary wherever possible and innocuous when used.
Design for Energy Efficiency
Energy requirements of chemical processes should be recognized for their environmental and economic impacts and should be minimized. If possible, synthetic methods should be conducted at ambient temperature and pressure.
Use of Renewable Feedstocks
A raw material or feedstock should be renewable rather than depleting whenever technically and economically practicable.
Reduce Derivatives
Unnecessary derivatization (use of blocking groups, protection/ deprotection, temporary modification of physical/chemical processes) should be minimized or avoided if possible, because such steps require additional reagents and can generate waste.
Catalysis
Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
Design for Degradation
Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment.
Real-time analysis for Pollution Prevention
Analytical methodologies need to be further developed to allow for real-time, in- process monitoring and control prior to the formation of hazardous substances.
Inherently Safer Chemistry for Accident Prevention
Substances and the form of a substance used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions, and fires.
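Returning to the second principle above, atom economy can be computed directly from a balanced equation as the molar mass of the desired product divided by the combined molar masses of all reactants. As a sketch, the calculation below applies this to the salt-cake step of the Leblanc process discussed earlier (2 NaCl + H2SO4 → Na2SO4 + 2 HCl), taking the soda-ash precursor Na2SO4 as the desired product; the choice of reaction and the helper function are illustrative.

```python
# Atom economy = (mass of desired product) / (total mass of all reactants), per the
# balanced equation. Example: the salt-cake step of the Leblanc process,
# 2 NaCl + H2SO4 -> Na2SO4 + 2 HCl.

molar_mass = {"NaCl": 58.44, "H2SO4": 98.08, "Na2SO4": 142.04, "HCl": 36.46}  # g/mol

def atom_economy(product, product_coeff, reactants):
    """reactants: dict mapping species -> stoichiometric coefficient."""
    reactant_mass = sum(coeff * molar_mass[sp] for sp, coeff in reactants.items())
    return product_coeff * molar_mass[product] / reactant_mass

ae = atom_economy("Na2SO4", 1, {"NaCl": 2, "H2SO4": 1})
print(f"Atom economy for Na2SO4: {ae * 100:.0f}%")  # roughly 66%; the balance leaves as HCl
```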
Review Questions
\(1\) Using lysergic acid as a starting point, indicate the general type of chemistry (you can describe in your own words what is happening) that would need to occur for the epimerization between lysergic acid and L-lysergic acid, shown below, to take place. https://www.wikiwand.com/en/Lysergic_acid
\(2\). How does too much salt in a human’s diet contribute to high blood pressure?
a. What are two known chemical mechanisms (reactions) that enzymes can pursue for bonding to 4-hydroxy-2-nonenal?
b. Come up with a way to quantify the inefficiency of a reaction; that is, derive a quantitative measure of the wastefulness of a reaction using the LeBlanc numerical process information contained in this chapter.
• 2.1: Introduction to Life Cycles
Life Cycle Assessment (LCA) is a comprehensive life cycle approach that quantifies ecological and human health impacts of a product or system over its complete life cycle. It uses credible scientific methods to model steady-state, global environmental and human health impacts. It also helps decision-makers understand the scale of many environmental and human health impacts of competing products, services, policies or actions.
• 2.2: LCA/LCIA Concepts
Assumptions inherent in an LCA study are apt to change the results and conclusions derived from analysis. In addition, many different types of studies require various levels data collection and analysis. The goal and scope of a LCA defines its intent, targeted audience, and use. The intended use informs further decisions for scope, functional unit of comparison, and data collection.
• 2.3: Conclusions and Review
There are many tools to assist life cycle assessments. There are software and data packages designed for performing LCAs. No matter the form of the software, the use of some sort of LCA software and data management system is needed in LCAs. The life cycle inventory step of an LCA often requires a large data set listing hundreds of emissions to the environment.
02: Life-Cycle Analysis
Green chemistry, in addition to being a science, is also a philosophy and nearly a religion. Attendance at American Chemical Society Green Chemistry & Engineering Conferences will instill such an ideal into any attendee because of the universal appeal and possibilities in this novel approach to radicalizing the business of doing science and engineering. Life Cycle Assessment (LCA) is a comprehensive life cycle approach that quantifies ecological and human health impacts of a product or system over its complete life cycle. It uses credible scientific methods to model steady-state, global environmental and human health impacts. It also helps decision-makers understand the scale of many environmental and human health impacts of competing products, services, policies or actions.
For more information, please see https://www.youtube.com/watch?v=NQTW7jjXVmE (LCA: Intro to Life Cycle Assessment, 2013).
Triple bottom line (economic, social, and environmental)
Triple Bottom Line accounting enables enterprises to support sustainability analysis in their operations, products and services. LCA contributes to the Triple Bottom Line reporting by quantifying the ecological and human health performance of competing products and services (Figure 2-1). Adding the social and economic performance reporting of a product or service to the LCA results of the product or service is one way to deliver Triple Bottom Line reporting.
LCA for Decision Making
Who makes decisions
• Company product managers or planners
• Company procurement and purchasing
• Industrial sector consortia (example: aluminum manufacturers)
• Regional or national policy makers
• Consumers, customers and product users
Primary drivers and expectations for LCA:
• Learning about the environmental performance of products and services
• Minimizing production and regulatory costs
• Minimizing environmental and human health damage
• Understanding trade-offs between multiple impact categories and product phases
• Supporting equitable economic distribution and profitable operations
Types of decision situations
• strategic planning and capital investments (green building, waste management)
• eco-design, product development
• operational management (green procurement)
• communication and marketing (eco-labeling, product information)
Eco-labels:
• Type 1: Third-party certified multi-criteria environmental labeling. Example: Forest Stewardship Council label for wood products
• Type 2: Environmental self-declaration claims. Example: Single issue claim such as the "bio-degradable" label
• Type 3: Independently verified label with preset quantified indices. Example: Numerical water consumption rating for a dish washer
Eco-labels that require LCA:
Type 1) Third-party certified multi-criteria environmental labeling
Type 3) Independently verified label with preset quantified indices
Life Cycle Assessment (LCA) enables the creation of Type 1 and Type 3 eco-labels. These eco-labels can be powerful tools in obtaining larger shares of a specific market sector. Life cycle assessment (LCA) is a standardized programmatic tool to determine the environmental impacts of products or services. It can be described by a four-part framework as outlined by the ISO 14044 standard.
Life cycle assessment Framework
1. Goal and scope definition
2. Life cycle inventory
3. Life cycle impact assessment
4. Interpretation
This integrated framework was inspired by earlier forms of life-cycle thinking from life cycle financial analysis. Examining a product from origination to use and disposal provides a more holistic analysis to identify where environmental impacts originate and guide efforts in reducing impacts.
The ISO standards (http://www.iso.org/iso/home/standards.htm) provide guidance on framework structure, data reuse requirements, study assumptions, and methods. By using more standard LCA methodologies, studies are more comparable and of greater scientific rigor. A standardized method allows LCA practitioners to manage complex datasets, provide comparisons among products, and allow benchmarking. In the absence of a standardized method, the results of LCA studies are even more variable depending on study assumptions and methods. The ISO standards help reduce the influence of the practitioner on study results.
Concepts
Goal, Scope, and Definition
Assumptions inherent in an LCA study are apt to change the results and conclusions derived from analysis. In addition, many different types of studies require various levels data collection and analysis. The goal and scope of a LCA defines its intent, targeted audience, and use. The intended use informs further decisions for scope, functional unit of comparison, and data collection. For example, if a LCA study is used internally, a full review panel of LCA experts is not required; however, when providing public environmental claims about a competing product, a review is required.
Inventory analysis
A life cycle inventory (LCI) is the most laborious step of a LCA: data is collected and organized. It often involves contacting companies, accumulating literature sources, and building models using life cycle assessment software. Materials flows, types of materials, product lifetime, and product energy requirements are collected in the LCI phase.
Life Cycle Impact Assessment
The life cycle impact assessment (LCIA) part of the analysis process collects life cycle inventory data and delivers environmental impact values. This process greatly reduces the complexity of the data set from hundreds of inputs to 10 or fewer impact categories for decision-making. There are many different methods for LCIA based on location, goals, and scope.
Interpretation
The interpretation step draws on what was found in the other steps to generate new information. It is not simply the last step but is iterative: as it is carried out, the study assumptions, goals, scope, and methods are refined to suit the needs of the study.
Life Cycle Analysis: Goal, Scope and Boundaries
Goal
The first step is defining the goal, which states the aim of the study and what it encompasses. There are two types of LCA objectives: (1) descriptive and (2) change-oriented. The descriptive types look at broader aspects of an issue, e.g., how much of the world’s carbon dioxide emissions are derived from commuters (light duty vehicles). These broader environmental questions fall within the domain of descriptive LCAs. The second type of LCA is change-oriented, in which two options for fulfilling a function are compared. Typical examples of change-oriented LCAs are paper vs. plastic, flying vs. driving, and gas vs. electric heating. These types of studies can guide the choice of methods to reduce environmental impacts. The intended audience is another part of the goal and scope. The audience may include interest groups such as policy makers, company marketing groups, or product development teams. Additionally, interest groups should be identified. These include companies, funding sources, target audiences, and expert reviewers. It is noted that the intended use of the LCA may be different from the end use because the information may be relevant to other decisions and analyses beyond the original intent.
One specific LCA to compare two products is a “comparative assertion disclosed to the public”. In this type of study, “environmental claims regarding the superiority or equivalence of one product vs a competing product which performs the same function” are communicated. These types of studies must follow ISO 14044 standards with the nine steps for a “comparative assertion”.
Scope
The scope definition serves the purpose of communicating to the audience what is included and what is excluded. Depending on the goal, there are several types of scopes including cradle-to-gate, cradle-to-grave, and gate-to-gate. There are other words commonly used to describe these scopes:
• Cradle-to-grave: includes all flows and impacts from obtaining raw material to disposal and reuse
• Cradle-to-gate: includes all flows and impacts from raw material to production, but excludes product use and end of life.
• Gate-to-gate: only includes flows from production or material processing steps of a product life.
The scope must be carefully selected in consideration of the potential implications of not including product stages or phases in the scope of the work. For example, a product may have lower product emissions but a shorter lifetime than an alternative product, a trade-off that would not be communicated in a process stage diagram such as the one seen in Figure \(1\). These types of diagrams list the major unit steps considered and clearly show what is not included.
Temporal boundaries are also established in the scope. Assumptions relating to time can have a large influence on the results. A study timeframe should be picked that will best capture the impacts of the product or processes. A 100-year window is a common temporal boundary, for example, in global warming assessments. In a 100-year temporal window, impacts occurring after 100 years are not part of the overall analysis.
Other aspects to be included in the scope are technology and geographical regions. Many studies are spatially dependent, so LCA results are not broadly applicable to other regions. Products or services from older technologies often have different impacts than current technologies. Thus, it is important to communicate the type and stage of the technology. In addition, allocation procedures and impact assessment methods should be reported.
Functional Unit
A functional unit is the primary measure of a product or service. ISO states that “the functional unit defines the quantification of the identified functions (performance characteristics) of the product.” The primary purpose of a functional unit is to provide a reference to which the inputs and outputs are related. This reference is necessary to ensure comparability of LCA results. The functional unit can be a service, mass of material, or an amount of energy. Selecting appropriate functional units is critical to creating an unbiased analysis. For example, when comparing trains to cars for transportation, the comparison may suffer from the inability to correlate energy inputs and outputs. The real purpose of the train would be to deliver a larger number of people to a specific centralized location. For this example, a better functional unit may be the impacts of delivering a specific number of people over a specified distance. The results will then be normalized to distance for more reasonable correlations and assessments.
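Normalizing to a functional unit such as person-kilometers makes otherwise mismatched options comparable. The sketch below uses entirely hypothetical emission, occupancy, and distance values; it only illustrates the arithmetic, not real transport data.

```python
# Normalizing impacts to a shared functional unit (person-kilometers).
# All numbers are hypothetical placeholders, used only to show the calculation.

options = {
    # total kg CO2e for the trip, passengers carried, trip distance in km
    "train": {"total_kg_co2e": 5000.0, "passengers": 300, "distance_km": 400},
    "car":   {"total_kg_co2e": 60.0,   "passengers": 2,   "distance_km": 400},
}

for name, d in options.items():
    person_km = d["passengers"] * d["distance_km"]
    per_functional_unit = d["total_kg_co2e"] / person_km
    print(f"{name}: {per_functional_unit * 1000:.0f} g CO2e per person-km")
```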
Cut-off Criteria
Data collection for an LCA is the most time-intensive and laborious step. Cut-off criteria are used to expedite the process. Cut-off criteria define a level of product content or other parameter below which the study will not consider a flow. One example: material contents less than 1% of the total product mass are not considered. This allows the LCA practitioner to focus on data from the main flows of the system while systematically eliminating flows which may not influence the results.
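Applied in software, a mass-based cut-off is a simple filter: any material contributing less than the chosen fraction of total product mass is set aside from further data collection. The material names, masses, and threshold below are hypothetical.

```python
# Applying a 1% mass-based cut-off criterion to a bill of materials.
# Material names and masses are hypothetical.

bill_of_materials_kg = {
    "steel": 12.0,
    "polypropylene": 3.0,
    "copper wire": 0.8,
    "adhesive": 0.05,   # likely below the cut-off
}
cutoff_fraction = 0.01

total = sum(bill_of_materials_kg.values())
kept = {m: kg for m, kg in bill_of_materials_kg.items() if kg / total >= cutoff_fraction}
excluded = sorted(set(bill_of_materials_kg) - set(kept))

print(f"Total product mass: {total:.2f} kg")
print("Flows kept for data collection:", ", ".join(sorted(kept)))
print("Flows excluded by the 1% cut-off:", ", ".join(excluded) or "none")
```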
Life Cycle Inventory
In addition to data collection, the life cycle inventory (LCI) step is a very laborious aspect of life cycle assessment. The data collected for the product, production process, and product life cycle are used in the impact assessment to determine the environmental impacts. Collecting consistent, transparent, and accurate LCI data is critical to the success of an overall LCA.
Primary and Secondary Data
Example data collected:
• Raw material use
• Energy use
• Transportation distances
• Chemical use
• Waste treatment information
• Process yields
• Life time
• Water use
• Product and co-product flows
• Other flows in or out of the system that are within the defined cut-off criteria
Tracking the material flows into and out of the defined system is the first step of LCI. After the materials flows have been determined through interviews, literature searches, and measurement, LCA software can be used to track the material process’s elementary flows to and from the environment. Elementary flows are flows that originate in the environment and are mined or retrieved for use in a process, or flows that are released from processes to the environment and are not used by other processes. These elementary flows are the actual materials used and materials released to the environment as a result of the studied product system. In Figure \(2\), the two types of LCI data can be seen. On the top half of the figure, process flows such as products, services and other goods are listed. The lower half lists elementary flows such as chemicals released to soil or air.
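Conceptually, the software's job in the LCI step is bookkeeping: each unit process carries its own elementary flows, and the totals are scaled by how much of each process the product system requires. The toy data below is hypothetical and far simpler than a real dataset such as Ecoinvent; it only shows the aggregation logic.

```python
# Toy life cycle inventory: scale each unit process by the amount required
# and sum the elementary flows (emissions to air, water, soil) across the system.
# Process data and demands are hypothetical.

from collections import defaultdict

unit_processes = {
    # elementary flows per unit of the process (kg per kWh, kg per kg)
    "electricity_kWh": {"CO2 to air": 0.45, "SO2 to air": 0.001},
    "steel_kg":        {"CO2 to air": 1.9,  "particulates to air": 0.002},
}

# Amounts of each unit process needed for one functional unit of product
demands = {"electricity_kWh": 12.0, "steel_kg": 3.0}

inventory = defaultdict(float)
for process, amount in demands.items():
    for flow, per_unit in unit_processes[process].items():
        inventory[flow] += amount * per_unit

for flow, total in sorted(inventory.items()):
    print(f"{flow}: {total:.3f} kg per functional unit")
```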
Life Cycle Impact Assessment
Life cycle impact assessment (LCIA) is among the last steps of LCA. The purpose of a LCIA “is to provide additional information to assess life cycle inventory (LCI) results and help users better understand the environmental significance of natural resource use and environmental releases”. The LCIA helps provide significance and results for easier decision making; however, it is important to understand it does not directly measure the impacts of chemical releases to the environment as an environmental risk assessment does. The LCIA, the third step of an LCA, follows sequentially after the LCI, using the many flows to and from the environment developed in the LCI. These LCI flows, without an impact assessment step, are not easily interpreted, and understanding the significance of emissions is impossible. The LCIA is different from a risk assessment measuring absolute values of environmental impacts in that the LCIA helps determine the significance of emissions and impacts in relation to the study scope. The absolute value of the impacts cannot be determined by the LCIA due to (Margni and Curran 2012):
• The relative expression of potential environmental impacts to a reference unit
• The integration of environmental data over space and time
• The inherent uncertainty in modeling environmental impact
• The fact that some possible environmental impacts occur in the future
Even though the LCIA has limitations, it is useful in determining what impacts matter, identifying what unit processes contribute the most through hot spot analysis, and identifying the best scenario options when environmental trade-offs occur.
According to ISO, there are three mandatory processes of an LCIA: selection of impact categories, classification, and characterization (Figure \(4\)).
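Characterization is the arithmetic heart of the LCIA: each inventory flow assigned to an impact category is multiplied by a characterization factor and summed into a single category indicator. The sketch below does this for global warming potential using approximate IPCC AR5 100-year factors and a hypothetical inventory; a full LCIA method such as TRACI covers many more categories and substances.

```python
# Characterization step of an LCIA for the global warming impact category.
# Inventory values are hypothetical; GWP100 factors are approximate IPCC AR5 values.

gwp100_kg_co2e_per_kg = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

inventory_kg = {"CO2": 5.6, "CH4": 0.012, "N2O": 0.0004}  # hypothetical LCI result

impact = sum(inventory_kg[g] * gwp100_kg_co2e_per_kg[g] for g in inventory_kg)
print(f"Global warming indicator: {impact:.2f} kg CO2-eq per functional unit")
```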
There are many tools to assist life cycle assessments. There are software and data packages designed for performing LCAs. No matter the form of the software, the use of some sort of LCA software and data management system is needed in LCAs. The life cycle inventory step of an LCA often requires a large data set listing hundreds of emissions to the environment. Keeping track of these flows manually is arduous, so LCA software is designed to manage these flows and perform specific functions such as impact assessments based on the inventory as well as uncertainty analysis.
There is a growing list of LCA software with various features. A basic overview of how LCA software and data interact is provided first, followed by a list of software packages. LCA software can be split into several components:
• The software package
• Data sets
• Life cycle impact assessment (LCIA) methods
Software packages such as SimaPro, openLCA, and Gabi are frameworks or calculators that keep track of data and perform intensive numerical calculations. With the many flows and detailed data involved, much effort has been invested in creating efficient calculation methods to speed up analysis time. This framework, however, is not useful without inventory data. There are many premade datasets, provided by sources such as Ecoinvent, Gabi, and the United States Department of Agriculture (USDA), that contain previous life cycle inventory results for various chemicals, materials, energy, services, and waste treatment processes. LCA software can access this previously developed data and include a chemical or other process from a dataset in an LCA without needing to perform an entire LCA on that particular material or process. This leveraging of previous study results for new studies is a key benefit of LCA software and can save countless hours during the LCI step. LCIA methods are the procedures and conversions used in performing an LCIA, such as global warming potential characterizations and weighting methods. There are many accepted LCIA methods that calculate LCA results using different impact categories, types of impacts, and weighting methods.
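As a rough illustration of what these components do together, the sketch below (hypothetical flow names and illustrative characterization factors, not taken from any actual dataset or software API) mimics the core characterization calculation that packages such as openLCA or SimaPro perform over far larger inventories:

```python
# Illustrative only: a toy inventory and characterization factors standing in
# for what LCA software would read from datasets (e.g., Ecoinvent) and from a
# published LCIA method. Quantities are per functional unit.

inventory = {                       # elementary flows from the LCI (kg)
    "carbon dioxide, to air": 2.50,
    "methane, to air": 0.010,
    "nitrous oxide, to air": 0.001,
}

gwp_factors = {                     # characterization factors (kg CO2-eq / kg)
    "carbon dioxide, to air": 1.0,
    "methane, to air": 28.0,
    "nitrous oxide, to air": 265.0,
}

def characterize(flows, factors):
    """Multiply each flow by its factor and sum into one category indicator."""
    return sum(qty * factors.get(name, 0.0) for name, qty in flows.items())

print(f"Global warming score: {characterize(inventory, gwp_factors):.2f} kg CO2-eq")
```

Real software adds to this the bookkeeping of thousands of linked unit processes, multiple impact categories, and uncertainty analysis, which is why a dedicated data management system is needed.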
Figure \(1\) visually depicts how the different components of LCA software and data interact. The life cycle inventory step requires data from datasets (e.g., Ecoinvent) and primary data gathered by the LCA practitioner surrounding the process or product under analysis. The combination of these two types of data with the use of LCA software calculations gives an LCI. The LCI data can then be used to perform an impact assessment using the LCIA methods (e.g., TRACI).
Table \(1\): Three common LCA software package options

| Software | Licensing | Datasets | Software Features | Website |
|---|---|---|---|---|
| openLCA | Open source and free | Ecoinvent, Gabi, USLCI, CML, and others | Fast calculation engine, easily shared models, no yearly subscription, process-based with transparent data, used for USDA digital commons LCA data development | www.openLCA.org |
| SimaPro | Paid licensing | Ecoinvent, USLCI, CML, and others | Process-based with transparent data, good customer support, robust uncertainty analysis | www.pre-sustainability.com/simapro |
| Gabi | Paid licensing | Gabi dataset, Ecoinvent, USLCI | Robust dataset, visual process-flow-based modeling, ease of use | www.gabi-software.com |
Review Questions
1. Describe opportunities that could benefit from an LCA, and explain why.
2. Is LCA going to be completely exhaustive in any of the various gate scenarios for determination of impacts?
3. What software is available for LCA?
4. Come up with a process and define 10 or fewer flows to allow you to calculate impacts.
Further Reading
1. Margni and Curran 2012 (see “Chapter 2 Supplementary Material” on the Moodle site)
2. Novel Screening Technique: Integrated Combinatorial Green Chemistry & Life Cycle Analysis (CGC-LCA): ojs.cnr.ncsu.edu/index.php/Bi...ning_Technique
• 3.1: Introduction to Hazards of Chemistry
• 3.2: Hazard Concepts
• 3.3: Case Study - Badger Army Ammunition Plant
Environmental cleanup of the 7,400-acre Badger Army Ammunition Plant will be greater than \$250 million. This is one of the 40 contaminated military sites in Wisconsin which the Defense Environmental Restoration Account cites as the most contaminated. In fact, 32 areas are polluted with dangerous levels of solvents, metals and explosive/incendiary waste. The water beneath the plant is contaminated with mutagenic chemicals that include carbon tetrachloride, trichloroethylene and dinitrotoluenes.
• 3.4: Green Technologies for Safer Chemical Production
• 3.5: Conclusions and Review Questions
The opportunity to apply green chemistry to reducing the toxicity in industrial processes is of paramount importance for a sustainable future. In this chapter, we investigated the concept of hazards from the perspective of their meaning, nature, implications, control, and ultimate reengineering. Green chemical principles hold the key to ensuring that society provides a viable channel for all of our physical needs and maintains the environment.
03: Hazards
Hazards Introduction
What we as a population must realize is that no substance in and of itself is a poison or a remedy. It is the dosage that defines the activity of the substance. For example, let’s look at the concept of “hormesis” as a precursor to the concept of poison/remedy by dosage.
We have all heard at one time or another the phrase, “What doesn’t kill you makes you stronger.” This phrase captures at its essence the theory of hormesis: when organisms are exposed to low levels of stressors or toxins, they become more resistant to larger doses of those stressors or toxins. The theory has been met with skepticism; recently, however, biologists have put together a molecular explanation of how it functions, and it has increasingly been accepted as a fundamental principle of biomedicine.
For example, exposing mice to low levels of gamma ray radiation before irradiating them with high levels actually decreases their likelihood of cancer. Similarly, when dioxin is given to rats we find the same situation. However, the biochemical mechanisms are not well understood. It is believed that a low dose of a toxin can trigger repair mechanisms that are efficient enough to not only neutralize the toxin, but repair other defects not caused by the toxin.
Thus, hormesis is nature’s way of dealing with harmful agents; in fact, antibodies are a natural consequence of hormesis. However, the toxins/poisons that were once considered absolute are no longer viewed that way. For example, thalidomide was found to be a very dangerous chemical for embryo development, but according to the Mayo Clinic it has recently shown great promise against a number of ailments, including HIV-related conditions, skin lesions, and multiple myeloma (please see: http://www.mayoclinic.org/diseasesco...e/art-20046534). Such a fact is remarkable considering the horrific aftermath of its use in the middle part of the last century (a few photographs are shown below in Figure \(1\)):
In fact, the list of former “pure” toxins is extremely interesting: snake venom, bacteria (botulin), fungi (penicillin), leeches (Hirudin), maggots (gangrene), etc. The toxic aspect notwithstanding, we all live in the wake of a world and society that is rife with potential hazards.
Types of hazards
A hazard is a “threat” to life, health, property, or the environment. Hazards come in many forms, but they are classified according to their modalities, or nature of operation. The modalities of hazards are the following:
• Dormant: Has the potential, but nothing currently can be affected. This modality is typified by the “volcano” scenario – a volcano that is no longer showing any signs of activity or imminent threat but is lying “dormant”. There is no immediate and pressing issue based on human perception.
• Armed: Has the potential, and something can be affected. This modality is typified by a person holding a gun in the midst of a war or other aggressive situation. The gun has the potential to affect life, limb, or other vital functions, but it is not yet doing so, although the intention is there.
• Active: Currently on-going event (something is being affected). Finally, this is the modality that is actually affecting life, limb, or some other vital function. A volcano that is spewing, a gun that is being fired, a fire that is consuming a building, radiation that is leaking, etc. These are the situations that are causing harm.
Within these modalities of hazards, hazards can be further refined as to their types. The following types of hazards classify the threats to life:
• Physical: Condition or situations that cause the body physical harm, e.g., a bullet that is entering into a human being.
• Chemical: Substances that cause harm or damage to the body, property or the environment, e.g., liquid oxygen converting to gaseous oxygen within a closed container (bomb).
• Biological: Biological agents that cause harm to the body, e.g., anthrax bacteria.
• Psychological: Stress affecting the mental state, e.g., the knowledge that a five-mile-wide asteroid is on a collision course with Earth.
• Radiation: Electromagnetic radiation that harms or damages biological organisms.
Concepts
Globally Harmonized System
To best classify hazards, the concept of GHS (Globally Harmonized System) has been introduced. Figure \(1\) demonstrates the categories of hazards:
Notice that each logo attempts to demonstrate pictorially the type of hazard represented by a specific chemical or material. For example, notice Figure \(2\) below. This particular chemical’s hazard level could be listed as “gases under pressure” and “explosives”. The intensity of its peculiar hazards is evaluated according to the pressure and the explosive nature of the gas.
Exposure and Categories of Hazardous Substances
Our overall sensitivity to hazards, again, depends on a number of factors that include not only the typical considerations of exposure levels, LD50 values, type of toxin, and modality, but also our idiosyncratic immunity or responsiveness.
We have been accustomed to a number of wonderful creature comforts in life that have only come about because of the power, versatility, and creativity inherent in the chemical enterprise. For example, phosgene, a notorious chemical warfare agent in the Great War, is now used as a precursor for the manufacture of a number of items including polyurethane. Shown below in Figure \(3\) is a representation of phosgene:
Lab Safety
Lab safety is an essential part of working with hazardous materials. It is important to follow good laboratory practice and to know how to deal with incidents should they occur. In the following Crash Course Chemistry video, some good lab techniques and safety guidelines are discussed: https://www.youtube.com/watch?v=VRWRmIEHr3A
Reference: Green, H. Crashcourse. (2013, July 8). Lab Techniques & Safety: Crash Course Chemistry #21. Retrieved from https://www.youtube.com/watch?v=VRWRmIEHr3A
Lethal Dose
Lethal dose (LD50) is the amount of an ingested or otherwise administered substance that kills 50% of a test sample. It is expressed in mg/kg, or milligrams of substance per kilogram of body weight. In toxicology it is also referred to as the median lethal dose and can refer to a toxin, radiation, or pathogen. The lower the LD50, the more toxic the item being measured. It should be considered a pragmatic approximation of toxic exposure levels because, in general, toxicity does not scale exactly with body mass. The choice of 50% lethality as the benchmark avoids the ambiguity of measuring lethality at the extremes and reduces the amount of testing required. However, this also means that the LD50 is not the lethal dose for all subjects; some may be killed by much less. Measures such as “LD1” and “LD99” (dosages required to kill 1% or 99%, respectively, of the test population) are sometimes used. Shown below is a chart of sample LD50 values taken from Wikipedia (en.Wikipedia.org/wiki/Median_lethal_dose) with active links to allow further investigation of the substances whose measurements are given.
| Substance | LD50 (LC50) | LD50: g/kg (LC50: g/L), standardized |
|---|---|---|
| Water | >90 g/kg | >90 |
| Pentaborane | <50 mg/kg | <0.05 |
| Cobalt(II) chloride | 80 mg/kg | 0.08 |
| Metallic arsenic | 763 mg/kg | 0.763 |
| Cadmium oxide | 72 mg/kg | 0.072 |
| Sucrose (table sugar) | 29,700 mg/kg | 29.7 |
| Monosodium glutamate (MSG) | 16,600 mg/kg | 16.6 |
| Vitamin C (ascorbic acid) | 11,900 mg/kg | 11.9 |
| Urea | 8,471 mg/kg | 8.471 |
| Cyanuric acid | 7,700 mg/kg | 7.7 |
| Cadmium sulfide | 7,080 mg/kg | 7.08 |
| Ethanol (grain alcohol) | 7,060 mg/kg | 7.06 |
| Sodium isopropyl methylphosphonic acid (IMPA, metabolite of sarin) | 6,860 mg/kg | 6.86 |
| Melamine | 6,000 mg/kg | 6.00 |
| Melamine cyanurate | 4,100 mg/kg | 4.1 |
| Venom of the Brazilian wandering spider | 134 µg/kg | 0.000134 |
| Venom of the inland taipan (Australian snake) | 25 µg/kg | 0.000025 |
| Ricin | 22 µg/kg; 22–30 mg/kg | 0.000022; 0.02 |
| 2,3,7,8-Tetrachlorodibenzodioxin (TCDD, a dioxin) | 20 µg/kg | 0.000002 |
Notice that the venom of the Brazilian wandering spider is particularly potent: at roughly a ten-thousandth of a g/kg, a dose of only ~5 mg would kill a normal-sized adult female. Luckily, the venom discharged per bite is quite small. Nevertheless, the point is that the venom represents a toxin of sufficient lethality that it can cause great harm or injury to a human being.
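As a rough check of that estimate (assuming, purely for illustration, a 50 kg body mass and the tabulated LD50 of 134 µg/kg):

\[ 0.000134\ \tfrac{\text{g}}{\text{kg}} \times 50\ \text{kg} \approx 0.0067\ \text{g} \approx 7\ \text{mg}, \]

which is on the same order as the ~5 mg quoted above.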
3.03: Case Study - Badger Army Ammunition Plant
Badger Army Ammunition Plant (BAAP) Case Study
Case Study for Hazardous Waste
Environmental cleanup of the 7,400-acre Badger Army Ammunition Plant will be greater than \$250 million. This is one of the 40 contaminated military sites in Wisconsin which the Defense Environmental Restoration Account cites as the most contaminated. In fact, 32 areas are polluted with dangerous levels of solvents, metals and explosive/incendiary waste. The water beneath the plant is contaminated with mutagenic chemicals that include carbon tetrachloride, trichloroethylene and dinitrotoluenes. One part of the site, known as the Propellant Burning Grounds, is the source of a three-mile plume of contaminated groundwater that has migrated offsite, completely contaminating private drinking water wells in addition to the Wisconsin River.
Elevated Cancer Rates
In 1990, the Wisconsin Division of Health conducted a health survey. It concluded that communities near the Badger plant have a significantly higher incidence of cancer and deaths. In spite of these alarming findings, the State refused to take any action. In 1995, the Division of Health responded because of pressure from CSWAB (Citizens for Safe Water Around Badger) and reopened the community health study.
On October 26, 1998, CSWAB concluded “there was entirely inadequate contact with our community – the population being studied.” Prior to September 1998, no press releases were published, no public meetings were held, and no interviews were conducted. Despite several requests, virtually no resources were devoted to interviewing residents about current health problems and concerns regarding their exposure to air, dust, emissions, and surface soils. Assessment of risk from cleanup activities was also absent. CSWAB also determined that the Wisconsin DOH focused on death studies when many health problems and community concerns have been nonlethal, such as respiratory illnesses or reproductive problems.
CSWAB appealed to ATSDR (Agency for Toxic Substances and Disease Registry in D.C.) to discuss the lack of adequate community participation in this and similar health assessments across the nation. The Wisconsin Department of Health conducted the assessment under a cooperative agreement with ATSDR.
Cleanup Plans Abandoned
The U.S. Army proposed to abandon and severely reduce cleanup of two priority areas within the plant. The Settling Ponds and Spoils Disposal area — a series of lagoons that run the length of the 7,000-acre facility — is contaminated with high levels of lead and dinitrotoluenes.
Maintenance of the Badger plant costs in excess of \$17 million per year. In 1991, only \$3 million was allocated for environmental studies. Since 1975, there have been over 56 chemical spills and incidents. Moreover, there is no national strategic need for maintaining the Badger plant. A February 20, 1997 General Accounting Office report concluded that BAAP and three other military plants could be eliminated “because alternative sources exist…to provide the capabilities these plants provide.”
Since the fallout from the Bhopal tragedy, efforts have centered on avoiding the storage of methyl isocyanate (MIC). The final product, Sevin (carbaryl), is no longer manufactured by reacting naphthol with MIC in a single step; instead, naphthol is reacted sequentially with phosgene and then with methylamine, so that MIC never needs to be stockpiled. The basis for the Bhopal tragedy was the reaction shown in Figure \(1\).
Polyurethanes
Polyurethane is a polymer composed of carbamate (urethane) linkages. Polyurethanes are traditionally formed by reacting a di- or polyisocyanate with a polyol (an alcohol-bearing polymer such as PEG, polyethylene glycol). Both the isocyanates and the polyols used to make polyurethanes contain, on average, two or more functional groups per molecule (either on the termini or within the molecule; a telechelic polymer is a di-end-functional polymer in which both ends possess the same functionality). Recent efforts have been dedicated to minimizing the use of isocyanates to synthesize polyurethanes because isocyanates are toxic. Non-isocyanate polyurethanes (NIPUs), especially those made from soybean oils, have recently been targeted as a new, greener class of polyurethanes. Shown in Figure \(2\) is a molecular representation of polyurethane linkages (highlighted in blue). The carbamate linkage is composed of a central carbonyl moiety with two heteroatoms attached to it – a nitrogen and an oxygen; from an organic perspective, it is an ester/amide hybrid. Interestingly, analogous step-growth polymers can be built from telechelic monomers such as adipic acid chloride (left) and hexamethylene diamine (right), as shown in Figure \(3\).
What is immediately noticeable is that the final polymer is built from two distinct monomers; thus, the final properties can be tailored by judicious (and discrete) choice of the monomers. More specifically, prepolymers (>10–12 monomer units) of each monomer may be coupled to provide distinct segments having specific properties. For example, polyethylene glycol (PEG) is a polymer that is hydrophilic, soft, rubbery, and flows well, whereas a phenyl-based diisocyanate segment is much more rigid, tough, and non-stretchable. Therefore, the overall final physical and thermal properties of the polyurethane can be tuned. The polymerization of polyurethanes can also be catalyzed by a non-nucleophilic base such as DABCO (diazabicyclooctane), shown in Figure \(4\).
In Figure \(4\), DABCO is able to abstract a proton from an alcohol (say ethylene glycol) to allow for the nucleophilic reactivity at the cumulated carbon of the isocyanate.
Curtius Rearrangement to Form Isocyanates
An additional “greener” approach to forming isocyanates, an important class of starting materials for many highly useful products, is the Curtius Rearrangement (RAR) Reaction, an example of which is shown in Figure \(5\).
The Curtius Rearrangement is a thermal or photochemical decomposition starting from carboxylic azides (left structure in reaction above) to an isocyanate (first product in reaction above). In the above reaction, the solvent also plays the role of a reactant as shown by N atom insertion into cyclohexane via a radicaloid mechanism. These intermediates may be isolated, or their reaction or hydrolysis products can be obtained.
The reaction sequence that includes the subsequent reaction with water leading to amines is called the Curtius Reaction. It is similar to the Schmidt Reaction of carboxylic acids, shown below in Figure \(6\) (https://www.wikiwand.com/en/Schmidt_reaction), but differs in that the acyl azide is prepared from the acyl halide and an azide salt.
For the Curtius RAR, the following steps take place: starting reagent 1 undergoes nucleophilic attack by the azide anion to produce an acyl azide, whose mesomerism (resonance) is shown in brackets as 2. This then decomposes under the appropriate stressor (heat, light, etc.), which leads to the isocyanate 3 with the loss of nitrogen (denitrogenation). The isocyanate can react with water to yield a carbamic acid (a carbamate/urethane-like molecule), which spontaneously decomposes through decarboxylation to a primary amine 4. In the presence of an alcohol or an amine, 3 yields a carbamate, 5, or a urea derivative, 6, respectively. http://www.wikiwand.com/en/Curtius_rearrangement
3.05: Conclusions and Review Questions
Conclusions
The opportunity to apply green chemistry to reducing the toxicity in industrial processes is of paramount importance for a sustainable future. In this chapter, we investigated the concept of hazards from the perspective of their meaning, nature, implications, control, and ultimate reengineering. Green chemical principles hold the key to ensuring that society provides a viable channel for all of our physical needs and maintains the environment.
Review Questions
1. Think about the additive impact of hazards from a LD50 perspective. Why would materials with different values not be additive in their cumulative effects on lethality? Can you think of two chemicals, however, whose lethality may be more than the sum of their individual values?
2. Would phosgene be more or less reactive as a function of relative humidity? Why or why not? In a wartime situation, for example, would an opposing army to whom phosgene is being directed favor a low humidity?
3. Can phosgene be reacted with sodium azide in a 1:2 molar ratio? If so, what would be the reaction and product? If not, why not?
4. What do these GHS labels tell us about ammonia? (See Figure \(1\).)
In a very general sense, solvents are a class of chemical compounds that allow chemistry to occur. The concept of a solvent has significant ramifications because they serve as the matrix, medium, or carrier for solutes. They are necessary in a number of processes, reactions, and systems. We tend to think of a substance like “water” as a universal solvent because it is so useful in so many disciplines. Water cleans up everything, allows biochemical reactions to occur, is used in paints, coatings, and films, allows cooking to occur (or else everything would catch fire), and provides lubrication and ease of movement for a great many devices.
04: Alternative Solvents
In a very general sense, solvents are a class of chemical compounds that allow chemistry to occur. The concept of a solvent has significant ramifications because they serve as the matrix, medium, or carrier for solutes. They are necessary in a number of processes, reactions, and systems. We tend to think of a substance like “water” as a universal solvent because it is so useful in so many disciplines. Water cleans up everything, allows biochemical reactions to occur, is used in paints, coatings, and films, allows cooking to occur (or else everything would catch fire), and provides lubrication and ease of movement for a great many devices.
Interestingly, in the business world, the word “solvent” has another meaning: A state of financial soundness characterized by the ability of an entity to meet its monetary obligations when they fall due. This latter definition does indeed apply in a certain sense to the solvent concept we are espousing in this chapter. A solvent (\$olvent) carries materials for a reaction or specific function in the same sense that solvent from a financial sense carries funding/money to meet its obligations. Thus, using an analogy:
Solvent : Solute :: \$olvent : Money
Chemically, a solvent will dissolve or “solvate” a solute. What does that mean? It means that solvent molecules will surround the solute in such a way that a solution is formed; in other words, a homogeneous system is generated in which the solute is part of, and indistinguishable from, the solvent network. The solute/solution concept can be visualized in the figure below:
Solubility
The ability of one compound to dissolve in some other (likely different) compound is termed “solubility”. Miscibility is a related term that characterizes the facility with which compound A dissolves in compound B. When the two compounds can completely dissolve in one another to form a homogeneous solution, the two liquids are said to be miscible. Two liquids that can never blend well enough to form a solution are called immiscible.
All solutions have a positive entropy of mixing, whereas the interactions between different compounds may or may not be energetically favored. If the interactions are unfavorable, the energetic penalty grows with increasing solute concentration, and at some point it outweighs the entropy gain so that no more solute can be dissolved – the solution is then said to be saturated. This condition can change with environmental factors such as temperature, pressure, and the purity of the system. A supersaturated solution can be prepared by raising the solubility (e.g., by increasing temperature) to dissolve more solute, and then lowering it by cooling. However, most gases and several compounds exhibit solubilities that drop with increasing temperature. The solubility of liquids in liquids, however, is less temperature-sensitive than that of solids or gases.
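The balance described above can be summarized with the familiar Gibbs relation for mixing (a simplified statement that neglects activity corrections):

\[ \Delta G_{mix} = \Delta H_{mix} - T\,\Delta S_{mix} \]

Dissolution continues as long as \(\Delta G_{mix}\) for adding more solute is negative; saturation corresponds to the concentration at which it reaches zero, which is why temperature (through the \(T\Delta S\) term and the temperature dependence of \(\Delta H\)) shifts the solubility limit.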
According to GC Principle No. 5, “The use of auxiliary substances (solvents, separation agents, etc.) should be made unnecessary wherever possible, and innocuous when used.” This statement upholds the dogma in GC that simplification in executing a specific transformation should be observed. In other words, if we can find a way to not use a solvent, let’s not use it! Although in our modern society such a stance is almost untenable, it is nevertheless the ideal that we as a society should strive to achieve. For example, as shown in Figure \(1\), we have an overwhelming influx of medicines and drugs within our society to promote our health. We typically take these concoctions with water, in emulsions, in solutions, in suspensions, etc.
Of course, taking a medicine without solution is a difficult proposition, but there are a number of ways to do it:
• Dry swallowing or sublingually (e.g., nitroglycerin tablets for angina);
• Patch application (microneedles or high concentration, usually done dermally);
• Inhaled as a mist/spray (e.g., gaseous phase);
• Via a tube (e.g., stomach tube or intravenous).
Nevertheless, the use of solvents and solvent-based systems are deeply ingrained in our culture and society. The reasons are many, but they tend to be tradition, ease of use, reduced cost, and convenience. Doing without solvents tends to require much more creativity and overall planning.
Reaction Energy Coordinate
Typically, a reaction profile or surface (where and how a reaction proceeds from an energy perspective) provides sufficient information to understand the pathways (independent of final energy states) necessary to achieve a forward reaction. For example, the coordinate shown below illustrates the SN1 reaction between t-butyl chloride and hydroxide. Note that most of the variables shown relate to the energy constraints or parameters associated with the reaction. The reaction has an inherent activation barrier for the forward direction. This barrier, characterized by an energy input \(\Delta E_1^{\ddagger}\), reflects the difficulty of accessing the transition state (‡) that must be reached for the forward reaction.
Such an energy input, however, can be lessened by a number of factors, most notably the use of a catalyst, which in general reduces the energy needed to access that state by engaging in structural or energetic interactions with the starting material that strongly encourage formation of the transition state. The downhill portion of the coordinate surface is akin to rolling a ball, whose conversion of potential energy to kinetic energy is extremely facile. Thus an intermediate is reached (the trialkyl carbocation in Figure \(2\)) that has a measurable lifetime and measurable properties. Although it is by its fundamental character unstable, it nevertheless exists long enough to engage in further reactions. Notice that it still sits at an energy level much higher than the starting state; such a position is predictive of its future course. In fact, as you move closer to the product along the abscissa of the reaction coordinate, the intermediate comes to resemble the final product in a number of its properties. In this case, the intermediate retains many of the features of both the starting material(s) and the product. Indeed, the next activation barrier that must be crossed to access the final product is much lower than the first, indicating the gradual easing of the transition from starting material(s) to product(s).
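The practical payoff of lowering that barrier follows from the Arrhenius equation, which links the rate constant to the activation energy (here standing in for \(\Delta E_1^{\ddagger}\)):

\[ k = A\,e^{-E_a/RT} \]

Because the barrier appears in an exponent, even a modest reduction of \(E_a\) by a catalyst produces a large increase in rate at a given temperature.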
Solvent Categories
There are a number of solvents that from a green chemistry perspective must be dealt with judiciously. The following are representative solvent classes:
• Hydrocarbons
• Halogenated hydrocarbons
• Aromatic hydrocarbons
• Alcohols
• Ethers
• Aprotic solvents
Each of the above has its pros and cons in terms of environmental benefit/issues, economics, and social justice. Each may behave differently; for example, the aprotic solvent class is one in which there is no chance for the solvent to provide a proton to a reaction it is hosting. Alcohols, on the other hand, can easily do so and should not be used in reactions where water/protons could quench the reaction or trigger a violent one.
Methyl Soyate
Methyl soyate is a biobased solvent that is a mixture of long-chain fatty acid methyl esters. More information on its specific properties and potential uses can be found in the following tract:
https://www.yumpu.com/en/document/view/10362159/the-formulary-guide-for-methyl-soyate-soy-new-uses
In general, it can be used to clean countertops, pretreat fabric stains, and clean concrete, and it can serve as a degreaser, graffiti remover (with ethyl lactate and surfactants), paint stripper, mastic remover, varnish remover, deinker, asphalt remover, and waterless hand cleaner. In addition to the soyate, there are a number of other emerging biobased cleaners/solvents, such as those shown below:
Please see http://pubs.rsc.org/en/Content/ArticleLanding/2011/GC/c1gc15523g#!divAbstract for more information on this marvelous solvent.
Alternative Solvent Systems or Modes
Gas Phase
The gas phase is a very useful modality for allowing reactions to occur because (like the hydrophobic effect) it forces reactions or processes to occur through non-solvent-mediated channels. For example, methanol can be produced in the gas phase by the reaction of syngas (synthesis gas: hydrogen and carbon monoxide) over ZnO as a solid catalyst.
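The overall stoichiometry of this gas-phase conversion of syngas is simply:

\[ \mathrm{CO} + 2\,\mathrm{H_2} \;\xrightarrow{\;\text{solid catalyst (e.g., ZnO-based)}\;}\; \mathrm{CH_3OH} \]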
No Solvents
It is altogether possible and highly desirable to use the starting materials themselves as the medium for the reaction of interest. This is doable if one of the reactants is a liquid in which the other(s) can dissolve. This has been demonstrated by taking p-xylene and reacting it with oxygen to make terephthalic acid:
The acid in Figure \(1\) can then be used directly with ethylene glycol to synthesize polyethylene terephthalate:
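The two steps referred to above can be summarized by the simplified overall equations below (intermediates and end groups omitted): aerobic oxidation of p-xylene to terephthalic acid, followed by polycondensation with ethylene glycol to give PET, with water as the only co-product.

\[ \mathrm{C_6H_4(CH_3)_2} + 3\,\mathrm{O_2} \longrightarrow \mathrm{C_6H_4(COOH)_2} + 2\,\mathrm{H_2O} \]

\[ n\,\mathrm{C_6H_4(COOH)_2} + n\,\mathrm{HOCH_2CH_2OH} \longrightarrow [\mathrm{OC{-}C_6H_4{-}CO{-}O{-}CH_2CH_2{-}O}]_n + 2n\,\mathrm{H_2O} \]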
Melt State
It is possible to combine two solids to provide a composition that reaches a eutectic point, i.e., one that melts/solidifies at a single temperature lower than the melting points of the separate constituents or of any other mixture of them. At the eutectic point, you can achieve an isotropic blend of dissolved materials, as evidenced in Figure \(3\).
Triggered Solid State Reactions
It is possible to induce a chemical reaction outside of the melt by introducing a trigger such as acid, ultrasound, grinding, light, etc. Such triggers take the place of solvent-mediated reaction stabilization. For example, the following reactions are amenable to such triggers:
Michael Reaction:
In addition, it is possible to do the reaction using a catalyst such as alumina with microwave induction as shown in Figure 4-9:
There are a number of other reactions that respond similarly to such triggers in water (rather than organic solvents) and without catalysts.
Baeyer-Villiger:
For example, the Baeyer-Villiger Reaction is an important reaction for making an ester or lactone.
The Baeyer-Villiger Oxidation is the oxidative cleavage of a carbon–carbon bond adjacent to a carbonyl, which converts ketones to esters and cyclic ketones to lactones. It may be carried out with peracids, such as mCPBA (m-chloroperoxybenzoic acid), or with hydrogen peroxide and a Lewis acid.
Benzilic Acid Rearrangement
1,2-Diketones undergo a rearrangement in the presence of a strong base to yield α-hydroxycarboxylic acids. The best yields are obtained when the diketones do not have enolizable protons. Shown below in Figure \(7\) is a representation of the mechanism of the reaction.
The benzilic acid rearrangement has traditionally been conducted by heating benzil derivatives with alkali metal hydroxides (e.g., KOH) in aqueous organic solvent. However, the rearrangements proceed more efficiently and quickly in the solid state.
Williamson Ether Synthesis
The Williamson Ether Synthesis is an organic reaction that forms an ether from an organohalide and a deprotonated alcohol (alkoxide) and is normally carried out in organic solvents. However, a very rapid synthesis of symmetrical and asymmetrical ethers in “dry” media under microwave irradiation has been reported (http://www.cyfronet.krakow.pl/~pcbogdal/alcohol/). The reaction was carried out by mixing an alcohol with a 50% excess of an alkyl halide and a catalytic amount of tetrabutylammonium bromide, adsorbing the mixture onto potassium carbonate or a mixture of potassium carbonate and potassium hydroxide, and irradiating it under open conditions in a domestic microwave oven for 45–100 s. In the absence of the ammonium salt, ethers were not detected or were obtained in very low yield.
Friedel-Crafts Reaction
The opportunity to perform acylation or alkylation of an aromatic nucleus is a very important transformation in organic chemistry. The principal tool we have for such a transformation is the Friedel-Crafts reaction, a classical reaction dating back to the 19th century. It can be carried out in a green way that avoids the need for acid chlorides, Lewis acids, and hydrochloric acid waste products. Shown below in Figure \(8\) is a representation of the photo-Friedel-Crafts reaction, a mild and greener alternative to the classical analogue.
Ball Milling
Ball milling works on the principle of impact and attrition; size reduction results from impact as the balls drop from near the top of the shell. A ball mill is made up of a hollow cylindrical shell rotating about its axis, which may be either horizontal or at an acute angle to the horizontal. The shell is partially filled with grinding balls made of steel (chrome steel), stainless steel, ceramic, or rubber, whose action grinds the contents into much finer particles. The inner surface of the cylindrical shell is usually lined with an abrasion-resistant material such as manganese steel or rubber; less wear takes place in a rubber-lined mill. One of the reactions that can be done in this manner is the polymerization of MMA (methyl methacrylate) to PMMA (poly(methyl methacrylate)), shown in Figure \(9\). Below is a video resource that shows the process of ball milling; in it you can see how larger materials are broken down and ground to specific sizes based on the size of the grinding balls used.
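The “critical speed” referred to in the video can be estimated from elementary mechanics: when the centripetal acceleration of a ball at the shell wall equals gravity, the balls centrifuge against the wall instead of cascading and grinding. Setting \(\omega^2 (R - r) = g\), with \(R\) the shell radius and \(r\) the ball radius, gives:

\[ n_c = \frac{1}{2\pi}\sqrt{\frac{g}{R - r}} \;\approx\; \frac{42.3}{\sqrt{D - d}}\ \text{rev/min} \quad (D, d\ \text{in meters}) \]

Mills are therefore operated below this speed (a commonly cited rule of thumb is roughly 65–75% of \(n_c\)) so that the balls tumble and impact the charge rather than ride the wall.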
The polymerization of MMA is normally carried out through one of several chemically initiated approaches. In the ball-milling approach, the application of mechanical energy alone is sufficient to conduct the reaction – a significant finding! The same type of approach can be used to extract lignin, the third or fourth most abundant polymer on the planet. A generic molecular representation of the structure of lignin is shown in Figure \(10\).
Process Intensification
Process intensification can be defined as a strategy for introducing dramatic reductions in the footprint of a chemical plant while reaching a given production objective. These reductions may consist of shrinking the size of pieces of equipment and reducing the number of unit operations or apparatuses. Such reductions tend to be significant because the objective is to dramatically reduce energy use and materials waste while improving process efficiency. Several of its most salient characteristics are:
• http://tinyurl.com/bpwpah9
• Continuous, short contact times
• Minimizes further reaction
• Higher, purer yields
• Mixing & heat transfer are very good! No explosive limits reached!
Reactive Extrusion
Reactive extrusion is a chemical engineering process characterized by the forced mixing of one or more components under high-pressure conditions for a specific end goal. Hot-melt extrusion is one example: the application of heat and/or pressure to melt a polymer and force it through a small space (extruder) as part of a continuous process. It is a well-known process that was developed to make polymer products of uniform shape and density, and it is widely applied in the plastic, rubber, and food industries to prepare more than half of all plastic products, including bags, films, sheets, tubes, fibers, foams, and pipes.
Supercritical Fluid
A supercritical fluid (sCF) is a substance at a temperature and pressure above its critical point, where distinct liquid and gas phases do not exist. It is a unique phase that can effuse through solids like a gas and dissolve materials like a liquid.
sCFs are extremely useful in green chemistry because they can be derived from environmentally friendly materials such as water and carbon dioxide with little to no impact on the carbon footprint of the planet.
Water as a Solvent
Water can behave as an exquisite solvent in a host of typical organic reactions by virtue of its ability to encourage reactivity via the “hydrophobic effect”.
Microemulsions
Microemulsions are clear, thermodynamically stable, isotropic liquid mixtures of oil, water, and a surfactant, frequently in combination with a co-surfactant. The aqueous phase may contain salt(s) or other ingredients, and the “oil” may actually be a complex mixture of different hydrocarbons and olefins.
4.04: Conclusions and Review
The opportunity to use alternatives to typical organic solvents to carry out the same transformations with the same or better efficiencies has never been better than it is today. We are learning that we can do a lot of chemistry in a green way because we are putting our efforts into it. Although we still have a long way to go before solventless, gas-phase, and similar approaches become the standard modes of doing chemistry, the range of available reactions and processes is superb.
Review Questions
1. What are the implications of using alternative solvents from a life cycle analysis perspective?
2. What are the energetic considerations for allowing polymerization to occur by ball milling?
3. What is the difference between a “critical point” and the “triple point” as defined in standard phase diagrams?
Further Reading
1. Cagniard de la Tour C. 1822 Exposé de quelques résultats obtenu par l’action combinée de la chaleur et de la compression sur certains liquides, tels que l’eau, l’alcool, l’éther sulfurique et l’essence de pétrole rectifiée. Ann. Chim.
2. Phys. 21, 127–132.
3. Andrews T. 1869 The Bakerian Lecture—On the continuity of the gaseous and liquid states of matter. Phil. Trans. R. Soc. Lond. 159, 575–590.
4. Licence P, Ke J, Sokolova M, Ross SK, Poliakoff M. 2003 Chemical reactions in supercritical carbon dioxide: from laboratory to commercial plant. Green Chem. 5, 99–104.
5. Tang SLY, Smith RL, Poliakoff M. 2005 Principles of green chemistry: PRODUCTIVELY. Green Chem. 7,761–762.
6. Lovelock KRJ, Villar-Garcia IJ, Maier F, Steinrück HP, Licence P. 2010 Photoelectron spectroscopy of ionic liquid-based interfaces. Chem. Rev. 110, 5158–5190.
7. Ball Milling Reference: "Ball Mill Critical Speed & Working Principle". YouTube. N.p., 2016. Web. 29 Nov. 2016. | textbooks/chem/Environmental_Chemistry/Key_Elements_of_Green_Chemistry_(Lucia)/04%3A_Alternative_Solvents/4.03%3A_Solvent_Alternatives.txt |
• 5.1: Introduction to Reagents
Reagents, not unlike solvents in the previous chapter, are the chemical materials or reactants used in a chemical conversion to product. When we think of a “reagent”, we tend to think of a chemical used in a reaction, although there are numerous examples of chemicals throughout our world that do not neatly fit that mold. Dirt, water, air, etc., can all be reagents in terms of transformations to products.
• 5.2: Reagent Concepts
The opportunity to do chemistry by means of, for example, a fixed substrate is very much in keeping with the concepts that we are learning. We explored the functionality and power of packed-bed columns that do chemistry normally done in solution with dissolved solutes. A packed-bed reactor can provide a source of protons, ions, or other chemicals that can affect chemistry.
• 5.3: Specific Replacements
• 5.4: Conclusions and Review
05: Alternative Reagents
Reagents
“A reagent ain’t supposed to make you faint!” -Lucian Lucia
Reagents, not unlike the solvents in the previous chapter, are the chemical materials or reactants used in a chemical conversion to product. When we think of a “reagent”, we tend to think of a chemical used in a reaction, although there are numerous examples of chemicals throughout our world that do not neatly fit that mold. Dirt, water, air, etc., can all be reagents in transformations to products. For example, the humic and fulvic acids found in dirt can act as exquisite reagents in separation, uptake, and cellular metabolic processes. Fulvics display a sorptive interaction with environmental chemicals before or after those chemicals reach toxic concentrations. The toxic herbicide paraquat is rapidly detoxified, as are other organic compounds applied to the soil as pesticides. Fulvic material is also vital in forming new species of metal ions that bind with pesticides and herbicides and catalyze their breakdown.
We must think of a reagent as a means to an end. How can we use ingenious chemistry to take an otherwise low-value item (a commodity) to a valuable end product? This is the secret to enhancing the appeal and utility of green chemistry.
5.02: Reagent Concepts
Major Concepts
The opportunity to do chemistry by means of, for example, a fixed substrate is very much in keeping with the concepts that we are learning. We explored the functionality and power of packed-bed columns that carry out chemistry normally done in solution with dissolved solutes. A packed-bed reactor can provide a source of protons, ions, or other chemicals that can effect chemistry. An ion-exchange system can perform ion metathesis, i.e., substitute the anion (X\(^{m-}\)) in a dilute salt, (M\(^{n+}\))\(_m\)(X\(^{m-}\))\(_n\), for the anion enriched on the packed bed. Shown below is an example of the process of interest. The solute (a mixture of proteins) is loaded onto the exchanger and subsequently adsorbs to the carboxymethyl anion substrate (a porogen). This process is typically a facile equilibrium in which the proteins can replace the sodium cations. UV absorbance can follow the overall displacement, as shown in the graph at the lower right (first peak: the MES anion; second and third peaks: the desired protein and chloride species).
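For the carboxymethyl (weak cation) exchanger described above, the displacement can be written as a reversible exchange equilibrium, where R– denotes the fixed bed matrix and P\(^{+}\) stands for a positively charged protein (or other cation) in the feed:

\[ \mathrm{R{-}CO_2^-\,Na^+} + \mathrm{P^+} \;\rightleftharpoons\; \mathrm{R{-}CO_2^-\,P^+} + \mathrm{Na^+} \]

Loading pushes the equilibrium to the right; raising the salt concentration during elution pushes it back and releases the bound species.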
Replacing Hazardous Substances
A common misconception in chemistry is that we need to be vigilant only when we know we are working with a hazardous compound. In fact, all substances should be treated with the level of respect they deserve; even salt can be an irritant when used incorrectly. The most important criteria to consider in evaluating the hazards of substances are the following:
• Efficacy: the alternative must carry out desired transformation with comparable or superior efficiency
• Safety: the alternative reagent should display reduced volatility, flammability, toxicity, and/or reactivity, as well as increased stability
• Environmental impacts: the alternative reagent should represent a reduced environmental impact
Each of the above three criteria may be used in the assessment of a compound with respect to hazard level. Obviously, efficacy must be emphasized or else there is no need to go any further with the replacement.
Inorganic and Organic Supports
Supports or scaffolds or templates represent a fixed medium for the express purpose of a targeted transformation. We will use the following paper (shown are title, authors, and abstract) for our discussion in this part of the chapter.
“Paper-immobilized enzyme as a green microstructured catalyst” Hirotaka Koga, Takuya Kitaoka and Akira Isogai; DOI: 10.1039/c2jm30759f
The facile and direct introduction of methacryloxy groups into cellulose paper was carried out using a silane coupling technique, leading to the improvement of hydrophobicity and both dry and wet physical strengths of the paper. Immobilization of lipase enzymes onto the methacrylate-modified paper was then accomplished, possibly due to hydrophobic interaction. The as-prepared immobilized lipase on methacrylate-modified paper possessed paper-specific practical utility. During a batch process for the nonaqueous transesterification between 1-phenylethanol and vinyl acetate to produce 1-phenylethyl acetate, the paper-immobilized lipase showed high catalytic activity, selectivity and reusability, suggesting that the methacryloxy groups introduced into the cellulose paper played a key role in the hyperactivation of lipases. In addition, a higher productivity of 1-phenylethyl acetate was achieved in a continuous flow reaction system than in the batch system, indicating that the interconnected porous microstructure of the paper provided favorable flow paths for the reactant solution. Thus, the paper-immobilized enzyme is expected to offer a green catalytic material for the effective production of useful chemicals.
What we note in this paper is the “smart” exploitation of paper as a porous microreaction chamber (interconnected paths) for a transesterification reaction between 1-phenylethanol and vinyl acetate to produce 1-phenylethyl acetate, as shown below:
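Written out (a simplified overall equation; the vinyl alcohol co-product tautomerizes to acetaldehyde, which is what makes the acylation effectively irreversible):

\[ \mathrm{C_6H_5CH(OH)CH_3} + \mathrm{CH_3CO_2CH{=}CH_2} \;\xrightarrow{\;\text{lipase}\;}\; \mathrm{CH_3CO_2CH(CH_3)C_6H_5} + \mathrm{CH_3CHO} \]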
This research effort explores the opportunity to catalyze reactions of interest using natural catalysts (enzymes). In this case, lipase-catalyzed reactions (e.g., esterification, transesterification, aminolysis, acylation, and thio-transesterification) that can be carried out in nonaqueous media have been examined for the synthesis of many useful chemicals for food, cosmetic, pharmaceutical, and biodiesel applications.
However, lipases are typically unstable in nonaqueous media due to aggregation, denaturation, or some other form of deactivation. Enzymes are therefore often immobilized on supporting materials (silica, ceramics, carbon, resins) to provide reusability, ease of isolation, and stability in nonaqueous media. Their study offered a number of advantages for the use of this green system, including the asymmetry of the final product composition:
Figure 5-3 shows that the paper-based system encourages the formation of the “R” enantiomer as opposed to the S (not detected). Such a finding signals the value of a simple, facile, and highly conserved approach to an enantioselective reaction in good yield (50%, 5 h).
Alternative Reagent Strategies
Removal of Lactose
The removal of lactose through a separation process can be achieved by attaching glutaraldehyde (the molecule with two terminal aldehyde groups, below left) to an aminated silica particle (image at right); the particle is then covalently coupled to a galactosidase enzyme, and the resulting supported enzyme is contacted with milk to remove lactose from milk products.
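The immobilized galactosidase removes lactose by hydrolyzing it into its two constituent monosaccharides (overall equation):

\[ \underbrace{\mathrm{C_{12}H_{22}O_{11}}}_{\text{lactose}} + \mathrm{H_2O} \;\xrightarrow{\;\beta\text{-galactosidase}\;}\; \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} + \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{galactose}} \]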
Quaternization of Amines
Quaternary ammonium cations, quats, are positively-charged ions of the structure shown below:
The R groups can be alkyl or aryl groups. Unlike the ammonium ion (NH\(_4^+\)), quaternary ammonium cations are permanently charged, independent of pH.
Combinatorial Chemistry
Combinatorial chemistry is an approach to chemical synthesis that makes possible the preparation of a large number (e.g., millions) of compounds in a single sweep. These libraries can be mixtures, individual compounds, or computational structures.
Combinatorial chemistry can be used for small molecules as well as for biomacromolecules such as peptides. Synthesis can quickly lead to large numbers of products. As an example, a molecule with three points of functionality or diversity (R1, R2, and R3) can generate NR1 × NR2 × NR3 structures, where NR# is the number of different substituents utilized at each position. The basic principle of combinatorial chemistry is the preparation of large numbers of compounds that are then screened to identify the useful components of the libraries.
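The multiplicative scaling is what makes the approach powerful; for instance (an illustrative count, not taken from the text), with 20 candidate substituents at each of the three positions:

\[ N_{total} = N_{R_1} \times N_{R_2} \times N_{R_3} = 20 \times 20 \times 20 = 8000\ \text{compounds} \]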
Although combinatorial chemistry has been in play over the last 25 years, its roots go back to the 1960s, when a Rockefeller researcher, Bruce Merrifield, started investigating the solid-phase synthesis of peptides. Combinatorial chemistry has had its biggest impact in the drug industry, where researchers optimize the activity profile of a compound by creating a library of many different but related compounds.
Starch
Starch is an energy storage biomaterial generated from carbon dioxide and water during photosynthesis. Among the panoply of natural polymers, starch is of considerable interest because of its biodegradability, low cost, renewability, and biocompatibility. It is therefore considered a promising candidate for the sustainable development of new functional materials.
Starch is composed of two homopolymers of D-glucose: amylose, a virtually linear α-D-(1→4′)-glucan, and branched amylopectin, which contains many α-(1→6′)-linked branch points (Figure \(4\)).
Starch is a polyol: when not linked, each glucose unit bears two secondary hydroxyl groups at C-2 and C-3 and one primary hydroxyl at C-6. It is also quite hydrophilic and can be oxidized, reduced, and converted into ethers, esters, and many other functional molecules. Starch has various proportions of amylose and amylopectin, from about 10–20% amylose and 80–90% amylopectin depending on the source. It forms helical structures and occurs naturally as discrete granules because the short-branched amylopectin chains can form helical structures that crystallize. It has a MW of 500–20,000 and consists of α-(1→4)-linked D-glucose units whose extended shape has a hydrodynamic radius in the 7–22 nm range. The helical form of the molecule is usually a stiff, left-handed single helix whose structure, not unlike DNA, is supported by H-bonding between the O2 and O6 atoms. Amylose can undergo syneresis (dehydration synthesis) between vicinal (neighboring) glucan residues to form the cyclodextrin cavity, whose interior remains hydrophobic. Interestingly, it can form double-stranded crystallites that are resistant to amylase (the putative enzyme for its deconstruction) as well as to H-bonding and solubilization. The crowning macromolecule from starch, CD (cyclodextrin), is a veritable green chemist’s dream owing to a number of wonderful properties. Shown below is the toroidal structure of CD and its dimensions:
How do Cyclodextrins Work?? https://www.youtube.com/watch?v=UaEes6cJu7k (How do Cyclodextrins work? (1980). Cyclolab.)
Specifically, it is a cyclic oligomer of α-D-glucopyranose held together by glycosidic bonds, very much like those seen in all polysaccharides. It was discovered in 1891 by Villiers, who prepared it by the enzymatic conversion of amylose followed by a selective precipitation. The precipitation can be done with several (albeit organic, non-sustainable) solvents, with varying yields. Note that we consider three forms of CD, as shown in the table below:
| Cyclodextrin | Precipitating agent | Yield (%) |
|---|---|---|
| α-CD | 1-decanol | 40 |
| β-CD | toluene | 50–60 |
| γ-CD | cyclohexadec-8-en-1-ol | 40–50 |
What is remarkable about the CDs is their ability to form host–guest inclusion complexes owing to their unique HLB (hydrophilic–lipophilic balance) characteristics. These are quite literally molecular-scale reactors that can do many things. In the context of biomedical applications, the focus of this discussion, the following are among their benefits:
• To increase aqueous solubility of drugs
• To increase chemical stability of drugs
• To enhance drug delivery to and through biological membranes
• To increase physical stability of drugs
• To convert liquid drugs to microcrystalline powders
• To prevent drug-drug and drug-excipient interactions
• To reduce local irritation after topical or oral administration
• To prevent drug absorption into skin or after oral administration
These nanoreactors have an uncanny capacity to encapsulate and/or do chemistry with a number of chemicals that are hydrophobic or of low polarity. Shown in Figure \(6\) is a cartoon that depicts the hydrolytic action of CD on a phosphate molecule.
Figure \(6\): A simplified representation of the encapsulating and chemical reactivity behavior of a CD macromolecule for phosphate hydrolysis. https://commons.wikimedia.org/wiki/F...ease_Mimic.png
CD can, by virtue of the variety of its sizes, accommodate relatively small to large guest molecules. Shown below in Figure 5-10 is a pictorial description of a 1:2 guest:host complex that engages in photophysical behavior.
The system engages in energy transfer from the peripheral boron-containing molecules to the encapsulated porphyrin molecules, which release a photon of light (645 nm) upon resonant energy transfer from the peripheral molecules.
Phase Solubility Diagrams
Phase solubility studies are carried out in aqueous systems at different temperatures to calculate stability (equilibrium) constants (Kc) and thermodynamic values for the formation of inclusion complexes. It is known that if phase solubility diagrams show the solubility of a guest molecule increasing linearly with the concentration of CD, then they can be considered AL-type phase diagrams [1], suggesting the formation of 1:1 complexes, which are the most common and best understood of these types of interactions. When the phase solubility data are collected at different temperatures, we can obtain valuable additional information such as the thermodynamic parameters for the formation of the complex.
The integrated form of the van’t Hoff equation allows for the calculation of the enthalpy (ΔH) and entropy (ΔS) changes from the variation of the stability constant with temperature [2]:
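The integrated relation referred to here has the standard form (assuming \(\Delta H\) and \(\Delta S\) are approximately constant over the temperature range studied):

\[ \ln K_c = -\frac{\Delta H}{RT} + \frac{\Delta S}{R} \]

so a plot of \(\ln K_c\) versus \(1/T\) yields \(-\Delta H/R\) as its slope and \(\Delta S/R\) as its intercept.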
The Kc can also be expressed as the following relationship:
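For the 1:1 complexation described below, the stability constant is simply the ratio of complexed to free species:

\[ K_c = K_{1:1} = \frac{[\mathrm{D{\cdot}CD}]}{[\mathrm{D}]\,[\mathrm{CD}]} \]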
that can be pictorially described as:
in which D (the substrate, or drug in this case) forms a complex with the CD. The above represents an AL-type complex, which is first order with respect to both the CD and the drug (D).
A-type phase-solubility profiles are those in which the solubility of the substrate (drug) increases with increasing ligand (cyclodextrin) concentration. When the complex is first order with respect to the ligand and first or higher order with respect to the substrate, AL-type phase-solubility profiles are obtained. If the complex is first order with respect to the substrate but second or higher order with respect to the ligand, AP-type profiles are obtained. AN-type phase-solubility profiles are difficult to interpret. B-type phase-solubility profiles indicate the formation of complexes with limited solubility in the aqueous medium. The water-soluble cyclodextrin derivatives form A-type phase-solubility profiles, while the less soluble cyclodextrins form B-type profiles. A number of drug/cyclodextrin pairs form inclusion complexes, but cyclodextrins also form non-inclusion complexes and aggregates that dissolve drugs through micelle-like structures. Phase-solubility profiles do not verify the formation of inclusion complexes; they only describe how increasing cyclodextrin concentration influences drug solubility. As we already discussed, the most common type of cyclodextrin complex is the 1:1 complex, in which one drug molecule (D) forms a complex with one cyclodextrin molecule (CD). In an AL-type phase-solubility diagram where the slope is less than unity, the stability constant (K1:1) of the complex can be calculated from the slope and the intrinsic solubility (S0) of the drug in the aqueous medium (i.e., the drug solubility when no cyclodextrin is present):
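The expression conventionally used for this calculation (a reconstruction consistent with the standard Higuchi–Connors treatment of AL-type diagrams) is:

\[ K_{1:1} = \frac{\text{slope}}{S_0\,(1 - \text{slope})} \]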
The value of K1:1 is typically in the range of 50 to 2000 M-1, with mean values of 129, 490, and 355 M-1 for α-, β-, and γ-cyclodextrin, respectively [3].
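These constants correspond to modest binding energies. As a rough illustration (taking a 1 M standard state and T = 298 K, values assumed here only for the estimate), a constant of K1:1 = 490 M-1 gives:
$\Delta G^{\circ} = -RT\ln K_{1:1} \approx -(8.314\ \mathrm{J\ mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln(490) \approx -15\ \mathrm{kJ\ mol^{-1}}$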
5.04: Conclusions and Review
Advances in drug development, such as high-throughput screening, have increased the number and variety of drug candidates whose clinical usefulness is principally limited by the following:
• Water insolubility;
• Chemical or physical instability;
• Local irritation after administration
Cyclodextrins alleviate many of these issues. As evidence, there are currently around 30 different cyclodextrin pharmaceutical products on the market that include different types of pills, capsules, parenteral solutions, nasal sprays, suppositories, eye drops, and skin products. Cyclodextrins tend to replace organic solvents, enhance oral bioavailability, reduce gastrointestinal irritation, and increase dermal availability. In addition, animal and human studies have demonstrated that cyclodextrins improve drug delivery.
Addition of cyclodextrins to existing formulations will seldom result in an acceptable outcome. We still lack deep knowledge of the physical factors involved in complex formation. Cyclodextrins form both inclusion and non-inclusion complexes in aqueous solutions. It has also been shown that cyclodextrins form aggregates that are able to solubilize drugs in a micellar-like fashion; their exact structures are not known, nor is the way they influence drug delivery.
Review Questions
1. What are ways that ion exchange columns can be improved to ensure maximum efficiency? In other words, what is the principle that governs their functionality?
2. Why is paper such a good medium for chemical reactions?
3. Does a practical limit exist for the variations of combinatorial chemical processes that can be studied?
4. Can you envision a way that CD systems can aggregate in human circulatory systems?
Further Reading
1. Cagniard de la Tour C. 1822 Exposé de quelques résultats obtenu par l’action combinée de la chaleur et de la compression sur certains liquides, tels que l’eau, l’alcool, l’éther sulfurique et l’essence de pétrole rectifiée. Ann. Chim. Phys. 21, 127– 132.
2. Andrews T. 1869 The Bakerian Lecture—On the continuity of the gaseous and liquid states of matter. Phil. Trans. R. Soc. Lond. 159, 575–590.
3. Licence P, Ke J, Sokolova M, Ross SK, Poliakoff M. 2003 Chemical reactions in supercritical carbon dioxide: from laboratory to commercial plant. Green Chem. 5, 99–104.
4. Tang SLY, Smith RL, Poliakoff M. 2005 Principles of green chemistry: PRODUCTIVELY. Green Chem. 7,761–762.
5. Lovelock KRJ, Villar-Garcia IJ, Maier F, Steinrück H-P, Licence P. 2010 Photoelectron spectroscopy of ionic liquid-based interfaces. Chem. Rev. 110, 5158–5190.
• 6.1: Introduction to Reactions
A budding green chemist such as yourself will investigate how a chemical will undergo one or more treatments to produce a needed product in a manner consistent with the dogmas of GC. Typically, any chemist is chiefly interested in the concepts of yield and specificity, namely, how efficiently is the conversion done based on a theoretical yield and how exactingly is it done (with respect to a series of products).
• 6.2: Reaction Concepts
Reaction types are significant to all fields of chemistry because they are what characterize any transformation from starting materials to products. In the pantheon of reactions, we tend to focus on functional group creation and atom transformation.
• 6.3: Reaction Design Concepts
Green chemistry is all about increasing the overall efficiency of a reaction in terms of numerous criteria. In general, we try to mimic biological reactions because they are among the most efficient, low energy, and atom conserving reactions at our disposal.
• 6.4: Conclusions and Review
06: Reaction Types Design and Efficiency
Introduction
“What has one voice, and is four-footed in the morning, two-footed in the afternoon, and three-footed at night?”
-The Sphinx (Oedipus)
In any reaction, a transformation from a starting material to a final product is the basis. In other words, a budding green chemist such as yourself will investigate how a chemical will undergo one or more treatments to produce a needed product in a manner consistent with the dogmas of GC. Typically, any chemist is chiefly interested in the concepts of yield and specificity, namely, how efficiently is the conversion done based on a theoretical yield and how exactingly is it done (with respect to a series of products). These latter terms have long been the “mantras” of organic, synthetic, and biochemical studies. The secret to any successful transformation is ensuring that we get high yield and maximum specificity no matter what else is generated. A successful chemical transformation for a green chemist, however, must go beyond these staid concepts and introduce the concepts of safety, waste, energy, and number of steps (which are reflected in the first three terms).
The history of chemistry has been one of neglecting such concepts because of the lack of environmental consciousness, low regard for safety, and lack of understanding of economics or life cycle. When we approach the riddle of doing chemistry, we have not done so with an eye to mimicking nature. We have always approached it anthropomorphically, that is to say, with the resolute conviction that we can do it better, but we fail.
6.02: Reaction Concepts
Overarching Concepts
Reaction types are significant to all fields of chemistry because they are what characterize any transformation from starting materials to products. In the pantheon of reactions, we tend to focus on functional group creation and atom transformation, both of which can be accomplished through the menu of reaction types shown below:
• Rearrangement (RAR)
• Addition
• Substitution
• Elimination
• Pericyclic reaction
• Ox/Red
• Reaction design
• Atom economy
The six (6) general reaction categories above (reaction design and atom economy, also listed, are efficiency concepts taken up later in this chapter) simplify the panoply of reactivity paradigms in the discipline of chemistry. We can place nearly every pertinent reaction within one or more of the listed categories. Indeed, even acid/base chemistry can fall under addition or elimination reactions based on the precise reactivity patterns. We will delve deeply into each type in the following sections.
Rearrangements (RARs)
A rearrangement reaction covers a broad swath of chemistry which is principally characterized by the rearrangement of the skeletal configuration of a molecular system to yield a structural isomer (a swapped-out form of the original structure while retaining the original atoms). Oftentimes, a substituent moves from one backbone atom to another backbone atom. In the classical organic chemistry example shown below, we witness the Wagner-Meerwein RAR.
A rearrangement is not well depicted by the discrete electron transfers shown by curly arrows. The actual mechanism of the alkyl group migration in Figure \(1\) involves a fluid transfer of the alkyl group along a bond that is not described by simple ionic bond-breaking and reformation. An explanation based on orbital interactions provides a better approximation, although it is possible to draw curved arrows for a sequence of discrete electron transfers that gives the same result as the RAR reaction.
There are other types of RARs besides the chemically induced one shown above: we can have thermal or photo-induced as well. In these latter cases, heat or light can act as the triggers to actuate the transformation.
Thermal RARs
The isomerization of unsubstituted azulene to naphthalene was the first reported, and hence the most studied, thermal transformation of an aromatic hydrocarbon. Many mechanisms have been suggested, yet none has been unequivocally confirmed.
Five mechanisms were proposed: reversible ring-closure (shown above), norcaradiene-vinylidene, diradical, methylene walk, and a spiran mechanism. The reversible ring-closure mechanism proved inadequate despite its scientific appeal and supporting evidence, and as a result multiple reaction pathways were deemed to occur simultaneously. This idea was widely accepted. It was thought that at high temperatures one mechanism would be energetically favored, although energetic studies displayed similar activation energies for all mechanisms [1].
Four mechanisms for the thermal isomerization are warranted: a dyotropic mechanism, a diradical mechanism, and two benzene ring contractions (a 1,2-carbon shift to a carbene preceding a 1,2-hydrogen shift, and a 1,2-hydrogen shift to a carbene followed by a 1,2-carbon shift). The dyotropic mechanism shows the 1,2-shift displayed in Figure \(2\).
Photoinduced RARs
The mechanisms of organic photochemical reactions are a treasure trove for photochemists and physicists because they provide tremendous insight into fundamental electronic behavior as a function of energy-matter interactions. The absorption of ultraviolet light by organic molecules may lead to a number of reactions. Over the last century, an immense number of photochemical reactions have been unearthed; it has been found that reactions that are unlikely in ground states become accessible in electronically excited-state configurations. One of the earliest photochemical studies was on the natural product santonin. Ciamician (1857-1922) observed that under sunlight exposure, santonin gave several photoproducts. The structure of santonin (molecule on left in Figure 6-3) was first described by Clemo and Hayworth in 1929; its initial photoproduct is lumisantonin. As depicted, the photoreaction involves movement of the C-3 carbonyl group to C-2, movement of the C-4 methyl to C-1, and inversion of the C-10 carbon.
Beckmann RAR
This reaction is named after the German chemist Ernst Otto Beckmann (1853–1923). It is representative of an acid-catalyzed rearrangement of an oxime to an amide (when the oximes are cyclic, they yield “lactams”).
Caprolactam, in this example, is a very important chemical because it is the feedstock in the production of Nylon 6, a material used extensively in the manufacture of textiles, clothes, carpets, cushions, bulletproof vests, etc. The rearrangement can be catalyzed by reagents such as acetic acid, hydrochloric acid, sulfuric acid, and acetic anhydride. Sulfuric acid is the most commonly used acid for commercial lactam production because it forms ammonium sulfate (a common agricultural fertilizer) as a by-product when neutralized with ammonia.
In general, rearrangements tend to waste no atoms because they are a type of isomerization. They are hence very atom economical and thus very efficient modes of chemical transformation. At this point, we will delve into a series of rearrangements to further illustrate the point.
Addition Reactions
This type of reaction occurs when, in general, two molecules covalently bind to form a larger one referred to as the adduct. These reactions are limited to chemical compounds that have multiple bonds, such as molecules with carbon–carbon double bonds (alkenes), triple bonds (alkynes), or hetero double bonds like carbonyl (C=O) or imine (C=N) groups. An addition reaction is typically the reverse of an elimination reaction. For example, the hydration of an alkene to an alcohol is the reverse of the dehydration of the alcohol, which yields the alkene. Electrophilic and nucleophilic additions are the main types of polar addition. Non-polar additions include free radical addition and cycloaddition.
In general, addition reactions are very common because they are simple, very powerful, robust, and preserve/conserve atom economy during the reaction sequence (think of the mathematical operation of addition – all numbers sum!). Thus, in general, no additional by-products are generated, allowing for a clean and straightforward reaction scheme that in the final analysis may require few to no purification steps.
Substitution Reactions
These types of reactions are also known as single displacement or single substitution reactions. One functional group is replaced by another; the reaction proceeds only in the favorable direction (otherwise the reverse reaction would be faster or the starting material more thermodynamically stable). Substitution reactions are classified as either electrophilic or nucleophilic depending upon the reagents. A good example of a substitution reaction is halogenation. When chlorine gas (Cl-Cl) is irradiated with the appropriate light energy, some of the molecules photolyze, each into two chlorine radicals (Cl·), which are highly reactive. In Figure 6-6, one of the radicals can rupture (homolytically) a C-H bond through abstraction of one of the equivalent hydrogen atoms of methane to form electrically neutral H-Cl. The other radical forms a covalent bond with the CH3· radical to form CH3Cl.
Elimination Reactions
An elimination reaction typically involves the removal of two substituents from a substrate in either a one-step or two-step mechanism. The one-step mechanism is the E2 reaction, and the two-step mechanism is the E1 reaction. The numbering scheme relates to the kinetics of the reaction, i.e., bimolecular and unimolecular, respectively.
Elimination reactions are important because they are among a very few set of useful reactions that generate unsaturation within organic molecules. They are the opposite of addition reactions and involve homolytic or heterolytic dissociation of molecular groups on adjacent (vicinal) atoms leading to increased bond order on these vicinal atoms.
For example, $RCH_2CH_2OH \rightarrow RCH=CH_2 + H_2O$. This latter reaction requires the expulsion of water to yield one degree of unsaturation. The presence of a leaving group is a prerequisite for these types of reactions.
In Figure \(8\), the opportunity to retain a hydrogen on the final structure is an unusual mechanistic phenomenon. The conversion relies on a strict orbitally conserved concerted mechanism in which the degree of unsaturation (terminal vinyl group) is generated in tandem with the abstraction of the proton by the lactonic ring oxygen. Such a sigmatropic shift would be over 3 atoms (hence, an unusual 1,3-hydride shift).
Pericyclic Reactions
A pericyclic reaction (Figure \(9\) ) is a reaction in which the transition state of the molecule has a cyclic geometry and progresses in a concerted manner. They are typically rearrangement reactions. The major classes of pericyclic reactions are electrocyclic, cycloaddition, sigmatropic, group transfer, cheletropic, and dyotropic, all of which are considered to be equilibrium processes.
These types of reactions are governed by Frontier Molecular Orbital (FMO) Theory. Frontier molecular orbitals are so called because these orbitals are at the “frontier,” or vanguard, of electron occupation. They are known classically as the highest-energy occupied (HOMO) and lowest-energy unoccupied (LUMO) molecular orbitals. The HOMO is associated with nucleophilic character (electron donation), whereas the LUMO is associated with the opposite, electrophilic character (electron acceptance). Chemical reactions and resonance are very successfully described by the overlap between a filled HOMO and an empty LUMO.
Acidity and basicity are terms used in FMO Theory very broadly. Acidity refers to a ligand, metal center, or orbital ability to accept electron density (from electron sources that include Brønsted bases). Basicity refers to a ligand, metal center, or orbital ability to donate electrons (to electron sinks that include Brønsted acids). The theory distinguishes among σ-acids, σ-bases, π-acids, and π-bases. The first two, σ-X, accept or donate electrons in a σ manner, aligned head on with another orbital, whereas the latter two, π-X, accept or donate electrons in a π manner, aligned side by side with another orbital.
A pericyclic reaction is typically unimolecular and follows first-order kinetics. As shown below, the bicyclic compound on the left is highly strained because of the 1,2-bridging and undergoes an electrocyclic RAR to relieve the steric stress associated with the heterocyclic cyclopropanoid moiety.
Diels-Alder
This is a [4+2] cycloaddition, as already mentioned, that occurs between a conjugated diene and a substituted alkene known as a dienophile to form a substituted cyclohexene. It was first described by Otto Diels and Kurt Alder in 1928, work that led to the Nobel Prize in Chemistry in 1950. It is a reliable method for forming 6-membered ring systems with good regio- and stereochemical control. The concept has been applied to other π-systems to provide the corresponding heterocycles (hetero-Diels–Alder reaction).
1,3-Dipolar Cycloaddition
This reaction occurs between a 1,3-dipole and a dipolarophile to form a five-membered ring (Figure \(10\)). The reaction is sometimes referred to as the Huisgen cycloaddition, which specifically describes the 1,3-dipolar cycloaddition between an organic azide and an alkyne to generate a 1,2,3-triazole. Currently, this reaction is an important route to the regio- and stereospecific synthesis of five-membered heterocycles.
Cope Rearrangement (RAR)
The Cope RAR is an extensively studied [3,3]-sigmatropic RAR of 1,5-dienes developed by Arthur Cope. For example, 3-methyl-1,5-hexadiene heated to 300 °C yields 1,5-heptadiene (see Figure \(11\)).
Oxidation/Reduction Reactions
The term “redox”, short for reduction–oxidation reaction, describes the complete set of chemical reactions in which electron transfers lead to changes in oxidation states. A redox reaction requires a reduction and a complementary oxidation process. The chemical species from which the electron is removed (the reductant) is oxidized, while the counter species to which it is added (the oxidant) is reduced:
• Oxidation is the loss of electrons or increase in oxidation state;
• Reduction is the gain of electrons or a decrease in oxidation state.
As an example, a complete redox cycle is shown for an electrochemical cell (Figure \(12\)).
In the example, zinc is oxidized from Zn0 to Zn2+, a net loss of two electrons per atom. The oxidized zinc dissolves in the anodic solution, and charge balance draws anions (likely two chloride anions) from the cathodic solution toward the anode compartment; in other words, the zinc cation is balanced by anions from the cathodic solution. Simultaneously, cupric cations (from copper sulfate) are reduced by the net influx of the two electrons. A redox reaction can occur relatively slowly, as in the case of rust, or more rapidly, as in fires.
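Assuming the cell depicted in Figure \(12\) is the familiar zinc–copper cell, the two half-reactions and the overall reaction can be written as:
$\text{Oxidation (anode): } Zn(s) \rightarrow Zn^{2+}(aq) + 2e^-$
$\text{Reduction (cathode): } Cu^{2+}(aq) + 2e^- \rightarrow Cu(s)$
$\text{Overall: } Zn(s) + Cu^{2+}(aq) \rightarrow Zn^{2+}(aq) + Cu(s)$
The electrons lost at the anode are exactly the electrons gained at the cathode, which is why the two half-reactions must always be paired.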
Reaction Design and Efficiency
Green chemistry is all about increasing the overall efficiency of a reaction in terms of numerous criteria. In general, we try to mimic biological reactions because they are among the most efficient, low energy, and atom conserving reactions at our disposal. In terms of the “perfect” chemical reaction, we are talking about:
• Selectivity: we want to make one thing and one thing only, to ensure that we don’t have contamination or other issues to contend with that require cleanup, purification, or multi-step pathways.
• Efficiency: This term refers to the overall energy, time, and material inputs relative to the final product required. How intense is the process needed to obtain a final product? Obviously, a more efficient process/reaction will require much smaller inputs. In an ideal scenario, a product forms spontaneously with very little input.
• Safety: Although we don’t talk about it much, the criterion of safety for a reaction or process must be paramount, with built-in redundancy, given “Murphy’s Law”. Green chemistry is truly a discipline framed in the overall theme of safe performance and operation for a process; otherwise it insists upon a no-go decision point.
• Solventless: Again, the overall inputs are minimized in any reaction, and this is just one of the many criteria for ensuring efficiency and lowest impact. To run a reaction solventless means allowing a self-reaction, or a bimolecular reaction with other reagents, to occur in the absence of a solvent. Although many biological reactions occur in a solvent, ideally it is best to avoid one if possible because of the many issues surrounding solvents: disposal, cleanup, concentration-dependent kinetics, and, in the case of water, its precious nature for food, drinking, washing, etc.
• Low or no energetics: Here we try to approach an ideal of energy conservation in which we reduce the need for high heat, high pressure, long times, etc. Ideally, we would use the energy already available to us without burning gas or coal, or drawing on electricity (power grids), etc.
• Chemical yield: In most standard descriptions of chemical reactions, we tend to emphasize the “yield” of a reaction as the chief parameter for assessing the reaction's efficiency or utility. A low-yield reaction would be undesirable because of the lack of return on the process. Chemical yield is defined as the moles of desired product obtained as a ratio to the theoretically possible moles, expressed as a percentage (formalized just below).
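Expressed as a formula, this definition reads:
$\%\ \text{yield} = \dfrac{\text{moles of desired product obtained}}{\text{moles of product theoretically possible}} \times 100\%$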
Let’s give an excellent intuitive example from the Khan Academy (www.khanacademy.org/science/...-percent-yield): Let’s assume you have five hot dogs and four hot dog buns. Obviously, you will have one hot dog in excess.
In terms of chemical reactions, the hot dog buns are the limiting reagent and the leftover single hot dog is the excess reagent; thus, four complete hot dogs are the theoretical yield, assuming the hot dogs and buns combine in a one-to-one ratio (you’ve got to balance the reaction first). In any chemical reaction, the limiting reagent is the reactant that determines how much of the products can be made. The other reactant(s) is/are in excess.
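To put numbers on it, suppose (a purely hypothetical figure for illustration) that only three complete hot dogs are actually assembled. The percent yield would then be:
$\%\ \text{yield} = \dfrac{3\ \text{hot dogs obtained}}{4\ \text{hot dogs theoretically possible}} \times 100\% = 75\%$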
How do we measure yield more efficiently?
Typically, chemical yield focuses on only one part of the reaction, i.e., our desired product based on the complete reaction of the limiting reagent. However, in many cases, the overall efficiency from a green chemistry perspective is compromised despite high chemical yields. Consider the “Gabriel Synthesis”:
The reaction transforms primary alkyl halides into primary amines. In Figure \(13\), only the latter half of the reaction/synthesis is shown! Traditionally, the reaction uses potassium phthalimide (an –NH2 synthon), which is alkylated by the alkyl halide; the primary amine is liberated in a subsequent step. In the Gabriel Synthesis, the phthalimide anion is employed as a surrogate of H2N−. The entirety of the reaction is shown in Figure \(14\):
Chemical yield methodologies do NOT account for the stream of by-products that amass during a reaction or set of reactions. In the above case, we have the following waste products: potassium halide (K-X), potassium chloride (KCl), and phthalhydrazide (see Figure \(13\) for the structure – the aromatic compound, second from right). Unbelievably, this array (especially the hydrazide) makes up a huge fraction of wasted atoms that do not contribute in any way to the efficiency of the reaction. Because of this incredible “atomic” waste, we have a reaction with a very poor “atom economy”.
Atom Economy
This is a concept that was first formalized by Trost.1 Atom economy (atom efficiency) is the conversion efficiency of a chemical process in terms of all atoms involved and the desired product(s). It is one of the most singularly powerful definitions for understanding the “greenness” of a reaction and its potential usefulness.
Atom economy can be written as:
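In its standard form (with each formula weight multiplied by its stoichiometric coefficient in the balanced equation), it reads:
$\%\ \text{atom economy} = \dfrac{\text{formula weight of the desired product}}{\sum \text{formula weights of all reactants}} \times 100\%$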
As we have already alluded to, in an ideal chemical process the amount of starting materials equals the amount of all products generated and no atom is wasted. When this is not the case, the waste is a genuine concern, both for raw materials that have a high cost and because of the economic and environmental costs of disposing of the waste.
In addition to high-yielding processes such as the Gabriel Synthesis that result in substantial by-products, we also have the Cannizzaro Reaction,2 where 50% of the reactant aldehyde becomes the other oxidation state of the target, and the Wittig reaction,3 where high-mass phosphorus reagents are used but become waste.
A Diels-Alder reaction is an example of a very atom-efficient reaction that also can be chemo-, regio-, diastereo- and enantioselective. Catalytic hydrogenation comes closest to being an ideal reaction and is extensively practiced both industrially and academically.4 Poor atom economy is common in drug (pharmaceutical) synthesis and in research. For example, during the synthesis of an alcohol by reduction of an ester with lithium aluminum hydride, the reaction produces a huge amount of aluminum salts. The cost can be considerable.
Experimental atom economy
In this derivative calculation of atom economy, we account for the actual quantities of reagents used: the theoretical yield (mass) of product is taken as a ratio to the total mass of reactants used, expressed as a percentage. Equivalently, we can calculate the mass of reactants incorporated into the desired product divided by the total mass of all reactants, as a percentage.
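Using the second of the two equivalent formulations above, this can be written as:
$\%\ \text{experimental atom economy} = \dfrac{\text{mass of reactants incorporated into the desired product}}{\text{total mass of all reactants used}} \times 100\%$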
Percentage yield x experimental atom economy
This is considered the ultimate measure of the efficiency of a reaction. In this case, you multiply the chemical yield of the product by the experimental atom economy to obtain a true measure of the atom efficiency of the reaction.
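One way to write this combined measure (the label “overall efficiency” is used here only for convenience; the division by 100 simply keeps the result on a 0–100% scale) is:
$\%\ \text{overall efficiency} = \dfrac{(\%\ \text{yield}) \times (\%\ \text{experimental atom economy})}{100}$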
In this final chapter, we became very familiar with the concepts of reaction efficiency and design from a green chemistry perspective. We more or less employed the totality of the green chemistry principles to understanding what goes into maximizing the environmental, social, and economic benefits associated with scientific enterprises. We thoroughly explored the nature of the various types of reactions done in both fundamental and applied settings and how to evaluate them using very useful mathematical constructs. This chapter serves to cement the concepts we learned in this magnificent discipline and to provide a platform for understanding how to best approach the chemistry, chemical engineering, and the biological transformations of reactions and processes.
Review Questions
1. Indicate by drawings what the addition products are for the following reaction and what type of reaction it is:
2. Please indicate what the products are of the reaction of 2-phenyl isopropyl iodide with sodium methoxide.
3. Does the non-salt product in (2) retain its stereochemical configuration? Why?
4. Please explain how the minor product is formed in the E2 reaction below and why it is minor?
5. Using the Diels-Alder Reaction shown below, draw the transition state that will lead to the adduct.
6. Using the redox diagram below, indicate why the process cannot go on forever
7. Define the following terms
a. atom economy
b. percent atom economy
c. percent yield
d. percent experimental atom economy
e. percent atom economy x experimental atom economy
8. Which of the terms in question 7 are only meaningful with experimental results? What experimental result is necessary to make these terms meaningful?
9. Consider the following four (4) reactions shown below:
a. Label each reaction as a substitution, elimination, addition or rearrangement.
b. Rewrite each reaction making sure that the reaction is balanced. Show all the reactant atoms that are incorporated into the desired product, and the atoms of the desired product in green, and all other atoms in brown.
c. Set up a table of atom economy whose headings are: formula, formula weight, number of utilized atoms, weight of utilized atoms, unutilized atoms, and weight of unutilized atoms.
d. Calculate the %-atom economy of each reaction
10. In the Further Reading section (#3), Roger Sheldon has developed a term very similar to the %-atom economy termed % atom utilization. The % atom utilization can be calculated according to the following equation: % Atom Utilization = (MW of desired product/MW of all products) X 100
a. Compare and contrast this with the %-atom economy
b. What concept that you learned in freshman chemistry makes the actual percentages calculated for the % atom utilization, and % atom economy equal (in most circumstances). Prove this by calculating the % atom utilization for each of the reactions in questions 9—13 and comparing your results to the %-atom economy that you calculated in question 9.
Further Reading
1. Williamson, Kenneth L., Macroscale and Microscale Organic Experiments, 2nd ed., D. C. Heath and Co., 1994, 247-252.
2. Trost, Barry M., The Atom Economy-A Search for Synthetic Efficiency. Science 1991, 254, 1471-1477.
3. Cann, Michael C.; Connelly, Marc E. Real-World Cases in Green Chemistry; ACS, Washington, 2000. Roger Sheldon of Delft University has developed a very similar concept called % atom utilization. Sheldon, Roger A. Organic Synthesis- Past, Present and Future. Chem. Ind. (London), 1992, (Dec), 903-906.
4. Anastas, Paul T., and Warner, John C. Green Chemistry Theory and Practice, Oxford University Press, New York, 1998.
5. Westheimer, T. The Award Goes to BHC. Chem. Eng. 1993, (Dec.), 84-95.
"Normal" rainfall is slightly acidic because of the presence of dissolved carbonic acid. Carbonic acid is the same as that found in soda pop.
• Acid Rain
The pH of "normal" rain has traditionally been given a value of 5.6. However scientists now believe that the pH of rain may vary from 5.6 to a low of 4.5 with the average value of 5.0. Acid rain or acid snow is a direct result of the method that the atmosphere cleans itself. The tiny droplets of water that make up clouds, continuously capture suspended solid particles and gases in the atmosphere. The gases of sulfur oxides and nitrogen oxides are converted into sulfuric and nitric acids.
• Acid Rain Transport
The reactions of sulfur oxides to form sulfuric acid are quite slow. Sulfur dioxide may remain airborne for 3-4 days. As a consequence, acid rain derived from sulfur oxides may travel for hundreds of miles or even a thousand miles. Nitrogen oxides may persist for only about half a day and therefore may travel only tens or hundreds of miles.
• Acid Snow
The impact of acid precipitation on aquatic ecosystems may be intensified by melting snow. When snow melts rapidly in the spring, the stream or lake may be "shocked" with an excessive amount of acid. In the spring, at the time of acid snow melting, the various aquatic organisms are reproducing and are the most sensitive to increases in acid.
• Electricity Generation
Electricity is produced at an electric power plant. Some fuel source, such as coal, oil, natural gas, or nuclear energy, produces heat, which is used to boil water to create steam. The steam under high pressure is then used to spin a turbine that interacts with a system of magnets to produce electricity. The electricity is transmitted as moving electrons through a series of wires to homes and businesses.
• Sources of Nitrogen Oxides
A natural source of nitrogen oxides occurs from a lightning stroke. The very high temperature in the vicinity of a lightning bolt causes the gases oxygen and nitrogen in the air to react to form nitric oxide.
• Sources of Sulfur Oxides
It has been estimated that on a global basis, natural sources, such as volcanoes, contribute about the same amount of sulfur oxides to the atmosphere as human industrial activities. This amounts to 75-100 million tons from each source per year. However, in industrial countries such as those in Europe and North America, human activities contribute 95 % of the sulfur oxides and natural sources only 5 %. In the Western States, natural sources of sulfur oxides may be more important.
Thumbnail: Acid rain and weathering. (Public Domain; Slick).
Acid Rain
"Normal" rainfall is slightly acidic because of the presence of dissolved carbonic acid. Carbonic acid is the same as that found in soda pop.
Introduction
The pH of "normal" rain has traditionally been given a value of 5.6. However scientists now believe that the pH of rain may vary from 5.6 to a low of 4.5 with the average value of 5.0. Acid rain or acid snow is a direct result of the method that the atmosphere cleans itself. The tiny droplets of water that make up clouds, continuously capture suspended solid particles and gases in the atmosphere. The gases of sulfur oxides and nitrogen oxides are chemically converted into sulfuric and nitric acids. The non-metal oxide gases react with water to produce acids (ammonia produces a base).
$SO_2 + HOH \rightarrow H_2SO_3 \label{1}$
$2 NO_2 + HOH \rightarrow HNO_2 + HNO_3 \label{2}$
When enough of the tiny cloud droplets clump together to form a larger water drop it may fall to the earth as "wet" acid precipitation including rain, snow, ice, sleet, or fog. See Acids and Bases for more details.
Formation
The sulfuric and nitric acids formed from gaseous pollutants can easily make their way into the tiny cloud water droplets. These sulfuric acid droplets are one component of the summertime haze in the eastern United States. Some sulfuric acid is formed directly in the water droplets from the reaction of sulfur dioxide and hydrogen peroxide. Some of these sulfuric acid particles drop to the earth as "dry" acid deposition. The tiny water droplets containing sulfuric acid provide a ready surface to attract more molecules of water to form a larger droplet of dilute sulfuric acid.
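The droplet-phase route mentioned above can be summarized by a simple overall equation (a simplified sketch that omits the intermediate steps):
$SO_2 + H_2O_2 \rightarrow H_2SO_4$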
Water droplets collected from the base of clouds in the Eastern U.S. during the summer have an average pH of 3.6, with some values as low as pH 2.6. The pH in the upper portion of a cloud is much higher. The final rain droplet has an average pH of 4.2 in the Northeast U.S. In Los Angeles, the pH of fog has been measured at 2.0 - about the acidity of lemon juice.
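Because the pH scale is logarithmic, these differences are larger than they may appear. Using the relation $[H^+] = 10^{-pH}$ and the values quoted above:
$\dfrac{[H^+]\ \text{at pH } 4.2}{[H^+]\ \text{at pH } 5.6} = \dfrac{10^{-4.2}}{10^{-5.6}} = 10^{1.4} \approx 25$
so rain at pH 4.2 carries roughly 25 times the hydrogen ion concentration of the traditional pH 5.6 reference.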
Contrary View
The benchmark of "natural rain" is 5.6. Acid precipitation in the range of 4.2-5.0 has been recorded in most of the Eastern United States and Canada. EPRI (Electric Power Research Institute) likes to compare these values to familiar objects to give the impression that these pH values are not harmful. Examples: Carrots = 5.0, Bananas = 4.6, Tomatoes = 4.2, Apples and soft drinks = 3.0, Lemon juice = 2.0. EPRI also contends that pH 5.6 may or may not be a valid reference point. It should not be considered the "background" or "natural" acidity of precipitation.
Even without man-made influences, there are natural sources of sulfur oxides, nitrogen oxides, and other species important to determining the precipitation acidity at any given time. Hence, trying to quantify man's contribution to the natural condition will never be possible, since the "natural background" condition cannot be known.
In the forest areas of Brazil at the headlands of the Amazon River, an area remote from civilization, the monthly average of 100 rain events in the 1960s ranged from pH 4.3 to pH 5.2, with the median value of pH 4.6 and one reading as low as pH 3.6. On the island of Hawaii, remote from all industrial activity, the weighted average of precipitation over a 4 year period was pH 5.3, with a minimum value of pH 3.8.
Outside Links
• www.epa.gov/acidrain/what/index.html
• Bretherick, Leslie. "Nitric acid, nitrates, and nitro compounds: Energetic servants, but overwhelming masters (SAFETY)." J. Chem. Educ. 1989, 66, A220.
• Driscoll, Jerry A. "Acid Rain Demonstration: The Formation of Nitrogen Oxides as a By-Product of High-Temperature Flames in Connection with Internal Combustion Engines." J. Chem. Educ. 1997 74 1424.
• Johns, Nicholas; Longstaff, Stephen J. "A convenient, low-cost method for determining sulfate in acid rain (F&R)." J. Chem. Educ. 1987, 64, 449.
The reactions of sulfur oxides to form sulfuric acid are quite slow. Sulfur dioxide may remain airborne for 3-4 days. As a consequence, acid rain derived from sulfur oxides may travel for hundreds of miles or even a thousand miles. Nitrogen oxides may persist for only about half a day and therefore may travel only tens or hundreds of miles.
Introduction
Once airborne, the sulfur and nitrogen oxides eventually come down in one form or another. Where they come down depends on the height of the smokestack and the prevailing weather conditions. In general, prevailing winds in North America transport pollutants from west to east or northeast. The nine largest coal burning states are in the Midwest and the Ohio River valley. It is estimated that two thirds of the acid rain in the Northeast and Eastern Canada comes from these sources.
The blue arrow shows the upper winds that travel from the west to the east or northeast; winds travel from the Midwest to the Northeast. In addition, a copper-nickel smelter in Sudbury, Ontario, just north of Lake Huron, is the most significant sulfur oxide source in Canada. The winds may also carry the sulfur oxide clouds to the Northeast in the U.S., where they may be converted to acid rain.
pH of Acid Rain over a Period of 50 years
These maps show how the areas of lower pH spread over the period 1955-1988. The darkest area is the lowest pH. Since the Clean Air Act Amendments of 1990, there have been significant decreases in the amount of sulfur oxides escaping from electric power plants. As a result, there has been a measurable reduction in the amount of acid rain, which translates into an increase in pH levels (higher pH means less acid).
Acid Snow
The impact of acid precipitation on aquatic ecosystems may be intensified by melting snow. When snow melts rapidly in the spring, the stream or lake may be "shocked" with an excessive amount of acid. In the spring, at the time of acid snow melting, the various aquatic organisms are reproducing and are the most sensitive to increases in acid.
Little Moose Lake
During the winter of 1976-77, there was no significant snow melt from mid-December through February at Little Moose Lake, New York. The maximum snow depth was 130 cm (top graph). The snow had an average pH of 4.4 in February (middle graph). The snow started to melt in early March (bottom graph).
Snow Thaw at Little Moose Lake
The first major thaw in early March resulted in the release of 80% of the stored acid in a one week period. During the first couple of days, the snow melt water pH was 3.4-3.6. The pH gradually rose over the eight day period to pH 5.0.
pH at Little Moose Lake
The pH levels in Little Moose Lake are normally about 7.0. During the snow melting in early March, the lake pH dropped to 6.0. An outlet stream from the lake reached a low pH of 4.8. A small brook nearby hit a low pH of 4.6 during the snow melt period. The average pH in this brook during the rest of the year is about 5.4. There was a definite relationship between the amount of aluminum in the brook and the pH: as the pH decreased, the aluminum concentration increased.
Captive populations of adult, one-year-old, and larval brook trout had been maintained over the winter in Little Moose Lake water without any problems. During the snow melt in March, 3 adult brook trout died, 25 one-year-old trout died, and about 50,000 recently hatched trout died. Various other abnormal behaviors were also observed.
Electricity is produced at an electric power plant. Some fuel source, such as coal, oil, natural gas, or nuclear energy, produces heat, which is used to boil water to create steam. The steam under high pressure is then used to spin a turbine that interacts with a system of magnets to produce electricity. The electricity is transmitted as moving electrons through a series of wires to homes and businesses.
Introduction
This is a typical electric power plant located in Shawville, Pennsylvania.
Notice the large pile of coal on the left side of the plant and the three smokestacks, each one taller than the previous. The tallest stack was built to cut down on local air pollution by emitting the sulfur oxides higher into the atmosphere. This has not proven to be a solution to the problem. As a result, the sulfur oxides now travel great distances before coming down in the form of acid rain.
Electric Power Plants
Electric Power Plants have a number of components in common and are an interesting study in the various forms and changes of energy necessary to produce electricity.
• Boiler Unit: Almost all power plants operate by heating water in a boiler unit into superheated steam at very high pressures. In fossil fuel plants, the heat comes from combustion of fuels such as coal, oil, or natural gas. Biomass or waste plant parts may also be used as a source of fuel. In some areas, solid waste incinerators are also used as a source of heat. All of these sources of fuel result in varying amounts of air pollution, as well as carbon dioxide (a gas implicated in global warming problems).
• Turbine-Generator: The superheated steam is used to spin the blades of a turbine, which in turn is used in the generator to turn a coil of wires within a circular arrangement of magnets. The rotating coil of wire in the magnets results in the generation of electricity.
• Cooling Water: After the steam travels through the turbine, it must be cooled and condensed back into liquid water to start the cycle over again. Cooling water can be obtained from a nearby river or lake. The water is returned to the body of water 10-20 °C higher in temperature than the intake water. An alternative method is to use a very tall cooling tower, where the evaporation of water falling through the tower provides the cooling effect.
• In a nuclear power plant, the fission chain reaction of splitting nuclei provides the source of heat.
Creating Electricity using a Generator
If a magnetic field can create a current, then we have a means of generating electricity. Experiments showed that a magnet just sitting next to a wire produced no current flow through that wire. However, if the magnet is moving, a current is induced in the wire. The faster the magnet moves, the greater the induced current. This is the principle behind simple electric generators, in which a wire loop is rotated between two stationary magnets. This produces a continuously varying voltage, which in turn produces an alternating current.
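This behavior is summarized by Faraday's law of induction: the induced voltage (electromotive force) is proportional to how rapidly the magnetic flux through the coil changes. For a coil of N turns,
$\varepsilon = -N\,\dfrac{d\Phi_B}{dt}$
where $\Phi_B$ is the magnetic flux through one turn. Spinning the loop faster changes the flux more rapidly and therefore produces a larger voltage.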
A diagram of a simple electric generator is shown above. To generate electricity, then, some (mechanical) mechanism is used to turn a crank that rotates a loop of wire between stationary magnets. The faster the crank turns, the more current is generated. In hydroelectric plants, falling water turns the turbine. The wind can also turn the turbine. In fossil fuel plants and nuclear plants, water is heated to steam, which turns the turbine.
Natural Sources - Lighting Bolts
A natural source of nitrogen oxides occurs from a lightning stroke. The very high temperature in the vicinity of a lightning bolt causes the gases oxygen and nitrogen in the air to react to form nitric oxide.
\[\ce{N2 + O2 -> 2NO}\]
The nitric oxide very quickly reacts with more oxygen to form nitrogen dioxide.
\[\ce{2NO + O2 -> 2NO2}\]
Both of the nitrogen compounds are known collectively as nitrogen oxides or \(\ce{NO_{x}}\).
Human Sources of Nitrogen Oxides
At normal temperatures the oxygen and nitrogen gases do not react together. In the presence of very high temperatures nitrogen and oxygen do react together to form nitric oxide. These conditions are found in the combustion of coal and oil at electric power plants, and also during the combustion of gasoline in automobiles. Both of these sources contribute about equally to the formation of nitrogen oxides.
In areas of high automobile traffic, such as in large cities, the amount of nitrogen oxides emitted into the atmosphere can be quite significant. In the Los Angeles area, the main source of acid rain is from automobiles. In certain national parks such as Yosemite and Sequoia, automobile traffic is banned to limit the amount of air pollution damage to the trees and plants. This also has the effect of reducing the visual smog in the air.
Outside Links
• Clyde, Dale D. "Dynamite Demo?" J. Chem. Educ. 1995, 72, 1130.
• Driscoll, Jerry A. "Acid Rain Demonstration: The Formation of Nitrogen Oxides as a By-Product of High-Temperature Flames in Connection with Internal Combustion Engines." J. Chem. Educ. 1997, 74, 1424.
• Foster, Natalie I.; Heindel, Ned D. "The discovery of nitroglycerine: Its preparation and therapeutic utility (SBS)." J. Chem. Educ. 1981, 58, 364.
• Leenson, I. A. "Approaching Equilibrium in the N2O4-NO2 System: A Common Mistake in Textbooks." J. Chem. Educ. 2000, 77, 1652.
Sources of Sulfur Oxides
It has been estimated that on a global basis, natural sources, such as volcanoes, contribute about the same amount of sulfur oxides to the atmosphere as human industrial activities. This amounts to 75-100 million tons from each source per year. However, in industrial countries such as those in Europe and North America, human activities contribute 95 % of the sulfur oxides and natural sources only 5 %. In the Western States, natural sources of sulfur oxides may be more important.
Human Sources of Sulfur Oxides
In 1980, emissions of sulfur dioxide totaled 24.1 million tons in the United States. Of this total 66 % came from electric power companies. Electric power companies that burn coal are a major source of sulfur oxides. Other industrial plants contributed about 22 %. Smelting of metals such as copper, zinc, lead, and nickel can produce large amounts of sulfur dioxide. In Canada, 45% of the emissions are from smelting operations, compared to only 6 % in the United States.
Coal contains mainly carbon with some hydrogen. When coal is burned it reacts with oxygen in the air to produce carbon dioxide and water and large amounts of heat.
$C + O_2 \rightarrow CO_2$
In addition, coal may contain from 1-4 % of the element, sulfur. When the coal is burned with oxygen in the air, the sulfur is reacted to form sulfur dioxide.
$S + O_2 \rightarrow SO_2$
Wood Smoke
In certain resort towns, a significant source of visible smog conditions results from the burning of large quantities of wood in fireplaces and stoves. The smoke contains solid particles which may provide the initial bit of solid or catalyst that initiates the reactions to produce sulfuric acid or nitric acid in the water droplets. This is a well-recognized problem in Aspen and Vail, Colorado. Steps are being taken to reduce the burning of wood.
Water, a natural occurring and abundant substance that exists in solid, liquid, and gas forms on the planet Earth, has attracted the attention of artists, engineers, poets, writers, philosophers, environmentalists, scientists, and politicians. Every aspect of life involves water as food, as a medium in which to live, or as the essential ingredient of life. The food-science aspects of water range from agriculture, aquaculture, biology, biochemistry, cookery, microbiology, nutrition, photosynthesis, power generation, to zoology. Even in the narrow sense of food technology, water is intimately involved in the production, washing, preparation, manufacture, cooling, drying, and hydration of food. Water is eaten, absorbed, transported, and utilized by cells. Facts and data about water are abundant and diverse. This article can only selectively present some fundamental characteristics of water molecules and their collective properties for readers when they ponder food science at the molecular level.
• 1. Acid-Base Chemistry of Natural Aquatic Systems
• 2. Carbonate Equilibria in Natural Waters
• 3. Redox Equilibria in Natural Waters
• 4. Solids in Contact With Natural Waters
• Fundamental Characteristics of Water
The physics and chemistry of water is the backbone of engineering and sciences. The basic data for the properties of pure water, which are found in the CRC Handbook of Chemistry and Physics (1), are useful for food scientists. However, water is a universal solvent, and natural waters contain dissolved substances present in the environment. All solutes in the dilute solutions modify the water properties.
• Natural Water
Water is the most important resource. Without water life is not possible. From a chemical point of view, water, H2O, is a pure compound, but in reality, you seldom drink, see, touch or use pure water. Water from various sources contains dissolved gases, minerals, organic and inorganic substances. This photograph of Guilin shows the beauty of natural water. The rain carved an interesting landscape out of the limestone in the area. Natural waters are often important parts of the wonders of the world.
• Water Biology
Since water supports life, living organisms also modify their environment, changing the nature of the water in which they live. Biology of water pollution lists the syllabus of a course, including a laboratory section. Water and biology interweave into an entangled maze waiting for explorers and curious minds.
• Water Chemistry
Water is an unusual compound with unique physical properties. As a result, it is the compound of life. Yet it is the most abundant compound in the biosphere of Earth. These properties are related to its electronic structure, bonding, and chemistry. However, due to its affinity for a variety of substances, ordinary water contains other substances. Few of us have used, seen, or tested pure water, which is the basis on which we discuss its chemistry.
• Water Physics
Chemical and physical properties of water are often discussed together. These properties are fundamentals of many disciplines such as hydrology, environmental studies, chemical engineering, environmental engineering, civil engineering etc. They are of interest to chemists and physicists of course.
• Water Treatment
Water treatment is a process of making water suitable for its application or returning its natural state. Thus, water treatment required before and after its application. The required treatment depends on the application. Water treatment involves science, engineering, business, and art. The treatment may include mechanical, physical, biological, and chemical methods. As with any technology, science is the foundation, and engineering makes sure that the technology works as designed.
Aquatic Chemistry
Natural waters contain a wide variety of solutes that act together to determine the pH, which typically ranges from 6 to 9. Some of the major processes that affect the acid-base balance of natural systems are:
• Contact with atmospheric carbon dioxide
• Input of acidic gases from volcanic and industrial emissions
• Contact with minerals, rocks, and clays
• Presence of buffer systems such as carbonate, phosphate, silicate, and borate
• Presence of acidic cations, such as $Fe(H_2O)_6^{3+}$
• Input and removal of $CO_2$ through respiration and photosynthesis
• Other biological processes, such as oxidation ($O_2 + 4H^+ + 4e^- \rightarrow 2H_2O$), nitrification, denitrification, and sulfate reduction.
In this chapter and also in the next one which deals specifically with the carbonate system, we will consider acid-base equilibria as they apply to natural waters. We will assume that you are already familiar with such fundamentals as the Arrhenius and Brønsted concepts of acids and bases and the pH scale. You should also have some familiarity with the concepts of free energy and activity. The treatment of equilibrium calculations will likely extend somewhat beyond what you encountered in your General Chemistry course, and considerable emphasis will be placed on graphical methods of estimating equilibrium concentrations of various species.
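As a preview of the carbonate system, dissolution of atmospheric CO2 sets up a series of coupled equilibria (the pKa values quoted are approximate, for 25 °C):
$CO_2(g) \rightleftharpoons CO_2(aq)$
$CO_2(aq) + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \quad (pK_{a1} \approx 6.3)$
$HCO_3^- \rightleftharpoons H^+ + CO_3^{2-} \quad (pK_{a2} \approx 10.3)$
Equilibria of this kind are what make even unpolluted rainwater mildly acidic, and they form the starting point for the graphical treatments described above.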
Contributors and Attributions
Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook
2. Carbonate Equilibria in Natural Waters
http://www.chem1.com/acad/webtext/pdf/c3carb.pdf
3. Redox Equilibria in Natural Waters
http://www.chem1.com/acad/webtext/pdf/c3redox.pdf
4. Solids in Contact With Natural Waters
http://www.chem1.com/acad/webtext/pdf/c3solids.pdf
I. Introduction
Water, a natural occurring and abundant substance that exists in solid, liquid, and gas forms on the planet Earth, has attracted the attention of artists, engineers, poets, writers, philosophers, environmentalists, scientists, and politicians. Every aspect of life involves water as food, as a medium in which to live, or as the essential ingredient of life. The food-science aspects of water range from agriculture, aquaculture, biology, biochemistry, cookery, microbiology, nutrition, photosynthesis, power generation, to zoology. Even in the narrow sense of food technology, water is intimately involved in the production, washing, preparation, manufacture, cooling, drying, and hydration of food. Water is eaten, absorbed, transported, and utilized by cells. Facts and data about water are abundant and diverse. This article can only selectively present some fundamental characteristics of water molecules and their collective properties for readers when they ponder food science at the molecular level.
The physics and chemistry of water is the backbone of engineering and sciences. The basic data for the properties of pure water, which are found in the CRC Handbook of Chemistry and Physics (1), are useful for food scientists. However, water is a universal solvent, and natural waters contain dissolved substances present in the environment. All solutes in the dilute solutions modify the water properties. Lang’s Handbook of Chemistry (2) gives solubilities of various gases and salts in water. Water usage in the food processing industry is briefly described in the Nalco Water Handbook (3). For water supplies and treatments, the Civil Engineering Handbook (4) provides practical guides. The Handbook of Drinking Water Quality (5) sets guidelines for waters used in food services and technologies. Wastewater from the food industry needs treatment, and the technology is usually dealt with in industrial chemistry (6). Most fresh food contains large amounts of water. Modifying the water content of foodstuffs to extend storage life and enhance quality is an important and widely used process (7).
A very broad view and deep insight on water can be found in “Water – A Matrix of Life” (8). Research leading to our present-day understanding of water has been reviewed in the series “Water – A Comprehensive Treatise” (9). The interaction of water with proteins (10, 11) is a topic in life science and food science. Water is the elixir of life and H2O is a biomolecule.
II. Water and Food Technology
Water is an essential component of food (12). Philosophical conjectures abound as to how Earth evolved to provide the mantle, crust, atmosphere, hydrosphere, and life. Debates continue, but some scientists believe that primitive forms of life began to form in water (13). Complicated life forms developed, and their numbers grew. Evolution produced anaerobic, aerobic and photosynthetic organisms. The existence of abundant life forms enabled parasites to appear and utilize plants and other organisms. From water all life began (14). Homo sapiens are integral parts of the environment, and constant exchange of water unites our internal space with the external.
The proper amount of water is also the key for sustaining and maintaining a healthy life. Water transports nutrients and metabolic products throughout the body to balance cell contents and requirements. Water maintains biological activities of proteins, nucleotides, and carbohydrates, and participates in hydrolyses, condensations, and chemical reactions that are vital for life (15). On average, an adult consumes 2 to 3 L of water: 1-2 L as fluid, 1 L ingested with food, and 0.3 L from metabolism. Water is excreted via the kidney, skin, lung, and anus (16). The amount of water passing through us in our lifetimes is staggering.
Aside from minute amounts of minerals, food consists of plant and animal parts. Water is required for cultivating, processing, manufacturing, washing, cooking and digesting food. During or after eating, a drink, which consists of mostly water, is a must to hydrate or digest the food. Furthermore, water is required in the metabolic process.
Cells and living organisms require, contain and maintain a balance of water. An imbalance of water due to freezing, dehydration, exercise, overheating, etc. leads to the death of cells and eventually the whole body. Dehydration kills far more quickly than starvation. In the human body, water provides a medium for the transportation, digestion and metabolism of food in addition to many other physiological functions such as body temperature regulation (17).
Two-thirds of the body mass is water, and in most soft tissues the water content can be as high as 99% (16). Water molecules interact intimately with biomolecules (9); water is part of us. The functions of water and biomolecules collectively manifest life. Water is also required for running households, making industrial goods, and generating electric power.
Water has shaped the landscape of Earth for billions of years, and it covers 70% of the Earth's surface. Yet, for food production and technology it is a precious commodity. Problems with water supply can lead to disaster (5). Few brave souls accept the challenge of staying in areas with little rainfall, and rainfall itself can be a blessing or a curse depending on its timing and amount. Praying for timely and bountiful rainfall used to be performed by emperors and politicians, but providing water for food challenges scientists and engineers today.
III. Water Molecules and Their Microscopic Properties
Plato hypothesized four primal substances: water, fire, earth, and air. His doctrine suggested that combinations and permutations of various amounts of these four primal substances produced all the materials of the world. Scholars followed this doctrine for 2000 years, until it could not explain experimental results. The search for fundamental substances led to the discovery of hydrogen, oxygen, nitrogen, etc., as chemical elements. Water is made up of hydrogen (H) and oxygen (O). Chemists use H2O as the universal symbol for water. The molecular formula, H2O, implies that a water molecule consists of two H atoms and one O atom. However, many people are confused by its other chemical names, such as hydrogen oxide, dihydrogen oxide, and dihydrogen monoxide.
A. Isotopic Composition of Water
The discoveries of electrons, radioactivity, protons, and neutrons implied the existence of isotopes. Natural isotopes for all elements have been identified. The three isotopes of hydrogen are protium (¹H), deuterium (D or ²H), and radioactive tritium (T or ³H), and the three stable oxygen isotopes are ¹⁶O, ¹⁷O, and ¹⁸O. The masses and abundances of these isotopes are given in Table 1. For radioactive isotopes, the half-lives are given.
Table 1. Isotopes of hydrogen and oxygen, and isotopic water molecules: molar mass (amu) and relative abundance (%, ppm, or trace) or half-life.

| Isotope | Molar mass (amu) | Abundance or half-life |
|---|---|---|
| ¹H | 1.007825 | 99.985% |
| ²H (D) | 2.0141018 | 0.015% |
| ³H (T) | 3.0160493 | half-life 12.33 years |
| ¹⁶O | 15.9949146 | 99.762% |
| ¹⁷O | 16.9991315 | 0.038% |
| ¹⁸O | 17.9991604 | 0.200% |

| Isotopic water molecule | Molar mass (amu) | Relative abundance |
|---|---|---|
| H₂¹⁶O | 18.010564 | 99.78% |
| H₂¹⁸O | 20.014810 | 0.20% |
| H₂¹⁷O | 19.014781 | 0.03% |
| HD¹⁶O | 19.016841 | 0.0149% |
| D₂¹⁶O | 20.023118 | 0.022 ppm |
| HT¹⁶O | 20.018789 | trace |
Random combination of these isotopes gives rise to the various isotopic water molecules, the most abundant one being ¹H₂¹⁶O (99.78%; its mass is 18.010564 atomic mass units (amu)). Water molecules with molecular masses of about 19 and 20 are present at fractions of a percent. Although HD¹⁶O (0.0149%) is much more abundant than D₂¹⁶O (heavy water, 0.022 parts per million), D2O can be concentrated and extracted from water. In the extraction process, HDO molecules are converted to D2O by isotopic exchange. Rather pure heavy water (D2O) is produced on an industrial scale, especially for its application in nuclear technology, which provides energy for the food industry.
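As a quick cross-check of the molar masses quoted above, the mass of each isotopic water species is simply the sum of the atomic masses of its constituent isotopes from Table 1. The short Python sketch below (the variable names are my own, not from the source) reproduces the arithmetic.

```python
# Molar masses (amu) of isotopic water species, computed by summing
# the isotope masses listed in Table 1.
isotope_mass = {
    "1H": 1.007825, "2H": 2.0141018, "3H": 3.0160493,
    "16O": 15.9949146, "17O": 16.9991315, "18O": 17.9991604,
}

species = {
    "H2-16O": ["1H", "1H", "16O"],
    "H2-18O": ["1H", "1H", "18O"],
    "H2-17O": ["1H", "1H", "17O"],
    "HD-16O": ["1H", "2H", "16O"],
    "D2-16O": ["2H", "2H", "16O"],
    "HT-16O": ["1H", "3H", "16O"],
}

for name, atoms in species.items():
    mass = sum(isotope_mass[a] for a in atoms)
    print(f"{name}: {mass:.5f} amu")   # e.g. H2-16O: 18.01056 amu
```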
A typical mass spectrum of water shows peaks only at mass-to-charge ratios of 18 and 17, corresponding to H2O+ and OH+ ions in the gas phase. Other species are too weak for detection, partly due to condensation of water inside the mass spectrometer.
The isotopic composition of water depends on its source and age, and its study is linked to other sciences (18). For the isotopic analysis of hydrogen in water, the hydrogen is converted to hydrogen gas and the mass spectrum of the gas is analyzed. For isotopes of oxygen, the oxygen in H2O is usually allowed to exchange with CO2, and the isotopes of the CO2 are then analyzed. These analyses are performed on archeological food remains and unusual food samples in order to learn their origin, age, and history.
B. Structure and Bonding of Water Molecules
Chemical bonding is the force that binds atoms into a molecule. Thus, chemists use H-O-H or HOH to represent the bonding in water. Spectroscopic studies have revealed the H–O–H bond angle to be 104.5° and the H–O bond length to be 96 picometers (pm) for gaseous H2O molecules (19). For the solid and liquid, the values depend on the temperature and state of the water. Bond lengths and bond angles are fundamental properties of a molecule; however, due to the vibrational and rotational motions of the molecule, the measured values are average or equilibrium bond lengths and angles.
The mean van der Waals diameter of water has been reported as nearly identical with that of isoelectronic neon (282 pm). Some schematic models of the water molecule are shown in Fig. 1.
An isolated water molecule is hardly static. It constantly undergoes vibrational motion that can be a combination of any or all of the three principal modes: symmetric stretching, asymmetric stretching, and bending (or deformation). These vibration modes are indicated in Fig. 2.
Absorption of light (photons) excites water molecules to higher energy levels. Absorption of photons in the infrared (IR) region excites the vibrational motion. Photons exciting the symmetric stretching, bending, and asymmetric stretching modes to the next higher energy level have wave numbers of 3656, 1594, and 3756 cm–1, respectively, for H2O (20). These values, and those for the other water molecules involving only ¹⁶O, are given in Table 2.
Table 2. Absorption frequencies (cm–1) of H2O, HDO, and D2O molecules for the excitation of the fundamental vibration modes to the next higher energy level.

| Vibration mode | H2O | HDO | D2O |
|---|---|---|---|
| Symmetric stretching | 3656 | 2726 | 2671 |
| Bending | 1594 | 1420 | 1178 |
| Asymmetric stretching | 3756 | 3703 | 2788 |
The spectrum of water depends on temperature and density of the gaseous H2O. A typical IR spectrum for the excitation of only the fundamental vibration modes consists of three peaks around 1594, 3656 and 3756 cm–1. Additional peaks due to excitation to mixed modes appear at higher wave numbers.
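To relate these wavenumbers to more familiar quantities, recall that a wavenumber of ν cm–1 corresponds to a wavelength of 1/ν cm and a molar photon energy of NA·h·c·ν. A minimal sketch using standard physical constants; this calculation is an illustration, not part of the original article:

```python
# Convert IR wavenumbers (cm^-1) to wavelength and molar photon energy.
h = 6.62607e-34      # Planck constant, J s
c = 2.99792458e10    # speed of light, cm/s
NA = 6.02214e23      # Avogadro constant, mol^-1

for wavenumber in (1594, 3656, 3756):        # fundamental modes of H2O, cm^-1
    wavelength_um = 1.0e4 / wavenumber       # 1 cm = 1e4 micrometers
    energy_kj_per_mol = h * c * wavenumber * NA / 1000.0
    print(f"{wavenumber} cm^-1 -> {wavelength_um:.2f} um, "
          f"{energy_kj_per_mol:.1f} kJ/mol of photons")
# The 3656 cm^-1 mode corresponds to roughly 2.7 um and ~44 kJ/mol.
```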
Rotating the H2O molecule by 180° (360°/2) around the line bisecting the HOH angle gives an identical configuration. Thus, the molecule has a 2-fold rotation axis. There are two mirror planes of symmetry as well. The 2-fold rotation axis and the mirror planes give the water molecule the symmetry point group C2v.
Rutherford's alpha-scattering experiment in 1909 showed that almost all of an atom's mass is concentrated in a very small atomic nucleus. In a neutral atom the number of protons in the nucleus is the same as the number of electrons around the nucleus. A proton and an electron carry charges of the same magnitude but opposite sign. Electrons occupy nearly all of the atomic volume, because the radius of an atom is about 100,000 times that of the nucleus.
Electrons, in the quantum mechanical view, are waves confined in atoms, and they exist in several energy states called orbitals. Electrons in atoms and molecules do not have fixed locations or orbits. The electron states of an element are called its electronic configuration; the configurations of H and O are 1s¹ and 1s²2s²2p⁴, respectively, where the superscripts indicate the number of electrons in the 1s, 2s, or 2p orbitals. The electronic configuration of the inert gas helium (He) is 1s², and 1s² is a stable core of electrons. The bonding or valence electrons are 1s¹ for H and 2s²2p⁴ for O.
The valence bond approach blends the one 2s and three 2p orbitals of oxygen into four hybrid orbitals, two of which accommodate the two lone electron pairs. The other two orbitals have only one electron each, and they accommodate the electrons of the H atoms bonded to O, thus forming the two H–O bonds. An electron pair around each H atom and four electron pairs around the O atom provide stable electronic configurations for H and O, respectively. The Lewis dot structure, Fig. 3, represents this simple view. The two bonding pairs and two lone pairs are asymmetrically distributed, with their major portions pointing to the vertices of a slightly distorted tetrahedron in 3-dimensional space. The two lone pairs mark slightly negative sites and the two H atoms are slightly positive. This charge distribution around a water molecule is very important for its microscopic and macroscopic, chemical and physical properties described later. Of course, the study of water continues and so does the evolution of bonding theories. Moreover, the distribution of electrons in a single water molecule is different from that in dimers, clusters, and bulk water.
The asymmetric distribution of H atoms and electrons around the O atom results in positive and negative sites in the water molecule. Thus, water consists of polar molecules.
The dipole moment, μ, is a measure of polarity and a useful concept. A pair of opposite charges, q, separated by a distance, d, has a dipole moment of μ = qd, with the direction pointing towards the positive charge as shown in Fig. 4.
The dipole moment of an individual water molecule is 6.187 × 10–30 C m (or 1.855 D) (21). This quantity is the vector resultant of the two dipole moments of the O–H bonds. The H–O–H bond angle of water is 104.5°, so the dipole moment of an O–H bond is 5.053 × 10–30 C m. The bond length between H and O is about 0.10 nm, and the partial charge at the O and the H is therefore q = 5.053 × 10–20 C, 32% of the charge of an electron (1.6022 × 10–19 C). Alternatively, the bond dipole moment may be considered as an electron charge and a positive charge separated by a distance of 0.031 nm.
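The partial-charge estimate above is simple arithmetic and can be reproduced directly. A minimal sketch, assuming the gas-phase values quoted in the text (dipole moment 6.187 × 10–30 C m, bond angle 104.5°, and the rounded 0.10 nm bond length used there):

```python
import math

mu_molecule = 6.187e-30      # dipole moment of an H2O molecule, C m
angle = math.radians(104.5)  # H-O-H bond angle
bond_length = 1.0e-10        # O-H distance used in the text (~0.10 nm), m

# The two O-H bond dipoles add vectorially along the bisector of the angle.
mu_bond = mu_molecule / (2 * math.cos(angle / 2))
q = mu_bond / bond_length                 # partial charge on H (and on O)
fraction_of_e = q / 1.6022e-19

print(f"bond dipole    = {mu_bond:.3e} C m")             # ~5.05e-30 C m
print(f"partial charge = {q:.2e} C ({fraction_of_e:.0%} of e)")
```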
It should be pointed out that the dipole moments of liquid and solid water appear to be higher due to the influence of neighboring molecules. For the liquid and solid, macroscopic properties need to be considered.
C. Hydrogen Bonds
Attraction among water molecules is more than polar-polar in nature. The O atoms are small and very electronegative. As a result, the positive H atoms (protons) are strongly attracted to the negative O atoms of neighboring molecules. This strong O···H–O attraction is called a hydrogen bond, a concept popularized by L. Pauling (22). Furthermore, hydrogen atoms bonded to atoms of N and F, neighboring elements of O in the periodic table, are also positive, and they form hydrogen bonds with atoms of N, O, or F. The strength of hydrogen bonds depends on the X–H···Y distances and angles (X and Y being N, O, or F atoms); the shorter the distance, the stronger the hydrogen bond.
When two isolated water molecules approach each other, a dimer is formed due to hydrogen bonding. The dimer may have one or two hydrogen bonds. Dimers exist in gaseous and liquid water. When more water molecules are in close proximity, they form trimers, tetramers, and clusters. Hydrogen bonds are not static; they exchange protons and partners constantly. Hydrogen bonding is a prominent feature in the structures of the various solid phases of water, usually called ice, as we shall see later.
Water molecules not only form hydrogen bonds among themselves, they also form hydrogen bonds with any molecule that contains N–H, O–H, or F–H bonds. Foodstuffs such as starch, cellulose, sugars, proteins, DNA, and alkaloids contain N–H and O–H groups, and these are both H-donors and H-acceptors of hydrogen bonds of the type N···H–O, O···H–N, N···H–N, etc. A dimer depicting the hydrogen bond and the van der Waals spheres of two molecules is shown in Fig. 6 (23).
Carbohydrates (starch, cellulose, and sugars) contain H–C–O–H groups. The O–H groups are similar to those of water molecules, and they are H-acceptors and H-donors for hydrogen bonds. Proteins contain O–H, R–NH2, or R2NH groups, and the O–H and N–H groups are both H-donors and H-acceptors for the formation of hydrogen bonds. Thus, water molecules interact intimately with carbohydrates and proteins.
IV. Macroscopic Properties of Water
Collectively, water molecules exist as gas, liquid, or solid depending on the temperature and pressure. These phases of water exhibit collective or macroscopic properties such as phase transitions, crystal structures, liquid structures, vapor pressures, and volume-pressure relationships of vapor. In addition, energies or enthalpies for melting, vaporization and heating are also important for applications in food technology.
Thermodynamic constants for phase transitions given in Table 3 are those of pure water. Natural waters, of course, contain dissolved air, carbon dioxide, organic substances, microorganisms, and minerals. Water in food or used during food processing usually contains various organic and inorganic substances. These solutes modify the properties of water and caution should be taken to ensure proper values are applied in food technology.
The triple point of water is defined as 273.16 K (kelvin) in the SI system, and the normal boiling point is approximately 373.15 K. Temperature differences therefore have the same magnitude whether expressed in K or in °C.
Water has many unusual properties due to its ability to form hydrogen bonds and its large dipole moment. As a result, the melting, boiling and critical points for water are very high compared to substances of similar molar masses. In general, the higher the molar masses, the higher are the melting and boiling points of the material. Associated with these properties are its very large heat of melting, heat capacity, heat of vaporization and heat of sublimation. Moreover, its surface tension and viscosity are also very large. Thermodynamic energies, and volume changes for phase transitions of H2O are summarized in Table 3. These data are mostly taken from the Encyclopedia of Chemical Technology, Vol. 25 (1991) (24).
A. Crystal Structures and Properties of Ice
Hydrogen bonding is prominent in the crystal structures of the various solid phases of H2O. The triple point of water is at 273.16 K and 4.58 torr (611 Pa). The melting point at 1.00 atm (760 torr or 101.325 kPa) is 273.15 K (0 °C). When water freezes at these temperatures and at atmospheric pressure or lower, the solid consists of hexagonal ice crystals, usually designated Ih. Properties of Ih are given in Table 4. Snowflakes have many shapes because their growth habit depends on temperature and vapor pressure, but they all exhibit hexagonal symmetry, due to the hexagonal structure of ice (25).
However, from a geometric point of view, the same bonding may also be arranged with cubic symmetry. The existence of cubic ice has been confirmed. When water vapor deposits onto a very cold (130-150 K) surface, or when small droplets are cooled under low pressure at high altitude, the ice has cubic symmetry and is usually designated Ic. At still higher pressures, different crystal forms designated ice II, III, IV, etc., up to 13 phases of cubic, hexagonal, tetragonal, monoclinic, and orthorhombic symmetry, have been identified (26). The polymorphism of solid water is very complicated. Some of these ice forms are made under very high pressures, and under such pressures water crystallizes into a solid at temperatures above the normal melting or even boiling temperature. Ice VII is formed above 10 GPa (gigapascals) at 700 K (26).
When liquid water is frozen rapidly, the molecules have little chance of arranging into crystalline ice. The frozen liquid is called amorphous ice or glassy ice.
The basic relationships between nearest-neighboring water molecules are the same in both Ih and Ic. All O atoms are bonded to four other O atoms by hydrogen bonds, which extend from an oxygen atom towards the vertices of a tetrahedron. A sketch of the crystal structure of hexagonal Ih is shown in Fig. 7 (27). In Ih, the hydrogen positions are somewhat random due to thermal motion, disorder, and exchange. For example, a hydrogen may shift between locations to form H3O+ and OH– ions dynamically throughout the structure. In this structure, the bond angles or hydrogen-bond angles around the oxygen atoms are those of the idealized tetrahedral arrangement, 109.5°, rather than the 104.5° observed for isolated molecules. Formation of the hydrogen bond in ice lengthens the O–H bond distance to 100 pm, compared to 96 pm in a single water molecule. The diagram illustrates a crystal structure that is completely hydrogen bonded, except for the molecules at the surface.
Each O atom of hexagonal ice Ih is surrounded by four almost linear O–H···O hydrogen bonds of length 275 pm, in a tetrahedral fashion. Each C atom of cubic diamond is likewise surrounded by four C–C covalent bonds of length 154 pm. Thus, tetrahedral coordination can be either cubic or hexagonal from a geometrical viewpoint. Indeed, the uncommon cubic ice and hexagonal diamond have both been observed, giving a close relationship between ice and diamond in terms of the spatial arrangement of atoms (26). Strong hydrogen bonds make ice hard but brittle. The structure is related to its physical properties, which vary with temperature.
The pressure of H2O vapor in equilibrium with ice is called the vapor pressure of ice; it decreases as the temperature decreases. At 0 °C the vapor pressure of ice is 611.15 Pa; at the triple point (0.01 °C) it rises to 611.657 Pa, where it equals the vapor pressure of liquid water. The vapor pressures of ice between 0 °C and -40 °C are listed in Table 5 at 1 °C intervals. Various models can be used to estimate the vapor pressure at other temperatures. One method uses the Clausius-Clapeyron differential equation
dp/dT = ΔH / (T ΔV)
where p is the pressure, T is the temperature (K), ΔH is the latent heat or enthalpy of the phase transition, and ΔV is the difference in volume between the phases. The enthalpy of sublimation of ice depends on the temperature. At the freezing point, the enthalpy of sublimation of ice is about 51 kJ mol–1 (51.06 in Table 3), as can be estimated from the vapor pressures at 0 and -1 °C. The enthalpy of sublimation is required to overcome hydrogen bonding, dipole, and other intermolecular attractions. The energy required in freeze-drying processes varies, depending on temperature and other conditions. Water in solutions and in food freezes below 0 °C.
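As an illustration of that estimate, the integrated form of the Clausius-Clapeyron equation, ln(p2/p1) = –(ΔHsub/R)(1/T2 – 1/T1), can be inverted to extract the enthalpy of sublimation from two vapor-pressure points. The sketch below uses 611.15 Pa at 0 °C from the text and an approximate literature value of about 562.7 Pa at -1 °C standing in for the Table 5 entry (that second number is my assumption):

```python
import math

R = 8.3145                 # gas constant, J/(mol K)
T1, p1 = 273.15, 611.15    # vapor pressure of ice at 0 degC, Pa
T2, p2 = 272.15, 562.7     # vapor pressure of ice at -1 degC, Pa (approximate)

# Integrated Clausius-Clapeyron equation, assuming an ideal vapor and a
# temperature-independent enthalpy over this 1 K interval.
dH_sub = -R * math.log(p2 / p1) / (1.0 / T2 - 1.0 / T1)
print(f"Estimated enthalpy of sublimation: {dH_sub / 1000:.1f} kJ/mol")  # ~51 kJ/mol
```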
The number of hydrogen bonds is twice the number of water molecules, when surface molecules are ignored. The energy required to separate the water molecules of the solid is the enthalpy of sublimation (about 51 kJ mol–1). Half of this value, roughly 26 kJ mol–1, is the energy per mole of O···H linkages, which translates into about 0.26 eV per O···H bond. These values are close to those obtained by other means (25, 26, 28, 29, 30). Several factors contribute to this linkage, and the hydrogen-bond energy itself is less than 0.26 eV.
B. Properties of Liquid Water
The macroscopic physical properties of this common but eccentric fluid at 298 K (25 °C) are given in Table 6. Water has unusually high melting and boiling points for a substance with a molar mass of only 18 daltons. Strong hydrogen bonds and high polarity account for this.
The heat of formation is the energy released when a mole of hydrogen and half a mole of oxygen at 298 K and 1.00 atm react to give one mole of water at 298 K. This value differs from that for ice in Table 4 due to both temperature and phase differences. As temperature increases, the average kinetic energy of the molecules increases, and this affects water's physical properties. For example, the surface tension of water decreases, whereas the thermal conductance increases, as the temperature increases. The heat capacity at constant pressure (Cp), vapor pressure, viscosity, thermal conductance, dielectric constant, and surface tension in the temperature range 273-373 K (0-100 °C) are given in Table 7.
Table 7. Properties of liquid water in the range 273-373 K (0-100 °C)

| Temp. t (°C) | Heat capacity Cp (J g–1 K–1) | Viscosity (mPa s) | Thermal conductance (mW K–1 m–1) | Dielectric constant | Surface tension (mN m–1) |
|---|---|---|---|---|---|
| 0 | 4.2176 | 1.793 | 561.0 | 87.90 | 75.64 |
| 10 | 4.1921 | 1.307 | 580.0 | 83.96 | 74.23 |
| 20 | 4.1818 | 1.002 | 598.4 | 80.20 | 72.75 |
| 30 | 4.1784 | 0.797 | 615.4 | 76.60 | 71.20 |
| 40 | 4.1785 | 0.653 | 630.5 | 73.17 | 69.60 |
| 50 | 4.1806 | 0.547 | 643.5 | 69.88 | 67.94 |
| 60 | 4.1843 | 0.466 | 654.3 | 66.73 | 66.24 |
| 70 | 4.1895 | 0.404 | 663.1 | 63.73 | 64.47 |
| 80 | 4.1963 | 0.354 | 670.0 | 60.86 | 62.67 |
| 90 | 4.2050 | 0.315 | 675.3 | 58.12 | 60.82 |
| 100 | 4.2159 | 0.282 | 679.1 | 55.51 | 58.91 |

More detailed data can be found in the CRC Handbook of Chemistry and Physics (1).
Liquid water has one of the largest heat capacities per unit mass of any substance. Large quantities of energy are absorbed or released when its temperature changes. The large heat capacity makes water an excellent reservoir and transporter of energy, and a large body of water moderates the climate. The heat capacity Cp of water stays between 4.1 and 4.2 J g–1 K–1 (74 to 76 J mol–1 K–1) even at temperatures above 100 °C under high pressure. The enthalpy of vaporization of water is also very large (about 44 kJ mol–1 at 298 K). Thus, energy consumption is high in food processing whenever water must be heated or evaporated.
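A rough feel for these numbers: heating 1 kg of water from 25 °C to the boiling point and then evaporating it, as happens when a food is concentrated or dried, takes on the order of 2.6 MJ. A back-of-the-envelope sketch, using an average heat capacity from Table 7 and a latent heat of vaporization of about 2256 kJ per kg at 100 °C:

```python
mass = 1.0            # kg of water
cp = 4.19             # average heat capacity of liquid water, kJ/(kg K)
latent_heat = 2256.0  # enthalpy of vaporization at 100 degC, kJ/kg (~40.7 kJ/mol)

q_heating = mass * cp * (100.0 - 25.0)   # sensible heat, kJ
q_evaporation = mass * latent_heat       # latent heat, kJ
total = q_heating + q_evaporation
print(f"Heating: {q_heating:.0f} kJ, evaporation: {q_evaporation:.0f} kJ, "
      f"total: {total:.0f} kJ (about {total / 1000:.1f} MJ per kg of water)")
```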
Water and aqueous solutions containing only low molar-mass solutes are typical Newtonian fluids for which the shear stress is proportional to shear strain rate. Viscosity is the ratio of shear stress to shear strain rate. On the other hand, viscosity of solutions containing high molar-mass substances depends on shear strain rate. For pure water, the viscosity decreases from 1.793 to 0.282 mPa s (millipascal seconds; identical to centipoise (cp)) as temperature increases from 0 to 100o C. Thus, the flow rate through pipes increases as water or solution temperature increases.
The dielectric constant of water is very large, and this enables water to separate the ions of electrolytes, because it reduces the electrostatic attraction between positive and negative ions. Many salts therefore dissolve in water. When an electric field is applied to water, its dipolar molecules orient themselves to decrease the field strength, which is why the dielectric constant is so large. The dielectric constant decreases as temperature increases, because the percentage of molecules involved in hydrogen bonding and the degree of order decrease (28, 29). The measured dielectric constant also depends on the frequency of the applied electric field used in the measurement, but the variation is small when the frequency is less than 100 MHz. The dielectric behavior of water allows water vapor pressure to be sensed through the change in capacitance when moisture is absorbed by a substance placed between the plates of a capacitor. Such sensors have been developed for water activity measurement (31).
The light absorption coefficients are high in the infrared and ultraviolet regions, but very low in the visible region. Thus, water is transparent to human vision.
The variation of vapor pressure as a function of temperature is the basis for defining the water activity of food. Liquid water exists between the triple-point and critical-point temperatures (0-373.98 °C) at pressures above the vapor pressure in this range.
As with ice, the vapor pressure of liquid water increases as the temperature increases. Vapor pressures of water (in kPa, instead of the Pa used for ice in Table 5) between the triple and critical points are given in Table 8 at 10 °C intervals. When the vapor pressure is 1.00 atm (101.32 kPa), the temperature is the boiling point (100 °C); at about 121 °C, the vapor pressure reaches 2.00 atm. The critical pressure at the critical temperature, 373.98 °C, is 217.67 atm (22,055 kPa). Above this temperature water cannot be liquefied, and the phase is called supercritical water.
The partial pressure of H2O in the air at any temperature is a measure of the absolute humidity. When the air is saturated with water vapor, the relative humidity is 100%. The actual vapor pressure divided by the saturation vapor pressure of water at the temperature of the air (Table 8) is the relative humidity. The temperature at which the water vapor in the air becomes saturated is the dew point, at which dew begins to form; when the dew point is below 273 K (0 °C), ice crystals (frost) form instead. Thus, the relative humidity can be measured by finding the dew point: dividing the vapor pressure at the dew point by the vapor pressure of water at the temperature of the air gives the relative humidity. The transformations between solid, liquid, and gaseous water play important roles in hydrology and in the transformation of the environment on Earth. Phase transitions of water, combined with the energy from the sun, make the weather.
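Relative humidity can therefore be computed from a dew-point reading and the air temperature once the saturation vapor pressure curve is known. The sketch below uses a Magnus-type approximation to the saturation vapor pressure of liquid water in place of a Table 8 lookup; the coefficients (0.6112 kPa, 17.62, 243.12 °C) are a commonly used parameterization, not values taken from this article:

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """Approximate saturation vapor pressure over liquid water, in kPa,
    from a Magnus-type formula (reasonable roughly from 0 to 60 degC)."""
    return 0.6112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

air_temperature = 25.0   # degC
dew_point = 15.0         # degC, the temperature at which dew begins to form

relative_humidity = (saturation_vapor_pressure(dew_point)
                     / saturation_vapor_pressure(air_temperature))
print(f"Relative humidity: {relative_humidity:.0%}")   # roughly 54%
```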
Density is a collective property, and it varies with temperature, isotopic composition, purity, etc. The International Union of Pure and Applied Chemistry (IUPAC) has adopted pure water with the isotopic composition of standard mean ocean water as the density standard. The isotopic composition of ordinary water is nearly constant, and the density of pure water between 0 and 39 °C, extracted from (32), is given in Table 9.
The density of cage-like ice Ih, in which essentially 100% of the molecules are hydrogen bonded, is only about 9% lower than that of liquid water. This indicates that liquid water retains a high percentage of molecules involved in transient, dynamic hydrogen bonding. The percentage of hydrogen-bonded molecules decreases as temperature increases, which tends to increase the density, while thermal expansion tends to decrease it. The two opposing effects cause the density of water to increase from 0 to 3.98 °C, where it reaches its maximum of 1.0000 g mL–1, and then to decrease as the temperature rises further.
Incidentally, at 8 °C the density of water is about the same as that at 0 °C. At 25 °C the density is about 0.3% below its maximum value, whereas at 100 °C it is about 4% lower. Dense water sinks, and convection takes place when the temperature fluctuates at the surface of lakes and ponds, bringing dissolved air and nutrients to various depths for the organisms living there. On the other hand, this pattern of density dependence on temperature means that the temperature at the bottom of lakes and oceans varies little if the water is undisturbed. When water freezes, ice forms at the surface, leaving the water at depth undisturbed; water at the bottom remains near 4 °C, preserving the creatures living in it.
When hydrogen-bonded to tissues and cells or in food, water has a unique order and structure, and the vapor pressure and density differ from those of pure water. Yet, the collective behavior of water molecules sheds some light regarding their properties in food, cells, tissues, and solutions.
V. Chemical Properties of Water
Water is a chemical, as is any substance, despite the confusion and distrust of the public regarding the term "chemical". Thus, water has many interesting chemical properties. It interacts intimately with the components of food, particularly as a solvent, due to its dipole moment and its tendency to form hydrogen bonds. These interactions affect the chemical properties of nutrients, including their tendency to undergo oxidation or reduction, to act as acids or bases, and to ionize.
A. Water as a Universal Solvent
Water is dubbed a universal solvent, because it dissolves many substances due to strong interactions between water molecules and those of other substances. Entropy is another driving force for a liquid to dissolve or mix with other substances. Mixing increases disorder or entropy.
(a) Hydrophobic and hydrophilic effects
Because of its large dielectric constant, high dipole moment, and ability to donate and accept protons for hydrogen bonding, water is an excellent solvent for polar substances and for electrolytes, which consist of ions. Molecules that interact strongly with water ("water-loving") are hydrophilic, owing to hydrogen bonding, polar-ionic, or polar-polar attractions. Nonpolar molecules that do not mix with water are hydrophobic, or lipophilic because they tend to dissolve in oil. Large molecules such as proteins and fatty acids that have both hydrophilic and hydrophobic portions are amphipathic or amphiphilic. Water molecules intermingle strongly with the hydrophilic portions by means of dipole-dipole interactions or hydrogen bonding.
The lack of strong interactions between water molecules and lipophilic molecules, or the nonpolar portions of amphipathic molecules, is called the hydrophobic effect, a term coined by Charles Tanford (33). Instead of interacting directly with such solutes, water molecules tend to form hydrogen-bonded cages around small nonpolar molecules when the latter are dispersed in water. These hydrogen-bonded cages are called hydrates or clathrates; for example, methane clathrate forms stable crystals at low temperatures (34).
Nonpolar chains in proteins prefer to stay together as they avoid contact with water molecules. Hydrophilic and hydrophobic effects play important roles for the stability and state of large molecules such as enzymes, proteins, and lipids. Hydrophobic portions of these molecules stay together forming pockets in globular proteins. Hydrophilic and hydrophobic effects cause nonpolar portions of phospholipids, proteins, and cholesterol to assemble into bilayers or biological membranes (34).
(b) Hydration of ions
Due to its high dielectric constant, water reduces the attraction between the positive and negative ions of electrolytes and dissolves them. The polar water molecules coordinate around the ions, forming hydrated ions such as Na(H2O)6+, Ca(H2O)82+, and Al(H2O)63+. Six to eight water molecules form the first hydration sphere around these ions. Fig. 8 is a sketch of the interactions of water molecules with ions. The water molecules point the negative ends of their dipoles towards positive ions, and their positive ends towards negative ions. Molecules in the hydration sphere constantly and dynamically exchange with those around them. The number of hydrated water molecules and their lifetimes have been studied by various methods. These studies reveal that the hydration sphere is one layer deep, and the lifetimes of the hydrated water molecules are on the order of picoseconds (10–12 s). The larger negative ions also interact with the polar water molecules, though not as strongly as the cations do. The presence of ions in the solution changes the ordering of water molecules even if they are not in the first hydration sphere (9).
The hydration of ions releases energy, but breaking ions away from a solid requires energy. The amounts of energy involved depend on the substance, and for this reason some salts are more soluble than others. Natural waters in the ocean, streams, rivers, and lakes are in contact with minerals and salts. The concentrations of the various ions depend on the solubilities of the salts (35) and the contact time.
Drinking water includes all waters used in growth, processing, and manufacturing of food. J. De Zuane divides ions in natural water into four types in The Handbook on Drinking Water Quality (5).
Type A includes arsenic, barium, cadmium, chromium, copper, fluoride, mercury, nitrate, nitrite, and selenium ions. They are highly toxic, yet abundant.
Type B includes aluminum, nickel, sodium, cyanide, silver, zinc, molybdenum, and sulfate ions. Their concentrations are also high, but they are not very toxic.
Type C consists of calcium, carbonate, chloride, iron, lithium, magnesium, manganese, oxygen, phosphate, potassium, silica, bromine, chlorine, and iodine and ozone. They are usually present at reasonable levels.
Type D ions are present usually at low levels: antimony, beryllium, cobalt, tin, thorium, vanadium and thallium.
Most metals are usually present in water as cations, with a few as anions. However, some chemical analyses may not distinguish their state in water. The most common anions are chloride, sulfate, carbonate, bicarbonate, phosphate, bromide, iodide, etc. Toxicity is a concern for ions in water, but some of these ions are essential for humans.
Pure water has a very low electric conductivity, but ions in solution move in an electric field, making electrolyte solutions highly conductive. The conductivity is related to the total dissolved solids (TDS): salts of carbonate, bicarbonate, chloride, sulfate, and nitrate. Sodium, potassium, calcium, and magnesium ions are often present in natural waters because their soluble salts are common minerals in the environment. The solubilities of clay (alumina), silicates, and most other common minerals of the Earth's crust are low.
(c) Hard waters and their treatments
Waters containing plenty of dissolved CO2 (H2CO3) are acidic, and they dissolve CaCO3 and MgCO3. Waters with dissolved Ca2+, Mg2+, HCO3– and CO32– are called temporary hard waters, because the hardness can be removed by boiling, which reduces the solubility of CO2. When CO2 is driven off, the solution becomes less acidic due to the following equilibria (the double arrows, ⇌, indicate reversible reactions):
H+ (aq) + HCO3– (aq) ⇌ H2CO3 (aq) ⇌ H2O + CO2 (g)
HCO3– (aq) ⇌ H+ (aq) + CO32– (aq)
Reducing the acidity increases the concentration of CO32–, and solid CaCO3 and MgCO3 precipitate:
Ca2+ (aq) + CO32– (aq) ⇌ CaCO3 (s)
Mg2+ (aq) + CO32– (aq) ⇌ MgCO3 (s)
Water containing less than 50 mg L–1 of these substances is considered soft; 50-150 mg L–1 moderately hard; 150-300 mg L–1 hard; and more than 300 mg L–1 very hard.
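These thresholds translate directly into a simple classification rule. The helper below is a hypothetical illustration of the ranges quoted above, not a standard function from any water-treatment library:

```python
def classify_hardness(dissolved_ca_mg_salts_mg_per_L: float) -> str:
    """Classify water hardness from dissolved Ca/Mg salts in mg/L."""
    if dissolved_ca_mg_salts_mg_per_L < 50:
        return "soft"
    elif dissolved_ca_mg_salts_mg_per_L <= 150:
        return "moderately hard"
    elif dissolved_ca_mg_salts_mg_per_L <= 300:
        return "hard"
    return "very hard"

print(classify_hardness(120))   # moderately hard
print(classify_hardness(350))   # very hard
```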
For the lime treatment, the amounts of dissolved Ca2+ and Mg2+ are determined first, and then an equal number of moles of lime, Ca(OH)2, is added to remove them by these reactions:
Mg2+ + Ca(OH)2 (s) → Mg(OH)2 (s) + Ca2+
Ca2+ + 2 HCO3– + Ca(OH)2 (s) → 2 CaCO3 (s) + 2 H2O
Permanent hard waters contain sulfate (SO42–) ions together with Ca2+ and Mg2+. The calcium ions, Ca2+, of the sulfate solution can be removed by adding sodium carbonate:
Ca2+ + Na2CO3 → CaCO3 (s) + 2 Na+
Hard waters cause scale or deposits to build up in boilers and pipes, and they are usually softened by ion exchange with resins or zeolites. In these processes, the calcium and magnesium ions are taken up by the zeolite or resin, which releases sodium or hydrogen ions into the water. Reverse osmosis has also been used to soften hard water.
However, water softening replaces desirable calcium and other ions by sodium ions. Thus, soft waters are not suitable drinking waters. Incidentally, bakers use hard water because the calcium ions strengthen the gluten proteins in dough mixing. Some calcium salts are added to dough to enhance bread quality.
(d) Properties of aqueous solutions
Waters containing dissolved substances are aqueous solutions; their physical properties differ from those of pure water. For example, at the same temperature, the H2O vapor pressures of solutions are lower than that of pure water, resulting in boiling point elevation (higher), freezing point depression (lower) and osmotic pressure.
Several ways can be used to express concentrations: part per million (ppm), percent, moles per liter, mole fraction, etc. The mole fraction of water is the fraction of water molecules among all molecules and ions in the system. The vapor pressure of an ideal solution, Psolution, is the vapor pressure of water (at a given temperature), P o water, modified by the mole fraction Xwater.
Psolution = Xwater P°water (Xwater < 1)
If the solute has a significant vapor pressure, Psolute is also modified by its mole fraction,
Psolute = Xsolute P°solute
For non-ideal solutions, in which water and solute strongly interact, the formulas require modifications. A practical method is to use an effective mole fraction X defined by:
X = Psolution / P°water
In any case, the vapor pressures of solutions containing nonvolatile electrolytes are lower than those of pure water at their corresponding temperature.
Phase transitions take place when the vapor pressures of the two phases are the same. Because a solution's vapor pressure is lower, its melting point is lower and its boiling point is higher than those of pure water. The difference in temperature, ΔT, is proportional to the total concentration of all solute species, mall-solute (molality),
ΔT = K mall-solute
where K is either the molal boiling point elevation constant, Kb, or the molal freezing point depression constant, Kf. For water, Kf = 1.86 K kg mol–1 and Kb = 0.52 K kg mol–1. Due to the ionization of electrolytes, positive and negative ions should be treated as separate species, and all species should be included in mall-solute.
The tendency of water molecules to diffuse from a dilute solution into a more concentrated solution through a semipermeable membrane is measured by the osmotic pressure, π, which is proportional to the concentration of all dissolved species, mall-solute (in mol per kg of water), and the temperature T in K,
π = – mall-solute R T
where R is the gas constant, 8.3145 J mol–1 K–1. Water molecules diffuse from pure water (π = 0) into the solution, and the osmotic pressure is therefore given a negative value here. Theoretically, a solution with mall-solute = 1.0 mol per kg of water has π = –2477 J per kg of water (–2.477 kJ kg–1) at 298 K. Note that mall-solute = m i (i being the number of ions produced per formula unit of solute), which corresponds to the van't Hoff factor often used in other literature.
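As a worked example of these relations, consider a 0.10 mol per kg NaCl solution, which contributes 0.20 mol per kg of dissolved species (Na+ and Cl– counted separately). The sketch below assumes ideal, dilute behavior and uses the constants quoted above:

```python
R = 8.3145             # gas constant, J/(mol K)
T = 298.0              # temperature, K
Kf, Kb = 1.86, 0.52    # molal freezing/boiling constants for water, K kg/mol

m_salt = 0.10                 # mol NaCl per kg of water
m_all_solute = 2 * m_salt     # Na+ and Cl- are counted as separate species

dT_freezing = Kf * m_all_solute    # freezing point depression, K
dT_boiling = Kb * m_all_solute     # boiling point elevation, K

# For a dilute aqueous solution, 1 mol per kg of water is roughly
# 1000 mol per m^3, so the osmotic pressure in pascals is c*R*T.
osmotic_pressure_pa = m_all_solute * 1000.0 * R * T

print(f"Freezing point lowered by {dT_freezing:.2f} K")          # ~0.37 K
print(f"Boiling point raised by  {dT_boiling:.2f} K")            # ~0.10 K
print(f"Osmotic pressure: {osmotic_pressure_pa / 1e5:.1f} bar")  # ~5 bar
```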
Solutions having identical osmotic pressures are isotonic. Applying to a solution a pressure greater than its osmotic pressure causes water molecules to diffuse through the membrane in the opposite direction, generating pure or fresh water. This process, called reverse osmosis, has been used to soften water and to desalinate seawater, converting it to fresh water.
The lowering of vapor pressure and the osmotic pressure of solutions play important roles in the hydration and dehydration of food and in living cells. Solutions containing proper concentrations of nutrients and electrolytes have been used to medically treat dehydrated patients. J.R. Cade and his coworkers applied these principles to formulate drinks for athletes, and they are credited as the inventors of the sports drink Gatorade (36). The concept of a balanced solution for hydration became a great business decades after its invention.
B. Acidity and Alkalinity of Water
Acidity and alkalinity are also important characteristics of water due to its dynamic self-ionization equilibrium,
H2O (l) ⇌ H+ + OH–, Kw = [H+][OH–] = 1 × 10–14 at 298 K and 1 atm
where [H+] and [OH–] represent the molar concentrations of H+ (or H3O+) and OH– ions, respectively, and Kw is called the ion product of water (see tables in refs. 1 and 36). Values of Kw under various conditions have been evaluated theoretically (37, 38). Solutions in which [H+] = [OH–] are said to be neutral. At 298 K, for a neutral solution,
pH = – log [H+] = 7 and pOH = – log [OH–] = 7
The H+ ions or protons dynamically exchange with protons in other water molecules. The self-ionization and equilibrium are present in all aqueous solutions, including acid and base solutions, as well as in pure water. Water is both an acid and a base.
Strong acids such as HClO4, HClO3, HCl, HNO3, and H2SO4 ionize completely in solution to give H+ (H3O+) ions and the anions ClO4–, ClO3–, Cl–, NO3–, and HSO4–, respectively. Strong bases such as NaOH, KOH, and Ca(OH)2 also ionize completely, giving OH– ions and Na+, K+, and Ca2+ ions, respectively. In an acidic solution, [H+] is greater than [OH–]. In a 1.0 mol L–1 HCl solution, [H+] = 1.0 mol L–1 and pH = 0.
Weak acids such as formic acid (HCOOH), acetic acid (CH3COOH), ascorbic acid (C6H8O6), oxalic acid (H2C2O4), carbonic acid (H2CO3), benzoic acid (C6H5COOH), malic acid (C4H6O5), lactic acid (H3CCH(OH)COOH), and phosphoric acid (H3PO4) also ionize in aqueous solution, but not completely. The ionization of acetic acid is represented by the equilibrium
CH3COOH (aq) ⇌ H+ (aq) + CH3COO– (aq), Ka = 1.75 × 10–5 at 298 K
where Ka is the acid dissociation constant.
The solubility of CO2 in water increases with its partial pressure, according to Henry's law, and the chemical equilibrium for the dissolution is
H2O + CO2 (g) ⇌ H2CO3 (aq)
Of course, H2CO3 dynamically exchanges H+ and H2O with other water molecules, and this weak diprotic acid ionizes in two stages with acid dissociation constants Ka1 and Ka2:
H2O + CO2 (aq) ⇌ H+ (aq) + HCO3– (aq), Ka1 = 4.30 × 10–7 (at 298 K)
HCO3– (aq) ⇌ H+ (aq) + CO32– (aq), Ka2 = 5.61 × 10–11
The constants Ka1 and Ka2 increase with temperature. At 298 K, the pH of a solution containing 0.1 mol L–1 H2CO3 is 3.7; at this acidity acidophilic organisms may grow, but most pathogenic organisms are neutrophiles and cease growing. Soft drinks contain other acids (citric, malic, phosphoric, ascorbic, etc.), which lower the pH further.
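The quoted pH of about 3.7 follows from Ka1 alone, since the second ionization is negligible at this acidity. A minimal sketch that solves the first-stage equilibrium exactly with the quadratic formula:

```python
import math

Ka1 = 4.30e-7    # first acid dissociation constant of H2CO3 at 298 K
C = 0.1          # total dissolved H2CO3 / CO2, mol/L

# Ka1 = x^2 / (C - x)  rearranges to  x^2 + Ka1*x - Ka1*C = 0, with x = [H+]
x = (-Ka1 + math.sqrt(Ka1 ** 2 + 4 * Ka1 * C)) / 2
pH = -math.log10(x)
print(f"[H+] = {x:.2e} mol/L, pH = {pH:.2f}")   # pH close to 3.7
```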
Ammonia and many nitrogen-containing compounds are weak bases. The ionization equilibrium of NH3 in water and the base dissociation constant Kb are,
NH3 + H2O ⇌ NH4+ (aq) + OH– (aq), Kb = 1.70 × 10–5 at 298 K.
Other weak bases react with H2O similarly.
The ionization or dissociation constants of inorganic and organic acids and bases are extensive, and they have been tabulated in various books (39, 40, 41).
Amino acids and proteins contain both acidic and basic groups. At a specific pH called the isoelectric point, they carry no net charge but exist as zwitterions. For example, the isoelectric point of glycine is pH 6.0, where it exists as the zwitterion H2C(NH3+)COO–.
C. Oxidation-Reduction Reactions in Water
Oxidation of hydrogen by oxygen not only produces water but also releases energy. Under standard conditions, the electrochemical half-reactions are:
H2 = 2 H+ + 2 e–, E° = 0.000 V (defined)
O2 + 4 H+ + 4 e– = 2 H2O, E° = 1.229 V
The cell reaction and the standard cell potential are:
2 H2 + O2 = 2 H2O, ΔE° = 1.229 V
Proper setups for harvesting this energy are the goal of hydrogen-fuel-cell technology. The cell potential ΔE under non-standard conditions depends on pH and temperature, and its value is related to the energy released in the reaction. A plot of E versus pH yields a Pourbaix diagram, which is useful for evaluating the stability of various species in water. Water can act as a reducing or an oxidizing reagent, because it can offer protons or electrons. Applying a voltage to pass electrons through an electrochemical cell decomposes water by electrolysis.
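The energy available from the cell reaction follows from ΔG° = –nFΔE°. For the reaction as written, four electrons are transferred per two moles of water formed, which gives about –237 kJ per mole of liquid water, the familiar standard Gibbs energy of formation. A quick sketch:

```python
F = 96485.0       # Faraday constant, C per mol of electrons
E_cell = 1.229    # standard cell potential, V
n = 4             # electrons transferred in 2 H2 + O2 = 2 H2O

dG = -n * F * E_cell      # standard Gibbs energy change, J per 2 mol H2O
print(f"Standard Gibbs energy: {dG / 1000:.0f} kJ per 2 mol H2O "
      f"({dG / 2000:.0f} kJ per mol H2O)")   # about -474 and -237 kJ
```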
Waters containing dissolved oxygen cause additional reactions, for example:
2 H2O + 2 e– = H2 + 2 OH–, E° = -0.828 V
O2 + 2 H+ + 2 e– = H2O2, E° = 0.682 V
O2 + H2O + 2 e– = HO2– + OH–, E° = -0.076 V
O2 + 2 H2O + 4 e– = 4 OH–, E° = 0.401 V
At the proper conditions, a suitable chemical reaction driven by the potential takes place.
Oxidation-reduction reactions involving water usually are due to proton or electron transfer. These oxidation-reduction reactions occur for the growth, production, manufacture, digestion, and metabolism of food.
Water participates in oxidation-reduction reactions in many steps of photosynthesis, resulting in the fixation of CO2 into biomolecules and the release of the oxygen atoms of water as O2. Engineering a new generation of plants with greater photosynthetic capacity in the face of water scarcity challenges geneticists (42) and botanists. We now understand photosynthesis in great detail, thanks to the studies of many scientists. Photosynthetic reactions are closely related to food production, but they are so complex that we can only mention them here (43).
The oxidation-reduction reactions of water cause corrosion on metal surfaces. Not only is the deterioration of facilities very costly for the food industry, but corrosion of pipes also releases toxic metal ions such as Cu2+ and Pb2+ into drinking water. Concern over lead ions in drinking water led the Environmental Protection Agency to ban the use of high-lead solders for water pipes. These corrosion reactions are electrochemical processes. Galvanic effects, high acidity, high flow rate, high water temperature, and the presence of suspended solids accelerate corrosion, as does a lack of Ca2+ and Mg2+ ions in purified waters. The formation of scale protects the metal surface. However, balancing the clogging of pipes against the protection of their surfaces is a complicated problem, requiring scientific testing and engineering techniques for a satisfactory solution.
D. The Hydrogen Bond and Chemical Reactions
Enzymes are mostly large protein molecules, and they are the selective and specific catalysts responsible for most of the reactions in biological systems. Folding of the long protein chain provides specific 3-dimensional, selective pockets for their substrates. The pockets not only fix the substrates in position, they also weaken certain bonds to facilitate specific reactions. This is the mechanism by which enzymes select their substrates and facilitate their specific reactions.
Hydrogen bonds are stronger in nonaqueous media than in aqueous solutions, because the charge densities on the donor and acceptor atoms increase (44). Hydrogen bonds between an enzyme and its substrate can therefore be stronger than those in an aqueous environment, speeding up the reaction rate even further.
The hydrolysis of peptide linkage is the reaction of a protein with water,
R-C(=O)-NH-R’ + H2O ® R-C(=O)OH + H2N-R’
This type of reaction can be catalyzed by acids, bases, and enzymes.
VI. Water Activity
Water is a nutrient and a component of all food groups: grains, meat, dairy, fruits, and vegetables. Furthermore, major nutrients such as carbohydrates, proteins, water-soluble vitamins, and minerals are hydrophilic. Even parts of fat or lipid molecules are hydrophilic, although the alkyl chains of fats and proteins experience the hydrophobic effect in an aqueous environment (45).
Foodstuffs interact with water by means of polar, hydrogen-bonding and hydrophobic interactions. The results of these interactions change the chemical potential (properties) of water. Foodstuffs dissolve in or absorb water. Thus, water within food may be divided into bound water, affected water, and free water in the order of their interaction strength. The bound water molecules are similar to those in the first hydration sphere of ions, and those close to the first sphere are affected water molecules. Further away from the interface are free water molecules. The structure and properties of the first two types change. Interaction of water with dietary fiber is an example (46). Thus, properties of water in food are different from those of pure water.
Water molecules in both liquid and vapor phases can participate in hydration reactions. At equilibrium in a system with two or more phases, the vapor pressure or chemical potential, μ, of water must be equal in all phases at a given temperature T. The chemical potential of water in a solution or in a water-containing foodstuff can be written as
μ = μw + R T ln (p / pw),
where R is the gas constant (8.3145 J mol–1 K–1), p is the vapor pressure of the solution or of the water in the foodstuff, and pw is the vapor pressure of pure water at the same temperature. The ratio p / pw is called the water activity, aw (= p / pw), and it is related to the chemical potential of water in the solution or foodstuff. For ideal solutions and for most moist foods, aw is less than unity: aw < 1.0 (31).
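A small numerical illustration of this definition: if a foodstuff at 25 °C has an equilibrium water vapor pressure of 2.5 kPa while pure water at the same temperature has about 3.17 kPa, its water activity and the corresponding lowering of the chemical potential follow directly. The food vapor pressure here is an invented value used only for illustration:

```python
import math

R = 8.3145       # gas constant, J/(mol K)
T = 298.15       # temperature, K
p_food = 2.5     # kPa, vapor pressure of water in the food (illustrative value)
p_pure = 3.17    # kPa, vapor pressure of pure water at 25 degC

aw = p_food / p_pure
dmu = R * T * math.log(aw)   # chemical potential relative to pure water, J/mol
print(f"aw = {aw:.2f}, chemical potential lowered by {-dmu:.0f} J/mol")
```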
Both water activity and relative humidity are fractions of the vapor pressure of pure water, and the methods for measuring them are the same; measurement via changes in capacitance was mentioned earlier. Water content has a sigmoidal relationship with water activity: aw = 1.0 for infinitely dilute solutions, aw > 0.7 for dilute solutions and moist foods, and aw < 0.6 for dry foods. Of course, the precise relationship depends on the material in question. In general, if the relative humidity of the atmosphere surrounding the food is greater than the water activity of the food, water is absorbed; otherwise, dehydration takes place. The water activity reflects the combined effects of water-solute, water-surface, capillary, hydrophilic, and hydrophobic interactions. The water activity of a foodstuff is a vital parameter, because it affects its texture, taste, safety, shelf life, and appearance.
Furthermore, controlling water activity, rather than water content is important. When aw < 0.9, most molds are inhibited. Growth of yeasts and bacteria also depends on aw. Microorganisms cease growing if aw < 0.6.
VII. Water Potential
Similar to water activity in food, water potential is a term used in plant, soil, and crop sciences. Water potential, represented by Ψ (psi) or Ψw, is a measure of the free energy of water in a system: soil, material, seeds, plants, roots, leaves, or an organism. Water potential is the difference between the chemical potential of water in the system and that of pure water at the same temperature. Pure water has the highest free energy: Ψ = 0 for pure water by convention, and Ψ < 0 for solutions. Water diffuses from high potential to low potential, and physiological processes slow as the water potential decreases.
In general, the water potential, Ψw, is the combined effect of the osmotic (Ψs), matric (interface and water-binding, Ψm), and turgor (Ψt) pressures, and of gravity (Ψg):
Ψw = Ψs + Ψm + Ψt + Ψg
The osmotic term, Ψs, is always present because of the solutes in the fluids. The matric term, Ψm, is related to the bound, affected, and free water in the system. The outwardly directed pressure exerted by the swelling protoplast against the cell wall is called turgor pressure, Ψt. Usually this term is insignificant until the cell is full, at which point Ψt increases rapidly and stops when Ψw = Ψt; otherwise the cell would rupture. The mechanical rigidity of succulent plant parts, the opening of stomata, and blossoming are usually the results of turgor pressure. In systems such as tall plants, and in soil science, the pressure due to the gravitational pull on water, Ψg, is also included in the water potential.
For example, the water potential of potato tissue can be measured by incubating pieces of it in a series of solutions of known osmotic pressure. The potato will neither lose nor gain water if the osmotic pressure of the solution equals the water potential of the potato tissue. The osmotic pressure (π = – m R T) may be evaluated from the known concentration m, using the equation given earlier. Instead of energy units, water potential is often expressed in units of pressure (megapascals, MPa), obtained by dividing the energy by the molar volume of water (0.018 L mol–1 for H2O) (47).
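To express the osmotic term in pressure units, the energy per kilogram of water is divided by the specific volume of water (about 0.001 m3 per kg, equivalent to the 0.018 L per mol molar volume mentioned above). A sketch for the 1.0 mol per kg example used earlier:

```python
R = 8.3145            # gas constant, J/(mol K)
T = 298.0             # temperature, K
m_all_solute = 1.0    # mol of dissolved species per kg of water

energy_per_kg = -m_all_solute * R * T     # J per kg of water (~ -2477 J/kg)
specific_volume = 1.0e-3                  # m^3 per kg of liquid water
psi_s = energy_per_kg / specific_volume   # osmotic water potential, Pa

print(f"Osmotic potential: {energy_per_kg:.0f} J/kg = {psi_s / 1e6:.2f} MPa")
# about -2.48 MPa for 1.0 mol/kg of dissolved species at 298 K
```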
There are many other methods for measuring water potential, depending on the system: soil, leaf, stem, organism, etc. The soil water potential is related to the water available to the plants growing in the soil. The water potential of a plant or leaf indicates its health or state with respect to water. Thus, water potential is a better indicator than water content for plant, agricultural, irrigation, and environmental management. Water moves through plants because
Ψwater = 0 > Ψsoil > Ψroot > Ψstem > Ψleaf
Thus, the concepts of water potential and water activity are very useful in the growth, manufacture, handling, storage, and management of food.
VIII. Living Organisms in Water
The closer we look, the more we see. Living organisms on Earth are so complicated that their classification and phylogeny are still being studied and revised, and new relationships have been proposed to modify the five kingdoms proposed by Robert Whittaker in 1969. Nevertheless, most of the earliest unicellular living organisms, in the Monera and Protista kingdoms, still live in water. Both the number of species and the number of individuals are staggering. For example, photosynthesis by algae in the oceans consumes more CO2 than that by all plants on land. Algae were probably present on Earth before other organisms. Many phyla (divisions) of the Fungi, Plantae, and Animalia kingdoms also make water their home. Both the numbers and the species of organisms living in water are probably greater than those on land. The subject of living organisms in water is fascinating, but we can only mention some fundamentals about their relationship to water here. Certainly, every aspect of living organisms in water is related to food, because Homo sapiens is part of the food chain, if not at the top of it.
All life requires energy or food. Some living organisms receive their energy from the sun, whereas others get their energy from chemical reactions in the aquatic medium. Chemical reactions are vital throughout their lives. For example, some bacteria derive energy by catalyzing the oxidation of iron sulfide, FeS2, producing iron ions, Fe(H2O)62+, and elemental sulfur; water participates in the reaction, and oxygen is ultimately reduced (48). Chemical reactions provide the energy for such bacteria to sustain their lives and to reproduce. Factors affecting life in water include the minerals present, their solubility, the electrochemical potentials of the materials, acidity (pH), sunlight, the dissolved oxygen level, the presence of ions, chemical equilibria, etc. The properties of water influence life in general, and life in aquatic systems in particular. As the population grows, aquaculture will probably be seen as a more efficient way of supplying protein for the ever-increasing population.
Regarding drinking water, we are concerned with aquatic organisms invisible to the naked eye. Pathogenic organisms present in drinking water cause intestinal infections, dysentery, hepatitis, typhoid fever, cholera, and other illnesses. Pathogens are usually present in waters contaminated with human and animal wastes that enter the water system via discharges, runoff, floods, and accidents at sewage treatment facilities. Insects, rodents, and other animals can also bring bacteria into the water system (49, 50). Testing for all pathogenic organisms is impossible, but some harmless organisms share living conditions with certain pathogenic bacteria, so water testing can use these harmless bacteria as indicators of drinking water safety.
IX. Water Resource, Supply, Treatment, and Usage
About 70% of the Earth's surface is covered with water, but fresh water makes up only about 2-3% of all water. Ocean waters are salty, and only the small fraction that constitutes the fresh water resources (lakes, rivers, and groundwater) is readily usable. Fresh water is needed for drinking, food, farming, washing, and manufacturing.
When salty water freezes, the ice so formed contains very little salt, if any. Thus, nearly all ice, including the massive ice at the polar cap, is fresh water. In fact, the ice cap in the Antarctic contains a lot of fresh-water ice, but that cannot be considered a water resource.
Hydrologists, environmentalists, scientists, engineers, sociologists, economists, and politicians are all concerned with problems associated with water resources. Solutions to these problems require both expertise and social consensus.
X. Subcritical and Supercritical Waters
Waters at temperatures between the normal boiling point and the critical point (100 to 373.98 °C), kept liquid under pressure, are called subcritical waters, whereas the phase above the critical point is supercritical water. In the 17th century, Denis Papin (a physicist) generated high-pressure steam using a closed boiler, and thereafter pressure canners were used to preserve food. Pressure cookers were popular during the 20th century. Analytical chemists have used subcritical water to extract chemicals from solids for analysis since 1994 (51).
Water vapor pressures up to the critical point are given in Table 8, but data on the polarity, dielectric constant, surface tension, density, and viscosity of water above 100 °C are scarce. In general, these properties decrease as the temperature increases, and some drop dramatically for supercritical water. On the other hand, some of them increase with pressure. Thus, the properties of sub- and supercritical waters can be manipulated by adjusting temperature and pressure to attain desirable values.
As the polarity and dielectric constant decrease, water becomes an excellent solvent for non-polar substances such as flavor and fragrance compounds. However, foodstuffs may degrade at high temperatures. Applications of sub- and supercritical water are relatively recent developments; applications of supercritical CO2 (critical temperature about 31 °C) for chemical analyses started in the 1980s, and investigations of supercritical water followed. Research and development have intensified in recent years (52). Scientists and engineers are exploring the use of supercritical water for waste treatment, polymer degradation, pharmaceutical manufacturing, chromatographic analysis, nuclear reactor cooling, etc. Significant advances have also been made in materials processing, ranging from fine-particle manufacture to the creation of porous materials.
Water has been called a green solvent, in contrast to polluting organic solvents. Sub- and supercritical waters have been explored as replacements for organic solvents in many applications, including the food industry (53). However, supercritical water is very reactive, and it is corrosive even to stainless steels that are inert to ordinary water. The application of sub- and supercritical waters remains a wide-open field.
XI. Postscript
Water, ice, and vapor are collections of H2O molecules, whose characteristics determine the properties of all phases of water. Together and in concert water molecules shape the landscape, nurture lives, fascinate poets, and captivate scientists. Human efforts in understanding water have accumulated a wealth of science applicable in almost all disciplines, while some people take it for granted.
Water molecules are everywhere, including outer space. They not only intertwine with our history and our lives; they are part of us. How blessed we are to be able to associate and correlate the phenomena we see or experience with the science of water.
An article has a beginning and an end, but in the science of water no one has the last word: research and exploration on water, including its presence in outer space, continue (54). Writing this article deepened my fascination with the subject, and for this I am grateful to Professors Wai-kit Nip, Lewis Brubacher, and Peter F. Bernath for their helpful discussions and encouragement.
Discussion Questions
• What is hard water?
• What are the differences between temporary and permanent hard water?
• How can hard water be converted to soft water?
• How is deionized water produced?
Water is the most important resource. Without water life is not possible. From a chemical point of view, water, H2O, is a pure compound, but in reality, you seldom drink, see, touch or use pure water. Water from various sources contains dissolved gases, minerals, organic and inorganic substances.
The Hydrosphere
The total water system surrounding the planet Earth is called the hydrosphere. It includes freshwater systems, the oceans, atmospheric vapour, and biological waters. The Arctic, Atlantic, Indian, and Pacific oceans cover 71% of the Earth's surface and contain 97% of all water; less than 1% is liquid fresh water, and 2-3% is locked in ice caps and glaciers. The Antarctic Ice Sheet is almost the size of the North American continent. These waters dominate our weather and climate, directly and indirectly affecting our daily lives. The oceans cover 3.35 × 10^8 km^2 and have a total volume of 1.35 × 10^9 km^3.
• Sunlight intensity drops by about a factor of ten for every 75 m of depth in the ocean, and humans can barely see any light below 500 m. The temperature of almost all of the deep ocean is 4 °C (277 K).
• The average ocean depth is 4 km, and the deepest point at the Mariana Trench is 10,912 m (35,802 ft), which compares to the height of 8.8 km for Mount Everest.
Hydrospheric processes are the steps by which water cycles on the planet Earth. These processes include sublimation of ice, evaporation of liquid water, and transport of moisture by air, rain, snow, rivers, lakes, and ocean currents. All of these processes are related to the physical and chemical properties of water, and many government agencies are set up to study and record phenomena related to them. The study of these processes is called hydrology.
Among the planets, Earth is the only one on which water is present as solid, liquid, and gas. These conditions are just right for life, of which water is a vital part; water is the most abundant substance in the biosphere of Earth. Groundwater is an important part of the water system. When vapor is cooled, clouds and rain develop. Some of the rain percolates through the soil and into the underlying rocks; the water in the rocks is groundwater, which moves slowly.
A body of rock that contains appreciable quantities of water is called an aquifer. Below the water table the aquifer is filled (saturated) with water; above the water table lies the unsaturated zone. Some regions have two or more water tables, usually separated by water-impermeable material such as boulders and clay. Groundwater can be brought to the surface by drilling below the water table and pumping. The amount of water that can be pumped out depends on the structure of the aquifer: little water is stored in tight granite layers, but large quantities are stored in limestone aquifer layers. In some areas there are even underground rivers.
Table $1$: Ions in sea water
| Species | Concentration (mg/kg) |
|---|---|
| $\ce{Cl^{-}}$ | 10,760 |
| $\ce{Na^{+}}$ | 2,710 |
| $\ce{SO4^{2-}}$ | 2,710 |
| $\ce{Mg^{2+}}$ | 1,290 |
| $\ce{Ca^{2+}}$ | 411 |
| $\ce{K^{+}}$ | 399 |
| $\ce{HCO3^{-}}$ | 142 |
| $\ce{Br^{-}}$ | 67 |
| $\ce{Sr^{2+}}$ | 8 |
| $\ce{BO4^{3-}}$ | 4.5 |
| $\ce{F^{-}}$ | 1.3 |
| $\ce{H4SiO4}$ | 0.5–10 |
| $\ce{H^{+}}$ | $10^{-8.35}$ |
Common Ions Present in Natural Water
Hydrology is also the study of how solids and solutes interact in, and with, water. The compositions of seawater, the atmosphere, rain and snow, and river and lake waters have all been tabulated in detail. Table $1$ lists the major ions present in seawater. The composition varies with region, depth, latitude, and water temperature; waters at river mouths contain less salt. If an ion is utilized by living organisms, its concentration varies with the populations of those organisms.
Dust particles and ions present in the air are nucleation centers for water drops. Thus, water from rain and snow also contains such ions: Ca2+, Mg2+, Na+, K+, and NH4+. These cations are balanced by the anions HCO3-, SO42-, NO2-, Cl-, and NO3-. The pH of rain is typically between 5.5 and 5.6. Rain and snow eventually become river or lake water. As rain or snow falls, it interacts with vegetation, topsoil, bedrock, riverbeds, and lakebeds, dissolving whatever is soluble; bacteria, algae, and water insects also thrive there. Solubilities of inorganic salts are governed by the kinetics and equilibria of dissolution. The most common ions in lake and river waters are the same as those present in rainwater, but at higher concentrations, and the pH of these waters depends on the riverbed and lakebed. Natural waters contain dissolved minerals; waters containing Ca2+ and Mg2+ ions are usually called hard water.
Hard Water
Minerals dissolve in natural water bodies such as lakes, rivers, springs, and underground waterways (groundwater). Calcium carbonate, CaCO3, is one of the most common inorganic compounds in the Earth's crust. It is the constituent of both calcite and aragonite, two minerals that have the same composition but different crystal structures and appearances.
Calcium-carbonate minerals dissolve in water, with a solubility product as shown below.
$CaCO_3 \rightleftharpoons Ca^{2+} + CO_3^{2-} \;\;\; K_{sp} = 5 \times 10^{-9}$
From the solubility product, we can evaluate the molar solubility (see Example 1) to be 7.1 × 10-5 M, or 7.1 mg/L (7.1 ppm of CaCO3 in water). The solubility increases as the pH decreases (i.e., as acidity increases). This effect is compounded when the water is saturated with carbon dioxide, CO2: a saturated CO2 solution contains carbonic acid, which assists the dissolution through the reactions:
$H_2O + CO_2 \rightleftharpoons H_2CO_3$
$CaCO_3 + H_2CO_3 \rightleftharpoons Ca^{2+} + 2 HCO_3^-$
Because of these reactions, some natural waters contain more than 300 ppm of calcium carbonate or its equivalent.
The carbon dioxide in natural water creates an interesting phenomenon. Rainwater is saturated with CO2, and it dissolves limestone. When CO2 is lost, owing to temperature changes or to escape from water drops, the reverse reaction takes place. The solid formed, however, may be a less stable phase called aragonite, which has the same chemical formula as calcite but a different crystal structure.
Rain dissolves calcium carbonate by the two reactions shown above. The water carries the ions with it and seeps through cracks in the rocks. When it reaches the ceiling of a cave, a drop dangles there for a long time before falling. During this time the carbon dioxide escapes and the pH of the water increases, so calcium carbonate crystals begin to appear. Calcite and aragonite are the common minerals, and stalactites and stalagmites the familiar formations, found in caves.
Natural waters contain metal ions. Water containing calcium, magnesium, and their counter-anions is called hard water. Hard water needs to be treated for the following applications.
• Heat transfer carrier in boilers and in cooling systems
• Solvents and reagents in industrial chemical applications
• Domestic water for washing and cleaning
Temporary vs. Permanent Hard Water
Due to the reversibility of the reaction,
$CaCO_{3(s)} + H_2CO_3 \rightleftharpoons Ca^{2+} + 2 HCO_3^-$
water containing Ca2+, Mg2+, and HCO3- ions is called temporary hard water, because the hardness can be removed by boiling. Boiling drives the reverse reaction, causing deposits in pipes and scale in boilers. The deposits lower the efficiency of heat transfer in boilers and diminish the flow rate of water in pipes. Thus, temporary hard water has to be softened before it enters a boiler, hot-water tank, or cooling system. The amount of metal ions that can be removed by boiling is called the temporary hardness.
After boiling, some metal ions remain, owing to the presence of chloride, sulfate, and nitrate ions and to the rather high solubility of MgCO3. The amount of metal ions that cannot be removed by boiling is called the permanent hardness, and the total hardness is the sum of the temporary and permanent hardness. Hardness is often expressed as the equivalent amount of calcium carbonate in the solution. Water conditioning is thus an important topic; the water-treatment market has been estimated to be worth \$30 billion.
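Because hardness is conventionally reported as the equivalent concentration of CaCO3, measured Ca2+ and Mg2+ concentrations are usually converted with their equivalent weights. The sketch below (Python) illustrates that conversion; the input concentrations are invented for illustration and are not data from this text.

```python
# Hardness expressed as mg/L of CaCO3 equivalents (a common convention).
# The concentrations passed to the function are illustrative values only.
M_CACO3, M_CA, M_MG = 100.09, 40.08, 24.31   # molar masses, g/mol

def hardness_as_caco3(ca_mg_per_L, mg_mg_per_L):
    """Total hardness in mg/L as CaCO3 from Ca2+ and Mg2+ concentrations."""
    return ca_mg_per_L * M_CACO3 / M_CA + mg_mg_per_L * M_CACO3 / M_MG

print(hardness_as_caco3(80.0, 24.0))   # ~299 mg/L as CaCO3: quite hard water
```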
Lime-soda Softening
Lime-soda softening is the removal of temporary hardness by adding a calculated amount of hydrated lime, Ca(OH)2:
$Ca^{2+} + 2 HCO_3^- + Ca(OH)_{2(s)} \rightarrow 2 CaCO_{3(s)} + 2 H_2O$
Adding more lime causes the pH of water to increase, and as a result, magnesium ions are removed by the reaction:
$Mg^{2+} + Ca(OH)_{2(s)} \rightarrow Mg(OH)_{2(s)} + Ca^{2+}$
The extra calcium ions can be removed by the addition of sodium carbonate.
$Na_2CO_3 \rightarrow 2 Na^+ + CO_3^{2-}$
$Ca^{2+} + CO_3^{2-} \rightarrow CaCO_{3(s)}$
In this treatment, the amount of Ca(OH)2 required is equivalent to the temporary hardness plus the magnesium hardness, and the amount of sodium carbonate required is equivalent to the permanent hardness. Thus, lime-soda softening is effective only if both the temporary and the total hardness have been determined. Sodium ions remain in the water after the treatment, and the pH of the water is rather high, depending on the amounts of lime and sodium carbonate used.
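As a rough illustration of why both the temporary and the total hardness must be known, the sketch below estimates lime and soda-ash doses from hardness values expressed as CaCO3 equivalents. It is a simplified stoichiometric estimate that ignores the magnesium hardness and the excess lime used in practice, and the input numbers are invented for illustration.

```python
# Simplified lime-soda dose estimate (stoichiometry only; real plants add excess
# lime and treat magnesium hardness separately).  Inputs are illustrative.
M_CACO3, M_LIME, M_SODA = 100.09, 74.09, 105.99   # CaCO3, Ca(OH)2, Na2CO3 in g/mol

def lime_soda_dose(temporary_hardness, total_hardness):
    """Doses in mg/L; hardness values are given in mg/L as CaCO3."""
    permanent_hardness = total_hardness - temporary_hardness
    lime = temporary_hardness * M_LIME / M_CACO3   # 1 mol Ca(OH)2 per mol of temporary CaCO3
    soda = permanent_hardness * M_SODA / M_CACO3   # 1 mol Na2CO3 per mol of permanent CaCO3
    return lime, soda

print(lime_soda_dose(150.0, 250.0))   # ~(111 mg/L lime, 106 mg/L soda ash)
```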
Complexation Treatment
Addition of a complexing reagent that forms soluble complexes with Ca2+ and Mg2+ prevents the formation of solids. Common complexing agents are polyphosphates such as sodium tripolyphosphate (Na5P3O10), marketed under names such as Calgon; the phosphate is the complexing agent. Other complexing agents such as Na2H2EDTA can also be used, but EDTA4- forms strong complexes with transition metals, which causes corrosion problems unless the pipes of the system are made of stainless steel.
Ion Exchange
Today, most water softeners use zeolites and the ion-exchange technique to soften hard water. Zeolites are a group of hydrated crystalline aluminosilicates found in certain volcanic rocks. The tetrahedrally coordinated aluminum and silicon atoms form AlO4 and SiO4 tetrahedral groups, which interconnect by sharing oxygen atoms to form cage-type structures. There are many kinds of zeolites, some newly synthesized.
Whatever the kind, the crystal structure of a zeolite contains large cages. The cages are connected to each other, forming a framework with many cavities and channels in which both positive and negative ions can be trapped.
Each AlO4 tetrahedron, and each oxygen that is not shared between tetrahedral groups, leaves a negative charge on the framework. These negative charges are balanced by trapped alkali-metal and alkaline-earth-metal cations. When additional cations are trapped, hydroxide or chloride ions remain with them in the cavities and channels of the zeolite.
To prepare a zeolite for water treatment, it is soaked in concentrated NaCl solution so that the cavities trap as many sodium ions as they can accommodate; the zeolite is then designated Na-zeolite. The salt solution is drained, and the zeolite is washed with water to remove the extra salt. When hard water flows through it, calcium and magnesium ions are trapped by the Na-zeolite, and for every Ca2+ or Mg2+ trapped, two Na+ ions are released. The treated water contains a rather high concentration of Na+ ions but low concentrations of Mg2+ and Ca2+. Thus, zeolite ion exchange converts hard water into soft water.
Pure Water by Ion Exchange
In most cases, the resins are polystyrene with -SO3H functional groups attached to the polymer chain for the cation-exchange resin, and -N(CH3)3+ groups for the anion-exchange resin. To prepare the resins for making pure (deionized) water, the cation resin is regenerated with HCl so that the groups are in the -SO3H form, and the anion resin is regenerated with NaOH so that the functional groups are in the -N(CH3)3OH form. When water containing a metal ion M+ and an anion A- passes through the two ion-exchange resins in succession, the following reactions take place:
$\ce{M+ + R-SO3H -> H+ + R-SO3M}$
$\ce{A- + R-N(CH3)3OH -> OH- + R-N(CH3)3A}$
$\ce{H+ + OH- <=> H2O}$
where R- denotes the resin backbone.
Thus, ion exchange provides pure water that meets laboratory requirements.
Reverse Osmosis Water Filter System
This method can also be used to prepare water for domestic and laboratory applications; it is discussed further under wastewater treatment.
Magnetic Water Treatment
Many companies sell devices for magnetic water treatment. These devices are based on research reports indicating that when water runs through a magnetic field, the calcium carbonate precipitates as aragonite rather than the usual calcite. For example, K. J. Kronenberg published an article in IEEE Transactions on Magnetics (Vol. Mag-21, No. 5, September 1985, pages 2059-2061) stating the following:
The crystallization mode of the water's mineral content was found to change from a dendritic, substrate-bound solidification habit to the form of separate disc-shaped crystals after the water had moved through a number of magnetic fields. The former scarcity of crystallization nucleii in the water had been turned into an abundance of nucleation centers in the water. The reduction of the number of the substrate-bound crystals has been used as a quantitative measure of the magnetic effect.
Many companies have made various devices for magnetic conditioning of water, and they claim that their devices will clean up pipes and boilers at little or no cost. I have yet to test one of these devices for its claim, but my preliminary tests show that a permanent magnet has little effect on the calcium carbonate deposits from temporary hard water. The claimed cleaning effect is probably much overstated.
Example $1$
From the solubility product shown for the dissolution of calcium carbonate,
$CaCO_3 \rightleftharpoons Ca^{2+} + CO_3^{2-} \;\;\; K_{sp} = 5 \times 10^{-9}$
Evaluate the molar solubility of Ca2+ in saturated solution.
Solution
From the definition of solubility product, we have
$[Ca^{2+}] [CO_3^{2-}] = 5 \times 10^{-9}$
Since $[Ca^{2+}] = [CO_3^{2-}] = x$, we have $x^2 = 5 \times 10^{-9}$, and therefore
$[Ca^{2+}] = [CO_3^{2-}] = 7.1 \times 10^{-5}\; M$
The concentration of 7.1x10-5 M is equivalent to 7.1 mg/L (7.1 ppm of CaCO3 in water).
DISCUSSION
There may be other ions present in the system and other equilibria in addition to the one considered here; problems are more complex in the real world.
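The arithmetic of this example is easy to reproduce; the short sketch below repeats it in Python so that other Ksp values can be substituted.

```python
from math import sqrt

K_SP = 5e-9          # solubility product of CaCO3 used in this example
M_CACO3 = 100.09     # molar mass of CaCO3, g/mol

s = sqrt(K_SP)                            # molar solubility: [Ca2+] = [CO3 2-] = s
print(f"{s:.1e} M")                       # 7.1e-05 M
print(f"{s * M_CACO3 * 1000:.1f} mg/L")   # ~7.1 mg/L, i.e. ~7 ppm of CaCO3
```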
Exercise $1$
Boiling of 1.0 L of water produced 10 mg of CaCO3 solid. What is the temporary hardness of the water?
Answer
10 ppm
Discussion Questions
• Is water the source of life? Why?
• What is life?
• Is life a symbiotic system of many lives?
• What properties of water make water the prime ingredient of life?
• What are the essential substances of life?
• Other than material, what else is essential for life?
• What is the role of water in supporting life?
• Is there a life form involving no water?
• Is there a method to determine the total water content?
Water Biology
Chemical and physical properties of water discussed in other pages are essential considerations for water biology. Natural water also contains biological matter as well as living creatures. The water pages of the National Wildlife Federation offer some interesting reading on the subject.
All living things have a cycle of life. A cycle involves all or some of these processes: birth, growth, maturation, reproduction, metamorphosis, and death. There are millions of living organisms on Earth, ranging from single-celled amoebas and bacteria to the complicated Homo sapiens. There are also viruses, fragments of DNA or RNA that depend on host cells for their reproduction; they are not cells.
Living things usually have cells that isolate their systems so that the cells contain the unique materials needed to sustain their lives. Cells regulate their contents (homeostasis) and carry out their metabolism. They divide, making copies of themselves. Many reproductive processes involve two individuals, so future populations gain greater diversity. Mutation is a fact of life, and many organisms adapt to their changing environment.
How did life start? Let the research and debate continue; no conclusive statement is given here. A physical-geography course suggests that marine invertebrates began their life 600 million years ago, followed by fish, land plants, amphibians, reptiles, mammals, and then flowering plants, in this order. All of these appeared more than a hundred million years ago, and the hominid (primate) line began its evolution 15 to 20 million years ago.
There is strong evidence that life on Earth appeared in a body of water. Among the planets of the solar system, only Earth has water in all three states, offering a suitable environment for life to begin. Since all known life forms involve water, water is seen as the source, matrix, and mother of life. Water is important because it is required for life; some people even consider water the lifeblood of the planet.
Because water supports life, living organisms in turn modify their environment, changing the nature of the water in which they live. Courses on the biology of water pollution, often with a laboratory section, deal with these interactions. Water and biology interweave into an entangled maze waiting for explorers and curious minds.
Water dissolves or emulsifies other life-supporting substances and transports them to intercellular and intracellular fluids. It is also the medium in which reactions take place. Reactions provide the energy for living; energy causes change, and the manifestation of change is at least part of, if not the whole of, life. An organized and systematized set of reactions is essential in every life.
Balancing Water in Bio-systems
Many living organisms spend their lives entirely in water. Aquatic organisms extract nutrients from water while maintaining a balance of electrolyte and nutrient concentrations in their cells. Living things that do not live in water extract water from their environment by whatever mechanism they can. Cells in their bodies are surrounded by body fluid, and all cells maintain constant concentrations of electrolytes, nutrients, and metabolites. The process of maintaining constant concentrations is called homeostasis, and some active-transport mechanisms are certainly involved in this balance.
The rooting of every type of plant is unique. Generally speaking, plants having extensive roots are able to extract water under harsh conditions. On the other hand, some plants such as cactus, jade, and juniper have sparse roots, but their leaves have a layer of wax that prevents water from evaporating. Water-conserving plants tolerate drought, and they survive under harsh conditions.
Lately, some pumpkin growers have harvested squash weighing almost 500 kg. At the peak of the growing season, such a squash grows almost 0.5 kg a day, equivalent to roughly 25 moles of water collected by the roots, not counting the water evaporated through the leaves. Growth is particularly good during hot, wet days, but during a hot sunny afternoon the temperature of the leaves and fruit gets very high.
Essential Electrolytes for Life Support
In addition to water, many inorganic substances or minerals are essential to life. These substances ionize in water to form ions, and their solutions conduct electricity; therefore, they are called electrolytes. Because most of these substances are already dissolved in natural water, we list the ions rather than the minerals they come from.
When ionic compounds dissolve, the ions form complexes with water molecules. For most metals, the first coordination sphere usually involves six water molecules. For example, when sodium chloride dissolves, we have
\[\ce{NaCl + 12 H2O <=> Na(H2O)6^{+} + Cl(H2O)6^{-}}\]
\[\ce{FeCl2 + 18 H2O <=> Fe(H2O)6^{2+} + 2 Cl(H2O)6^{-}}\]
The formation of these complexes is due to the high dipole moment of water, and the dissolution can be attributed to its high dielectric constant (about 80). In most publications, however, we ignore the water molecules in the complexes and simply write the species as ions.
In the following, we describe some essential ions or salts as electrolytes.
• Sodium chloride, Na+ and Cl-
NaCl is readily dissolved and absorbed in extracellular fluid. The two ions help to balance water, acid/base, osmotic pressure, carbon dioxide transport, and excreted in human urine and sweat. Lack of sodium chloride shows symptoms of dehydration.
• Potassium, K+
Good sources of potassium ions are vegetables, fruits, grains, meat, milk, and legumes. It is readily absorbed, and actively transported into the intracellular fluid. Its function is similar to that of sodium ions, but cells prefer potassium ions over sodium. Lack of potassium leads to cardiac arrest.
• Calcium, Ca2+
Divalent calcium ions are usually poorly absorbed by humans, but they are essential for bones, teeth, and blood clotting. Lack of calcium hinders growth and leads to osteoporosis in old age.
• Phosphates, PO43-
Calcium phosphate is essential for bones, teeth, etc. However, phosphates are also responsible for many life reactions. ATP, NAD, FAD etc are metabolic intermediates, and they involve phosphate. Phospholipids and phosphoproteins are some other phosphate containing species.
• Magnesium, Mg2+
Magnesium ions are essential in chlorophyll. These ions are absorbed readily and at times compete with calcium. Magnesium and calcium ions are both present in hard water, and a lack of magnesium has been linked to cardiovascular disease.
• Ferrous or ferric ions, Fe2+ or Fe3+
Iron is present either as divalent (ferrous) or trivalent (ferric) ions. It is absorbed according to the body's need, aided by HCl and ascorbic acid (vitamin C) and regulated by apoferritin. In mammals, iron is stored in the liver as ferritin and hemosiderin. Iron deficiency leads to anemia. Good food sources of iron are liver, meats, egg yolk, green vegetables, and whole grains.
• Zinc ions, Zn2+
Zinc ions are important ingredients for many enzymes. They are present in insulin, carbonic anhydrase, carboxypeptidase, lactic dehydrogenase, alcohol dehydrogenase, alkaline phosphatase etc. Like iron, zinc deficiency leads to anemia and poor growth.
• Copper ions, Cu2+
Copper ions help iron utilization, and this metal is present in many enzymes.
• Cobalt ions, Co2+
Cobalt ions are the centers of vitamin B12, deficiency of which leads to anemia.
• Iodide ions, I-
Iodine is a constituent of thyroxine, which regulates cellular oxidation.
• Fluoride ions, F-
Fluoridation of drinking water is often a controversial issue. Children's teeth are less susceptible to decay when fluoride is present; once children begin to brush their teeth, the fluoride in toothpaste may be sufficient.
The electrolytes listed above are present in significant amounts in water and in the fluids of organisms. Some metals present in only minute quantities in biological systems are not listed above.
Metal ions also interact with proteins. An enzyme is usually a very large protein molecule that folds into a shape enclosing one or more metal ions to form a complex; the metal is often responsible for the enzyme's activity. Cobalt, copper, iron, molybdenum, nickel, and zinc each have groups of associated enzymes, and further discussion can be found in the Prosthetic Groups and Metal Ions in Protein Active Sites (PROMISE) database. The general field is called bioinorganic chemistry, for which extensive general references are available.
Balancing Electrolytes
| Ions | Extracellular | Intracellular | Interstitial |
|---|---|---|---|
| Na+ | 140 | 10 | 150 |
| K+ | 5 | 150 | 4 |
| Ca2+ | 10 | 4 | 6 |
| Mg2+ | 6 | 80 | 4 |
| Total cations | 161 | 244 | 164 |
| Cl- | 103 | 2 | 120 |
| HCO3- | 30 | 10 | 30 |
| HPO42- | 4 | 177 | 4 |
| SO42- | 2 | 10 | 2 |
| Organic acid | 6 | 5 | 6 |
| Protein | 16 | 40 | 2 |
| Total anions | 161 | 244 | 164 |
Electrolyte balance is maintained by passive transport (diffusion) and by selective active-transport mechanisms. Diffusion tends to make concentrations the same throughout the entire fluid, whereas active, selective transport moves ions into particular compartments. For example, the active transport of sodium and potassium by the enzyme sodium-potassium ATPase is usually known as the sodium-potassium pump. This process pumps potassium ions into a cell while removing sodium ions from it, so a high concentration of potassium is maintained inside cells. Active transport requires energy, and cellular metabolism provides the energy and the necessary molecular motions to facilitate the process.
Hormones are produced by special cells, and they are responsible for communication between various parts of the body. Complicated hormonal actions regulate the rate of transport and balance the ion concentrations according to the tissue and its need; these are generally called hormonal effects, following the terminology of Human Biochemistry.
The Gibbs-Donnan effect concerns the equilibrium in compartments that are separated by membranes or cell walls. There is no net change when the products of the concentrations of the diffusible ions, say [Na+] and [Cl-], are the same in compartments 1 and 2:
$[Na^+]_1 [Cl^-]_1 = [Na^+]_2 [Cl^-]_2$
where the subscripts 1 and 2 refer to the two compartments. When no other components are present, we have
$[Na^+]_1 = [Cl^-]_1 = [Na^+]_2 = [Cl^-]_2$
But if compartment 2 contains a sodium salt with an anion that cannot cross the membrane, this salt ionizes to give Na+ too, and the simple equality above is no longer maintained; thermodynamics then drives an adjustment of the concentrations.
In general, the total cation charge must balance the total anion charge in each compartment; otherwise the solution would carry a net charge.
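To see how an impermeant anion shifts the distribution of the diffusible ions, the following sketch works through the classic two-compartment Donnan calculation; the starting concentrations are invented for illustration.

```python
# Classic Donnan setup: two equal-volume compartments, initially
#   compartment 1: NaCl at concentration a
#   compartment 2: NaP at concentration p, where P- cannot cross the membrane.
# An amount x of Cl- (accompanied by Na+) moves into compartment 2 until
# [Na+]1[Cl-]1 = [Na+]2[Cl-]2.  Solving (a - x)^2 = (p + x)*x gives x = a^2/(2a + p).
a, p = 0.10, 0.05          # mol/L, illustrative values only

x = a**2 / (2*a + p)
na1 = cl1 = a - x
na2, cl2 = p + x, x
print(f"x = {x:.4f} M")
print(f"side 1: [Na+] = [Cl-] = {na1:.4f} M, product = {na1*cl1:.6f}")
print(f"side 2: [Na+] = {na2:.4f} M, [Cl-] = {cl2:.4f} M, product = {na2*cl2:.6f}")
```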
Water in Human Biology
In humans, water in the tissues and body fluids is mostly free, but some fraction may be bound in pockets of hydrophilic compartments. Body fluids have many electrolytes and nutrients dissolved in them.
| Compartment | Fraction of body water |
|---|---|
| Intracellular fluid | 70% |
| Interstitial fluid (lymph) | 20% |
| Blood plasma | 7% |
| Intestinal lumen, etc. | 3% |
Human Biochemistry by J.M. Orten and O.W. Neuhaus (1982), 10th Ed. suggests that about 70% of human body weight is water, most found in three major compartments: 70% intracellular fluid, 20% interstitial fluid, and 7% blood plasma, and only 3 % in intestinal lumen, cerebrospinal fluid and other compartments.
However, Human Biochemistry also suggests that blood makes up about 8% of the total body weight.
Example 1
For a person weighing 50 kg (110 lb), what is the weight of the blood?
Solution
From the distribution given above,
Amount of blood = 50 kg * 0.08
= 4 kg.
This is a lot of blood, and donating 0.5 L of blood will not affect the normal function of the blood.
| Input | g per day | Output | g per day |
|---|---|---|---|
| Drinking water | 400 | Skin | 500 |
| Beverages | 580 | Expired air | 350 |
| Preformed water in solid food | 720 | Urine | 1100 |
| Metabolic water | 320 | Feces | 150 |
| Total | 2020 | Total | 2100 |

Balance: -80 g (?)
Water in humans comes from ingestion. Aside from drinking water, there are other beverages, and much food also contains water. When food is oxidized in the cells, all the hydrogen in the food is converted to water, which is called metabolic water. Water is excreted via urine, feces, the skin, and expiration. A typical daily water balance is shown in the table above. Water balance is maintained between cells and fluids, and the output depends on kidney function and on insensible perspiration (expired air from the lungs is saturated with water vapor, and water evaporates from the skin).
Drinking Water
Drinking water affects health, and a vast amount of information about it is available; many web resources provide annotated links to sites with information about drinking water.
A rather recent book Chemistry of Water Treatment by S.D. Faust and O.M. Aly, 2nd Ed. (1998) [TD433 F38 1998] addresses the standards for drinking water in the first Chapter. The standards have changed over the years, as we better understand the science.
Safe drinking water contains a suitable combination of minerals and electrolytes. Usually, one should not drink water softened by ion-exchange water softeners, and using distilled water for beverages and cooking may not meet nutritional goals. Moderately hard water containing calcium and magnesium ions is good for drinking.
Usually, a government sets up an agency or non-profit organization to provide rules for safe drinking water. Such an organization has an infrastructure for monitoring drinking-water systems, and it should also carry out research to improve the quality of drinking water.
Regarding rule making, reliable tests should be developed to determine the electrolytes we have mentioned, plus others such as lead ions (Pb2+), mercury (Hg2+), methylmercury, arsenic, radioactivity, etc. Bacterial tests should be carried out regularly. The organization should also have a communication channel for releasing relevant messages.
The Environmental Protection Agency of the U.S. gives a list of contaminants. The list includes suggested limits, and it divides the contaminants into
• Inorganic substances
• Organic substances
• Radioactivities (alpha and beta rays, radium)
• Micro-organisms
Among inorganic substances, limits are given for the contents of antimony, arsenic, asbestos, barium, beryllium, cadmium, chromium, copper, mercury, nitrate, nitrite, selenium, and thallium.
More than 50 organic compounds are on the list; some familiar ones are acrylamide, benzene, carbon tetrachloride, chlorobenzene, 2,4-D, dichlorobenzene, dioxin, polychlorinated biphenyls (PCBs), toluene, and vinyl chloride. Many of these have a limit of zero.
In terms of microorganisms, Giardia lamblia and Legionella are checked. Furthermore, viruses, turbidity, total coliforms, and heterotrophic plate counts should be checked.
| Contaminant | Standard |
|---|---|
| Aluminum | 0.05 to 0.2 mg/L |
| Chloride | 250 mg/L |
| Color | 15 (color units) |
| Copper | 1.0 mg/L |
| Corrosivity | noncorrosive |
| Fluoride | 2.0 mg/L |
| Foaming agents | 0.5 mg/L |
| Iron | 0.3 mg/L |
| Manganese | 0.05 mg/L |
| Odor | 3 (threshold odor number) |
| pH | 6.5–8.5 |
| Silver | 0.10 mg/L |
| Sulfate | 250 mg/L |
| Total dissolved solids | 500 mg/L |
| Zinc | 5 mg/L |
The secondary standard lists mostly electrolytes, as shown in the table above.
The Secondary Drinking Water Standards are non-enforceable guidelines regulating contaminants that may cause cosmetic effects (such as skin or tooth discoloration) or aesthetic effects (such as taste, odor, or color) in drinking water.
There are many brands of bottled drinking water; they have become very popular only in recent years. Do we know the bottling procedure? Is the industry regulated? Is the water quality reliable? Are all bottled waters the same? Is there a brand of bottled water for really good health? Do we know what should be in healthy drinking water? The Ontario Clean Water Agency (OCWA) has expressed an opinion on these questions.
The magnesium web site gave the following news release Oct. 4, 1999. According to the U.S. National Academy of Sciences (1977) there have been more than 50 studies, in nine countries, that have indicated an inverse relationship between water hardness and mortality from cardiovascular disease. That is, people who drink water that is deficient in magnesium and calcium generally appear more susceptible to this disease. The U.S. National Academy of Sciences has estimated that a nation-wide initiative to add calcium and magnesium to soft water might reduce the annual cardiovascular death rate by 150,000 in the United States. This is a good summary from the report.
Sports Drinks
Much has been written about sports drinks; the subject is part science, part art, part testing, and part myth. Some fundamentals, however, should be considered.
Our bodies are mostly water, about 70%. The body fluid has many different things dissolved in it, particularly salt (the salinity varies somewhat with where the sample is taken, but don't worry about that). The concentration is roughly 0.9%, which doctors call "normal saline".
Now if you put a human, or other animal, cell in saltwater that is the same concentration as the saltwater inside the cell, the cell pretty much just sits there. If you put it in distilled water, the cell absorbs water through its cell membrane - a process called osmosis - until it eventually pops. If you put the cell in concentrated saltwater, the cell loses water: the water moves out of the cell through the membrane, leaving a small, shriveled-up cell.
What does this have to do with sports drinks? If you give someone distilled water, it seems like they would absorb the water faster because of what I just described. On the other hand, during sweating you're losing sodium, potassium, and small quantities of other electrolytes. If you're exercising particularly long or hard, you need to replace those electrolytes. Researchers found that adding some salt to water replaced the salt lost through sweating and helped the body get water to the cells. If you look at a label on a Gatorade or other sports drink, you'll find that the main electrolyte is simple salt. But if you put too much electrolyte in the water, the cells shrivel up just as described above.
I hope that helps you understand what happens. This stuff on how cells swell up or shrivel up is in high school biology books, and maybe even in something you can find in your school's library.
Taste and Odor of Drinks
Taste and odor are sensations, and thus hard to quantify and systematize. Often the Weber-Fechner law is used; this law expresses the taste or odor sensation S as proportional to the logarithm of the stimulus R, with proportionality constant K,
S = K log R
For some common substances, the minimum amount detected by an expert nose or taster defines a threshold, below which no taste or odor is detected.
The odorous sensation of water may be reported as a threshold odor number (TON). If A mL of odorous sample must be diluted with B mL of odor-free water to be "just detectable" to the expert nose, the TON is defined as
$TON = \dfrac{A + B}{A}$
Similarly, a flavour threshold number (FTN) can be defined in the same manner.
$FTN = \dfrac{A + B}{A}$
except that here A and B are the volumes of the sample and of taste-free water used.
These formulations show one way of defining quantities that are otherwise very difficult to quantify, as sketched in the short example below. There are other methods of reporting odor and taste, and some bottling companies have their own standard methods of comparison.
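A minimal sketch of the calculation implied by these definitions; the volumes used are examples, not data from the text.

```python
def threshold_number(sample_mL, dilution_mL):
    """Threshold odor number (TON) or flavour threshold number (FTN):
    (A + B) / A, where A is the sample volume and B the volume of
    odor-free (or taste-free) water needed to make it 'just detectable'."""
    return (sample_mL + dilution_mL) / sample_mL

print(threshold_number(25.0, 175.0))   # 8.0 for this illustrative pair of volumes
```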
Of course, the sources of odor and taste are organic and inorganic compounds as well as bacteria and algae. For example, mercaptans such as C2H5SH, and ammonia, give disagreeable smells and tastes.
Example 2
A sample of water was tested by 10 expert noses, and only 5 of them could detect an odor; thus the odor is "just detected". What is the TON for this sample?
Solution
Since no dilution was used,
TON = A / A = 1.
DISCUSSION
If an equal amount of odor-free water were required to dilute the sample so that the odor is "just detected", then the threshold odor number would be (1+1)/1 = 2.
Water is an unusual compound with unique physical properties; as a result, it is the compound of life. It is also the most abundant compound in the biosphere of Earth. These properties are related to its electronic structure, bonding, and chemistry. However, because of its affinity for a wide variety of substances, ordinary water contains other substances, and few of us have used, seen, or tested the pure water on which discussions of its chemistry are based.
The chemistry of water deals with the fundamental chemical properties of water. It is discussed under the following subtitles.
• Composition of water
• Structure and bonding of water
• Molecular Vibration of water
• Symmetry of water molecules
• Formation of hydrogen bonding in water
• Structure of ice
• Autoionization
• Leveling effect of water and acid-base characters
• Amphiprotic nature
• Reactivity of water towards alkali metals; alkaline earth metals; halogens; hydrides; methane; oxides; and oxygen ions.
• Electrolysis of water
Composition of Water
Water consists of only hydrogen and oxygen. Both elements have natural stable and radioactive isotopes. Due to these isotopes, water molecules of masses roughly 18 (H216O) to 22 (D218O) are expected to form. Isotopes and their abundances of H and O are given below. From these data, we can estimate the relative abundances of all isotopic water molecules.
Abundances (% or half-life) of hydrogen and oxygen isotopes

| Isotope | 1H | 2D | 3T |
|---|---|---|---|
| Abundance | 99.985% | 0.015% | half-life 12.33 y |

| Isotope | 14O | 15O | 16O | 17O | 18O |
|---|---|---|---|---|---|
| Abundance | half-life 70.6 s | half-life 122 s | 99.762% | 0.038% | 0.200% |
The predominant water molecule, H216O, has a mass of 18 amu, but molecules with masses 19 and 20 occur in significant amounts. Because isotopic abundances are not always the same, owing to their astronomical origin, the isotopic distribution of water depends on its source and age, and its study is linked to other sciences. (See Dojlido, J.R. & Best, G.A. (1993) Chemistry of Water and Water Pollution, Ellis Horwood, for the isotopic distribution of water.)
Relative abundance of isotopic water

| Species | H216O | H218O | H217O | HD16O | D216O | HT16O |
|---|---|---|---|---|---|---|
| Abundance | 99.78% | 0.20% | 0.03% | 0.0149% | 0.022 ppm | trace |
| Mass (amu) | 18 | 20 | 19 | 19 | 20 | 20 |
In particular, D216O is called heavy water, and it is produced by enrichment from natural water. Properties of heavy water are particularly interesting due to its application in nuclear technology.
Structure and Bonding of the Water Molecule
Pure water, H2O, has a unique molecular structure: the O-H bond lengths are 0.096 nm and the H-O-H angle is 104.5°. This geometry can be explained in various ways. From carbon to neon, the number of valence electrons increases from 4 to 8, so these elements require 4, 3, 2, 1, and 0 hydrogen atoms, respectively, to share electrons and complete the octet; their Lewis dot structures show a corresponding trend in bond lengths.
There are six valence electrons on the oxygen atom and one from each hydrogen atom in the water molecule. The eight electrons form two O-H bonds, leaving two lone pairs. The lone pairs and the bonds stay as far from each other as possible, extending toward the corners of a tetrahedron. Such an ideal structure would give an H-O-H bond angle of 109.5°, but the lone pairs repel each other more than they repel the O-H bonds; the O-H bonds are therefore pushed closer together, making the H-O-H angle less than 109.5°.
In terms of quantum mechanics, the configuration of the valence electrons of oxygen is 2s2 2p4. Since the 2s and 2p energy levels are close, the valence electrons have both s and p character; the mixture is called sp3 hybridization. The structures of CH4, NH3, and H2O can all be explained by these hybrid orbitals on the central atoms. This approach is the valence bond theory; in the Valence-Shell Electron-Pair Repulsion (VSEPR) model, both the bonding pairs and the lone electron pairs are counted as electron groups, and the four groups point toward the corners of a tetrahedron.
For triatomic molecules such as water, the molecular orbital (MO) approach can also be applied to discuss the bonding. The result is similar to that of the valence bond approach, but MO theory also gives the energy levels of the electrons for further exploration.
Molecular Vibration of Water
Atoms in a molecule are never at rest, and for each type of molecule there are certain normal modes of vibration. For the water molecule, the three normal modes are the symmetric stretch, the bend, and the asymmetric stretch. The vibrations are quantized, as in any microscopic system, and their quantum numbers are designated v1, v2, and v3. The observed transition bands of D2O, H2O, and HDO are given in the table below.
Transition bands of D2O, H2O, and HDO (absorption wavenumbers in cm-1)

| Quantum numbers of upper state (v1 v2 v3) | D2O | H2O | HDO |
|---|---|---|---|
| 0 1 0 | 1178 | 1594 | 1402 |
| 1 0 0 | 2671 | 3656 | 2726 |
| 0 0 1 | 2788 | 3756 | 3703 |
| 0 1 1 | 3956 | 5332 | 5089 |
Data from Eisenberg, D. and Kauzmann, W. (1969) Structure and properties of water, Oxford University press.
The ideal transitions are centered at the given wavenumbers; these values apply to isolated molecules with no interaction with neighbors. When molecules interact with each other, the energy levels are modified and the bands shift.
Many weaker overtone and combination bands extend into the visible region, and absorption of red light by these bands contributes to the blue color of lake, river, and ocean waters.
Symmetry of Water Molecules
The water molecules are rather symmetric in that there are two mirror planes of symmetry, one containing all three atoms and one perpendicular to the plane passing through the bisector of the H-O-H angle. Furthermore, if the molecules are rotated 180° (360°/2) the shape of the molecule is unperturbed. This indicates that the molecules have a 2-fold rotation axis. The three symmetry elements are 2-fold rotation, and two mirror planes. Both mirror planes contain the rotation axis, and this type of symmetry belongs to the point group C2v. A point group has a definite number of symmetry elements arranged in certain fashion. Molecules can be classified according to their point groups. Molecules of the same point group have similar spectroscopic characters. Other molecules of C2v point group are CH2=O, CH2Cl2, the bent O3 etc.
Formation of Hydrogen Bonding
Under certain conditions, an atom of hydrogen is attracted by rather strong forces to two atoms instead of only one, so that it may be considered to be acting as a bond between them. This is called hydrogen bond. This statement is from Linus Pauling (1939) in his book The Nature of the Chemical Bond. He gave the ion [F:H:F]- as an example. At that time, the hydrogen bond was recognized as mainly ionic in nature. The energy associated with hydrogen bond is 8 to 40 kJ/mol.
Comparison of melting and boiling points for a few substances

| Molecule | Molar mass | m.p. /°C | b.p. /°C |
|---|---|---|---|
| NH3 | 17 | -77.8 | -33.5 |
| H2O | 18 | 0 | 100 |
| H2S | 34 | -85.6 | -60 |
| H2Se | 81 | -60.4 | -41.5 |
| H2Te | 128.6 | -51 | -1.8 |
| CH3OH | 32 | ? | 65 |
| C2H5OH | 46 | ? | 78 |
| C2H5OC2H5 | 74 | ? | 34 |
Normally, the melting point and boiling point of a substance increase with molecular mass. For example the melting points of inert gases are 0.95, 24.48, 83.8, and 116.6 K respectively for He, Ne, Ar, and Kr.
In this table, the melting and boiling points of water are particularly high for its small molecular mass. This is usually attributed to the formation of hydrogen bonds. The small electronegative atoms F, O, and N become somewhat negatively charged when they are bonded to hydrogen atoms; these negative charges attract the slightly positive hydrogen atoms of neighboring molecules, forming the strong interaction called a hydrogen bond.
A graph of the melting and boiling points of the group 16 hydrides illustrates the same point.
Based on the observed absorptions at 3546 and 3691 cm-1, Van Thiel, Becker, and Pimentel (1957, J. Chem. Phys. 27, 386) suggested the formation of a water dimer when water is trapped in a nitrogen matrix.
Because of hydrogen bonding, water molecules form dimers, trimers, larger clusters, and extended networks. The hydrogen bonds are not necessarily linear.
Structure of Ice
Ice occurs in many places, including the Antarctic. If all the ice melted, the water level of the oceans would rise by about 70 m.
The density of ice is markedly smaller than that of liquid water because of the regular, open arrangement of water molecules held by hydrogen bonds. In an idealized structure of ice, every hydrogen atom is involved in a hydrogen bond, and every oxygen atom is surrounded by four hydrogen bonds.
The structures of hexagonal ice and of cubic ice can be depicted with rods representing the hydrogen bonds that join the oxygen atoms. Since the hydrogen bonds are not linear, the real structure is a little more complicated.
The tetrahedral coordination opens up space between the molecules. On each hydrogen bond, represented by a rod joining two oxygen atoms, lies one proton in an asymmetric position. The O···O distance is 275 pm. Ordinary ice is hexagonal, with a hexagonal c axis of 732 pm and an a axis of 450 pm. If water vapor condenses on a very cold substrate at 143-193 K (-130 to -80 °C), a cubic phase is formed instead; its unit-cell dimensions have been determined at 110 K.
These diagrams can also be used to represent the two forms of diamond, and in this case, the rods joining the atoms represent C-C bonds. Each C-C bondlength is 154 pm. Silicon and germanium crystals have the same structure, but their bondlengths are longer. The two diamond types of structure are related to the packing of spheres. The hexagonal type has the ABABAB... sequence, whereas the cubic type has the ABCABC... sequence. In both cases, half of the tetrahedral sites are occupied by tetrahedrally bonded carbon atoms. Hexagonal diamonds have been observed in meteorites.
The four hydrogen bonds around an oxygen atom form a tetrahedron in a fashion found in the two types of diamonds. Thus, ice, diamond, and close packing of spheres are somewhat topologically related.
A phase diagram of water shows nine different solid phases (ices). Ice Ih is ordinary ice, and ice Ic is the cubic form obtained by vapor deposition. Apart from ice I, the other phases are formed and observed only under high pressures generated in laboratory apparatus. So far, ten different forms of ice have been observed, some of which exist only at very high pressure. The pressure deep under the polar (Antarctic) ice cap is very high, but we are not able to make any direct observation or study there.
There is a report of an eleventh ice, and published ice phase diagrams and drawings of ice structures are extremely interesting.
The Autoionization of Water
The autoionization of water is the formation of ions according to
$\ce{H2O(l) + H2O(l) <=> H3O+ + OH-}$
This is an equilibrium process and is characterised by an equilibrium constant, K'w:
$K'_w = \dfrac{[H_3O^+]\,[OH^-]}{[H_2O]}$
Since [H2O] = 1000/18 = 55.56 M, and remains rather constant under any circumstance, we usually write
$K_w = [H_3O^+][OH^-] = 1 \times 10^{-14}$ (at 298 K)
$pK_w \equiv -\log K_w = 14$ (at 298 K)
| t /°C | Kw |
|---|---|
| 0 | 1.14e-15 |
| 25 | 1.00e-14 |
| 35 | 2.09e-14 |
| 40 | 2.92e-14 |
| 50 | 5.47e-14 |
For neutral water, [H3O+] = [OH-] = 1 × 10-7 M at this temperature. Furthermore, we define pH = -log[H3O+] and pOH = -log[OH-], so that pH = pOH = 7 at 298 K in neutral solutions.
It is important to realize that Kw depends on temperature, as shown in the table above.
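Because Kw changes with temperature, the pH of a perfectly neutral solution is 7.00 only near 25 °C. A small sketch using the Kw values tabulated above:

```python
from math import log10, sqrt

# Kw values from the table above (the first entry is for 0 °C)
KW = {0: 1.14e-15, 25: 1.00e-14, 35: 2.09e-14, 40: 2.92e-14, 50: 5.47e-14}

for t, kw in KW.items():
    h = sqrt(kw)      # in neutral water, [H3O+] = [OH-] = sqrt(Kw)
    print(f"{t:2d} °C: [H3O+] = {h:.2e} M, neutral pH = {-log10(h):.2f}")
```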
Leveling Effect of Water and Acid-Base Characters
The strength of strong acids and bases is dominated by the autoionization of water. In aqueous solution, the strongest acid and base that can exist are the hydronium ion, H3O+, and the hydroxide ion, OH-, respectively. The acids HCl, HBr, HI, HNO3, HClO3, HClO4, and H2SO4 ionize completely in water, making them effectively as strong as H3O+; this is the leveling effect of water. Strong acids, strong bases, and salts ionize completely in their aqueous solutions.
For example, HCl is a stronger acid than H3O+, so the following reaction goes to completion as HCl dissolves in water.
HCl + H2O = Cl- + H3O+
A similar equation can be written for another strong acid.
On the other hand, a strong base reacts with water to give the strongest base that can exist in water, OH-.
H2O + B- = OH- + HB
For example, O2-, CH3O-, and NH2- are strong bases in water. The leveling effect thus applies to bases as well.
Amphiprotic Species
Equilibria of acids and bases make interesting chemistry. When an acid and a base differ by one proton, they are called a conjugate acid-base pair. A water molecule can act as a weak acid or a weak base, because of its ability to donate or accept a proton; such behavior makes water an amphiprotic species. Other amphiprotic species include the intermediate anions of polyprotic acids, such as HCO3- and H2PO4-.
If several acids and bases are dissolved in water, all the equilibria must be considered together. Estimating the pH of such solutions requires treating several equilibrium constants simultaneously; for example, many species dissolve in rain water, and many equilibria must be considered. Detailed treatments and examples are given under Acid-Base Reactions.
Carbon dioxide in the air dissolves in rain water, lakes, and rivers. A solution of CO2 involves the following reactions:
| Reaction | K expression | K value |
|---|---|---|
| H2O(l) + CO2(g) ⇌ H2CO3(aq) | [H2CO3] / P(CO2) | ? |
| H2CO3 ⇌ HCO3- + H+ | [HCO3-][H+] / [H2CO3] | 5e-7 |
| HCO3- ⇌ CO32- + H+ | [CO32-][H+] / [HCO3-] | 5e-11 |
| H2O(l) + H2O(l) ⇌ H3O+ + OH- | [H3O+][OH-] | 1e-14 |
These complicated equilibria make natural water a buffer.
Example 1
Assume that the partial pressure of carbon dioxide causes a total concentration of carbonic species to be 8e-4 M. Estimate the pH of this solution.
Solution
From the given data, we have five equations in five unknowns ([H+], [OH-], [H2CO3], [HCO3-], [CO32-]):

1. $\dfrac{[HCO_3^-][H^+]}{[H_2CO_3]} = 5 \times 10^{-7}$
2. $\dfrac{[CO_3^{2-}][H^+]}{[HCO_3^-]} = 5 \times 10^{-11}$
3. $[H_3O^+][OH^-] = 1 \times 10^{-14}$
4. Charge balance: $[H^+] = [HCO_3^-] + [OH^-] + 2\,[CO_3^{2-}]$
5. Mass balance on carbon: $[H_2CO_3] + [HCO_3^-] + [CO_3^{2-}] = 8.0 \times 10^{-4}\; M$
Solving these equations for the five unknowns can be done with Maple, Mathcad, a spreadsheet, or by approximation. Since we are interested mainly in the pH, we make the following approximations:

- Assume H+ comes mostly from reaction (1), so that $[H^+] = [HCO_3^-]$.
- H2CO3 is a weak acid and remains mostly un-ionized, so $[H_2CO_3] \approx 8.0 \times 10^{-4}\; M$ ...... (6)
- Let $x = [HCO_3^-] = [H^+]$; then equation (1) becomes $\dfrac{x^2}{[H_2CO_3]} = 5.0 \times 10^{-7}$.
Combining (1) and (6) gives $[H^+]^2 = x^2 = 8.0 \times 10^{-4} \times 5.0 \times 10^{-7} = 4.0 \times 10^{-10}$. Therefore,
$[H^+] = 2.0 \times 10^{-5}\; M$
$pH = -\log(2.0 \times 10^{-5}) = 4.7$
DISCUSSION
Generally speaking, rain water has a pH about 5, rather acidic. It dissolves limestone and marble readily. Due to the dissolved carbon dioxide, rain water is a buffer solution.
An increased carbon dioxide level in the atmosphere forces an increase in dissolved carbon dioxide. Would this cause the pH of rain water to decrease or increase? Justify your answer by giving the reasons.
Since [H+] = 2.0 × 10-5 M, [OH-] = 5 × 10-10 M, and the H+ contributed by the ionization of water is likewise 5 × 10-10 M, negligible compared with the 2.0 × 10-5 M from the ionization of H2CO3. Similarly, the ionization
$\ce{HCO3- <=> CO3^{2-} + H+}$
is also small, so most of the carbon-containing species is H2CO3. H2CO3 is a weak acid, and its degree of ionization is indeed small.
Now you may proceed to evaluate the other concentrations: [OH-], [HCO3-], and [CO32-].
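The approximation above can also be checked numerically. The sketch below (Python, standard library only) solves the charge- and mass-balance equations of this example by bisection on [H+] and reproduces pH ≈ 4.7.

```python
from math import log10

# Constants and total dissolved carbonate from the example above
KA1, KA2, KW, CT = 5e-7, 5e-11, 1e-14, 8.0e-4

def charge_imbalance(h):
    denom = h*h + KA1*h + KA1*KA2
    hco3 = CT * KA1 * h / denom          # [HCO3-]
    co3  = CT * KA1 * KA2 / denom        # [CO3 2-]
    oh   = KW / h                        # [OH-]
    return h - (hco3 + 2*co3 + oh)       # zero at the true [H+]

lo, hi = 1e-12, 1e-2                     # bracketing guesses for [H+]
for _ in range(100):                     # simple bisection
    mid = (lo + hi) / 2
    if charge_imbalance(mid) > 0:
        hi = mid
    else:
        lo = mid

print(f"[H+] = {mid:.2e} M, pH = {-log10(mid):.2f}")   # ≈ 2e-5 M, pH ≈ 4.7
```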
Reactivity of Water Towards Metals
Alkali metals react readily with water. Contact of cesium metal with water causes an immediate explosion, and the reactions become progressively slower for potassium, sodium, and lithium. Reactions with barium, strontium, and calcium are less well known, but these metals also react readily, although warm water may be needed for calcium. Many metals also displace H+ ions from acidic solutions; this is often regarded as a property of acids.
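For example, the familiar reaction of sodium with water (the other alkali metals react analogously, with increasing violence down the group), and the slower but similar reaction of calcium, are

$\ce{2 Na(s) + 2 H2O(l) -> 2 NaOH(aq) + H2(g)}$

$\ce{Ca(s) + 2 H2O(l) -> Ca(OH)2(aq) + H2(g)}$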
Electrolysis of Water
The enthalpy of formation of liquid water, H2O(l), is -285.830 kJ/mol, and that of water vapor is -241.826 kJ/mol; the difference is the heat of vaporization at 298 K. The entropies (S) of liquid water and of the vapor are 69.95 and 188.835 J K-1 mol-1, respectively (see Thermodynamic Data). These are absolute entropies, not entropies of formation. The entropy of formation of liquid water is obtained from
$\Delta S^o_f = S^o_{H_2O} - S^o_{H_2} - \tfrac{1}{2} S^o_{O_2} = 69.95 - 130.68 - 0.5 \times 205.14 = -163.3\; J\; K^{-1}\; mol^{-1}$ (data from Thermodynamic Data)
$\Delta G^o_f = \Delta H^o_f - T\,\Delta S^o_f$ (note that H is in kJ/mol and S in J K-1 mol-1)
$\Delta G^o_f = -285.83 - 298.15 \times (-163.3/1000) = -237.13\; kJ\; mol^{-1}$
The equilibrium constant and the Gibbs energy are related by
$\Delta G^o = -RT \ln K$
$K = \exp(-\Delta G^o / RT) = 3.5 \times 10^{41}\; atm^{-3/2}$
This is a very large value for the formation of water,
$\ce{H2 + 1/2 O2 -> H2O(l)}$
In other words, the reaction goes essentially to completion, and the extent to which water dissociates into hydrogen and oxygen is extremely small. A negative value of ΔG° indicates a spontaneous, product-favored reaction.
The Gibbs energy is the maximum energy available as work other than pressure-volume work. This redox reaction forming water can be made to proceed in a galvanic (fuel) cell, in which case the energy is converted into electrical energy according to
$\Delta G^o = -nFE = -237.13\; kJ$
where n is the number of electrons (= 2) in the redox equation, F is the Faraday constant (= 96,485 C/mol), and E is the cell potential. Thus,
$E = \dfrac{237130\; J}{2 \times 96485\; C} = 1.23\; V$
Ideally, a reverse voltage of 1.23 V is required for the electrolysis of water, but in practice some overvoltage beyond this is needed to make the decomposition proceed. Furthermore, pure water hardly conducts electricity, so an acid, base, or salt is usually added for the electrolysis of water.
Example 2
In order to carry out the electrolysis of water, 1.50 V is applied. Assume the energy not converted to chemical energy is converted to heat. How much heat is generated for the electrolysis of 1 mole water?
Solution
Ideally, 1.23 V would be used for the electrolysis; the energy due to the overvoltage of 1.50 - 1.23 = 0.27 V is converted to heat.
Heat = 0.27 V * 2 * 96485 C
= 52102 J
= 52 kJ
DISCUSSION
The excess energy can also be evaluated using
Heat = n F *1.50 - 237130
This problem also illustrates the principle of conservation of energy.
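The voltage and heat figures in this section are easy to verify numerically; the sketch below recomputes the ideal decomposition voltage from ΔG° and the heat dissipated per mole of water at an applied 1.50 V.

```python
F = 96485.0            # Faraday constant, C/mol
N = 2                  # electrons transferred per H2O
DG = -237130.0         # Gibbs energy of formation of liquid water, J/mol

E_ideal = -DG / (N * F)                 # ideal (reversible) cell voltage
applied = 1.50
heat = (applied - E_ideal) * N * F      # energy not stored chemically, per mole of H2O

print(f"E_ideal = {E_ideal:.2f} V")                              # 1.23 V
print(f"heat at 1.50 V = {heat/1000:.0f} kJ per mole of water")  # ~52 kJ
```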
Questions
1. For the reaction
$\ce{H2O(l) -> H2(g) + 1/2 O2(g)}$
the equilibrium constant, from the value given earlier, is 1/(3.5 × 1041) = 2.9 × 10-42 atm3/2. What is the partial pressure of H2(g)?
Solutions
1. Hint: 2.6e-64
Skill: evaluate this value.
Physical Properties of Water
Chemical and physical properties of water are often discussed together. These properties are fundamental to many disciplines such as hydrology, environmental studies, chemical engineering, environmental engineering, and civil engineering, and they are of course of interest to chemists and physicists.
Here are some highlights of the physical properties of water. Pure liquid water has a high heat capacity, 4.182 J K-1 g-1; it conducts heat well for a liquid but is a poor electrical conductor. It is a good solvent for ionic and polar substances but interacts only weakly with non-polar substances. The surface tension of water is rather high, so small quantities gather into drops rather than spreading out as thin layers.
Hydrogen bonding contributes to many of the physical and chemical properties, such as the familiar freezing and boiling points of 273.15 K and 373.15 K, respectively, which are unusually high for so small a molecule. The critical temperature and pressure are 647.3 K and 220.5 bar (22,050 kPa), and the critical volume is 0.056 m3 kmol-1.
Because of the many applications of water, detailed knowledge of its properties is desirable, and keeping track of water properties is of national interest; the American Society of Mechanical Engineers (ASME) is one organization that does so. International co-operation on research and information exchange is more economical, and for thermal properties the International Association for the Properties of Water and Steam (IAPWS) has been set up; Canada is a member of this organization.
From the application point of view, the variations of the following properties as functions of temperature and pressure are required.
• Compressibility of steam and water as a function of pressure at various temperatures
• Density of water as a function of temperature
• Viscosity of water as a function of temperature
• Enthalpy of water for various thermodynamic evaluations
• Molar volumes and expansion coefficients of water and vapor as functions of pressure and temperature
• Speed of sound in water and vapor and speed of sound in air-vapor mixture as functions of temperature and pressure
• Entropy of water as a function of temperature and pressure
• Thermal conductivity of water and steam
• Viscosity of water at any temperature for pipe and pump design
• Dielectric constant as a function of temperature and pressure
• Surface tension as a function of temperature and pressure
• Gibbs energy at various temperatures and pressures
• Properties such as dielectric constants and ion products of supercritical water (fluid)
• Applications of supercritical water: recycling of plastic waste, recovery of toluenediamine, hydrolysis of PET (polyethylene terephthalate), etc.
We will discuss some of these to illustrate the point, but not all of them.
Density of Water
T /K    H2O density /(g/mL)    D2O density /(g/mL)
273     0.999841               1.10469
274     0.999900
275     0.999941
276     0.999965
277     0.999973               1.1057
278     0.999965               1.10562
279     0.999941
280     0.999902
281     0.999849
282     0.999781
283     0.999700
Density is the mass per unit volume. The density of water is usually taken as 1.0 g/mL or 1.000e3 kg m-3 at 277 K. The table shows that the density varies with temperature and is highest near 277 K; densities between 273 and 283 K from the CRC Handbook of Chemistry and Physics are given here. These data are calculated from experimental data for pure water based on the standard at 276.98 K. The same source gives the density of ordinary water as 1.000000 g/mL at 277 K.
The volume occupied by one mole of a substance is called the molar volume. The molar volume of liquid water is 18.016 g/mol divided by the density. At 277 K the molar volume is 18.016 mL, and it increases to about 18.03 mL at both 269 K (supercooled liquid) and 285 K.
The density of ice is 0.917 g/mL at 273 K, and its molar volume is 19.65 mL, about 9% more than the molar volume of the liquid. Thus, roughly 9% of an ice cube containing no air bubbles floats above the surface, and about 91% of it lies below the waterline. This density difference makes the behavior of icebergs interesting; icebergs are major tourist attractions in Newfoundland and Labrador, Canada.
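These molar-volume and iceberg figures follow directly from the densities given above; a minimal Python sketch:

```python
M = 18.016          # molar mass of water, g/mol
rho_liq = 0.99997   # density of liquid water near 277 K, g/mL
rho_ice = 0.917     # density of ice at 273 K, g/mL

V_liq = M / rho_liq                  # molar volume of the liquid, mL
V_ice = M / rho_ice                  # molar volume of ice, mL
frac_below = rho_ice / rho_liq       # fraction of a floating ice cube below the waterline

print(f"molar volume (liquid): {V_liq:.2f} mL")   # ~18.02 mL
print(f"molar volume (ice):    {V_ice:.2f} mL")   # ~19.65 mL
print(f"fraction submerged:    {frac_below:.1%}") # ~91.7%
```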
Electric Dipole Moment and Dielectric Constant
Charged ions interact with each other due to electrostatic attraction or repulsion. The force F between two charge particles with charges q1, and q2 separated by a distance r is
\[F = \dfrac{q_1 q_2}{4 \pi \epsilon_o r^2}\]
Na^{δ+}—Cl^{δ−} (schematic of the bond dipole, with a partial positive charge on Na and a partial negative charge on Cl)
Uncharged molecules still interact with each other, not through net charge-charge electrostatic interaction, but through electric dipole interaction. The electric dipole moment is a vector arising from an uneven distribution of unlike charges. In diatomic molecules, the magnitude of the electric dipole moment (in debye) can be roughly estimated from the difference between the Pauling electronegativities of the two atoms. For convenience, let us assume that the centres of positive and negative charge in Na-Cl are separated by a distance l; then the electric dipole moment, $\mu$, is
\[\mu = q\, l\]
Traditionally, the dipole moments of molecules have been tabulated in electrostatic units (esu), in which case the charge of an electron is 4.80e-10 esu (= (1.6e-19 C)(3e9 esu/C)). In NaCl crystals, the distance between Na and Cl ions is 240 pm. If the NaCl molecule (in the gas phase) had the same distance between idealized ions, the dipole moment would be:
\[\mu = q\, l = (1.60\times 10^{-19}\; C)(240\times 10^{-12}\; m) = 3.84\times 10^{-29}\; C\, m\]
or in cgs-esu units
\[\mu = 4.8\times 10^{-10}\; esu \times 2.40\times 10^{-8}\; cm = 11.5\times 10^{-18}\; esu\, cm\]
In the cgs-esu system, 1e-18 esu cm is defined as one debye (symbol D). Thus, we have
\[1\; D = 1\times 10^{-18}\; esu\, cm \approx 3.34\times 10^{-30}\; C\, m \;\textrm{(from the calculation above; the accepted value is } 3.336\times 10^{-30}\; C\, m)\]
\[\mu = 11.5\; D \;\textrm{for the idealized NaCl gas molecule}\]
but the observed value is
\[\mu_{observed} = 9\; D = 3\times 10^{-29}\; C\, m \;\textrm{in NaCl gas}\]
Electric Dipole Moments of Some Gas Molecules
Molecule    μ /D
NaCl        9.0
KCl         10.3
CO          0.1
HF          1.8
HCl         1.1
HBr         0.8
H2O         1.8
SO2         1.6
N2O         0.2
NH3         1.5
However, the experimental dipole moment is only 9 D for NaCl gas. Thus, the model used in the calculation has to be modified to account for partial delocalization of the charges, or by including some covalent character in the Na-Cl bond. In any case, the model illustrates a physical method for estimating dipole moments. Dipole moments of some gas molecules are given in the table here.
The dipole moment is a vector pointing from the negative to the positive charge along the bond. For polyatomic molecules such as water, the total electric dipole moment is the vector sum of the dipoles of the individual bonds. The experimental dipole moment of water is 1.8 D, the same as that of H-F; water is a very polar compound. Ammonia, with three N-H bonds, has a dipole moment of 1.5 D.
Homonuclear diatomic molecules have zero dipole moment, of course, and so do the linear CO2 and CS2 molecules.
The large dipole moment makes water a very special substance. Water has a very high dielectric constant, about 80. Because of ion-dipole interactions, water is the universal solvent for ionic substances, especially those with monovalent ions; typical interactions in solution include ion-dipole and dipole-dipole interactions. Dissolution in water is called hydration.
The dipole moment also governs the interaction of water with microwaves and radar. Application of Ground Penetrating Radar in Glaciology, for example, describes the theory of radar, its interaction with water, and its application in glaciology.
In contrast, supercritical water has a low dielectric constant, making it a good solvent for non-polar substances.
Example 1
Let us model the H-F molecule as composed of two ions, one positive and one negative. What distance must separate these two ions to give a dipole moment of 1.8 D?
Solution
Recall the formula for the dipole moment given above, and proceed with the calculation:
\[\mu = q\, l\]
\[q\, l = 4.8\times 10^{-10}\; esu \times l = 1.8\times 10^{-18}\; esu\, cm\]
\[l = \dfrac{1.8\times 10^{-18}\; esu\, cm}{4.8\times 10^{-10}\; esu} = 0.375\times 10^{-8}\; cm = 37.5\; pm\]
DISCUSSION
What is the actual H-F bond distance? (Ans. 92 pm.)
Dividing the apparent bond length of 37.5 pm by the observed bond length of 92 pm gives the fractional ionic character: 37.5/92 ≈ 41% ionic character.
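The same estimate can be scripted; a minimal Python sketch (the conversion 1 D = 3.336e-30 C·m and the 92 pm bond length are the values used in this example):

```python
D_TO_CM = 3.336e-30    # 1 debye in C m
e = 1.602e-19          # elementary charge, C

mu = 1.8 * D_TO_CM         # dipole moment of HF, C m
l_apparent = mu / e        # separation of two unit charges giving this moment, m
bond_length = 92e-12       # observed H-F bond length, m

print(f"apparent charge separation: {l_apparent * 1e12:.1f} pm")        # ~37.5 pm
print(f"fractional ionic character: {l_apparent / bond_length:.0%}")    # ~41%
```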
Questions
1. In a fossil-fuel-fired or nuclear power plant, steam at 600 ºC under high pressure is often used to transfer heat from the boiler to the turbine. What phase is this steam?
Solutions
1. Hint: a supercritical fluid of water (the temperature, 873 K, is above the critical temperature of 647 K).
Water treatment is a process of making water suitable for its application or returning it to its natural state. Thus, water treatment is required both before and after its application, and the required treatment depends on the application. For example, treatment of greywater (from bath, dish, and wash water) differs from treatment of blackwater (from flush toilets). Composting toilets are not allowed in urban dwellings; yet composting toilets are used in a 30,000-square-foot office complex at the Institute of Asian Research, University of British Columbia.
Water treatment involves science, engineering, business, and art. The treatment may include mechanical, physical, biological, and chemical methods. As with any technology, science is the foundation, and engineering makes sure that the technology works as designed. The appearance and application of water is an art.
In terms of business, RGF Environmental, Water Energy Technologies, Aquasana Store, Vitech, Recalyx Industrial SDN BHD, and PACE Chemicals Ltd are some of the many companies that offer various processes for water treatment. Millipore, a Fisher Scientific partner, offers many lines of products for producing ultrapure water using a combination of activated charcoal, membranes, and reverse osmosis filters. The Internet sites of these companies offer useful information regarding water.
An environmental scientist or consultant matches a service provider, modifying the process if necessary, to the requirement.
• Natural Water includes some discussion on hard and soft water. Softening hard water for boiler, cooler, and domestic application is discussed therein. These treatments prepare water so that it is suitable for the applications.
• Water Biology deals with water and biology. Drinking water is part of making water suitable for living. Thus, this link gives some considerations to drinking water problems.
• There are many different industry types, and waters from various sources are usually treated before and after their applications. Pre-application treatment and wastewater treatment offer a special opportunity or challenge. Only a general consideration will be given to some industrial processes.
• General municipal and domestic wastewater treatment converts used water (waste) into environmentally acceptable water or even drinking water. Every urban centre requires such a facility.
General Wastewater Treatment
Water is a renewable resource. All water treatments involve the removal of solids, bacteria, algae, plants, inorganic compounds, and organic compounds. Removal of solids is usually done by filtration and sedimentation. Bacterial digestion is an important process for removing harmful pollutants. Converting used water into environmentally acceptable water, or even drinking water, is wastewater treatment.
Water in the Great Lakes Region is an organization dealing with the water resources. Ontario Clean Water Agency (OCWA) is a provincial Crown corporation in business to provide environmentally responsible and cost-efficient water and wastewater services. It currently operates more than 400 facilities for 200 municipalities. This web site provides information on water and water treatment.
In April 1993, 403,000 people in Milwaukee became ill as a result of cryptosporidium contamination of the water supply due to spring runoff. This outbreak caused more stringent regulations, aimed at removing cryptosporidium, to be implemented in the public drinking water system.
In May 2000, torrential downpours washed surface water into shallow wells in the small town of Walkerton, Ontario, Canada. On May 17, some residents complained of fever, bloody diarrhea, and vomiting. This became known as the Walkerton E. coli outbreak. Nearly half of the town's population fell ill, and several people died of E. coli O157:H7 infection. A public inquiry recommended many measures, aimed at eliminating E. coli, to prevent similar outbreaks.
Sewage Treatment
As a general discussion, let us look at a typical process in sewage treatment. A flow diagram for a general sewage treatment plant, from Water Education, Department of Computer Science, University of Exeter, U.K., is summarized below.
Sewage is SCREENED to remove large solid chunks, which are disposed of in a LANDFILL SITE. It then flows to the SETTLEMENT TANK to let the fine particles settle; the settled material is called SLUDGE. The supernatant is then PERCOLATING FILTERED and/or AERATED. The water can be filtered again and then disinfected (chlorinated in most cases). When there is no other complication, the water is returned to nature, back into the ecological cycle.
The SLUDGE removed from the settlement tank is composed of living biological material. A portion of it may be returned to the AERATION TANK (as activated sludge), but the raw SLUDGE is digested by microorganisms; both anaerobic (without oxygen) and aerobic (with air) bacterial digestion are used. At the digestion stage, carbon dioxide, ammonia, and methane gases are evolved. The volume of the digested sludge is reduced, and it is acceptable as a fertilizer supplement in farming.
Wastewater Treatment
Although the sewage water may be discharged back into the ecological system after AERATED DIGESTION and PERCOLATING FILTRATION, in some cases further treatment is required. Some general considerations for water treatment are given below.
A rather recent book, Chemistry of Water Treatment by S.D. Faust and O.M. Aly, 2nd Ed. (1998) [TD433 F38 1998], addresses the quality of natural and treated water.
The first three chapters discuss the criteria and standards for drinking water quality, organic compounds in water, and the taste and odor of water. Understandably, the standards change over the years, as do the standards for treated waters. Guidelines are available from government agencies such as Environment Canada, the counterpart of the U.S. Public Health Service and the Environmental Protection Agency (EPA). We have talked about drinking water in Water Biology.
The next seven chapters deal with the removal of the following:
• organics and inorganics by activated carbon
• particulate matter by coagulation
• particulate matter by filtration and sedimentation
• hardness and other scale-forming substances
• inorganic contaminants
• corrosive substances
• pathogenic (disease producing) bacteria, viruses, and protozoans (microorganisms).
There is also a chapter dealing with aeration.
These items cover the chemistry, biology, and physics involved in the treatment of water. Some of these topics have been discussed in chemistry of water, physical properties of water, biology of water, and natural water. Introductions are going to be given to some selected topics below.
Treatment by Activated Carbon
Treatment by activated carbon relies on adsorption or absorption. When a chemical species adheres to the surface of a solid, the process is adsorption. When partial chemical bonds form between the adsorbed species and the solid, or when the absorbate gets into the channels of the solid, we call it absorption. However, these two terms are often used interchangeably, because distinguishing one type from the other is very difficult.
The use of activated charcoal to remove undesirable odors and tastes from drinking water has been recognized since the dawn of civilization, and bone char, charred vegetation, gravel, and sand have been used to filter water for domestic use for thousands of years. Research on and production of activated charcoal accelerated during the two world wars: the use of poison gas prompted the development of gas masks, which are still in use today.
Charcoal absorbs many substances, ranging from colored organic particulates to inorganic metal ions. Charcoal has been used to remove the colour of raw sugar from various sources.
Charcoal consists of microcrystallites of graphite. The particles in charcoal are so small that they were once considered amorphous. The crystal structure of graphite consists of layers of hexagonal networks stacked on top of each other. Other molecules attach themselves to the porous surface and to dangling carbon atoms in these microcrystallites. Today, making activated carbon is a large and widely varied industry.
In the manufacture of activated carbon, carbon-containing substances are charred at less than 900 K to produce the carbon. The carbon is then activated at about 1200 K using an oxidizing agent to selectively oxidize portions of the char and produce pores in the material. Because of this special process and the resulting high surface-to-mass ratio, these materials are called activated carbon rather than activated charcoal. Factors affecting the adsorption are particle size, surface area, pore structure, acidity (pH), temperature, and the nature of the material to be adsorbed. Usually, both the adsorption equilibrium and the rate of adsorption must be considered for effective applications.
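As an illustration of how such an adsorption equilibrium can be described quantitatively, the sketch below evaluates the Langmuir isotherm, θ = KC/(1 + KC), one common model for adsorption on activated carbon; the constant K and the concentrations are made-up values for illustration only:

```python
def langmuir_coverage(c, k):
    """Fractional surface coverage for solute concentration c (mg/L)
    and Langmuir constant k (L/mg)."""
    return k * c / (1.0 + k * c)

K = 0.5                              # hypothetical Langmuir constant, L/mg
for c in (0.1, 1.0, 10.0, 100.0):    # hypothetical solute concentrations, mg/L
    print(f"C = {c:6.1f} mg/L -> coverage = {langmuir_coverage(c, K):.2f}")
```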
Coagulation, Flocculation and Sedimentation
Natural water and wastewater contain small particulates. These are suspended in water, forming a colloid. The particles carry like charges, and the resulting repulsion prevents them from combining into larger particulates that would settle. Thus, chemical and physical techniques are applied to help them settle; the phenomenon is known as coagulation. A well known method is the addition of an electrolyte: the charged particulates combine with ions, neutralizing their charges, and the neutral particulates then combine into larger particles and finally settle out.
Another method is to use a high-molecular-weight material to attract or trap the particulates so that they settle down together. Such a process is called flocculation. Starch and multiply charged ions are often used.
Historically, dirty water has been cleaned by treating it with alum, Al2(SO4)3·12H2O, and lime, Ca(OH)2. These electrolytes change the pH of the water through the following reactions:
\[\ce{Al2(SO4)3 * 12 H2O \rightarrow 2 Al^{3+}(aq) + 3 SO4^{2-}(aq) + 12 H2O}\]
\[\ce{SO4^{2-}(aq) + H2O \rightarrow HSO4^{-}(aq) + OH^{-}}\] (causing a pH change)
\[\ce{Ca(OH)2 \rightarrow Ca^{2+}(aq) + 2 OH^{-}}\] (causing a pH change)
The slightly basic water causes Al(OH)3, Fe(OH)3, and Fe(OH)2 to precipitate, carrying the small particulates down with them, and the water becomes clear. Records suggest that Egyptians and Romans used these techniques as early as 2000 BC.
Suspended iron oxide particulates and humic organic matter give water a yellow, muddy appearance. Both can be removed by coagulation and flocculation. The description given here is oversimplified, and many more techniques have been applied in the treatment of water. Coagulation is a major application of lime in the treatment of wastewater.
Other salts such as iron sulfates Fe2(SO4)3 and FeSO4, chromium sulfate Cr2(SO4)3, and some special polymers are also useful. Other ions such as sodium, chloride, calcium, magnesium, and potassium also affect the coagulation process. So do temperature, pH, and concentration.
Disposal of coagulation sludge is a concern, however.
Sedimentation lets the water sit so that the flocculated or coagulated particles can settle out. It works best with relatively dense particles (e.g. silt and minerals), while flotation works better for lighter particles (e.g. algae, color). A settling tank should be big enough that the water takes a long time (ideally 4 hours or more) to get through. Inlets and outlets are designed so that the water moves slowly in the tank, and long, narrow channels are installed to let the water snake its way through. The settled particles, the sludge, must occasionally be removed from the tanks. The water is next ready to be filtered. Sedimentation is used in pre-treatment and wastewater treatment.
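The sizing rule of thumb above amounts to detention time = tank volume / flow rate; a minimal sketch (the flow rate is a made-up example value):

```python
flow_rate = 500.0       # hypothetical plant flow, m^3 per hour
detention_time = 4.0    # desired detention time, hours

tank_volume = flow_rate * detention_time    # volume needed so water spends ~4 h settling
print(f"required settling tank volume: {tank_volume:.0f} m^3")
```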
Filtration
Filtration is the process of removing solids from a fluid by passing it through a porous medium. Coarse, medium, and fine porous media are used depending on the requirement. Filter media include artificial membranes, nets, sand filters, and high-technology filter systems. The choice of filter depends on the required filtering speed and on how clean the water must be. The flow required for filtration can be driven by gravity or by pressure. In pressure filtration, one side of the filter medium is at a higher pressure than the other, so that there is a pressure drop across the filter plane; part of this type of filter must be enclosed in a container.
Removing the clogged portion of a filter bed by reversing the flow through the bed and washing out the solid is called backwashing. During this process, the solid must be flushed out of the system; otherwise the filters must be either replaced or taken out of service to be cleaned.
Aqua-Rain manufactures water filters; one unit consists of four filters. Regarding the filtering system, its technical information gives the following statement:
At the heart of the AquaRain™ Water Filtration System are Marathon® state-of-the-art ceramic elements utilizing a long-proven filtration process that is over 100 years old, which will safely remove dangerous waterborne pathogens such as cysts (Cryptosporidium, Giardia lamblia) and bacteria (E. coli, Salmonella typhi, etc.). These innovative Marathon® ceramic elements are also filled with a high grade silvered granulated activated carbon (GAC). The GAC reduces pesticides, chemicals, chlorine, tastes & odors, while leaving the naturally occurring minerals found in the water unaffected.
The units are designed for emergency use and perhaps for regions without developed water infrastructure.
AquaSelect of Mississauga has a pitcher water filter system whose cartridge, its web site claims, contains hundreds of high-efficiency activated carbon and ion exchange beads. Brita filters are also very popular.
Aeration
Bringing air into intimate contact with water for the purpose of exchanging certain components between the two phases is called aeration. Oxygenation is one purpose of aeration; others are the removal of hydrogen sulfide, ammonia, and volatile organic compounds.
A gas or substance dissolved in water may further react with the water; such a reaction is called hydration. Ionic substances dissolve due to hydration, for example:
\[\ce{HCl (g) + x H2O <=> H(H2O)_{x}^{+} + Cl(aq)^{-}}\]
\[\ce{H2S <=> H^{+}(aq) + HS^{-}(aq)}\]
These reactions are reversible, and aeration may also reverse the hydration, releasing the gas from the water. Henry's law applies to this type of equilibrium (a numerical sketch follows the list of aeration methods below). Methods of aeration are:
• Diffused aeration - Air bubbles through water.
• Spray aeration - Water is sprayed through air.
• Multiple-tray aeration - Water flows through several trays to mix with air.
• Cascade aeration - Water flows downwards over many steps in the form of thin water falls.
• Air stripping - A combination of multiple tray and cascade technique plus random packed blocks causing water to mix thoroughly with air.
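The numerical sketch promised above illustrates Henry's law for oxygen dissolving from air into water; the Henry's law constant used (about 770 L·atm/mol for O2 near 25 °C) is a commonly quoted approximate value and is not taken from this text:

```python
K_H = 770.0    # approximate Henry's law constant for O2 in water, L atm/mol (~25 C)
P_O2 = 0.21    # partial pressure of O2 in air, atm
M_O2 = 32.0    # molar mass of O2, g/mol

c = P_O2 / K_H    # Henry's law: P = K_H * c, so c is the dissolved O2 in mol/L
print(f"dissolved O2 ~ {c * 1000:.2f} mmol/L ~ {c * M_O2 * 1000:.1f} mg/L")  # ~8-9 mg/L
```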
Reverse Osmosis Water Filter System
In the following discussion, a dilute solution and a concentrated solution are considered. The dilute solution can be clean water, whereas the concentrated solution contains an undesirable solute (an electrolyte or other substance).
When a compartment containing a dilute solution is connected to another compartment containing a concentrated solution by a semipermeable membrane, water molecules move from the dilute solution into the concentrated solution. This phenomenon is called osmosis. Pig bladders are natural semipermeable membranes. As the water molecules migrate through the semipermeable membrane, the level of the concentrated solution rises until the resulting pressure prevents any further net migration of water; the pressure equivalent to this height difference is called the osmotic pressure. PurePro is one of the many companies that manufacture reverse osmosis water filter devices; Millipore also uses this technique.
By applying pressure to the more concentrated solution, water molecules can be forced to migrate from the high-concentration solution to the low-concentration solution. This method is called a reverse osmosis water filter system.
In this technique, the membrane must be able to tolerate the high pressure and must prevent solute molecules from passing through. Regarding membranes, PurePro makes the following statement:
Semipermeable membranes have come a long way from the natural pig bladders used in the earlier osmosis experiments. Before the 1960's, these membranes were too inefficient, expensive, and unreliable for practical applications outside the laboratory. Modern advances in synthetic materials have generally solved these problems, allowing membranes to become highly efficient at rejecting contaminants, and making them tough enough to withstand the greater pressures necessary for efficient operation.
This technology certainly works, and it has been used to convert salt (ocean or sea) water into fresh water. In this technique, the more concentrated water is discharged, so the technology is costly in regions where water is expensive. Free Drinking Water also uses a reverse osmosis filter system for domestic applications.
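To see why seawater reverse osmosis needs substantial applied pressure, the van 't Hoff relation Π = iMRT gives a rough estimate of the osmotic pressure that must be exceeded; seawater is approximated here as 0.6 M NaCl, fully dissociated:

```python
R = 0.08206    # gas constant, L atm/(K mol)
T = 298.0      # temperature, K
M = 0.6        # approximate molarity of NaCl in seawater, mol/L
i = 2          # van 't Hoff factor for fully dissociated NaCl

Pi = i * M * R * T    # osmotic pressure, atm
print(f"osmotic pressure ~ {Pi:.0f} atm ({Pi * 101.325:.0f} kPa)")
# the applied pressure must exceed roughly this value (~29 atm) to desalinate seawater
```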
Industrial Wastewater Treatment
The Environment Canada (Atlantic Region) Waste Management and Remediation Section contains links to detailed information on the programs and activities relating to Petroleum and Allied Petroleum Storage Tank Systems, Ocean Disposal, Contaminated Sites, and Hazardous Waste Disposal Advice for the provinces of Newfoundland and Labrador, Nova Scotia, New Brunswick and Prince Edward Island. It also contains links to sites within and outside of Environment Canada with related information.
Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied.
• Atmosphere
Atmospheric chemistry studies the chemical composition of the natural atmosphere, the way gases, liquids, and solids in the atmosphere interact with each other and with the earth's surface and associated biota, and how human activities may be changing the chemical and physical characteristics of the atmosphere. It is interesting to note that the 1995 Nobel Prize in Chemistry was awarded to the atmospheric scientists P. Crutzen, M. Molina and F. S. Rowland.
• Carbon Cycle
A scientist and an engineer may be called upon to solve a particular problem involving coal (carbon), gasoline (hydrocarbon), combustion of carbon or carbon-containing fuel, limestone, sea shells, carbon monoxide, or carbon dioxide. When we formulate a solution, we should be aware of the impact not only of the problem, but also of the solution to such a problem. Otherwise, the solution may create a problem that is more expensive to solve later. Thus, it is important to know how carbon evolves at a global scale.
• Carbon Oxides
Carbon forms two important gases with oxygen: carbon monoxide, CO, and carbon dioxide, CO2. Carbon oxides are important components of the atmosphere, and they are parts of the carbon cycle. Carbon dioxide is naturally produced by respiration and metabolism, and consumed by plants in photosynthesis. Since the industrial revolution, ever greater amounts of carbon dioxide have been generated due to increased industrial activities.
• Ozone
Most of the ozone in the atmosphere is in the stratosphere of the atmosphere, with about 8% in the lower troposphere. As mentioned there, the ozone is formed due to photo reaction. The ozone level is measured in Dobson Unit (DU), named after G.M.B. Dobson, who investigated the ozone between 1920 and 1960. One Dobson Unit (DU) is defined to be 0.01 mm thickness of ozone at STP when all the ozone in the air column above an area is collected and spread over the entire area.
• Photosynthesis
Photosynthesis is the process of converting light energy ($E = h\nu$) to chemical energy and storing it in the chemical bonds of sugar-like molecules. This process occurs in plants and some algae (Kingdom Protista). Plants need only light energy, CO2, and H2O to make sugar. The process of photosynthesis takes place in the chloroplasts (chloro = green; plasti = formed, molded), specifically using chlorophyll (phyll = leaf), the green pigment involved in photosynthesis.
Atmospheric Chemistry
Discussion Questions
• How do atmospheric scientists view the atmosphere?
• What gases are pollutants in the atmosphere?
Atmospheric chemistry studies the chemical composition of the natural atmosphere, the way gases, liquids, and solids in the atmosphere interact with each other and with the earth's surface and associated biota, and how human activities may be changing the chemical and physical characteristics of the atmosphere. It is interesting to note that the 1995 Nobel Prize in Chemistry was awarded to the atmospheric scientists P. Crutzen, M. Molina and F. S. Rowland. For convenience of study, atmospheric scientists divide the atmosphere into four layers. The division is based mainly on how the temperature varies as the altitude increases. The four layers, defined by the variation of temperature, are:
• Ionosphere (Aurora) or Thermosphere
• Mesosphere
• Stratosphere
• Troposphere
Above 100 km are the thermosphere and ionosphere, where the temperature increases from 200 K at 100 km to 500 K at 300 km, and goes even higher as the altitude increases further. In outer space, most particles consist of single atoms: H, He, O, etc. At lower altitude (200 - 100 km), diatomic molecules such as N2, O2, and NO are present. The ionosphere is full of electrically charged ions; UV rays ionize these gases. The major reactions are
In the ionosphere:
$O + h v \rightarrow O^+ + e^- \label{18.1.1}$
$N + h v \rightarrow N^+ + e^- \label{18.1.2}$
In the neutral thermosphere:
$N + O_2 \rightarrow NO + O \label{18.1.3}$
$N + NO \rightarrow N_2 + O \label{18.1.4}$
$O + O \rightarrow O_2 \label{18.1.5}$
Beyond the neutral thermosphere are the ionosphere and exosphere. These layers are of interest for space exploration, environmental concerns, and the space sciences; the atmosphere in outer space is more like a plasma than a gas. Below the thermosphere is the mesosphere (100 - 50 km), in which the temperature decreases as the altitude increases. In this region, OH, H, NO, HO2, O2, and O3 are common, and the most prominent chemical reactions are:
$H_2O + h\nu \rightarrow OH + H \label{18.1.6}$
$H_2O_2 + O \rightarrow OH + OH. \label{18.1.7}$
Below the mesosphere is the stratosphere, in which the temperature increases as the altitude increases from about 10 km to 50 km. In this region, the following reactions are common:
$NO_2 \rightarrow NO + O \label{18.1.8}$
$N_2O \rightarrow N_2 + O \label{18.1.9}$
$H_2 + O \rightarrow OH + H \label{18.1.10}$
$CH_4 + O \rightarrow OH + CH_3 \label{18.1.11}$
Air flow is horizontal in the stratosphere. A thin layer in the stratosphere has a high concentration of ozone; this ozone layer is primarily responsible for absorbing the ultraviolet radiation from the sun. The ozone is generated by these reactions:
$O_2 + h\nu \rightarrow O + O \label{18.1.12}$
$O_2 + O \rightarrow O_3 \label{18.1.13}$
The troposphere is where all weather takes place; it is the region of rising and falling packets of air. The air pressure at the top of the troposphere is only 10% of that at sea level (0.1 atmospheres). There is a thin buffer zone, called the tropopause, between the troposphere and the next layer.
The major components in the region close to the surface of the Earth are N2 (78%), O2 (21%), and Ar (1%), with variable amounts of H2O, CO2, CH4, NO, NO2, CO, N2O, and O3. The ozone concentration in this layer is low; only about 8% of the total ozone in the atmosphere is in the troposphere.
What gases are pollutants in the atmosphere?
From the atmospheric science viewpoint, the interactions of all gases among themselves and with the elements of the environment are of interest. However, for identification purposes, we need to single out the gases produced by man-made processes (industry).
Some of the gases due to human activities are:
• Carbon dioxide results from the burning of carbon-containing fuels.
• Carbon monoxide is produced by automobiles. This odorless and colorless gas is very toxic.
• Ozone is produced in the exhaust of internal combustion engines; the variation of the ozone concentration in the stratosphere is also a concern.
• Nitrogen oxides such as NO, NO2, and N2O4 arise from the production of NO in internal combustion engines.
• Methane gas is produced by the treatment of large amounts of waste.
• Sulfur oxides are produced in mining operations and in the combustion of sulfur-containing fuels; sulfur oxides cause the so-called acid rain problem.
• Chlorofluorocarbons (CFCs) are gases used as refrigerants. When released into the atmosphere, they cause the ozone concentration to decrease.
Water vapor is also considered a greenhouse gas, but it is generated continuously by nature due to radiation from the Sun. When water vapor condenses into a liquid, much energy is released in the exothermic process; condensation of water vapor drives storms and many other weather phenomena.
Questions
1. On what basis is the atmosphere divided into 4 layers?
Skill -
Describe the structure of the atmosphere.
2. Which layer contains the most ozone?
Skill - Describe all the details of ozone?
3. How thick is the troposphere?
Discussion - At about 10 km altitude, roughly the height of the world's highest mountains, the atmospheric pressure has fallen to a small fraction of its sea-level value; this is near the top of the troposphere.
4. What type of gas is present in the thermosphere?
Skill - Explain the chemistry taking place in the thermosphere.
5. How is the ionosphere different from other layers?
Discussion - The aurora is related to the ions in the atmosphere.
6. What causes the gas molecules in the ionosphere to ionize and become charged particles?
Skill - Describe the chemistry in the ionosphere.
7. If ozone is a beneficial gas in the atmosphere, why is ozone also a gaseous pollutant?
Discussion - Decomposition of ozone releases O, OOH, OH radicals and they are harmful to many living organisms.
8. Why are chlorofluorocarbons gaseous pollutants?
Discussion - Ozone in the stratosphere absorbs harmful UV C and UV B, which are harmful to humans and plants.
9. Why is warming up of the Earth a bad thing?
Skill - Give your opinion please on an issue.
Solutions
1. The division is made according to patterns of temperature variation.
2. The stratosphere, between 15 and 50 km.
3. The troposphere ranges from 8 to 15 km.
4. A very dilute gas consisting mostly of single atoms.
5. The ionosphere contains a high concentration of charged particles.
6. Radiation (high-energy photons) from the sun causes ionization of the atoms.
7. Ozone is very reactive, causing harm to living organisms.
8. Because they catalyze the decomposition of ozone in the stratosphere.
9. Because we do not know what the consequences will be in the future.
Learning Guide
• How is the atmosphere divided into layers? What are the names of the layers?
• Where is the ozone layer located in the atmosphere? What is the molecular structure of ozone? How is the ozone formed?
Discussion Questions
• How is carbon cycled at a global scale?
The Global Carbon Cycle
A scientist and an engineer may be called upon to solve a particular problem involving coal (carbon), gasoline (hydrocarbon), combustion of carbon or carbon-containing fuel, limestone, sea shells, carbon monoxide, or carbon dioxide. When we formulate a solution, we should be aware of the impact not only of the problem, but also of the solution to such a problem. Otherwise, the solution may create a problem that is more expensive to solve later. Thus, it is important to know how carbon evolves at a global scale. The carbon cycle is part of the Earth cycle; the accompanying diagram illustrates the global cycle of carbon through geological processes, without including respiration and metabolism.
How is carbon cycled at a global scale?
The carbon atoms undergo a complicated chemistry forming what is known as the global carbon cycle, as do oxygen, nitrogen, and other elements; but the carbon cycle is the most widely recognized. An animal produces carbon dioxide and consumes oxygen in its metabolism of food. Glucose is a typical food and a metabolic reaction can be represented by:
$C_6H_{12}O_6 + 6 O_2 \rightarrow 6 CO_2+ 6 H_2O$
Plants and green bacteria, on the other hand, produce oxygen and consume carbon dioxide in photosynthesis. Energy in the form of electromagnetic radiation (photons) is supplied so that the low-energy-content carbon dioxide can be converted to high-energy-content glucose. An overall reaction for the complicated, multi-step photosynthesis can be represented by:
$6 CO_2 + 12 H_2O \xrightarrow{h\nu} C_6H_{12}O_6 + 6 O_2 + 6 H_2O$
At a glance, animals and plants make food for each other. Plants convert solar energy into high-energy food for animals. Water is both a reactant and a product in photosynthesis; isotope-labeling studies showed that the oxygen in the water produced comes from the oxygen in carbon dioxide. You may be thinking of plants with leaves that give beautiful flowers, but primitive plants in the ocean play an even more important role in photosynthesis because of their vast numbers.
The solubility of carbon dioxide depends on its partial pressure. As we know, carbon dioxide dissolves in water to form carbonic acid:
$CO_2+ H_2O \rightarrow H_2CO_3$
$H_2CO_3 \rightarrow H^+ + HCO_3^- \;\;\; K_{a1} = 4.2 \times 10^{-7}$
$HCO_3^- \rightarrow H^+ + CO_3^{2-} \;\;\; K_{a2} = 4.8 \times 10^{-11}$
The dissolved carbon dioxide further reacts with metal ions in the water forming calcium and magnesium carbonates. The Ksp values for CaCO3 and MgCO3 are $5\times 10^{-9}$ and $3\times 10^{-3}$ respectively. Extensive limestone (CaCO3) and dolomite (mixture of CaCO3 and MgCO3) have been formed this way.
$CaCO_3 \rightarrow Ca^{2+} + CO_3^{2-} \;\;\ K_{sp} = 5 \times 10^{-9}$
$MgCO_3 \rightarrow Mg^{2+} + CO_3^{2-} \;\;\; K_{sp} = 3 \times 10^{-3}$
Some believe that this is how limestone was produced. Limestone is soluble in acidic solutions, which may be formed by dissolving large amounts of carbon dioxide:
$CaCO_{3(s)} + 2 H^+_{(aq)} \rightarrow Ca^{2+}_{(aq)} + H_2CO_{3(aq)}$
or,
$CO_{2(aq)} + H_2O_{(l)} + CaCO_{3(s)} \rightarrow Ca^{2+}_{(aq)} + 2 HCO_{3(aq)}^-$
When the concentration of carbon dioxide is reduced, the acidity decreases and the reverse reaction takes place forming a solid, CaCO3(s). Thus, metabolism, photosynthesis, mineralization and geological process are the major chemical processes in the global carbon cycle.
Example 1
In general, it is known that rain water saturated with carbon dioxide has a pH of 5.6; rain with pH lower than 5.6 is called acid rain, due to the presence of sulfur oxides and nitrogen oxides. Assuming the water is otherwise pure apart from the dissolved carbon dioxide, estimate the solubility of carbon dioxide in water.
Solution
Since pH = 5.6,
$[H^+] = 10^{-5.6}\; M = 2.5\times 10^{-6}\; M \notag$
Thus, the contribution of hydrogen ions from self ionization of water (pH = 7) is negligible. We have $[H^+] = 2.5 \times 10^{-6}\; M$ = $[HCO_3^-]$
The ionization of dissolved carbon dioxide is represented by these reactions,
$CO_2 + H_2O \rightarrow H_2CO_3\notag$
$H_2CO_3 \rightarrow H^+ + HCO_3^- \;\; \; K_{a1} = 4.2 \times 10^{-7}\notag$
$HCO_3^- \rightarrow H^+ + CO_3^{2-} \;\;\; K_{a2} = 4.8 \times 10^{-11}\notag$
The major contribution to the production of hydrogen ion comes from the first ionization of H2CO3, and other contributions are almost negligible. If we assume the concentration of H2CO3 to be x M in its ionization,
$H_2CO_3 \rightarrow H^+ + HCO_3^- \;\;\; K_{a1} = 4.2 \times 10^{-7}\notag$
then, by definition of Ka1 we have
$\dfrac{(2.5\times 10^{-6})^2}{x} = 4.2\times 10^{-7}\notag$
Thus,
$x = 1.5 \times 10^{-5}\; M = 0.65\; mg/L\notag$
Thus, the solubility is about 0.65 ppm by weight.
DISCUSSION
Many assumptions have been made here; otherwise the solution would be more complicated. Make sure you understand the assumptions.
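The estimate can be reproduced numerically with the values of this example; a minimal Python sketch:

```python
Ka1 = 4.2e-7     # first ionization constant of H2CO3
pH = 5.6
M_CO2 = 44.0     # molar mass of CO2, g/mol

H = 10 ** (-pH)          # [H+] = [HCO3-], mol/L
x = H ** 2 / Ka1         # [H2CO3], taken as the dissolved CO2, mol/L
print(f"[H+] = {H:.2e} M")
print(f"dissolved CO2 ~ {x:.1e} M ~ {x * M_CO2 * 1000:.2f} mg/L")  # ~1.5e-5 M, ~0.65 mg/L
```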
Example 2
From the solubility products of $CaCO_3$, estimate its molar solubility in natural water saturated with carbon dioxide at pH 5.6 and 298 K.
Solution
From the estimates given in Example 1, we still have to consider these equilibria:
$H_2CO_3 \rightarrow H^+ + HCO_3^- \;\;\; K_{a1} = 4.2\times 10^{-7}\notag$
$HCO_3^- \rightarrow H^+ + CO_3^{2-} \;\;\; K_{a2} = 4.8\times 10^{-11}\notag$
Since the second ionization constant Ka2 << Ka1, it is safe to assume the following:
$[H^+] = [HCO_3^-] = 2.5\times 10^{-6}\; M\notag$
in the above equilibria. From the definition of Ka2:
$\dfrac{[H^+]\,[CO_3^{2-}]}{[HCO_3^-]} = \dfrac{2.5\times 10^{-6}\,[CO_3^{2-}]}{2.5\times 10^{-6}} = 4.8\times 10^{-11}\notag$
Thus, $[CO_3^{2-}] = 4.8\times 10^{-11}\; M$.
By the definition of the solubility product,
$CaCO_3 \rightarrow Ca^{2+} + CO_3^{2-} \;\;\; K_{sp} = 5\times 10^{-9}$
we have
$[Ca^{2+}] = \dfrac{5\times 10^{-9}}{4.8\times 10^{-11}} \approx 100\; M\notag$
DISCUSSION
This value is obviously too high and unreasonable; the result is certainly incorrect. Thus, we should re-examine the last assumption. As CaCO3 dissolves, the concentration of carbonate ion also increases. If this concentration is high, then the contribution from dissolved carbon dioxide is negligible. Effectively, we have [Ca2+] = [CO32-] = y, and
$y^2 = 5 \times 10^{-9}\notag$
Thus,
$y = 7\times 10^{-5}\; M = 0.007\; g\; CaCO_3/L\notag$
This result is obtained by ignoring the dissolved carbon dioxide. The true value probably lies somewhere in between, because as calcium carbonate dissolves, the pH of the solution changes.
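A short Python sketch reproduces the two limiting estimates discussed in this example:

```python
import math

Ksp = 5e-9        # solubility product of CaCO3
Ka2 = 4.8e-11     # second ionization constant of carbonic acid
M_CaCO3 = 100.1   # molar mass of CaCO3, g/mol

# Limit 1: carbonate fixed by the dissolved CO2 ([H+] = [HCO3-], so [CO3^2-] = Ka2)
Ca_limit1 = Ksp / Ka2          # ~100 M, clearly unphysical, so this limit is rejected
# Limit 2: carbonate supplied mainly by the dissolving CaCO3 itself
y = math.sqrt(Ksp)             # [Ca2+] = [CO3^2-] = y
print(f"limit 1: [Ca2+] ~ {Ca_limit1:.0f} M (unphysical)")
print(f"limit 2: [Ca2+] ~ {y:.1e} M ~ {y * M_CaCO3:.3f} g/L")   # ~7e-5 M, ~0.007 g/L
```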
Questions
1. Skill - Explain the chemistry involved in the global carbon cycle.
2. Skill - Explain and identify the chemical composition of minerals and ores.
3. Skill - Explain the meaning of a scientific datum, Ksp in plain language.
4. Skill - Describe the trend of a chemical property when a variable changes.
Solutions
1. Metabolism, photosynthesis, and mineralization are the major chemical processes.
2. Calcium carbonate
3. Since Ksp for MgCO3 is larger, it is more soluble.
4. In general terms, the solubility of limestone decreases as pH increases.
Discussion Questions
• What are the molecular structures of carbon oxides?
• What atomic orbitals are involved in the molecular orbitals of carbon oxides?
• Why do CO molecules form strong bonds with metal atoms in carbonyls?
• What are some of the applications of carbon oxides?
• How has carbon dioxide level changed?
• What measure can be taken to reduce CO2 emission?
Carbon Oxides
Carbon forms two important gases with oxygen: carbon monoxide, CO, and carbon dioxide, CO2. Carbon oxides are important components of the atmosphere, and they are parts of the carbon cycle.
Carbon dioxide is naturally produced by respiration and metabolism, and consumed by plants in photosynthesis. Since the industrial revolution, ever greater amounts of carbon dioxide have been generated due to increased industrial activities.
Today, information on carbon oxides is important. Issues related to carbon oxides have no boundaries. The Carbon Dioxide Information Analysis Center (CADIAC) provides global datasets on carbon dioxide and other atmosphere gases and climate. These datasets are available to international researchers, policymakers, managers, and educators to help evaluate complex environmental issues associated with potential climate change.
Carbon monoxide is also a national and global concern. The Consumer Product Safety Commission (CPSC) considers CO a senseless killer, and it provides information on CO poisoning and detection.
What are the molecular structures of carbon oxides?
The formation of carbon oxides follows from the electronic configurations of carbon and oxygen, which have 4 and 6 valence electrons respectively. Using these valence electrons, we can draw the Lewis structure of CO and three resonance structures of CO2 as follows:
:C≡O:     and     O=C=O ↔ O≡C−O ↔ O−C≡O
(with lone pairs completing the octet of each oxygen atom)
These formulas suggest very strong bonding between carbon and oxygen in these gaseous molecules: a triple bond in C≡O, and double bonds in O=C=O. The formulas containing a triple bond also contribute to the resonance hybrid of CO2.
What atomic orbitals are involved in the molecular orbitals of carbon oxides?
The chemical bonding is more of an interpretation of the molecules in view of their properties. Using results from quantum mechanical approach, we may start by reviewing the electronic configurations of carbon and oxygen:
C: $1s^2\, 2s^2 2p^2$
O: $1s^2\, 2s^2 2p^4$
Thus, the carbon has 4 valence electrons and oxygen has 6 valence electrons. The s and p atomic orbitals are available for chemical bonding.
The valence bond approach suggests that p orbitals of carbon and oxygen are used in these molecules. In CO, one such atomic orbital from each of C and O is employed to form a sigma (σ) bond, and overlap of the remaining pairs of p orbitals leads to the formation of two pi (π) bonds. Thus, the bond order between C and O in C≡O is 3.
Note
A $CO$ molecule has the same number of electrons as $N_2$, and these molecules are said to be isoelectronic. The N2 molecule is likewise represented by N≡N.
The molecular orbital (MO) approach for CO is described in the lecture, where the MO energy level diagram is given; plots of contours of equal electron density for the CO molecular orbitals have also been shown in earlier lectures.
The valence bond approach for CO2 bonding is also very interesting. The two sp hybrid orbitals of the central carbon overlap with one p orbital from each of the oxygen atoms to form the two C-O sigma (σ) bonds in O-C-O. The two remaining p orbitals of carbon overlap with one p orbital of each of the two oxygen atoms, forming two pi (π) bonds and leading to the formation of O=C=O.
Here is a challenge: find a suitable diagram on the web for either the valence bond approach or the MO approach for carbon dioxide.
Why do CO molecules form strong bonds with metal atoms in carbonyls?
Molecules of $CO$ and $N_2$ each have two $\pi$ bonds. Since the two atoms in $CO$ are different, CO is much more reactive than nitrogen. Indeed, $CO$ forms many carbonyls with metal atoms or ions. For example, you have encountered some of the following carbonyls on the page on heterogeneous catalysts:
• Ni(CO)4
• Fe(CO)5
• Co2(µ-CO)2(CO)6 (µ-CO meaning a CO bridging two metal atoms)
• Mn2(CO)10
• Fe3(CO)12
• Co4(CO)12
• Rh4(CO)12
• CFe5(CO)15
• Rh6(CO)16
• Os6(CO)18
The study of metal carbonyls and related organometallic compounds expanded rapidly after the discovery of ferrocene, Fe(C5H5)2, and by now hundreds if not thousands of metal carbonyls have been synthesized.
A σ bond is formed between the carbon of CO and the metal atom. Such bonding is very sensible if we consider that the sp hybrid atomic orbital of carbon is used in this case. Since there are two electrons in this orbital of CO, the metal atom gains at least some fraction of an electron through the formation of this bond.
The empty antibonding $\pi^*$ orbital of $CO$ has the right symmetry and orientation to receive electron density back-donated from the filled d orbitals of the metal atom. The back donation reinforces the sigma bond, and vice versa. This type of bonding has been called the synergic bonding mechanism by Cotton and Wilkinson in their Advanced Inorganic Chemistry. A diagram showing this type of bonding scheme is given on page 159 of Inorganic Chemistry by Swaddle.
What are some of the applications of carbon oxides?
Carbon oxides are useful commodities. A gas containing CO and hydrogen is called synthesis gas, because it can be converted to methanol using a catalyst. During the past few decades, many metal carbonyls have been prepared; these carbonyls are potential catalysts. When the metal carbonyl is a gas, the purified carbonyl can be used for the production of extra-pure metals.
Carbon dioxide is also a useful industrial gas. It is widely used in the food and beverage industry. Here are some of its applications:
• making effervescent drinks
• manufacture of urea, CO(NH2)2, as fertilizer
• promote plant growth in green house
• making dry ice
• fire extinguisher
• provide an inert atmosphere for fruit and vegetable preservation
• as a supercritical fluid for solvent extraction
The critical temperature of carbon dioxide is only 304 K, at a critical pressure of 7.39 MPa (about 73 atm). These conditions can readily be met to generate supercritical carbon dioxide, which is a powerful and discriminating solvent. The supercritical fluid penetrates porous solids and evaporates without leaving a trace. Thus, this fluid is widely used as an extracting solvent, and it is also very useful in analytical chemistry.
On the high-technology side, carbon dioxide lasers can provide a continuous laser beam from several milliwatts to several kilowatts with a typical efficiency of 30%, making them among the most efficient laser devices. Among the many applications of lasers, the carbon dioxide laser has been used for skin resurfacing in cosmetic surgery.
On the other hand, carbon dioxide is denser than air and tends to accumulate in low-lying areas; when its concentration is very high, it can be a threat to living beings, which may die of asphyxiation.
How has carbon dioxide level changed?
When Henry Ford put people to work on the assembly line, he did not worry about the consequences of automobile exhaust; he probably did not foresee how society would change. Now everyone wants a piece of the carbon-dioxide-generating machine. You can imagine that when all countries use as much energy as Canadians do, the carbon dioxide level of the atmosphere will be much higher. We need good measurements of the carbon dioxide level in order to know how it is changing.
The National Oceanic and Atmospheric Administration (NOAA) of the U.S. keeps track of it, for example with measurements at Barrow, Alaska; the annual increase has been reported to be 1.49 ppm by volume per year. The atmospheric CO2 concentration was about 280 ppm by volume in the 1700s, before the industrial revolution, and it was 360 ppm in 1994. If you want more details about carbon dioxide emissions in the U.S., this link is full of data.
Engineers, scientists, politicians, and the general public believe that an increased level of CO2 will cause the world to warm up, because scientists have demonstrated their findings and the experts agree. A lengthy discussion is required to present the scientific evidence for the so-called greenhouse effect of CO2, and hopefully some day you will be able to judge the argument yourself. I have not found simple and convincing evidence to present at this time; however, experts have suggested a correlation between the increased level of CO2 and the average temperature of the globe.
What measure can be taken to reduce CO2 emission?
The greenhouse effect of $CO_2$ has attracted the attention not only of experts and politicians; public pressure (largely via the news media) has made the reduction of $CO_2$ emissions an international priority. The United Nations panel on climate change has made several recommendations regarding $CO_2$:
• use natural gas as fuel rather than coal or oil
• use solar or nuclear energy instead of fossil fuels for electricity generation
• reduce rate of deforestation
• limit use of automobiles
What can be done at a personal level, as a community, and as a country requires the determination of individuals. This is a challenge for us all, especially engineers, because they are at the forefront of many industries. We face many fronts in the search for a solution to the problem of carbon dioxide emission.
Example 1
The standard enthalpy of formation of NO is 90.25 kJ/mol, and the standard entropies of $N_2$, $O_2$, and $NO$ are 191.61, 205.138, and 210.761 J/(K mol), respectively. Calculate the Gibbs energy for the reaction
$\ce{N_2 + O_2 \rightarrow 2 NO}\nonumber$
at standard conditions.
Solution
The entropy change for the above reaction is
\begin{align*} \Delta S^o &= 2 \times 210.761 - (205.138 + 191.61) \\[4pt] &= 24.77\; \frac{J}{K\, mol} \end{align*}
The Gibbs energy change is then:
\begin{align*} \Delta G^o &= \Delta H^o - T\Delta S^o \\[4pt] &= 180.5\; kJ - 298\; K \times 0.02477\; kJ/K \\[4pt] &= 180.5 - 7.38\; kJ \\[4pt] &= 173.12\; kJ \end{align*}
This positive value shows that the reaction is not spontaneous under standard conditions (it is also endothermic, since $\Delta H^o > 0$).
DISCUSSION
What is the equilibrium constant for the reaction as written, and what does the result imply about NO in air?
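As a numerical follow-up to this discussion question, the equilibrium constant follows from ΔG° = −RT ln K; a minimal sketch using the Gibbs energy computed above:

```python
import math

R = 8.314       # gas constant, J/(K mol)
T = 298.0       # temperature, K
dG = 173.12e3   # Gibbs energy change for N2 + O2 -> 2 NO, J (from this example)

K = math.exp(-dG / (R * T))
print(f"K = {K:.1e}")   # ~5e-31: essentially no NO forms from N2 and O2 at 298 K
```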
Discussion Questions
• What is UV?
• How is ozone produced in the atmosphere?
• How much ozone is in the atmosphere, and where is the ozone layer?
• What is the interaction of ozone and UV?
• What is ozone depletion?
• What is ozone hole and how does it vary over time?
• What are CFCs?
• How do CFCs help depleting ozone?
• How is ozone depletion in the polar region different from other regions?
• What has been done and what can be done to reduce ozone depletion?
Most of the ozone in the atmosphere is in the stratosphere, with about 8% in the lower troposphere. As mentioned there, the ozone is formed by photochemical reactions. The ozone level is measured in Dobson Units (DU), named after G.M.B. Dobson, who investigated ozone between 1920 and 1960. One Dobson Unit (DU) is defined as a 0.01 mm thickness of ozone at STP when all the ozone in the air column above an area is collected and spread over the entire area. Thus, 100 DU corresponds to a layer 1 mm thick.
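A Dobson Unit can also be expressed as a column density of ozone molecules; the sketch below uses the Loschmidt number (the number density of an ideal gas at STP, about 2.687e25 m^-3), a constant not quoted in this section:

```python
N_LOSCHMIDT = 2.687e25     # molecules per m^3 of an ideal gas at STP
THICKNESS_PER_DU = 1e-5    # 0.01 mm expressed in metres

column_per_DU = N_LOSCHMIDT * THICKNESS_PER_DU   # molecules per m^2 per Dobson Unit
print(f"1 DU   = {column_per_DU:.2e} molecules/m^2")        # ~2.7e20
print(f"300 DU = {300 * column_per_DU:.2e} molecules/m^2")  # a typical total ozone column
```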
What is UV?
In the electromagnetic spectrum, the region beyond the violet (wavelength ≈ 400 nm), invisible to the eye, is called ultraviolet (UV) radiation; its wavelength is shorter than 400 nm.
UV is divided into three regions:
• UV A, wavelength = 400 - 320 nm
• UV B, wavelength = 320 - 280 nm
• UV C, wavelength = < 280 nm
Obviously, photons of UV C are the most energetic. Humans need some UV (chiefly UV B) for the synthesis of vitamin D; however, too much UV A causes photoaging (toughening of the skin), suppression of the immune system and, to a lesser degree, reddening of the skin and cataract formation. Ozone strongly absorbs UV B and C, but the absorption decreases as the wavelength increases toward 320 nm. Very little UV C reaches the Earth's surface, due to ozone absorption.
How is Ozone produced in the Atmosphere?
When an oxygen molecule absorbs a photon ($h\nu$), it dissociates into reactive monoatomic oxygen atoms. These atoms attack an oxygen molecule to form ozone, O3.
$\ce{O2 + h\nu \rightarrow O + O}\label{1}$
$\ce{O2 + O \rightarrow O3} \label{2}$
The last reaction requires a third molecule to carry away the excess energy released when the free radical $O^{\cdot}$ and $O_2$ combine, and the reaction can be represented by
$\ce{O2 + O + M \rightarrow O3 + M*} \label{3}$
The over all reaction between oxygen and ozone formation is:
$\ce{3 O2 \rightleftharpoons 2 O3} \label{4}$
The absorption of UV B and C leads to the destruction of ozone
$\ce{O3 + h\nu \rightarrow O + O2} \label{5}$
$\ce{O3 + O \rightarrow 2 O2} \label{6}$
A dynamic equilibrium is established in these reactions. The ozone concentration varies due to the amount of radiation received from the sun.
Example 1
The enthalpy of formation of ozone is 142.7 kJ / mol. The bond energy of O2 is 498 kJ / mol. What is the average O=O bond energy of the bent ozone molecule O=O=O?
Solution
The overall reaction is
$\ce{3 O2 \rightarrow 2 O3} \;\;\; \Delta H = 286 kJ$
Note that 3 O=O bonds of oxygen are broken, and 4 O-O bonds of ozone are formed. If the bond energy of ozone is E, then
\begin{align*} 3 \times 498 - 4E &= 286 \\[4pt] E &= \dfrac{(3 \times 498 - 286)\; kJ}{4\; mol} = 302\; kJ/mol \end{align*}
DISCUSSION
The average O-O bond in ozone is considerably weaker than the O=O bond in O2. Note also that this average bond energy is not the energy required to remove one oxygen atom from ozone:
$\ce{O3 + h\nu \rightarrow O + O2}$
Can the energy to remove one oxygen be estimated from the data given here?
The technique used in this calculation is based on the principle of conservation of energy.
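A short Python check of the bond-energy bookkeeping in this example:

```python
dH_f_O3 = 142.7    # enthalpy of formation of ozone, kJ/mol
D_O2 = 498.0       # O=O bond energy, kJ/mol

dH_rxn = 2 * dH_f_O3               # 3 O2 -> 2 O3, ~286 kJ
# bonds broken (3 O=O) minus bonds formed (4 O-O in ozone) equals dH_rxn
E_O3 = (3 * D_O2 - dH_rxn) / 4     # average O-O bond energy in ozone
print(f"average O-O bond energy in ozone ~ {E_O3:.0f} kJ/mol")   # ~302 kJ/mol
```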
Example 2
The bond energy of O2 is 498 kJ / mol. What is the maximum wavelength of the photon that has enough energy to break the O=O bond of oxygen?
Solution
The energy per O=O bond is:
$\dfrac{498000\; J/mol}{6.022\times 10^{23}\; bonds/mol} = 8.27\times 10^{-19}\; J/bond$
The wavelength $\lambda$ of the photons can be evaluated using
$E = \dfrac{h c}{\lambda}$
\begin{align*} \lambda &= \dfrac{(6.626 \times 10^{-34}\, J \cdot s) \times (3 \times 10^8\, m/s)}{8.27 \times 10^{-19}\, J} \\[4pt] &= 2.403 \times 10^{-7}\, m = 240\; nm \end{align*}
DISCUSSION
The visible region ranges from about 400 nm to 700 nm, and radiation with a wavelength of 240 nm is in the ultraviolet region. Visible light cannot break the O=O bond, but UV light of sufficiently short wavelength has enough energy to do so.
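The threshold wavelength comes from λ = hc/E; a minimal Python sketch with the constants used in this example:

```python
h = 6.626e-34    # Planck constant, J s
c = 3.0e8        # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol
D_O2 = 498e3     # O=O bond energy, J/mol

E_bond = D_O2 / N_A            # energy needed per bond, J
wavelength = h * c / E_bond    # longest photon wavelength that can break the bond
print(f"energy per bond      = {E_bond:.2e} J")
print(f"threshold wavelength = {wavelength * 1e9:.0f} nm")   # ~240 nm, in the UV
```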
Chlorofluorocarbons (CFCs)
Chemist Roy J. Plunkett discovered polytetrafluoroethylene (PTFE) resin while researching refrigerants at DuPont. Known by its trade name, Teflon, Plunkett's discovery was found to be extremely heat-tolerant and stick-resistant, and after about ten years of development Teflon products were introduced in 1949. Related research led to the widespread use of chlorofluorocarbons, known as CFCs or freons, as refrigerants.
CFCs (and the related HCFCs) are made up of carbon, hydrogen, fluorine, and chlorine. DuPont used a number system of up to three digits to distinguish these products; the digits are related to the molecular formula as follows.
• The first digit is the number of carbon atoms minus 1.
• The second digit is the number of hydrogen atoms plus 1.
• The third digit is the number of fluorine atoms.
For example, CFC (or Freon) 123 has the formula C2HF3Cl2. The number of chlorine atoms is whatever is needed to complete a saturated carbon chain. CFCs containing only one carbon atom per molecule have only two digits; Freon 12, long used in refrigerators and automobile air conditioners, has the formula CF2Cl2 (the digit rule is sketched in the short example below). Because they are nontoxic and nonflammable, CFCs were widely used as refrigerants, aerosol propellants, dry-cleaning liquids, foam-blowing agents, and cleansers for electronic components through the 1970s, 1980s, and early 1990s.
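The digit rule above can be turned into a short routine. The sketch below is only an illustration of the rule as stated (it assumes a saturated carbon skeleton and does not distinguish isomers); the function name is arbitrary.

```python
def cfc_formula(code):
    """Molecular formula for a CFC/Freon number such as 12, 114, or 123.
    Digits encode C-1, H+1, F; chlorine fills the remaining bonds of a
    saturated carbon chain (2n + 2 substituent positions for n carbons)."""
    digits = [int(d) for d in str(code)]
    if len(digits) == 2:               # one-carbon compounds drop the leading zero
        digits = [0] + digits
    c = digits[0] + 1
    h = digits[1] - 1
    f = digits[2]
    cl = 2 * c + 2 - h - f
    parts = [("C", c), ("H", h), ("F", f), ("Cl", cl)]
    return "".join(f"{el}{n if n > 1 else ''}" for el, n in parts if n > 0)

for code in (11, 12, 113, 114, 123):
    print(code, cfc_formula(code))
# 11 CFCl3   12 CF2Cl2   113 C2F3Cl3   114 C2F4Cl2   123 C2HF3Cl2
```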
In 1973, James Lovelock demonstrated that the CFCs produced up to that time had not been destroyed but had spread globally throughout the troposphere (the measurements were published as J. E. Lovelock, R. J. Maggs, and R. J. Wade, Nature, 1973, 241, 194). CFC concentrations of a few parts per $10^{11}$ by volume were measured, and the authors deduced that at such levels CFCs are essentially not destroyed over the years. In 1974, Mario J. Molina and F. S. Rowland published an article in Nature describing ozone depletion by CFCs (M. J. Molina and F. S. Rowland, Nature, 1974, 249, 810). NASA later confirmed that HF was present in the stratosphere; this compound has no natural source and could only have come from the decomposition of CFCs. Molina and Rowland suggested that chlorine radicals released from CFCs catalyze the decomposition of ozone, as discussed below.
How do CFCs deplete ozone?
A relatively recent concern is the depletion of stratospheric ozone, O3, caused by chlorine-containing compounds released in the troposphere that eventually migrate to the stratosphere. A major source of chlorine is the Freons: CFCl3 (Freon 11), CF2Cl2 (Freon 12), C2F3Cl3 (Freon 113), and C2F4Cl2 (Freon 114). Freons are stable in the troposphere, but they decompose under UV radiation once they reach the stratosphere. For example,
$\ce{CFCl3 \rightarrow CFCl2 + Cl}$
$\ce{CF2Cl2 \rightarrow CF2Cl + Cl}$
The chlorine atoms catalyze the decomposition of ozone,
$\ce{Cl + O3 \rightarrow ClO + O2}$
and ClO molecules further react with O generated due to photochemical decomposition of ozone:
$\ce{O3 + h\nu \rightarrow O + O2}$
$\ce{ClO + O \rightarrow Cl + O2}$
$\ce{O + O3 \rightarrow O2 + O2}$
The net result of these reactions is
$\ce{2 O3 \rightarrow 3 O2}$
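It is worth verifying that the chlorine species really cancel out of the net reaction. The sketch below adds up the species on each side of the chlorine steps and the ozone photolysis step listed above and prints whatever does not cancel; the catalytic Cl and ClO appear on both sides and drop out.

```python
from collections import Counter

steps = [
    ({"Cl": 1, "O3": 1}, {"ClO": 1, "O2": 1}),   # Cl + O3      -> ClO + O2
    ({"O3": 1},          {"O": 1, "O2": 1}),     # O3 + photon  -> O + O2
    ({"ClO": 1, "O": 1}, {"Cl": 1, "O2": 1}),    # ClO + O      -> Cl + O2
]

left, right = Counter(), Counter()
for reactants, products in steps:
    left.update(reactants)
    right.update(products)

# Counter subtraction keeps only positive counts, so shared species cancel.
print(dict(left - right), "->", dict(right - left))
# {'O3': 2} -> {'O2': 3}   i.e. the net reaction 2 O3 -> 3 O2
```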
Thus, the use of CFCs became a worldwide concern. In 1987 the Montreal Protocol was negotiated; one hundred forty-nine (149) nations eventually signed it, agreeing to cut the manufacture of CFCs by half by 1998 and ultimately to phase out CFCs altogether.
Ozone depletion in the polar regions differs from that in other regions, and the discussion of ozone depletion often focuses on the North and South Poles. In these regions, when stratospheric temperatures drop to about 190 K, ice clouds form. The ice crystals act as heterogeneous catalysts, converting HCl and ClONO2 into $HNO_3$ and photolyzable chlorine compounds ($Cl_2$ and HOCl):
$\ce{HCl + ClONO2 \rightarrow HNO3 + Cl2}$
$\ce{H2O + ClONO2 \rightarrow HNO3 + HOCl}$
Both Cl2 and HOCl are easily photolyzed to Cl atoms, which catalyze the depletion of ozone as discussed in the previous section.
What has been done and what can be done to reduce ozone depletion?
The U.S. and Canadian governments banned the use of Freons in aerosol sprays, but their use in air conditioners and other cooling equipment continued. Eliminating Freons from the atmosphere requires concerted international effort and determination, grounded in sound and reliable scientific information. The banning of CFCs also opened a research opportunity to find substitutes, although it remains to be seen what new problems the replacement products may bring.
Questions
1. What is the unit used for measuring ozone layers?
Skill - Define a unit you use.
2. What is the wavelength range of the UV radiation?
Skill - Describe UV radiation.
3. How is ozone different from oxygen?
Skill - Describe the formation of ozone.
4. When CFCs are exposed to UV or sun light, what species are produced?
Skill - Explain a photodecomposition reaction.
5. What is the role of the chlorine radical in ozone reactions?
Skill - Explain the mechanism of the catalytic reaction.
6. What in the polar zone makes the depletion of ozone more serious?
Solutions
1. Dobson unit.
2. UV radiation is electromagnetic radiation with wavelength between 100 and 400 nm.
3. Ozone molecules consist of three oxygen atoms, whereas ordinary oxygen molecules consist of two.
4. Reactive radicals are produced, including monoatomic chlorine radical.
5. Chlorine radicals catalyze the decomposition of ozone.
6. The ice clouds act as heterogeneous catalysts for the formation of chlorine gas.
Discussion - The chlorine gas is photodissociated into Cl radicals that catalyze the decomposition of ozone.
Learning Guide
• Arrange the regions of the electromagnetic spectrum in increasing energies of their photons: X-rays, visible, gamma rays, ultraviolet, infrared, microwave, etc.
• The examples on this page can be used as test questions.
• What are CFCs?
• What are Freon 12, 123, and 114?
• Explain how CFCs help destroy the atmospheric ozone layer, particularly in the polar regions.
Photosynthesis
Discussion Questions
• How is light energy harvested in photosynthesis and what is the reaction center?
Photosynthesis
Photosynthesis is the process of converting light energy ($E = h\nu$) to chemical energy and storing it in the chemical bonds of sugar-like molecules. This process occurs in plants and some algae (Kingdom Protista). Plants need only light energy, CO2, and H2O to make sugar. The process of photosynthesis takes place in the chloroplasts (chloro = green; plasti = formed, molded), specifically using chlorophyll (phyll = leaf), the green pigment involved in photosynthesis.
How is light energy harvested in photosynthesis and what is the reaction center?
Since the 1640s, experiments have pointed toward photosynthesis as the means by which plants convert carbon dioxide in the air into plant material. Photosynthesis has since been studied by many scientists, and a large body of research and educational material is devoted to it.
Photosynthesis is a very complicated process, and we can give only an introduction here. Photosynthesis reduces carbon dioxide to carbohydrates:
\[\ce{6 CO2 + 12 H2O + hv -> C6H12O6 + 6 O2 + 6 H2O}\]
Thus, both electrons and energy are required. The electrons come from water molecules, and the energy is first absorbed by pigments known as chlorophylls and carotenoids. The former absorb blue (wavelength 430 nm) and red (wavelength 670 nm) light, and the latter absorb blue-green light (wavelengths between 400 and 500 nm). Green and yellow light are not absorbed; reflection of these wavelengths makes plants appear green.
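To put rough numbers on these absorption maxima, the sketch below evaluates E = hc/λ per photon and per mole of photons; 450 nm is used only as a representative wavelength within the 400-500 nm carotenoid band.

```python
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # SI units

def photon_energy(wavelength_nm):
    """Return (J per photon, kJ per mole of photons) for a given wavelength."""
    E = h * c / (wavelength_nm * 1e-9)
    return E, E * N_A / 1000

for nm, label in [(430, "blue, chlorophyll"),
                  (670, "red, chlorophyll"),
                  (450, "blue-green, carotenoids")]:
    per_photon, per_mole = photon_energy(nm)
    print(f"{nm} nm ({label}): {per_photon:.2e} J/photon, {per_mole:.0f} kJ/mol")
# 430 nm ~ 278 kJ/mol, 670 nm ~ 179 kJ/mol, 450 nm ~ 266 kJ/mol
```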
There are many varieties of pigments. They are bonded to proteins, which provide the pigment molecules with the appropriate orientation and position with respect to each other. After absorption by a pigment, light energy is transferred to chlorophylls that are bonded to special proteins. The pigments and proteins involved in this primary electron-transfer event are together called the reaction center. A large number of pigment molecules (100-5000), collectively referred to as the antenna, "harvest" light and transfer the light energy to the same reaction center. The purpose is to maintain a high rate of electron transfer in the reaction center, even at lower light intensities.
What is produced and consumed in plant respiration?
Photosynthesis is responsible for the production of oxygen and carbohydrates in plants. All living organisms respire, and plants are no exception: in respiration, oxygen is consumed and carbon dioxide is produced.
Respiration takes place all the time, but it is masked by the higher rate of photosynthesis when the light intensity is high.
What is the Calvin-Benson cycle?
Working with the green alga Chlorella, Melvin Calvin and Andrew Benson, at the University of California at Berkeley, elucidated the pathway, now known as the Calvin-Benson cycle, by which carbon dioxide is converted into carbohydrates.
• Biochemical Cycles
• Carbon Cycle
• Nitrogen Cycle
Biochemical Cycles
Plants such as trees and algae carry out the photosynthesis reaction, in which carbon dioxide and water, in the presence of sunlight, are converted to organic materials and oxygen. An important reverse reaction occurs in the water: in metabolism, fish consume oxygen and organic materials (other small fish or algae eaten as food) and convert them to carbon dioxide, water, and energy. Bacteria in water, as well as on land, also carry out metabolism, using oxygen to decompose organic wastes into carbon dioxide, water, and energy. Byproducts of the decomposition of organic waste are nitrates and phosphates. The major natural biochemical cycles include the carbon, nitrogen, and phosphate cycles, which are described in the sections that follow.
The overall health of a body of water depends upon whether these factors are in balance. Municipal sewage systems are now doing a better job of removing most of the organic waste products in the discharge water, but some organic waste still enters the streams and lakes. If an excess amount of organic waste is present in the water, the bacteria use all of the available oxygen in the water in an attempt to decompose the organic waste.
The amount of organic waste in water is represented by a chemical test called BOD, the Biological Oxygen Demand. The concentration of oxygen is measured in a water sample at the beginning of the test and again after five days. The difference between the two oxygen concentrations represents the amount of oxygen consumed by the bacteria in the metabolism of the waste organics present.
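The BOD arithmetic itself is just a difference of two dissolved-oxygen readings. The sketch below shows that calculation; the optional dilution factor is an assumption beyond the text (strong wastewaters are normally diluted before the five-day incubation), and the sample numbers are made up.

```python
def bod5(do_initial_mg_per_l, do_final_mg_per_l, sample_fraction=1.0):
    """Five-day biochemical oxygen demand (mg/L).
    sample_fraction is the fraction of sample in the test bottle
    (1.0 means the sample was not diluted)."""
    return (do_initial_mg_per_l - do_final_mg_per_l) / sample_fraction

print(bod5(8.8, 4.3))          # undiluted stream sample -> 4.5 mg/L
print(bod5(8.5, 2.5, 0.02))    # wastewater diluted to 2% -> 300 mg/L
```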
Eutrophication
In situations where eutrophication occurs, the natural cycles are overwhelmed by an excess of one or more of the following: nutrients such as nitrate or phosphate, or organic waste.
In the first case, under aerobic conditions (presence of oxygen), the natural cycles may be more or less in balance until an excess of nitrate and/or phosphate enters the water. At that point the water plants and algae begin to grow more rapidly than normal. As this happens there is also an excessive die-off of plants and algae as sunlight is blocked at lower depths. Bacteria try to decompose the organic waste, consuming the oxygen and releasing more phosphate and nitrate to begin the cycle anew. Some of the phosphate may be precipitated as iron phosphate, removing the soluble form from the water.
In the second case, under anaerobic conditions (absence of oxygen), conditions worsen as more phosphates and nitrates are added to the water, and all of the oxygen may be used up by bacteria trying to decompose the waste. Different bacteria continue to carry out decomposition reactions, but the products are drastically different: carbon is converted to methane gas instead of carbon dioxide, and sulfur is converted to hydrogen sulfide gas. Some of the sulfide may be precipitated as iron sulfide. Under anaerobic conditions the iron phosphate in the sediments may be solubilized, making it available as a nutrient for the algae and starting the growth-and-decay cycle over again. The pond may gradually fill with undecayed plant material and become a swamp.
Photosynthesis
Photosynthesis is a complex series of reactions carried out by algae, phytoplankton, and the leaves of plants, which utilize the energy from the sun. In its simplified form, the reaction uses carbon dioxide molecules from the air, water molecules, and the energy from the sun to produce a simple sugar such as glucose, with oxygen molecules as a byproduct. The simple sugars are then converted into other molecules such as starch, fats, proteins, enzymes, and DNA/RNA, i.e., all of the other molecules in living plants. All of the "matter/stuff" of a plant is ultimately produced as a result of this photosynthesis reaction. An important summary statement is that during photosynthesis plants use carbon dioxide and produce oxygen.
Combustion/Metabolism Reaction
Combustion occurs when any organic material is reacted (burned) in the presence of oxygen to give off the products of carbon dioxide and water and ENERGY. The organic material can be any fossil fuel such as natural gas (methane), oil, or coal. Other organic materials that combust are wood, paper, plastics, and cloth. Organic materials contain at least carbon and hydrogen and may include oxygen. If other elements are present they also ultimately combine with oxygen to form a variety of pollutant molecules such as sulfur oxides and nitrogen oxides.
Metabolism occurs in animals and humans after the ingestion of organic plant or animal foods. In the cells, a series of complex reactions with oxygen converts, for example, glucose into the products of carbon dioxide and water and ENERGY. This reaction is also carried out by bacteria in the decomposition/decay of waste materials on land and in the water.
An important summary statement is that during combustion/metabolism oxygen is used and carbon dioxide is a product. The whole purpose of both processes is to convert chemical energy into other forms of energy such as heat.
Sedimentation
Carbon dioxide is slightly soluble and is absorbed into bodies of water such as the oceans and lakes. It is not overly soluble, as evidenced by what happens when a can of carbonated soda such as Coke is opened. Some of the dissolved carbon dioxide remains in the water; the warmer the water, the less carbon dioxide remains dissolved.
Some carbon dioxide is used by algae and phytoplankton through the process of photosynthesis.
In other marine ecosystems, some organisms such as coral and those with shells take up carbon dioxide from the water and convert it into calcium carbonate. As the shelled organisms die, bits and pieces of the shells fall to the bottom of the oceans and accumulate as sediments. The carbonate sediments are constantly being formed and redissolved in the depths of the oceans. Over long periods of time, the sediments may be raised up as dry land or into mountains. This type of sedimentary rock is called limestone. The carbonates can redissolve releasing carbon dioxide back to the air or water.
Human Impacts: Fossil Fuels
In the natural carbon cycle, there are two main processes: photosynthesis and metabolism. During photosynthesis, plants use carbon dioxide and produce oxygen, and during metabolism oxygen is used and carbon dioxide is a product. Humans impact the carbon cycle through the combustion of any type of fossil fuel, which may include oil, coal, or natural gas. Fossil fuels were formed very long ago from plant or animal remains that were buried, compressed, and transformed into oil, coal, or natural gas. Their carbon is said to be "fixed" in place and is essentially locked out of the natural carbon cycle. Humans intervene by burning the fossil fuels: during combustion in the presence of air (oxygen), carbon dioxide and water molecules are released into the atmosphere. The question then becomes what happens to this extra carbon dioxide. This is the subject of considerable debate, particularly regarding its possible effect in enhancing the greenhouse effect, which may in turn result in global warming.
Nitrogen Cycle
The main component of the nitrogen cycle starts with the element nitrogen in the air. Two nitrogen oxides are found in the air as a result of interactions with oxygen. Nitrogen will only react with oxygen in the presence of high temperatures and pressures found near lightning bolts and in combustion reactions in power plants or internal combustion engines. Nitric oxide, NO, and nitrogen dioxide, NO2, are formed under these conditions. Eventually nitrogen dioxide may react with water in rain to form nitric acid, HNO3. The nitrates thus formed may be utilized by plants as a nutrient.
Nitrogen in the air becomes a part of biological matter mostly through the actions of bacteria and algae in a process known as nitrogen fixation. Legume plants such as clover, alfalfa, and soybeans form nodules on the roots where nitrogen-fixing bacteria take nitrogen from the air and convert it into ammonia, NH3. The ammonia is further converted by other bacteria first into nitrite ions, NO2-, and then into nitrate ions, NO3-. Plants utilize the nitrate ions as a nutrient or fertilizer for growth. Nitrogen is incorporated into many amino acids, which are further reacted to make proteins.
Ammonia is also made through a synthetic process called the Haber Process. Nitrogen and hydrogen are reacted under great pressure and temperature in the presence of a catalyst to make ammonia. Ammonia may be directly applied to farm fields as fertilizer. Ammonia may be further processed with oxygen to make nitric acid. The reaction of ammonia and nitric acid produces ammonium nitrate which may then be used as a fertilizer. Animal wastes when decomposed also return to the earth as nitrates.
To complete the cycle, other bacteria in the soil carry out a process known as denitrification, which converts nitrates back to nitrogen gas. A side product of this reaction is nitrous oxide, N2O. Nitrous oxide, also known as "laughing gas" (a mild anesthetic), is also a greenhouse gas that contributes to global warming.
Pathophysiology is the study of the physical and functional changes that occur during a disease process. In this e-module you will learn about the concept of pathophysiology, types of toxicity, repair and adaptation, and patterns of toxic injury.
01: Pathophysiology
Pathophysiology describes the changes that occur during a disease process, with “patho-“ referring to the physical changes that are observed and “physio-“ referring to the functional processes or mechanisms that occur during a disease process. In toxicology, pathophysiology encompasses the biochemical and physical alterations that occur upon exposure of an individual (generally termed the “host”) to harmful amounts of a toxicant.
In toxicology, pathophysiology takes into account how the characteristics of the toxicant (for example, dosage, physical properties and chemical properties) and the characteristics of the host (including species, life stage, health/reproductive status, metabolism, and individual sensitivity) interact to produce physical and/or biochemical changes in the host. Pathophysiology also encompasses the host response to the effects of a toxicant. With acutely lethal intoxications, the physical and chemical injury may be sufficient to cause rapid death of the organism. In non-lethal toxic exposures, toxicant-induced injury results in dysfunction of cells, tissues and/or organs that may persist or that may progress to death. Persistent toxic injury that does not result in death generally leads to attempts at repair of toxicant-induced damage. With some toxicants, the host is able to develop strategies to adapt to continued exposure to toxicants. Dysfunction, repair and adaptive processes that occur in response to exposure to certain toxicants may trigger development of unregulated cell growth leading to tumor formation in a process termed carcinogenesis.
In many cases the toxicant is the unchanged xenobiotic to which the host was exposed, but in some cases the xenobiotic itself may be relatively innocuous and requires bioactivation to more toxic metabolites before toxic effects occur. A variety of endogenous systems have evolved to mitigate the effects of many toxicants and/or their metabolites. However, when these systems fail or when the dose of toxicant exceeds the capacity of the system to neutralize the toxic effects, poisoning occurs. The clinical syndrome associated with a poisoning is referred to as toxicosis.
Most toxicants exert their effects on specific molecules, tissues or organs based on the physical and chemical makeup of the toxicant as well as the absorption, distribution and metabolism of the toxicant within the body.
Toxicants such as some strong acids or alkalis (e.g. concentrated hydrochloric acid) are not systemically absorbed, so are limited to causing local injury upon contact with skin, eyes or mucous membranes. Toxicants that are ingested and absorbed from the gastrointestinal tract are shuttled via the portal vein to the liver, where they may cause direct injury, where they may be bioactivated to toxic metabolites, or where they may be detoxified before they reach the general circulation (a process termed “first pass effect”). Inhaled toxicants such as smoke may cause local tissue injury due to irritants or corrosive components as well as systemic intoxication from toxic gases such as carbon monoxide or cyanide.
Topic 1: Key Points
In this section, we explored the following main points:
• 1: Pathophysiology is the study of the physical and functional changes that occur during a disease process.
• 2: Toxic insults can result in physical and biochemical alterations that may lead to cellular dysfunction, repair, adaptation, carcinogenesis and/or death.
Learning Objectives
After completing this lesson, you will be able to:
• 1: Discuss the various effects that toxicants have on target molecules and how these effects result in injury to the host.
• 2: Discuss the pathogenesis of the rash that occurs following exposure to poison ivy.
The interaction of toxicants with host molecules may lead to the dysfunction or destruction of the target molecule, or it may result in formation of adducts that the immune system identifies as “foreign,” triggering immune responses against these neoantigens.
Target Molecule Dysfunction
Target molecule dysfunction is a common mechanism by which xenobiotics, particularly drugs, exert their effects; remember it is the dose of xenobiotic that determines whether the effect will be therapeutic (pharmacologic) or harmful (toxicologic). Target molecule dysfunction may occur through activation of cellular membrane receptors, resulting in over-stimulation of some cellular function. For instance, the toxic effects of methomyl, a carbamate insecticide, include over-stimulation of cells of excretory glands, resulting in excessive salivation, excessive tear formation and excessive secretion of mucus by goblet cells within the respiratory tract.
Conversely, other toxicants may inhibit or impede the action of cellular receptors; the resulting clinical effects will depend on the type of receptor affected and what action is impeded. For example, there are channels in nerve cell membranes that allow sodium to pass into and out of the cell; when exposed to pyrethrins, insecticides extracted from chrysanthemum flowers, these channels are unable to close, which results in excessive stimulation seen as muscle twitches, tremors and convulsions. However, when these channels are exposed to tetrodotoxin, the infamous puffer fish toxin, these channels are unable to open, which prevents stimulation of muscle and results in paralysis.
Toxicants may induce target molecule dysfunction by altering protein structure such that the protein is no longer functional, resulting in disruption of membrane protein channels, interference with transmembrane signaling or loss of enzyme function. Many of these types of effects involve the toxicant or its metabolite binding to reactive moieties on the protein molecule; the sulfhydryl or thiol (S-H) moiety is particularly susceptible to binding with other reactive compounds. Toxicant-induced alteration of DNA structure can lead to mispairing of nucleotides during mitosis, with potential effects ranging from altered protein synthesis to initiation of carcinogenesis.
Neoantigen formation results when a xenobiotic or its metabolite binds to a larger protein to form a novel molecule that elicits an immune response. Molecules that trigger an immune response upon binding to carrier proteins are termed haptens, and the process of neoantigen formation in this manner is termed haptenization. Neoantigens can trigger humoral immune responses resulting in the development of antibodies that can trigger acute allergic reactions such as hives or anaphylaxis. Neoantigens that trigger cellular-mediated immune responses cause injury to specific tissues or organs such as skin, liver or blood vessels in a process termed autoimmunity.
DID YOU KNOW?
The rash caused by poison ivy (Toxicodendron spp.) is caused by urushiol, an oily mixture of chemicals called catechols, in the sap of the plant. Upon exposure to human skin, urushiols bind to membranes on skin cells and serve as haptens, changing the shape of proteins in the membranes. The body’s immune cells no longer recognize these skin cells as normal parts of the body and mount an immune response, resulting in inflammation, itching, blisters, swelling and redness at the site of urushiol contact. In addition to local irritation, serious, systemic reactions to urushiol can occur if the leaves are ingested, or if smoke from burning poison ivy is inhaled.
Figure \(3\): Skin contact with leaves of poison ivy can result in a blistering rash.
Topic 3: Key Points
In this section, we explored the following main points:
• 1: Toxicant-induced target molecule dysfunction can occur through activation or inhibition of cellular receptors, denaturing of membrane proteins, and destruction of target molecules.
• 2: Haptenization results in the formation of neoantigens that can trigger immune responses against cells and tissues of the body, resulting in allergic or autoimmune reactions.
• The rash caused by poison ivy is an autoimmune reaction against skin cells whose membranes have bound to the toxicant urushiol from the plant.
Knowledge Check
1. On a protein molecule, highly reactive toxicant molecules have a predilection for _____.
sulfhydryl (S-H) moiety
Haptenization
dose
Answer
sulfhydryl (S-H) moiety
2. The therapeutic or toxic effect of a xenobiotic is entirely dependent on its_____.
sulfhydryl (S-H) moiety
Haptenization
dose
Answer
dose
3. A xenobiotic binding to a larger protein molecule, resulting in an immune response, is termed _____.
sulfhydryl (S-H) moiety
Haptenization
dose
Answer
Haptenization
Learning Objectives
After completing this lesson, you will be able to:
• 1: Describe the range of cellular injury that may occur following exposure to a toxicant.
• 2: Discuss the major mechanisms of toxicant-induced cell death.
• 3: Compare the processes of necrosis and apoptosis in terms of inciting causes and the cellular changes that occur with each.
Toxicants can exert a variety of effects at the molecular level that have significant repercussions at the cellular, tissue and organ levels. The effects of a toxic exposure can range from reversible cellular dysfunction to irreversible cellular injury to cell death, all of which can alter normal organ function and have significant impact on the health and well-being of the body as a whole. The ability of cells, tissues and organs to overcome the effect of a toxicant through repair and/or adaptation will dictate the ultimate outcome of a toxic exposure.
Following toxic insult, cells have a limited repertoire of responses. Nonlethal cell injury may lead to cellular degeneration, seen microscopically as swelling of the cells. Toxicant-injured cells may accumulate water, lipid, pigments, glycogen or metabolic waste products due to impairment of normal maintenance functions. Accumulation of lipids in hepatocytes, termed steatosis or hepatic lipidosis, is a common toxic effect seen in cases of alcohol-related liver disease. Degenerative changes in cells are often reversible if the inciting cause is removed. When cell injury proceeds beyond the self-repair capability of the cell, cell death ensues.
Mechanisms of Cell Death
Major mechanisms of toxicant-induced cell death include disruption of cell membrane structure and/or function, loss of cellular maintenance functions, and impairment of cellular energy production. Loss of membrane structural or functional integrity can result in uncontrolled passage of water, ions and other compounds into or out of the cell. The subsequent loss of normal cytosolic environment interferes with normal biochemical processes necessary for cell function and/or survival. Loss of ability to synthesize proteins and other macromolecules impedes maintenance of organelles and enzymatic pathways vital to cellular survival. Impairment of cellular energy production generally occurs when toxic effects alter mitochondrial function and/or structure and can lead to cell death due to failure to produce sufficient ATP to power essential cellular functions.
Necrosis
Necrosis is the term used to describe cell death due to irreversible injury. Necrotic cells undergo degenerative processes including swelling of organelles, loss of organelle function, oxidative and hydrolytic degradation of intracellular membranes and macromolecules by electrophiles and free radicals, and, ultimately, lysis (loss of cellular constituents to surrounding tissues due to cell membrane rupture). Necrosis generally results in the generation of an inflammatory response as cellular components and free radicals that are released to the extracellular matrix attract inflammatory cells.
Apoptosis
In contrast, apoptosis (sometimes nicknamed “cell suicide” or “programmed cell death”) is a more orderly form of cell death. Apoptosis is an active process involving activation of specific enzymes which triggers the systematic fragmentation of cell constituents into blebs of cell membrane that pinch off of the main cell to form apoptotic bodies. During this fragmentation, the cell continues to produce energy and proteins, unlike necrosis where organelle and energy production cease prior to cellular fragmentation. The end result of apoptosis is numerous apoptotic bodies, each composed of a cellular membrane surrounding intact and functional cellular components. Apoptosis can be triggered by various forms of oxidative stress, particularly the presence of excessive oxygen-derived free radicals, due to excessive free radical generation and/or to lack or exhaustion of endogenous antioxidants. Because intracellular components are not spilled into the extracellular matrix, apoptosis generally does not incite an inflammatory response; instead the apoptotic bodies are removed by local phagocytes.
Topic 4: Key Points
In this section, we explored the following main points:
• 1: Toxicant-induced cellular injury can range from reversible cellular dysfunction to irreversible cellular injury to cell death.
• 2: Cellular responses to toxic injury may include cellular degeneration and accumulation of substrates within the cell.
• Steatosis is lipid accumulation within hepatocytes and is a common toxic effect secondary to alcohol exposure.
• 3: Major mechanisms of cell death include disruption of cell membrane structure and/or function, loss of cellular maintenance functions, and impairment of cellular energy production.
• 4: Necrosis is cell death resulting from cessation of organelle function, degradation of intracellular structures and culminating in lysis of the cell and attraction of inflammatory cells.
• 5: Apoptosis is an active form of cell death whereby cellular function is maintained as the cell components are compartmentalized and packaged into apoptotic bodies that pinch off of the main cell.
• Apoptosis can be triggered by oxidative stresses caused by oxygen-derived free radicals.
• Apoptosis is not generally associated with an inflammatory response.
Knowledge Check
1. The accumulation of excess lipid within liver cells:
Answer
Steatosis
2. Cellular degeneration secondary to a toxic insult is seen microscopically as:
Answer
Cellular Swelling
3. Loss of organelle function, hydrolytic degradation of intracellular membranes and lysis of cells are characteristics of_______.
Answer
necrosis
4. The orderly decommissioning of cellular organelles without loss of energy and protein synthesis leading to fragmentation into membrane-bound packets is characteristic of_____.
Answer
apoptosis
Learning Objectives
After completing this lesson, you will be able to:
1. Explain cell and tissue repair mechanisms and how toxicants may alter these processes.
2. Discuss adaptation mechanisms that may occur with exposures to toxicants.
Repair
The ability of cells to repair toxicant-induced damage plays a large role in the outcome of exposure to toxicants. Repair mechanisms include molecular repair such as reversal of thiol oxidation, removal of damaged units with replacement by newly synthesized units, and degradation followed by resynthesis of damaged structures. Cellular repair mechanisms include autophagy of damaged organelles and resynthesis/regeneration of damaged structures. Tissue repair mechanisms include active removal of damaged cells via apoptosis, regeneration of cells by hyperplasia (increase in cell numbers) or hypertrophy (increase in cell size), and resynthesis of extracellular matrix. When cellular or tissue repair is unable to restore original tissue architecture, production of extracellular connective tissue results in fibrosis.
Some toxicants can interfere with the normal cellular repair mechanisms, resulting in early cellular senescence and death. Inability to repair damage to DNA can result in disruption of normal protein synthesis or initiation of carcinogenesis. Other toxicants trigger exaggerated repair mechanisms which themselves pose a hazard to the survival of the individual. For instance, the herbicide paraquat causes lung injury mediated by oxygen-derived free radicals; because the lung is a highly oxygenated organ, this damage can be quite extensive, as the ready availability of oxygen provides plenty of fuel for the snowballing generation of free radicals such as superoxide anions. The immediate effect of paraquat on the lung is damage to alveolar walls, resulting in stimulation of repair mechanisms that cause intense, progressive fibrosis of the lung over several weeks, which ultimately leads to the death of the patient from asphyxia. Intense fibrosis is also seen in alcoholic cirrhosis of the liver, with normal liver parenchyma being replaced by fibrous connective tissue that impedes normal liver function.
ADAPTATION
Adaptation mechanisms have evolved at the cellular, tissue and organ levels that allow the individual to survive in the face of exposure to levels of toxicants that might otherwise result in serious or lethal injury. Adaptation requires time to develop, so generally occurs upon multiple exposures to levels of toxicants that are too low to cause severe acute injury. Adaptation mechanisms include alteration of toxicant delivery to target sites, decreasing reactivity of the target site to the toxicant, increasing local repair mechanisms, and development of compensatory mechanisms to mitigate toxicant-induced injury.
Decreasing toxicant delivery to target site may include decreasing toxicant absorption, detoxification of the toxicant before it can reach the target site, or binding of the toxicant with a neutral molecule. The opioid fentanyl is a powerful narcotic when injected, but it is quickly metabolized in the liver if ingested, greatly reducing the amount that enters the systemic circulation and reaches the central nervous system; this “first pass effect” occurs with many substances and is often the reason why some drugs must be administered via injection rather than the oral route. Similarly, chronic ingestion of ethyl alcohol can result in the induction (increased expression) of the enzyme alcohol dehydrogenase in the gastrointestinal tract and liver, resulting in more alcohol being metabolized before it can reach the systemic circulation.
Some toxic heavy metals that reach the bloodstream become bound to metal-binding proteins called metallothioneins; because only unbound metals are free to react, metallothioneins prevent the toxic metals from interacting with their target sites. Many sea-dwelling mammals such as killer whales (Orca spp.) bind organic mercury to selenium-containing metallothioneins which allows them to accumulate large amounts of organic mercury that would be lethal to terrestrial mammals.
Adaptation by decreased reactivity to the target site is classically illustrated by the tolerance that can develop in people addicted to opioids such as heroin. Constant stimulation of opioid receptors results in downregulation (decreased expression) of those receptors and increased amounts of opioids required to stimulate remaining receptors.
Adaptive mechanisms may include increased function or number of cells to compensate for the loss of function of similar cells due to toxic insult. A variety of toxicants can cause injury to kidney tubules, and as a result, other kidney tubules may undergo hypertrophy and/or hyperplasia in an attempt to provide adequate kidney function.
Adaptive and repair mechanisms frequently occur simultaneously, resulting in characteristic lesions in some organs or tissues. For example, toxic hepatocellular injury from chronic alcohol exposure may result in nodular areas of regenerating hepatocytes (adaptation) surrounded by fibrous connective tissue formed to replace lost hepatocytes (repair), resulting in the characteristic "bubbly" look of hepatic cirrhosis.
Topic 5: Key Points
In this section, we explored the following main points:
• 1: Tissue repair mechanisms include removal of damaged cells via apoptosis, regeneration of cells by hyperplasia (increase in cell numbers) or hypertrophy (increase in cell size), and regeneration of extracellular matrix.
• Fibrosis may result when tissue repair is incomplete.
• 2: Adaptation mechanisms may allow the host to survive in the face of continuous exposure to toxicants and include:
• alteration of toxicant delivery to target sites.
• decreasing reactivity of target site to toxicant
• increasing local repair processes
• compensatory mechanisms to mitigate toxicant-induced injury
Knowledge Check
1. An example of decreasing the reactivity of a target site to a toxicant
Answer
Downregulation of toxicant receptors
2. A regenerative response that results in an increased number of cells being produced
Answer
Hyperplasia
Learning Objectives
• 1: Explain how the structure and function of the liver relates to its susceptibility and responses to toxicants.
• 2: Discuss how the pattern of toxic injury seen in an organ may provide clues to the toxicant that is involved.
Toxicants can exert their effects systemically or by altering target tissues or organs, and sometimes the patterns of injury left behind can aid in the identification of an unknown toxicant, or at least allow the list of suspects to be narrowed down to a smaller number than the thousands of potential toxicants that exist. Because most absorbed toxicants are carried in the bloodstream, blood flow patterns often dictate which tissues may be affected. Within the tissues, variations in cell type and function can also influence the pattern of injury that can be seen with a given toxicant. Because they receive high volumes of blood flow, and because of their roles in metabolism and excretion of xenobiotics, the liver and kidney are particularly vulnerable to effect from systemically absorbed toxicants.
Patterns of Hipatic Injury
The microscopic appearance of the liver is that of a hexagonal lobule. Blood enters the parenchyma from the hepatic artery and portal vein, which are situated, along with a bile duct, at an area termed the portal triad at each of the six vertices of the lobule (periportal regions). From the triad region, the blood passes through endothelial-lined sinusoids that separate anastomosing sheets of hepatocytes before exiting via the terminal hepatic, or central, vein (centrilobular region). Physiologically, the liver has highly oxygenated blood entering at zone 1, which is roughly equivalent to the periportal areas, surrendering its oxygen as it filters through the sinusoids, and reaching zone 3 (the centrilobular region) as poorly oxygenated blood that exits via the central vein. Zone 2, also called the midzonal region, is the intermediate area between zones 1 and 3.
Within the different zones of the liver, hepatocytes vary in their physiologic functions, with those in zone 1 being more efficient in oxidative metabolism and zone 3 hepatocytes being efficient at xenobiotic biotransformation. Hepatocytes in zone 1 are the first to be exposed to toxicants that enter the liver; if those toxicants are directly injurious to hepatocytes (e.g. white phosphorus), the pattern of cell injury will be periportal. Toxicants requiring bioactivation to cause injury (e.g. acetaminophen) will generally cause zone 3 (centrilobular) hepatic injury, since this area contains higher levels of biotransforming enzymes. This area is also at risk for hypoxic injury due to toxicants that alter oxygen delivery to cells (e.g. carbon monoxide). Because of these vulnerabilities, centrilobular injury is the most common form of toxicant-induced hepatic injury. Massive liver necrosis affects entire hepatic lobules, and has been associated with exposure to a variety of toxicants including acetaminophen, aflatoxin, blue-green algae, and hepatotoxic mushrooms. Piecemeal necrosis is a less commonly seen form of liver injury wherein scattered individual hepatocyte necrosis or apoptosis occurs along the limiting plate between portal triads; this form of hepatic injury has been associated with immune-mediated processes such as is seen with non-steroidal anti-inflammatory drug-induced hepatopathy.
Patterns of Renal Injury
The nephron is the functional unit of the kidney, consisting of the renal corpuscle (Bowman’s capsule and glomerulus), proximal tubule, loop of Henle, and distal tubules. Toxicants that are directly injurious may cause damage to the glomerular structures or to the anterior portion of the proximal tubule. Toxicants that require bioactivation generally cause injury to the more distal section of the proximal renal tubules, as this is where the majority of biotransforming enzymes occur. As the glomerular filtrate passes through the loop of Henle and distal tubules and becomes more concentrated, toxicants that were too dilute to affect earlier renal structures may cause injury in these more distal sections of the kidney. Toxicants that decrease renal blood flow (e.g. non-steroidal anti-inflammatory drugs) can also cause injury to more distal tubules, collecting ducts and renal papillae as these regions receive less blood flow than more proximal structures.
Topic 6: Key Points
In this section, we explored the following main points:
• 1: Toxic injury to a tissue or organ will depend largely on the pattern of blood flow to that tissue or organ.
• Cells in areas that normally have low oxygenation are at increased risk of injury from toxicants that reduce blood flow or oxygen delivery.
• 2: Variations in cell type and function can affect the pattern of toxicant-induced injury.
• Cells with large capacity for metabolism of xenobiotics will be at higher risk for injury by toxicants requiring bioactivation.
• 3: Identification of the pattern of toxicant-induced injury can provide clues to the types of toxicants that may have caused the injury.
Knowledge Check
1. Which of the following patterns of injury would be consistent with a toxicant that is directly injurious to liver and kidney cells?
Centrilobular hepatic injury, distal tubular renal injury
Centrilobular hepatic injury, proximal tubular renal injury
Periportal hepatic injury, distal tubular renal injury
Periportal hepatic injury, proximal tubular renal injury
Answer
Periportal hepatic injury, proximal tubular renal injury
2. Which features make the centrilobular region of the liver lobule more susceptible to toxic insult?
Relatively low levels of biotransformation enzymes and relatively low oxygenation
Relatively high levels of biotransformation enzymes and relatively low oxygenation
Relatively low levels of biotransformation enzymes and relatively high oxygenation
Relatively high levels of biotransformation enzymes and relatively high oxygenation
Answer
Relatively high levels of biotransformation enzymes and relatively low oxygenation
Final Evaluation
1. General rule of thumb for apoptosis is that it is more commonly seen at higher levels of toxicant exposure while necrosis occurs more frequently at relatively lower levels of toxicant exposure. True or False?
True
False
Answer
False
2. Neoantigen formation results when a xenobiotic or its metabolite binds to a larger protein to form a novel molecule that elicits an immune response. Molecules that trigger this immune response are called:
Haptens
Free radicals
Electrophils
Poisons
Answer
Haptens
3. Lipid peroxidation occurs when an _______ compound steals an electron from a membrane phospholipid, resulting in the production of a fatty acid radical.
Electrophobic
Hydroxyl radical
Electrophilic
Hydrophilic
Answer
Electrophilic
4. Electron transfer can result in oxidation of some endogenous macromolecules and formation of _________ such as superoxide ion (•O2-) and hydroxyl radical (HO•).
Enzymes
Hormones
Free radicals
Proteins
Answer
Free radicals
5. Upon exposure of human skin to leaves of poison ivy, the toxin found in this plant binds to membranes on skin cells and stimulates an immune response, resulting in a blistering rash. This toxin is known as:
Urushiol
Ricin
Tetrodotoxin
Pyrethrins
Answer
Urushiol
6. Which of the following covalently binds to the acetaminophen metabolite N-Acetyl-P-Benzoquinone Imine (NAPQI) to detoxify it?
N-acetylcysteine
Superoxide dismutase
Catalase
Amylase
Answer
N-acetylcysteine
7. Differences between Necrosis and Apoptosis include the following EXCEPT:
Necrosis is a degenerative process while Apoptosis is an active process
Necrosis triggers inflammatory response while Apoptosis does not incite inflammatory response
Necrosis results in loss of energy while Apoptosis does not result in loss of energy
Necrosis is an active process while Apoptosis is a degenerative process
Answer
Necrosis is an active process while Apoptosis is a degenerative process
8. A regenerative response that results in an increase in cell size is termed ______.
Hypertrophy
Hypoplasia
Hyperplasia
Atrophy
Answer
Hypertrophy
9. ______ helps to terminate lipid peroxidation when present in sufficient quantities.
Vitamin E
Hydroxyl radical
Vitamin A
Oxygen
Answer
Vitamin E
10. An “adduct” is the product of an irreversible bond between the toxicant and target molecule; this type of binding is known as:
Noncovalent binding
Covalent binding
Hydrogen abstraction
Ionic binding
Answer
Covalent binding
11. An example of decreasing the reactivity of a target site to a toxicant is describe as:
Binding of toxicant to target receptors
Downregulation of toxicant target receptors
Increased metabolism of alcohol within the stomach and liver
Stimulation of membrane receptors
Answer
Downregulation of toxicant target receptors
12. The roughly 10% of an acetaminophen dose that is bioactivated by cytochrome P450 in the liver to a toxic metabolite is called:
Superoxide dismutase
N-acetylcysteine
N-Acetyl-P-Benzoquinone Imine (NAPQI)
Superoxide ion
Answer
N-Acetyl-P-Benzoquinone Imine (NAPQI)
Biochemistry is the study of chemical processes within and related to living organisms. In this e-module you will learn about biomolecules and cell components, cell structure and subcellular compartments, DNA and RNA metabolism, and epigenetic mechanisms.
02: Biochemistry and Molecular Genetics
Learning Objectives
• 1: Define the basic structure of biomolecules, such as: amino acids and proteins, carbohydrates, fatty acids, triacylglycerol, phospholipids, steroids and nucleic acids.
• 2: Define the meaning and significance of essential and non-essential amino acids.
• 3: Understand the function of enzymes.
• 4: Define the basic structure of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA).
Amino Acids and Proteins
Amino acids are the basic units of proteins. All amino acids present in proteins carry a carboxyl group, an amino group, a hydrogen atom, and a variable side chain (R), all attached to a single α-carbon atom.
Amino Acid Basic Structure:
Every amino acid has four components linked to a central carbon atom, the α-carbon:
• Amino group (NH2)
• Carboxylic acid group (COOH)
• Hydrogen atom (H)
• R-group, which varies with each amino acid (R)
R groups may be:
• Hydrophobic
• Hydrophilic
• Charged R-groups: positive or negative charged
• Special R-groups: conjugated with other molecules
Amino Acids are classified into two groups.
Essential: Humans cannot synthesize them, so they must be obtained directly from food (phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, histidine, cysteine, and arginine).
Non-essential: The human body is able to produce them (glycine, alanine, serine, asparagine, glutamine, tyrosine, aspartic acid, glutamic acid, and proline).
The most common amino acids in humans differ only in their R groups. Amino acids link together, in a reaction that forms a peptide bond, to make proteins.
Levels of Protein Structure
Primary (1°) Structure
The sequence of amino acids in a protein is called its primary structure. The amino acids are linked via peptide bonds formed between the carboxylic acid group of one amino acid and the amino group of another amino acid.
Secondary (2°) Structure
The secondary structure is the way a polypeptide folds locally to form α-helices, β-strands, or β-turns.
Tertiary (3°) Structure
The tertiary structure is the way the polypeptide chain coils and turns to form a complex molecular shape. The tertiary structure also creates the active sites of proteins, where critical actions and interactions take place.
Quaternary (4°) Structure
The quaternary structure is the combination of the multiple protein subunits that interact to form a single, larger, biologically active protein.
Protein Functions
Proteins have several functions in the human body: they act as hormones and enzymes, serve as structural proteins in cell membranes, receive signals from outside the cell and mobilize intracellular responses, and form part of the immune system.
Enzymes are specialized proteins that accelerate a chemical reaction by serving as a biological catalyst. By catalyzing these reactions, enzymes cause them to take place one million or more times faster than in their absence. Several biochemical reactions important for cellular maintenance, such as environmental responses and metabolic pathways, depend on enzyme activity.
Carbohydrates
Carbohydrates are made of carbon (C), hydrogen (H), and oxygen (O) atoms and are composed of recurring monomers called monosaccharides (which typically form ring structures). A common name for the monomers and dimers is 'sugar'.
Carbohydrates are classified into three subtypes.
1. Monosaccharides: 1 unit of monomer. Examples: fructose, glucose, galactose.
Present in fruits, etc.
2. Disaccharides: 2 units of monosaccharides. Examples: lactose, maltose, sucrose.
Present in milk, etc.
3. Polysaccharides: Many monosaccharide units. Examples: cellulose, glycogen, starch.
Present in breads, grass, etc.
Carbohydrates are a group of macromolecules that serve as an important energy source for various metabolic activities. Carbohydrates may also bind to proteins and lipids, forming molecules that play important roles in cell interactions (e.g., receptor molecules) and in the immune system (e.g., antigens).
Lipids
Lipid molecules are mainly hydrophobic (i.e., they are found in areas away from water molecules), but many also have smaller hydrophilic parts that are important for their biological function. The major roles of lipid molecules are to serve as storage of biological energy (example: triacylglycerols) and to provide the building blocks for biological membranes (examples: phospholipids and cholesterol). Although there are other types of lipids, in this topic we will discuss the structure and function of these main groups.
Triacylglycerols
Triacylglycerols are composed of fatty acids and glycerol.
• Glycerol is a simple three-carbon molecule with hydroxyl groups at each carbon.
• Fatty acids are chains of carbon atoms with a carboxylic acid group (COOH) at the first carbon and a methyl group (CH3) at the other end of the chain.
Fatty acids can be...
Saturated: Fatty acids contain only single carbon-carbon bonds, and all of the carbon molecules are bonded to the maximum number of hydrogen molecules.
Unsaturated: Fatty acids have at least one double carbon-carbon bond with the potential for additional hydrogen atom bonding still existing for some of the carbon atoms in the backbone chain. If more than one double bond is present, the term polyunsaturated is used.
Essential Fatty Acids
Two examples of essential fatty acids are linoleic acid (known as omega-6; ω-6) and linolenic acid (known as omega-3; ω-3). These fatty acids have double bonds at the sixth and third carbon atoms, respectively, counting from the methyl end of their chains. They are considered essential because humans cannot introduce double bonds at these positions and therefore must obtain these two fatty acids from dietary sources such as vegetable oils.
Phospholipids
Phospholipids are the major component of cell membranes. They form lipid bilayers because of their amphiphilic characteristic.
The structure of the phospholipid molecule generally consists of two hydrophobic fatty acid "tails" and a hydrophilic "head" consisting of a phosphate group (PO4−3) attached to the third glycerol carbon. This head group is usually charged, creating a part of the lipid that is hydrophilic, and wants to be near water, a quality that is essential for the formation of biological membranes and many lipid functions.
Steroids
Steroids are lipids that have four rings made of carbon atoms—three rings have six sides and one has five sides—with a six-carbon ring tail. Examples: bile salts, cholesterol, the sexual hormones estrogen, progesterone and testosterone, corticosteroids and pro-vitamin D.
Cholesterol
Cholesterol is an important molecule found only in eukaryotic organisms with a variety of functions. Cholesterol is also a component of biological membranes and its main function is to control the fluidity of membranes. Cholesterol does not like to be exposed to water environments, preferring to be shielded by other hydrophobic molecules such as lipids or hydrophobic parts of proteins.
Cholesterol also serves as the primary source for the production of steroid hormones, bile salts, and vitamin D.
Nucleosides and Nucleotides
Nucleosides and nucleotides are involved in the preservation and transmission of the genetic information of all living creatures. In addition, they play roles in biological energy storage and transmission, signaling and regulation of various aspects of metabolism.
These molecules can be divided into two major families.
Purines: They are two-ring structures: adenine and guanine.
Pyrimidines: They are one-ring structures: thymine, cytosine, and uracil.
The unique structure and interaction of these molecules serve as the fundamental building block of RNA and DNA molecules and allow fundamental processes of DNA replication and protein synthesis to occur.
Components of Nucleotides
• Nitrogenous base: The nitrogenous base of a nucleoside or nucleotide may be either a purine or a pyrimidine.
• Carbohydrate: The carbohydrate component of nucleosides and nucleotides is usually the sugar ribose for RNA molecules and deoxyribose for DNA molecules.
• Phosphate Group: One or more phosphate groups (PO4−3) may be attached to the carbon 5 of the carbohydrate molecule.
DNA
DNA stands for deoxyribonucleic acid.
It is an extremely long molecule that forms a double-helix.
DNA components:
- Sugars - Deoxyribose
- Phosphates - (PO4−3)
- Base - cytosine (C), guanine (G), adenine (A) and thymine (T).
The DNA consists of two strands attached to each other by hydrogen bonds created by nucleotide pairing (A-T and C-G).
The double-helix structure of DNA is important for its function because these two bonded strands can temporarily separate to allow for DNA replication.
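To make the pairing rule concrete, here is a minimal Python sketch (the sequences are invented examples, and the antiparallel orientation of real DNA strands is ignored for simplicity):

```python
# Watson-Crick pairing rules: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-by-base complementary DNA strand."""
    return "".join(PAIR[base] for base in strand.upper())

def is_complementary(strand1: str, strand2: str) -> bool:
    """Check that two strands of equal length obey A-T / C-G pairing at every position."""
    return len(strand1) == len(strand2) and complement(strand1) == strand2.upper()

top = "ATGCGT"                          # hypothetical strand
print(complement(top))                  # TACGCA
print(is_complementary(top, "TACGCA"))  # True
```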
The sequence of nucleotides (A, C, T, G) in the DNA molecule makes up the genes. Sequences that code for proteins are referred to as "expressed sequences" or "exons." Sequences that do not code for a protein are called "intervening sequences" or "introns."
Human Genome
The genome of humans is estimated to contain approximately 20,000–25,000 different genes arranged on multiple chromosomes.
Twenty-three pairs of chromosomes:
Twenty-two pairs of autosomes.
One pair of sex chromosomes: XX (female) or XY (male).
Humans have 23 pairs of chromosomes in every cell (except mature red blood cells); gametes or sex cells (sperm and eggs) have half the normal complement of chromosomes.
RNA
RNA stands for ribonucleic acid.
RNA molecules are single strands.
RNA components:
• Sugars - Ribose
• Phosphates - (PO4−3)
• Base: cytosine (C), guanine (G), adenine (A) and uracil (U)
RNA molecules often form secondary (2°) structures and may interact with DNA, other RNA molecules, and proteins. These interactions help to define the particular function of each type of RNA.
Types of RNA molecules and functions:
Messenger RNA (mRNA):
Molecules which function as the transmitter of genetic information from the DNA genetic code to the resulting protein.
Transfer RNA (tRNA)
Molecules that carry amino acids and match them with a specific mRNA sequence during protein synthesis.
Ribosomal RNA (rRNA)
Molecules that associate with proteins to form ribosomes, which are responsible for the synthesis of protein molecules.
Regulatory RNA
Molecules involved in regulation of DNA expression, posttranscriptional mRNA processing, and the activity of the transcribed mRNA message.
The basic structures of DNA and RNA are similar; however, there are three main differences:
Nitrogenous Bases: Three of the nitrogenous bases are the same in the DNA and RNA: adenine, cytosine, and guanine. The fourth base for DNA is thymine while for RNA it is uracil. Thymine and uracil both bind to adenine.
Number of Strands: The DNA molecule is usually double-stranded and most cellular RNA molecules are single-stranded.
Type of Sugar: In the DNA molecule the sugar is deoxyribose and in the RNA molecule the sugar is ribose.
Topic 1: Key Points
In this section, we explored the following main points:
• 1: Amino acids link together through peptide bonds to form proteins.
• 2: One important function of proteins is to act as enzymes that accelerate chemical reactions.
• 3: Carbohydrates are an important energy source required for various metabolic activities; they may also bind to proteins and lipids, forming molecules that play important roles in cell interactions.
• 4: Lipid molecules serve as storage of biological energy and provide the building blocks for biological membranes.
• 5: DNA and RNA structures have three main differences. The nitrogenous bases differ (DNA has thymine and RNA has uracil). The DNA molecule is usually double-stranded and most RNA molecules are single-stranded. In the DNA molecule the sugar is deoxyribose and in the RNA molecule the sugar is ribose.
Knowledge Check
1. What type of nucleic acid does thymine belong to?
Answer
DNA
2. Uracil?
Answer
RNA
3. Enzymes are...
specialized proteins that accelerate a chemical reaction by serving as a biological catalyst.
specialized proteins that stops a chemical reaction.
Answer
specialized proteins that accelerate a chemical reaction by serving as a biological catalyst.
4. A nucleotide consists of...
Check all that apply.
• A sugar (either deoxyribose or ribose )
• Uracil as the nitrogen base
• A phosphate group
• One of the four nitrogen bases
Answer
A sugar, A phosphate group, and One of the four nitrogen bases
5. Lipid molecules are mainly hydrophilic molecules. True or False?
Answer
False
Learning Objectives
• 1: Describe the basic components of cell structure and organelles.
• 2: Understand the chromatin organization into nucleus.
• 3: Understand basic mechanisms of cell signaling to respond to changes in the environment.
Structure of a cell
Cells are the smallest unit of life. A typical eukaryotic cell and its components are illustrated in the next figure.
Membranes are composed of lipids arranged in a lipid bilayer, with the hydrophilic glycerol and phosphate "head" groups of the lipid molecules forming the two outside layers and the hydrophobic "tail" groups arranged inside. Proteins are the second major part of biological membranes and make up approximately 20%–80% of both the structural and functional components of these membranes. Many of these proteins are embedded in the membrane and protrude on both sides; these are called transmembrane proteins.
Subcellular compartments
CYTOSKELETON
The cytoskeleton is a structure that helps cells maintain their shape and internal organization, and it also provides mechanical support that enables cells to carry out essential functions like division and movement. The cytoskeleton is made up of microtubules, actin filaments, and intermediate filaments.
ENDOPLASMIC RETICULUM (ER)
It consists of a system of sac- and tube-like structures, which locally expand into cisterns. Its internal lumen is connected with the intermembrane space of the nuclear membrane. Part of the ER is studded on the outside with ribosomes (rough ER), which take part in protein synthesis. The other part of the ER is free of ribosomes (smooth ER). Enzymes of the smooth ER are involved in the synthesis of fatty acids. The smooth ER also plays a role in detoxification by hydroxylation reactions.
GOLGI APPARATUS
This organelle consists of stacks of flattened membrane sacs. Their main function is the further processing and sorting of proteins and their export to their final targets. In most cases, these are secreted proteins or membrane proteins. In addition, the Golgi apparatus also produces polysaccharides.
LYSOSOMES
Lysosomes are vesicles enclosed by a lipid bilayer. These organelles are filled with many enzymes for polysaccharide, lipid, protein and nucleic acid degradation. They also act on intracellular material targeted for removal and even contribute to the apoptosis of their own cell. Lysosomes of special cells (e.g., macrophages) destroy bacteria or viruses as a defense mechanism.
PEROXISOMES
Peroxisomes are surrounded by a single membrane. They are generated from components of the cytosol and do not bud from other membranes. The main task of these organelles is the performance of monooxygenase (hydroxylase) or oxidase reactions, which produce hydrogen peroxide (H2O2).
MITOCHONDRIA
In a typical eukaryotic cell, there are in the order of 2000 of these organelles, which are often of ellipsoidal shape. They have a smooth outer membrane and a highly folded inner membrane with numerous invaginations (cristae), which contain most of the membrane-bound enzymes of mitochondrial metabolism. Mitochondria are the site of respiration and ATP synthesis, but also of many other central reactions of metabolism, e.g., citrate cycle, fatty acid oxidation, glutamine formation, and part of the pathway leading to steroid hormones. Mitochondria are the only organelles which are equipped with their own (circular) DNA, RNA and ribosomes and thus can perform their own protein synthesis.
NUCLEUS
All eukaryotic cells show the presence of a separate nucleus, which contains the major portion of the genetic material of the cell (DNA). The nuclear DNA is organized in a number of chromosomes. The nucleus is surrounded by a double membrane of lipid bilayers with integrated proteins, called the nuclear membrane (also known as the nuclear envelope). Nuclear pores span the nuclear membrane and enable the transport of proteins, rRNA etc.
Image credits
2. DataBase Center for Life Science (DBCLS) licensed under CC BY 4.0
3. Kelvin Song licensed under CC BY 3.0
4. Lumoreno licensed under CC BY-SA 3.0
5. Agateller licensed under CC BY-SA 3.0
Structure of Chromatin
Chromosomal DNA is packaged inside microscopic nuclei by its association with histones H2A, H2B, H3, and H4. These are positively-charged proteins that strongly adhere to negatively-charged DNA and form complexes called nucleosomes (the 11-nm "beads on a string" structure). Each nucleosome is composed of DNA wound 1.65 times around eight histone proteins. Nucleosomes fold up to form a 30-nanometer chromatin fiber, which forms loops averaging 300 nanometers in length. The 300 nm fibers are compressed and folded to produce a 250 nm-wide fiber, which is tightly coiled into the chromatid of a chromosome. The level of compaction varies depending on the cell cycle stage and, consequently, on the requirement for DNA transcription or replication.
Figure \(2\): Chromosomes are composed of DNA tightly-wound around histones - © 2010 Nature Education
Chromatin: The most important structure inside the nucleus is chromatin, consisting, in humans, of the 46 chromosomes. Chromatin is the combination or complex of DNA and proteins that makes up the contents of the nucleus of a cell. The most abundant proteins in the nucleus are histones. Histones are rich in basic amino acids (positively charged), which interact with the negative charges of the DNA.
Cell Signaling
Cell signaling is the ability of cells to respond to environmental changes through signals received at their borders. Cells may receive many signals simultaneously and also send out messages to other cells.
Cells have proteins called receptors, generally transmembrane proteins, which bind to signaling molecules outside the cell and subsequently transmit the signal through a sequence of molecular switches to internal signaling pathways, initiating a physiological response. Different receptors are specific for different molecules. In fact, there are hundreds of receptor types found in cells, and different cell types have different populations of receptors.
Figure \(3\): Main types of transmembrane receptors
Examples of membrane receptors:
• G-protein-coupled receptors
• Ion channel receptors
• Enzyme-linked receptors
Receptors may be located in the cellular membrane (important for receiving extracellular signals) but may also be present inside the cell or inside the nucleus.
Topic 2: Key Points
In this section, we explored the following main points:
• 1: The characteristics of the cell membrane, such as lipid bilayer and transmembrane proteins are very important for the cell function.
• 2: Chromatin is the complex of DNA and proteins that makes up the contents of the nucleus of a cell.
• 3: Cells have proteins called receptors that are important to receive extracellular signals and initiate signaling pathways in the cell.
Knowledge Check
1. Histones are proteins present in the nucleus of the cells with the function of:
interacting with the DNA to form nucleosomes.
regulating the fluidity of cell membranes.
Answer
interacting with the DNA to form nucleosomes.
2. Membranes are composed of lipids arranged in a ...
Check all that apply.
• lipid monolayer, with...
• lipid bilayer, with...
• ...the hydrophilic glycerol and phosphate “head” groups of the lipid molecules forming the two outside layers and the hydrophobic “tail” groups arranged inside.
• ...the hydrophilic and hydrophobic “tail” groups arranged inside.
Answer
lipid bilayer, with...the hydrophilic glycerol and phosphate “head” groups of the lipid molecules forming the two outside layers and the hydrophobic “tail” groups arranged inside.
Learning Objectives
After completing this lesson, you will be able to:
• Understand the mechanism of DNA replication.
• Understand the process of gene expression.
• List the main types of DNA mutation and mechanism of DNA repair.
DNA Replication
The process of DNA replication consists of uncoiling double-stranded DNA, copying each DNA strand and then separating the two new double-stranded copies. The process starts at an origin of replication (ori), a nucleic acid sequence where replication can start. There are around 100,000 origins of replication in each human cell, which means that DNA replication may start at many different positions simultaneously. The replication fork is the point where the two DNA strands, one termed the leading strand and the other the lagging strand, are separated and DNA copying occurs. The coiled, double-helical DNA structure is initially unwound by the enzyme DNA helicase, which breaks the hydrogen bonds between complementary nucleic acids. Single-stranded binding proteins attach to the separated DNA strands to keep them apart.
An enzyme termed primase then produces a short strand of RNA to serve as a primer for the remainder of the process. The enzyme DNA polymerase replicates each DNA strand in the 5′ to 3′ direction by adding the correct, matching nucleotide triphosphate to the 3′-hydroxyl end of the primer strand. As each new nucleotide is added, a new phosphodiester bond is formed, utilizing the energy released by cleavage of the two terminal phosphate groups.
RNA Transcription
RNA transcription is the process whereby a particular segment of DNA is transcribed into an equivalent RNA sequence.
• mRNA: For genes that code for a protein.
• tRNA: For a transfer RNA.
• rRNA: For assembly of a ribosome.
• miRNA (micro RNA): Which binds to mRNA and inhibits its translation.
• siRNA (small-interfering RNA): Which binds to mRNA and aids in its degradation.
• snRNA (small nuclear RNA): Which participates in RNA processing as part of the spliceosomes.
• snoRNA (small nucleolar RNA): Which participate in nucleolar RNA processing.
The transcription starts with binding of the enzyme RNA polymerase to a promoter sequence on the DNA, a regulatory region that dictates where the transcription should start.
The DNA template strand is read in the 3′ to 5′ direction, and transcription occurs on only one of the DNA strands, the template strand.
As in DNA replication, energy for the formation of the phosphodiester bond is derived from hydrolysis of the two terminal phosphate bonds of the nucleoside triphosphate.
Multiple RNA polymerases can transcribe on a single DNA gene sequence, allowing rapid production of the RNA product.
Enhancer: This is a short region of DNA that can be bound by transcription factors to increase or facilitate the transcription of a particular gene. Enhancers can be located far away from the gene, upstream or downstream of the transcription start site.
Transcription Factors (TFs): These include a wide number of proteins, excluding RNA polymerase, that promote (as activators) or block (as repressors) the recruitment of RNA polymerase to DNA. TFs bind to promoter regions of DNA.
Promoter Region: These are specific DNA sequences, usually located upstream of and near the transcription start sites of genes, that serve as binding sites for the transcription factors that recruit RNA polymerase. Examples: the TATA box and CpG islands.
Figure \(1\): Machinery of RNA transcription
RNA Processing
The newly synthesized RNA transcripts are processed prior to their use in the cell as mature RNA.
A 7-methylguanosine nucleotide is added to the 5′-end (known as a 5′ cap) of the pre-mRNA as it emerges from RNA polymerase II (Pol II). The cap protects the RNA from being degraded by enzymes and serves as an assembly point for the proteins that begin translation into protein.
Introns present in the pre-mRNA are removed and the remaining exons are joined together in a process called RNA splicing. The continuous series of DNA bases coding for a protein is interrupted by base sequences that are not translated. The translated sequences are referred to as exons (expressed sequences) and the nontranslated sequences as introns (intervening sequences). This completes the mRNA molecule, which is now ready for export to the cytosol. (The remainder of the transcript is degraded, and the RNA polymerase leaves the DNA.)
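Conceptually, splicing keeps the exon segments and discards the introns. The toy Python sketch below uses made-up segment labels and sequences just to show exons being stitched back together; it is not a model of the real spliceosome machinery:

```python
# Hypothetical pre-mRNA laid out as alternating exon/intron segments (invented sequences).
segments = [
    ("exon",   "AUGGCU"),
    ("intron", "GUAAGU...AG"),   # introns typically begin with GU and end with AG
    ("exon",   "CCAUUG"),
    ("intron", "GUCCGU...AG"),
    ("exon",   "UAA"),
]

# Splicing: drop the introns and join the exons in order.
mature_mrna = "".join(sequence for kind, sequence in segments if kind == "exon")
print(mature_mrna)  # AUGGCUCCAUUGUAA
```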
3.4 Protein Translation
Protein synthesis requires the interaction of mRNA, tRNA, several accessory proteins called initiation factors (IFs) and elongation factors (EFs), and ribosomes.
The Triplet Code (Genetic Code)
A sequence of three bases in DNA identifies each of the 20 amino acids that are to be incorporated into the newly synthesized protein. This information is incorporated into mRNA, which is synthesized using DNA as the template. Since a minimum of three bases is required and there are 4 nucleotides, 4³ = 64 code words are possible. This is more than the 20 amino acids; in fact, many different triplets code for the same amino acid. In addition, some "extra" triplet sequences are used as stop codons to terminate protein synthesis. AUG is used as the start codon for the N-terminal amino acid in eukaryotes.
Codon-anticodon pairing follows the same base-pairing rules as the DNA double strand (e.g., a tRNA that binds the starting mRNA codon AUG has the anticodon sequence UAC), ensuring the specific order of amino acids required for proper production of the protein.
Figure \(4\): Genetic code - © 2014 Nature Education
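As a quick check of the counting argument above (4³ = 64), a few lines of Python can enumerate every possible RNA triplet; this is only an illustration of the arithmetic, not part of the original text:

```python
from itertools import product

RNA_BASES = "ACGU"

# Every ordered triplet of the four RNA bases is a possible codon.
codons = ["".join(triplet) for triplet in product(RNA_BASES, repeat=3)]

print(len(codons))   # 64, i.e. 4**3 possible code words for only 20 amino acids
print(codons[:5])    # ['AAA', 'AAC', 'AAG', 'AAU', 'ACA']
```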
RNA plays three distinct and important roles:
• mRNA: The intermediary between gene and protein – provides the message.
• tRNA: The key or adaptor – reads the genetic code, brings amino acids to the growing polypeptide chain
• rRNA: In ribosome, provides a scaffold for protein synthesis, catalyzes peptide bond formation
The IF and EF accessory proteins serve a number of roles, including enabling binding of the mRNA molecule to the ribosome, movement of the mRNA along the ribosome to the start point of the synthesis, docking of the tRNA–amino acid, and movement of the mRNA and growing peptide chain, as well as accuracy assurance.
Protein biosynthesis can be divided into three phases: initiation, elongation, and termination.
Initiation
Initiation of protein synthesis begins when the protein initiation factor IF-3 binds to the small subunit of the ribosome and causes its dissociation from the large subunit. The small ribosomal subunit then binds to the 5′ side of the mRNA, which carries information in a triplet code from the DNA. The small subunit is then translocated along the mRNA to the start site, where it meets the large ribosomal subunit, other protein initiation factors, and the initiator tRNA. The initiator tRNA, bound to methionine, occupies a site in the ribosome known as the P site.
Elongation
In the elongation phase of protein synthesis, a specific aminoacyl-tRNA, directed by hydrogen bonding interactions between the anticodon region of the aminoacyl-tRNA and the codon region of the mRNA, adds to a site distinct from the P site, the A site. The A and P sites are in close proximity, allowing peptide bond formation between the amino acids. The newly synthesized tRNA-bound dipeptide then moves from the A site to the P site. A translocation of the ribosome in the 5′ to 3′ direction along the mRNA then occurs to expose a new codon. Another aminoacyl-tRNA then binds to the mRNA at the A site, and the peptidyl transferase reaction is initiated again. As the polypeptide chain grows through subsequent cycles of amino acid residue incorporation, it emerges from the ribosome and folds into its native secondary and tertiary conformations.
Termination
In the termination step, peptide bond synthesis ceases when a stop codon on the mRNA is reached. This termination site does not bind aminoacyl-tRNA, and peptide synthesis stops. Release factors allow the newly synthesized protein to dissociate from the ribosome.
3.5 Gene Expression
Gene expression is a two-step process in which DNA is converted into a protein.
Step 1: The first step is DNA transcription to RNA. In this step, the information from the archival copy of DNA is imprinted into mRNA. The structure of RNA is a little different: it contains ribose instead of deoxyribose, and the four bases that it carries are cytosine (C), guanine (G), adenine (A), and uracil (U). During transcription, DNA unwinds, and mRNA is created by pairing mRNA bases with the bases of DNA. In this process, C in DNA pairs with G, G with C, A with U, and T with A. After the mRNA is transcribed it is transported to the ribosome.
Step 2: The second step, protein translation, occurs at the ribosome. During translation, the sequence of codons (triplets of bases) of the mRNA is, with the help of tRNA, translated into a sequence of amino acids.
Figure \(5\): Although gene expression seems to be a straightforward process, it is the mechanisms that control gene expression that cause most phenotypic differences among organisms.
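Step 1 amounts to applying the pairing rules above to the DNA template strand. A minimal Python sketch, using an invented template sequence (strand orientation is ignored for simplicity):

```python
# Pairing rules for transcribing the DNA template strand into mRNA:
# DNA C -> mRNA G, G -> C, A -> U, T -> A
DNA_TO_MRNA = {"C": "G", "G": "C", "A": "U", "T": "A"}

def transcribe(template_strand: str) -> str:
    """Build the mRNA by pairing each base of the DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand.upper())

template = "TACGGATCT"          # hypothetical DNA template strand
print(transcribe(template))     # AUGCCUAGA
```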
3.6 Mutation
Mutations are changes in the genetic sequence (DNA or RNA sequence), and they are a main cause of diversity among organisms. Although some mutations are beneficial, offering resistance to disease or improved structure and/or function, other mutations can lead to disease and/or death of the cell or organism.
Mutations can occur due to assaults from the environment, or spontaneous mutations may occur during DNA replication. Mutations are estimated to occur at an approximate rate of 1,000–1,000,000 per cell per day in the human genome, and every new cell is believed to contain approximately 120 new mutations.
TYPES OF MUTATION
Point mutations occur when only a single base pair is changed into another base pair. They can be classified as the following (a small classification sketch in code follows this list):
• Transition: When a purine nucleotide is changed to a different purine (A ↔ G) or a pyrimidine nucleotide is changed to a different pyrimidine nucleotide [C ↔ T(U)].
• Transversion: When the orientation of a single purine and pyrimidine nucleotide is reversed [A/G ↔ C/T(U)].
• Silent: When the same AA is coded.
• Missense: When a different AA is coded.
• Neutral: When an AA change occurs but does not affect the protein's structure or function.
• Nonsense: When a stop codon results, terminating translation and shortening the resulting protein.
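Because transitions and transversions are defined purely by whether the two bases belong to the same chemical class, the distinction is easy to express in code. A minimal sketch (the example substitutions are invented):

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}   # T is replaced by U in RNA

def classify_substitution(reference_base: str, mutant_base: str) -> str:
    """Classify a single-base substitution as a transition or a transversion."""
    ref, alt = reference_base.upper(), mutant_base.upper()
    if ref == alt:
        return "no change"
    same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify_substitution("A", "G"))  # transition (purine -> purine)
print(classify_substitution("A", "T"))  # transversion (purine -> pyrimidine)
```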
Insertion and deletion mutations, which are together known as indels. Indels can have a wide variety of lengths. At the short end of the spectrum, indels of one or two base pairs within coding sequences have the greatest effect, because they will inevitably cause a frameshift, i.e. change the entire reading of the mRNA sequence. At the intermediate level, indels can affect parts of a gene or whole groups of genes. At the largest level, whole chromosomes or even whole copies of the genome can be affected by insertions or deletions. At this high level, it is also possible to invert or translocate entire sections of a chromosome, and chromosomes can even fuse or break apart.
If a large number of genes are lost as a result of one of these processes, then the consequences are usually very harmful.
3.7 DNA Repair
The human body has mechanisms to detect and repair the various types of damage that can occur to DNA, whether this damage is caused by the environment or by errors in replication.
Because DNA is a molecule that plays an active and critical role in cell division, during the cell cycle, checkpoint mechanisms ensure that the DNA is intact before permitting DNA replication and cell division to occur. Failures in these checkpoints can lead to an accumulation of damage, which in turn leads to mutations.
UV radiation causes DNA lesions that may distort DNA's structure, introducing bends or kinks and thereby impeding transcription and replication. These lesions may be repaired through a process known as nucleotide excision repair (NER), a mechanism in which enzymes catalyze the removal of the damaged nucleotides and their replacement with the correct sequence, guided by the intact complementary DNA strand. Defects in this mechanism are related to human diseases such as skin cancer.
Another repair mechanism, which handles the spontaneous DNA damage caused by the oxidation or hydroxylation generated by metabolism, is base excision repair (BER). In this mechanism, enzymes known as DNA glycosylases remove damaged bases by literally cutting them out of the DNA strand through cleavage of the covalent bonds between the bases and the sugar-phosphate backbone. The resulting gap is then filled by a specialized repair polymerase and sealed by a ligase.
DNA damage also may occur in form of double-strand breaks, which are caused by ionizing radiation, including gamma rays and X-rays. Double-strand breaks may be repaired through one of two mechanisms: nonhomologous end joining (NHEJ), where an enzyme called DNA ligase IV uses overhanging pieces of DNA adjacent to the break to join and fill in the ends; or homologous recombination repair (HRR) where the homologous chromosome itself is used as a template for repair.
Topic 3: Key Points
In this section, we explored the following main points:
1. The process of DNA replication is controlled by several proteins that act together to assure the correct base pairing for creation of the new DNA strand.
2. Transcription is the first step of gene expression, in which a particular segment of DNA is copied into RNA (mRNA) by the enzyme RNA polymerase.
3. In translation, messenger RNA (mRNA)—produced by transcription from DNA—is decoded by a ribosome and tRNA to produce a specific amino acid chain, or protein.
4. Mutations are changes in the genetic sequence (DNA or RNA sequence), that can be beneficial or may result in damage, if not repaired.
Knowledge Check
1. RNA splicing is the process which involves the removal of introns present in the pre-mRNA and splicing of the remaining exons. True or False?
True
False
Answer
true
2. Which of the following does not belong to the process of DNA replication?
Primase
DNA Polymerase
RNA Polymerase
Answer
RNA Polymerase
3. During protein translation, the sequence of codons (triplets of bases) of mRNA is important to:
Maintain the structure of the mRNA
Translate the correct sequence of amino acids.
DNA replication
Answer
Translate the correct sequence of amino acids.
Learning Objectives
• 1: Identify the main epigenetic mechanisms related to control of gene expression.
Introduction
Epigenetics is defined as potentially heritable and reversible changes in gene expression mediated by methylation of DNA, modifications of histone proteins or by non-coding RNAs that are not due to any alteration in the DNA sequence. These processes singularly or jointly affect transcript stability, DNA folding, nucleosome positioning, chromatin compaction, and ultimately nuclear organization. They determine whether a gene is silenced or activated and when and where this occurs.
Epigenetic change is a regular and natural occurrence, essential for normal cell development, but can also be influenced by several factors including age, the environment/lifestyle, and disease state.
Histones Post-Translational Modifications
The nucleosome is associated with five histone proteins (H1, H2A, H2B, H3, and H4). The N-terminal tails of these histone proteins are subject to covalent modifications such as methylation, phosphorylation, acetylation, ubiquitination or sumoylation by a group of histone-modifying enzymes. Alterations in these proteins contribute to the accessibility and compactness of the chromatin and result in activation or suppression of particular genes. Individual marks are written in a compact shorthand (e.g., H3K9ac); a short parsing sketch follows the examples below.
EXAMPLES OF TYPES AND ROLES OF HISTONE MODIFICATIONS
Activation of gene Transcription:
Acetylation of lysine 9 (K9) of histone 3 (H3) - H3K9ac
Acetylation of lysine 27 (K27) of histone 3 (H3) - H3K27ac
Trimethylation of lysine 4 (K4) of histone 3 (H3) - H3K4me3
Repression of gene transcription:
Trimethylation of lysine 9 (K9) of histone 3 (H3) - H3K9me3
Trimethylation of lysine 27 (K27) of histone 3 (H3) - H3K27me3
Trimethylation of lysine 20 (K20) of histone 4 (H4) - H4K20me3
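The shorthand above encodes the histone, the modified residue and its position, and the modification type. The small parser below is only an illustration (the regular expression and labels are my own, not from the text):

```python
import re

# e.g. "H3K9ac" -> histone H3, lysine (K) at position 9, acetylation (ac)
MARK_PATTERN = re.compile(r"^(H[0-9AB]+)([KRST])(\d+)(ac|me1|me2|me3|ph|ub)$")

MODIFICATIONS = {"ac": "acetylation", "me1": "monomethylation", "me2": "dimethylation",
                 "me3": "trimethylation", "ph": "phosphorylation", "ub": "ubiquitination"}

def parse_histone_mark(mark: str) -> dict:
    """Split a mark such as H3K27me3 into its histone, residue, position, and modification."""
    match = MARK_PATTERN.match(mark)
    if not match:
        raise ValueError(f"Unrecognized histone mark: {mark}")
    histone, residue, position, modification = match.groups()
    return {"histone": histone, "residue": residue,
            "position": int(position), "modification": MODIFICATIONS[modification]}

print(parse_histone_mark("H3K9ac"))
# {'histone': 'H3', 'residue': 'K', 'position': 9, 'modification': 'acetylation'}
print(parse_histone_mark("H3K27me3"))
# {'histone': 'H3', 'residue': 'K', 'position': 27, 'modification': 'trimethylation'}
```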
microRNAs
miRNAs have an important role in gene regulation and they can influence biological functions including cell differentiation and proliferation during normal development and pathological responses. They are small non-coding RNA molecules (containing about 22 nucleotides), derived from regions of RNA transcripts that fold back on themselves to form short hairpins. They regulate gene expression at the post-transcriptional level. A number of miRNAs may bind to specific regions of the messenger RNA (mRNA) and block its translation to proteins. Alteration of the expression of miRNAs is believed to contribute to the progression of tumorigenesis and other diseases.
Topic 4: Key Points
In this section, we explored the following main points:
• 1: Epigenetic mechanisms may influence gene expression without alteration in the DNA sequence.
• 2: The main epigenetic mechanisms are DNA methylation, histone post-translational modifications, and alterations in the expression of microRNAs.
• 3: Epigenetic change is a regular and natural occurrence but can also be influenced by several factors including age, the environment/lifestyle, and disease state.
• 4: Different from genetic alterations, epigenetic alterations are considered reversible.
Knowledge Check
1. Epigenetic mechanisms are considered:
Reversible
Irreversible
Answer
Reversible
2. DNA methylation and histone post-translational modifications play important role in the establishment of chromatin structure and in consequence in the gene expression modulation. True or False?
False
True
Answer
True
3. DNA methylation may inhibit the binding of transcription factors to their recognition site, at promoter regions resulting in:
Inhibition of gene transcription
Activation of gene transcription
Answer
Inhibition of gene transcription
Section 2 Final Evaluation
1. The carbohydrate component of a nucleoside and nucleotide is usually:
The sugar ribose for RNA molecule and deoxyribose for DNA molecule
Carboxylic group for DNA molecule and amino group for RNA molecule
The deoxyribose for RNA molecule and sugar ribose for DNA molecule
The deoxyribose for RNA molecule and deoxyribose for DNA molecule
Answer
The sugar ribose for RNA molecule and deoxyribose for DNA molecule
2. During protein translation, the sequence of codons (triplets of bases) of mRNA is important for:
Maintaining the structure of the mRNA
Translating the correct sequence of an amino acid
DNA replication
DNA synthesis
Answer
Translating the correct sequence of an amino acid
3. In DNA replication, short chains of nucleic acids called Okazaki fragments are generated on the lagging strand, and the enzyme DNA ______ joins the Okazaki fragments together so that replication of the lagging strand can proceed.
topoisomerase
polymerase
helicase
ligase
Answer
ligase
4. A point mutation in which the orientation of a single purine and pyrimidine nucleotide pair is reversed is termed:
Transition
Missense
Transversion
Nonsense
Answer
Transversion
5. The main function of _______ is to control the fluidity of biological membranes.
Glycerol
Cholesterol
Glucose
Phospholipid
Answer
Cholesterol
6. The smooth endoplasmic reticulum (ER) is involved in the synthesis of fatty acids and
Detoxification by hydroxylation reactions
Provides mechanical support for cells
Monooxygenase reaction to produce hydrogen peroxide (H2O2)
Production of energy
Answer
Detoxification by hydroxylation reactions
7. The only organelles which are equipped with their own (circular) DNA, RNA and ribosomes and can perform their own protein synthesis are:
Nucleus
Golgi apparatus
Peroxisomes
Mitochondria
Answer
Mitochondria
8. Phospholipids are the major component of cell membranes, forming lipid bilayers because of their amphiphilic property. Their structure consists of:
Two hydrophilic fatty acid “tails” and a hydrophobic head consisting of a phosphate group (PO4-3) attached to the third glycerol carbon.
Two hydrophobic fatty acid “tails” and a hydrophobic head consisting of a phosphate group (PO4-3) attached to the third glycerol carbon.
Two hydrophobic fatty acid “tails” and a hydrophilic head consisting of a phosphate group (PO4-3) attached to the third glycerol carbon.
Four hydrophilic fatty acid “tails” and a hydrophobic head consisting of a phosphate group (PO4-3) attached to the third glycerol carbon.
Answer
Two hydrophobic fatty acid “tails” and a hydrophilic head consisting of a phosphate group (PO4-3) attached to the third glycerol carbon.
9. The most important structural component of a nucleus consisting of positively charged protein and negatively charged DNA is called:
Lysosome
Chromatin
Cytoskeleton
Nucleolus
Answer
chromatin
10. The difference between amino acids and fatty acids in terms of structure is that amino acids carry a carboxyl and an amino group while fatty acids carry a carboxyl and a methyl group.
True
False
Answer
true
11. An example of a nucleotide with a two ring structures is:
Cytosine (C)
Adenine (A)
Uracil (U)
Thymine (T)
Answer
Adenine (A) [and also Guanine(G)]
12. DNA methylation occurs predominantly in ________ to inhibit the binding of transcription factors to their recognition site which results in the inhibition of gene transcription.
Guanine
Cytosine
Adenine
Thymine
Answer
Cytosine (C)
Genetic toxicology methodology, or assay techniques, are used to test or evaluate the level of damage to the genetic information caused by toxicants or agents within cells. In this e-module you will learn about different genetic toxicology assays, different types of genetic damage, and cytotoxicity and epigenetics assays.
03: Principles of Genetic Toxicology
Learning Objectives
• Know about the definition of genetic toxicology assay.
1.1. What is Genetic-Toxicology Assay?
Genetic toxicology methodology, or assay techniques, are used to test or evaluate the level of damage to the genetic information caused by toxicants or agents within cells (Figure 1). The damage results in induced mutations, which may lead to different diseases, including cancer. The causative toxic agents are known as mutagens. Mutagens that cause cancer are known as carcinogens.
Topic 1: Key Points
In this section, we explored the following main points:
• 1: Definition of genetic toxicology methodology
• 2: Causative toxic agents as mutagens.
• 3: Genotoxic damages result in the phenotypically or genotypically mutant cells.
Knowledge Check
Genetic-toxicology methodology or assay technique helps to test...
the level of damages caused by toxicants within the cells.
the level of reactive oxygen species (ROS) within the cells.
the stages of cancer disease.
Answer
the level of damages caused by toxicants within the cells.
3.02: New Page
Learning Objectives
• 1: Know different types of genetic damages or mutations.
• 2: Know how different mutations resulted in the cells by mutagens.
Do you know which types of mutations or damage can be caused by mutagens in cells?
Mutations can be:
• Microlesions (gene mutation)
• Macro lesions (Chromosomal mutation)
2.1: Microlesions (Gene Mutation)
Microlesions are damages or mutations in DNA bases. These mutations do not produce cytologically visible changes. The types of microlesions are illustrated in the figure on the right.
Base-pair substitution mutation (qualitative change in nucleotide pairs)
• In this mutation, a single nucleotide base is replaced by another nucleotide.
Frame shift (quantitative change in nucleotide pairs)
• In this mutation, the addition or deletion of a nucleotide in the DNA sequence shifts the reading frame and changes the entire downstream DNA and amino acid sequence.
Figure \(1\): Micro lesions mutation by mutagens.
2.2 Macro Lesions (Chromosomal Mutation)
Macro lesions are chromosomal mutations caused by mutagens that produce distinct morphological changes in the chromosomes. These morphological changes can be cytologically visible under the microscope. Macro lesions are of the following types:
Numerical changes in chromosomes
• Polyploidy: Duplication of the entire set of chromosomes, producing a triploid or tetraploid cell.
• Aneuploidy: Loss of a single chromosome (monosomy) or gain of a third copy of a single chromosome (trisomy).
Structural changes in chromosomes
• Deletion: loss of chromosome segment
• Translocation: A segment of one chromosome becomes attached to a non-homologous chromosome. It can be a one-way transfer (simple translocation) or a two-way exchange (reciprocal translocation).
• Inversion: A change in the direction of material along a single chromosome.
• Duplication: Repetition of chromosome segment
Micronuclei changes
• Micronuclei (MN) are damaged chromosome fragments or whole chromosomes that were not incorporated into the cell nucleus and remain as extra-nuclear bodies after cell division.
• MN can result from defects in the cell's repair machinery and from the accumulation of damaged DNA and chromosomal aberrations.
Topic 2: Key Points
In this section, we explored the following main points:
• 1: Different types of Microlesions (gene mutation) and Macro-lesions (Chromosomal mutation).
• 2: How toxic agents or mutagens produce the two different microlesions, namely base-pair substitution and frame shift mutation.
• 3: Genotoxic mutagens are also involved in different macro lesions, namely numerical or structural changes in chromosomes and the formation of micronuclei in cells.
Knowledge Check
1. What is monosomy?
Duplication of the entire set of chromosomes.
Single missing chromosome from diploid set.
Three copies of a single chromosome.
Answer
Single missing chromosome from diploid set.
2. In base-pair substitution mutation, single base nucleotide is replaced by another nucleotide.
True
False
Answer
True
3. What are the structural changes in chromosomes caused by toxicants?
Deletion
Translocation
Inversion
Duplication
All of the above
Answer
All of the above
4. Micronuclei (MN) changes are the damaged chromosome fragments or whole chromosomes that were not incorporated into the cell nucleus and stayed as the extra-nuclear bodies after the cell division.
True
False
Answer
true
5. Frame shift mutation results in a shift or change of the entire DNA or amino acid sequence through:
addition of a nucleotide in the DNA.
deletion of a nucleotide in the DNA.
addition or deletion of a nucleotide in the DNA.
Answer
addition or deletion of a nucleotide in the DNA.
Learning Objectives
• 1: Know different types of genetic-toxicology assays
• 2: Know how different genetic-toxicology assays are used in toxicology when cells are exposed to mutagens.
The goal of a genetic toxicology assay is to determine whether a chemical or mutagen will have an adverse effect on genetic material and may cause different diseases, including cancer. The assays can be performed using bacterial, yeast, or mammalian cells. By performing genetic toxicology assays, genotoxic chemicals can be identified early and vulnerable organisms can be protected from them.
The following types of genetic toxicology assays are commonly used today:
• Bacterial Reverse Mutation Assay (Ames Assay)
• Genetic mutation assay
• Allele-Specific PCR
• Sanger Dideoxy Sequencing
• Chromosome aberration study
• Micronucleus assay
3.1: Bacterial Reverse Mutation Assay (Ames Assay )
This assay was developed by Bruce Ames in 1970. It is widely used to test for gene mutation. The technique uses several strains of the bacterium Salmonella typhimurium which carry mutations in genes involved in histidine synthesis. These strains are auxotrophic mutants: they require histidine for growth and cannot produce it. The assay examines the ability of a chemical or mutagen to create reverse mutations that restore a "prototrophic" state, in which the strains can grow on a histidine-free medium.
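Results are often summarized as the fold increase in revertant colonies over the solvent control; a common (though not universal) rule of thumb treats a reproducible, dose-related increase of at least two-fold as a positive result. The sketch below uses invented plate counts purely to illustrate that calculation:

```python
def fold_increase(treated_revertants: int, control_revertants: int) -> float:
    """Fold increase in revertant colonies relative to the solvent control plate."""
    return treated_revertants / control_revertants

control = 25                         # hypothetical revertant colonies on the control plate
doses = {10: 30, 50: 55, 100: 110}   # dose (ug/plate) -> revertant colonies (invented data)

for dose, count in sorted(doses.items()):
    ratio = fold_increase(count, control)
    call = "positive" if ratio >= 2.0 else "negative"
    print(f"{dose:>4} ug/plate: {ratio:.1f}-fold over control -> {call} (2-fold rule of thumb)")
```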
3.2: Genetic Mutation Assay
The following molecular assays are used to study nucleotide variants or alterations of genetic material caused by mutagens:
Allele-Specific PCR
Single nucleotide polymorphisms (SNPs) resulting from base substitution mutations can be analyzed by this method. In this real-time PCR, fluorescent reporter probes are added to the reaction mixture; one fluorescent reporter probe is selected for the wild type and another fluorescent probe is used for the mutant. The PCR primers, with their fluorescent probes, will match or mismatch one of the alleles at the 3′ end of the primer. DNA polymerase extends the matching primers in a complementary fashion, releasing the reporter fluorescent molecules for detection. The PCR cycles with the reporter probes show the amplified signals and allow precise measurement of one or both alleles of interest. Similarly, the 3′ end of the mutant-specific primer is extended only in the presence of DNA carrying that mutation.
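The key idea is that extension proceeds only when the 3′-terminal base of the allele-specific primer matches the allele present in the template. The toy model below only illustrates that decision rule (the bases are invented, everything is written on one strand for simplicity, and real assays also depend on polymerase and probe chemistry):

```python
def amplifies(primer_3prime_base: str, template_base_at_snp: str) -> bool:
    """Toy rule: extension occurs only if the primer's 3'-terminal base matches the allele."""
    return primer_3prime_base.upper() == template_base_at_snp.upper()

# Hypothetical SNP with wild-type allele 'C' and mutant allele 'T'.
wild_type_primer_end = "C"    # wild-type-specific primer ends in C
mutant_primer_end = "T"       # mutant-specific primer ends in T

for template_name, allele in [("wild-type template", "C"), ("mutant template", "T")]:
    wt_signal = amplifies(wild_type_primer_end, allele)
    mut_signal = amplifies(mutant_primer_end, allele)
    print(f"{template_name}: wild-type primer -> {wt_signal}, mutant primer -> {mut_signal}")
# wild-type template: wild-type primer -> True, mutant primer -> False
# mutant template: wild-type primer -> False, mutant primer -> True
```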
Sanger Dideoxy Sequencing
The goal of this method is to detect unknown mutations, including single nucleotide variants (SNVs) and small duplications, insertions, deletions, and indels of interest caused by mutagens. In this method, sequencing primers hybridize to the PCR product and are extended using the four deoxynucleotides (dNTPs), a mixture of fluorescently labeled dideoxynucleotides (ddNTPs), and DNA polymerase. Each of the four ddNTPs is marked with a different fluorescent dye. Random incorporation of the marked ddNTPs results in termination of strands at each location along the sequence. Gel electrophoresis separates the strands by size, and the terminating nucleotides are detected by fluorescence spectroscopy.
3.3: Chromosome Aberration Study
Cytogenetic assays of mammalian cells are performed to detect different types of structural and numerical chromosomal aberrations caused by genotoxic chemicals. The clastogenic or aneugenic effects of genotoxic chemicals result in an increased frequency of structural aberrations (premature centric separation, chromosome breaks, dicentric chromosomes, ring chromosomes, complex rearrangements; Figure 4) or numerical aberrations of the genetic material in mammalian cells.
3.4: Micronucleus Assay
Micronucleus assay is used as a tool to evaluate genetic damage caused by genotoxic chemicals. The number of micronuclei (Figure 5) generated directly relates to the amount of DNA damage in the cells.
Topic 3: Key Points
In this section, we explored the following main points:
• 1: Definition of genetic toxicology assay
• 2: Different types of genetic-toxicology assays.
• 3: How genetic mutation assays are performed by Allele-Specific PCR and Sanger Dideoxy Sequencing techniques.
• 4: The different types of chromosomal aberrations observed under the microscope in a chromosome aberration study.
• 5: The changes in micronuclei observed under the microscope in the micronucleus assay.
Knowledge Check
1. Cytogenetic assays of mammalian cells are performed to detect different types of structural and numerical chromosomal aberrations caused by genotoxic chemicals. The structural chromosomal aberrations are:
premature centric separation
ring
chromosome breaks
dicentric chromosomes
All of the above
Answer
All of the above
2. In Allele-Specific PCR, fluorescent reporter probes are added to the reaction mixture and one fluorescent reporter probe is selected for wild type and other fluorescent probe is used for mutant.
True
False
Answer
True
3. Which instrument is used to measure the terminating nucleotides in Sanger Dideoxy Sequencing?
Fluorescence spectroscopy
Spectrophotometer
Fluorescence microscopy
None of the above
Answer
Fluorescence spectroscopy
4. The Ames technique uses several strains of the bacterium Salmonella typhimurium which carry mutations in genes involved in:
Arginine Synthesis.
Histidine synthesis.
Lysine Synthesis.
None of the above.
Answer
Histidine synthesis.
Learning Objectives
• 1: Know different types of cytotoxicity assays.
• 2: Know how different cytotoxicity assays are used when cells are exposed to toxicants or mutagens.
The goal of a cytotoxicity assay is to determine whether a chemical or drug has a toxic effect on the cellular milieu or genetic material that causes lethality of the cells or leads to different diseases.
The following are the different types of cytotoxicity assays:
• DNA fragmentation/ladder assay
• Comet assay
• Necrosis assay
• Enzyme assay
• Proteomics assay
• Expression array assay
4.1: DNA Fragmentation/Ladder Assay
The DNA fragmentation or ladder assay is used to detect fragmentation of cellular DNA caused by chemicals or drugs. Fragmented DNA can be separated by agarose gel electrophoresis and visualized as a "ladder" by ethidium bromide staining. The evaluation of cytotoxicity through cell death is a commonly accepted assessment.
Ladder assays are performed for the following reasons:
1. to simply characterize the toxicity of the chemicals or drugs in cells, or
2. to determine the maximum doses of the test chemicals or drugs that can be used for cells without causing too much cell death.
4.2: Comet Assay
The Comet Assay (single cell gel electrophoresis, SCGE) is used to detect DNA damage by using micro gel electrophoresis. The image of the damaged DNA shows a comet with a head and a tail. Image analysis for the comet assay calculates the "tail length" of the comet, measured from the point of highest intensity within the comet head, as well as the "tail moment," which is the product of the tail length and the fraction of total DNA present within the tail.
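Since the tail moment defined above is just the product of two measured quantities, it is simple to compute; the values below are invented example measurements, not reference data:

```python
def tail_moment(tail_length_um: float, fraction_dna_in_tail: float) -> float:
    """Tail moment = tail length x fraction of total DNA present in the tail."""
    if not 0.0 <= fraction_dna_in_tail <= 1.0:
        raise ValueError("fraction of DNA in the tail must be between 0 and 1")
    return tail_length_um * fraction_dna_in_tail

print(tail_moment(5.0, 0.05))    # 0.25 -> little DNA damage (hypothetical control cell)
print(tail_moment(40.0, 0.60))   # 24.0 -> heavily damaged cell (hypothetical treated cell)
```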
4.3: Necrosis Assay
The necrosis assay is performed by flow cytometry analysis with staining of Annexin V and propidium iodide (PI) in the cells. Cells are considered to be in the stage of necrosis if they lose membrane integrity and die promptly due to cell lysis when exposed to chemicals, drugs, toxins or foreign antigens. In necrosis, cells show swelling, loss of membrane integrity and disruption of metabolism. Necrotic cells do not go through the stage of apoptosis, although apoptotic cells may undergo secondary necrosis. Necrotic cells shut down metabolism, lose membrane integrity, lyse, and undergo autolysis as a result of cell injury.
In the flow cytometric analysis for the necrosis assay, cells that stain positive for both FITC Annexin V and PI are in the end stage of apoptosis or are undergoing necrosis (dead cells stain PI positive). Cells that stain negative for both FITC Annexin V and PI are alive and not undergoing apoptosis or necrosis.
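The two-stain readout is commonly interpreted as four quadrants. The sketch below encodes that interpretation per cell; the labels are illustrative, and gating thresholds on a real instrument are set experimentally:

```python
def classify_cell(annexin_v_positive: bool, pi_positive: bool) -> str:
    """Interpret FITC Annexin V / propidium iodide (PI) staining for a single cell."""
    if not annexin_v_positive and not pi_positive:
        return "viable (no apoptosis or necrosis)"
    if annexin_v_positive and not pi_positive:
        return "early apoptotic"
    if annexin_v_positive and pi_positive:
        return "late apoptotic / necrotic"
    return "necrotic or damaged (PI positive only)"

print(classify_cell(False, False))  # viable (no apoptosis or necrosis)
print(classify_cell(True, True))    # late apoptotic / necrotic
```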
4.4: Enzyme Assay
The enzyme assay is used to monitor the release of lactate dehydrogenase (LDH) due to loss of cell membrane integrity when cells are exposed to cytotoxic compounds. LDH reduces NAD to NADH, which generates a color change by interaction with a specific probe (Figure 4a). In another enzyme assay, an adenosine triphosphate (ATP)-based assay combined with a bioluminescent readout is used to measure cytotoxicity; here ATP is the limiting reagent for the luciferase reaction.
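LDH release is typically reported as percent cytotoxicity relative to a spontaneous-release control and a maximum-release (fully lysed) control. A hedged sketch of that calculation with invented absorbance readings (exact control wells vary between kits):

```python
def percent_cytotoxicity(experimental: float, spontaneous: float, maximum: float) -> float:
    """% cytotoxicity = (experimental - spontaneous) / (maximum - spontaneous) * 100."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100.0

# Hypothetical absorbance readings from the LDH colorimetric reaction.
value = percent_cytotoxicity(experimental=0.62, spontaneous=0.20, maximum=1.25)
print(f"{value:.1f}% cytotoxicity")   # 40.0% cytotoxicity
```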
4.5: Proteomics Assay
The proteomic assay is performed to investigate the mechanism of cellular toxicity by measuring the expression of a specific protein, which may serve as a biomarker for a particular toxic mechanism or cellular toxicity signaling pathway. Immunofluorescence, immunoprecipitation and immunoblot assays are mainly used to determine the effect of toxicants on cellular toxicity signaling pathways or mechanisms.
4.6: Expression Array Assay
The expression array is a chip-based microarray that profiles the expression of many genes (a gene-expression fingerprint) in response to cellular toxicants. This is a rapid and sensitive detection method that allows toxicological endpoints to be detected across a wide range of molecular-level changes in the cell in a single assay. The microarray process can be divided into two main parts. The first is the printing of known gene sequences onto glass slides or another solid support, followed by hybridization of fluorescently labeled cDNA (containing the unknown sequences to be interrogated) to the known genes immobilized on the glass slide. After hybridization, arrays are scanned using a fluorescent microarray scanner. Analyzing the relative fluorescent intensity of different genes provides a measure of the differences in gene expression.
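For a two-color array, the per-spot comparison boils down to the ratio of the two fluorescence intensities, usually reported on a log2 scale. A minimal sketch with made-up, background-corrected intensities:

```python
import math

def log2_ratio(cy5_intensity: float, cy3_intensity: float) -> float:
    """log2(Cy5/Cy3) for one spot: positive means higher signal in the Cy5 (red) channel."""
    return math.log2(cy5_intensity / cy3_intensity)

# Hypothetical intensities (Cy5, Cy3) for three genes on the array.
spots = {"geneA": (12000.0, 3000.0), "geneB": (2500.0, 2400.0), "geneC": (800.0, 6400.0)}

for gene, (cy5, cy3) in spots.items():
    print(f"{gene}: log2 ratio = {log2_ratio(cy5, cy3):+.2f}")
# geneA: +2.00 (higher in Cy5), geneB: +0.06 (essentially unchanged), geneC: -3.00 (lower in Cy5)
```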
Topic 4: Key Points
In this section, we explored the following main points:
• 1: Different types of cytotoxicity assays.
• 2: How the different cytotoxicity assays, namely the DNA fragmentation/ladder assay, Comet assay, Necrosis assay, Enzyme assay, Proteomics assay, and Expression array assay, are used when cells are exposed to cytotoxic agents.
Knowledge Check
1. The cells are considered in the stage of necrosis, if the cells lose membrane integrity and die promptly due to cell lysis when exposed to chemicals, drugs, toxins or foreign antigens.
True
False
Answer
True
2. The Comet Assay is used to detect DNA damage by using a micro gel electrophoresis:
True
False
Answer
True
3. The expression array is the chip based microarray of more gene expressions (finger print of genes) by the effect of cellular toxicants.
True
False
Answer
True
4. Ladder assay are performed for the following reasons: 1) to simply characterize the toxicity of the chemicals or drugs in cells, or 2) to determine the maximum doses of the test chemicals or drugs that can be used for cells without causing too much cell death.
True
False
Answer
True
5. What are the proteomic assays to know the effect of toxicants in cellular toxicity signaling pathway or mechanism?
Immunofluorescence assay
ring
Immunoprecipitation assay
Immunoblot assay
All of the above
Answer
All of the above
Learning Objectives
• 1: Know different types of epigenetic assays.
• 2: Know how different epigenetic assays are used when cells are exposed to toxicants or mutagens.
5.1: DNA Methylation Assay
DNA methylation assays detect this epigenetic modification, which is a heritable, enzyme-induced modification that does not alter the nucleotide base pairs. In DNA methylation, a methyl group is transferred to the 5-carbon of the cytosine in a CpG dinucleotide by DNA methyltransferases (DNMT1, DNMT3A, and DNMT3B). A high level of promoter CpG island methylation results in gene silencing. The methylated DNA immunoprecipitation (MeDIP)-chip technique is used for the DNA methylation assay.
In brief, the MeDIP-chip procedure is as follows. The genomic DNA is sheared into low-molecular-weight fragments (approximately 400 bp) by sonication. The methylated DNA is then immunoprecipitated with an anti-methyl-cytosine antibody and amplified by PCR if the source material is limited. Input and methylated DNA are labeled with the fluorescent dyes Cy3 (green) and Cy5 (red), pooled, denatured, and hybridized to a microarray slide containing all the annotated human CpG islands or another whole-genome or promoter microarray design. The slide is then scanned, and each image is analyzed with image analysis software (Figure 1).
5.2: Histone Modification Assay
Histone modification assays are used to detect the modifications of histone proteins (e.g., lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) that have important roles in epigenetic inheritance. The chromatin immunoprecipitation (ChIP) assay followed by hybridization to microarrays (ChIP-chip, left) or by high-throughput sequencing (ChIP-seq, right) are both powerful techniques for detecting histone modifications.
5.3: MicroRNAs Assay
MicroRNA assays are used to study these non-coding RNAs (17–25 nucleotides), which target messenger RNAs (mRNAs) and promote their decay or downregulate them at the level of translation into protein. Almost 60% of human protein-coding genes are controlled by miRNAs, and these miRNAs are themselves epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, which may be repressed by epigenetic methylation. Other miRNAs are epigenetically controlled either by histone modifications or by DNA methylation. The expression of microRNAs is quantified by reverse transcription (RT-PCR) followed by quantitative PCR (qPCR). miRNAs can also be hybridized to microarrays, slides or chips with probes to hundreds or thousands of miRNA targets. MicroRNAs can be both discovered and profiled by sequencing methods (microRNA sequencing).
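Relative miRNA expression from qPCR data is commonly summarized with the 2^-ΔΔCt method, normalizing the target miRNA to a small-RNA reference (such as U6) and to a control sample. The Ct values below are invented for illustration:

```python
def fold_change_ddct(ct_target_treated: float, ct_reference_treated: float,
                     ct_target_control: float, ct_reference_control: float) -> float:
    """Relative expression by the 2^-ddCt method (target normalized to reference and control)."""
    delta_ct_treated = ct_target_treated - ct_reference_treated
    delta_ct_control = ct_target_control - ct_reference_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for one miRNA normalized to a U6 reference.
fold = fold_change_ddct(ct_target_treated=24.0, ct_reference_treated=18.0,
                        ct_target_control=26.5, ct_reference_control=18.2)
print(f"{fold:.1f}-fold")   # ~4.9-fold higher expression in treated cells
```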
Topic 5: Key Points
In this section, we explored the following main points:
• 1: Different types of Epigenetic assays.
• 2: How different Epigenetic assays namely DNA methylation assay, histone modification assay and MicroRNAs assay are used when cells are exposed to toxic chemicals or agents.
Knowledge Check
1. DNA methylation assays are important to know the non-epigenetic modification.
True
False
Answer
false
2. MicroRNAs assays are used to know the non-coding RNAs. These non-coding RNAs are :
17 to 25 nucleotides.
50 to 100 nucleotides.
200 to 400 nucleotides.
None of the above.
Answer
17 to 25 nucleotides.
3. Histone modification assays are useful to find the modification of histone proteins which have important roles in epigenetic inheritance:
True
False
Answer
true
Section 3 Final Evaluation
1. _______ is a method used to analyze single nucleotide polymorphisms resulting from base substitution mutations.
Micronucleus assay
Chromosome aberration study
Allele-Specific PCR
Chromatography
Answer
Allele-Specific PCR
2. Proteomic assay analyzes the effect of toxicants in cellular toxicity signaling pathways or mechanisms through:
Immunofluorescence
Immunoblot
Immunoprecipitation
All of the above
Answer
All of the above
3. In Ladder Assay, fragmented DNA can be separated by agarose gel electrophoresis and can be visualized as “ladder” by __________ staining.
Ethidium bromide
Eosin
Gram
Wright’s
Answer
Ethidium bromide
4. Trisomy is a form of aneuploidy interpreted as:
Single missing chromosome from diploid set.
Three copies of a single chromosome from a diploid set
Three copies of a single chromosome from a triploid set
Single missing chromosome from a triploid set
Answer
Three copies of a single chromosome from a diploid set
5. Gene mutations in which a single base nucleotide is replaced by another nucleotide are known as:
Frame shift
Quantitative change in nucleotide
Qualitative change in nucleotide
Left shift
Answer
Qualitative change in nucleotide
6. Some structural chromosomal aberrations caused by genotoxic chemicals in cytogenetic assays of mammalian cells include the following EXCEPT:
Dicentric Chromosomes
Ring Chromosomes
Spiral Chromosomes
Chromosome breaks
Answer
Spiral Chromosomes
7. Structural changes in chromosomes include the following EXCEPT:
Aneuploidy
Inversion
Translocation
Deletion
Answer
Aneuploidy
8. Addition or deletion of nucleotides in the DNA sequence results in the change of the entire DNA or amino acid sequence. This process is known as:
Frame shift
Base-pair substitution mutation
Qualitative change in nucleotide
Right shift
Answer
Frame shift
9. In Allele-Specific PCR, fluorescent reporter probes are added to the reaction mixture, one fluorescent reporter probe is selected for the wild type and the other fluorescent probe is used for the mutant.
True
False
Answer
true
10. The analysis of image to determine DNA damage in comet assay is calculated for the ______ and _______.
“tail length” and “head length”
“head length” and “head moment”
“head length” and “tail moment”
“tail length” and “tail moment”
Answer
“tail length” and “tail moment”
11. Cells which stain negative for both fluorescein isothiocyanate Annexin V and propidium iodide in flow cytometric analysis for necrosis assay are:
Dead and undergoing apoptosis and necrosis
Alive and not undergoing apoptosis or necrosis
Dead and undergoing necrosis only
Alive and undergoing apoptosis only
Answer
Alive and not undergoing apoptosis or necrosis
12. Histone Modification Assay uses Chromatin immunoprecipitation assay (ChIP) followed by:
Hybridization to microarrays (ChIP-chip)
Immunofluorescence
Fluorescence spectroscopy
Fluorescence microscope
Answer
Hybridization to microarrays (ChIP-chip)
Systems toxicology is a branch of science that utilizes data from different branches of toxicology and integrates them to provide a holistic approach for safety assessment. In this e-module you will learn about the concept of systems toxicology, dose level in toxicology, and different approaches to traditional and new toxicology.
04: Applied Systems Toxicology
Learning Objectives
• 1: Understand the concept of systems toxicology.
• 2: Understand traditional toxicology approaches vs. the new toxicity testing paradigm.
• 3: Recognize the driving force behind the growth of this field.
• 4: Recognize applications of this field.
The word “systems” originates from the Latin systema (itself from the Greek systēma), meaning a whole made up of several parts. Similarly, systems toxicology is a branch of science that utilizes data from different branches of toxicology and integrates them to provide a holistic approach to safety assessment.
Toxicology is the science of understanding the adverse effects of xenobiotics (drugs, chemicals, etc.) on biological systems. Biological systems are extremely complex. Toxicology research over the years has generated a vast amount of data across different systems (in vivo, in vitro, and in silico, especially through “omics” approaches). However, much of that data is not yet interpreted or utilized efficiently for the safety assessment of xenobiotics.
Systems toxicology aims to fill this gap by utilizing data from these different systems and integrating them into a meaningful safety assessment. It relies heavily on mathematical and computational models to link the data from the various systems. Therefore, in order to fully validate systems toxicology approaches, it is important to have “real” (in-life) data from animal models against which the hypotheses can be tested.
The Driving Force
The main driving force behind the development of systems toxicology approaches is that the overall “safety assessment” process is lengthy and expensive, for chemicals as well as for the pharmaceutical industry. To make this process more efficient, early candidate selection and screening of pharmaceutical and chemical compounds (especially pharmaceuticals) is important. Screening thousands of compounds is a lengthy process, and current high-throughput screening approaches, together with large-volume data analysis techniques, have enabled more efficient selection of target molecules.
Topic 1: Key Points
In this section, we explored the following main points:
• 1: The concept of systems toxicology, its different approaches, and the driving force behind the development of this field.
Knowledge Check
1. Systems toxicology is usually applicable in the ...
Early discovery phases of drug development
During marketing of the drug
During regulatory safety testing phases
None of the above
Answer
Early discovery phases of drug development
2. Systems toxicology approach involves...
Traditional animal experiments
Alternative in vitro methods
Computational and mathematical models
All of the above
Answer
All of the above
4.02: New Page
Learning Objectives
After completing this lesson, you will be able to:
• 1: Assess toxicity in response to dose levels.
As students of toxicology, it is critically important to understand that everything, including water and oxygen, has the potential to act as a poison. It is the dose that determines whether the effect is toxic or beneficial.
Dose level and Applied Toxicology
While traditional toxicology approaches in many academic laboratories, and also in industry, utilized very high dose levels of xenobiotics in order to study various mechanisms of toxicity, it has recently been recognized that this is not an ideal approach, especially for industrial applications.
• Using very high dose levels may saturate the physiological processes in a living system and hence may cause “forced toxicity,” which is not very relevant to real-life exposures (though it could be relevant in cases of overdose or accidental overexposure).
In industry, a lot of preliminary research is conducted in order to determine dose levels for toxicology experiments. Typically, for pharmaceutical compounds, extensive pharmacokinetic modeling and simulation is performed to derive exposure levels that are multiples of the actual drug exposure level in humans. In the chemical industry, relevant exposure levels are determined based on how human beings may be exposed to the chemical through its use.
Dose Level Selection
• 1: Traditional toxicity testing involved using a large number of animals and very high dose levels. Study designs such as the LD50 (the dose lethal to 50% of the animals in the study) are no longer used; they are not justifiable from a scientific or animal welfare point of view.
• 2: Nowadays, dose level selection often involves mathematical simulation and modeling utilizing various forms of in vitro data that are analyzed with the help of medium- and high-throughput modeling tools. This is a systems toxicology approach in which data from different platforms are used to arrive at dose levels that are scientifically justifiable and also to refine and minimize animal experiments; a simplified dose-scaling example is sketched below.
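As a simple illustration of this kind of cross-species reasoning, the sketch below converts an animal dose to a human equivalent dose (HED) using the widely cited body-surface-area scaling exponent of 0.33 on the body-weight ratio. All numbers (the rat dose, body weights, and the 10-fold safety factor) are hypothetical placeholders chosen for illustration; actual dose selection combines this type of scaling with pharmacokinetic data, no-observed-adverse-effect levels, and study-specific considerations.

```python
# Minimal sketch: body-surface-area (allometric) scaling of an animal dose
# to a human equivalent dose (HED). The 0.33 exponent is the commonly cited
# default; every numeric value below is an illustrative placeholder.

def human_equivalent_dose(animal_dose_mg_per_kg, animal_weight_kg, human_weight_kg=60.0):
    """Convert an animal dose (mg/kg) to an approximate HED (mg/kg)."""
    return animal_dose_mg_per_kg * (animal_weight_kg / human_weight_kg) ** 0.33

rat_dose_mg_per_kg = 50.0            # hypothetical dose observed in a rat study
hed = human_equivalent_dose(rat_dose_mg_per_kg, animal_weight_kg=0.25)

print(f"Human equivalent dose: {hed:.1f} mg/kg")         # roughly 8 mg/kg
print(f"With a 10x safety factor: {hed / 10:.2f} mg/kg")  # margin before setting exposure multiples
```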
Schematic Approach: Systems Toxicology
Key Points
In this section, we explored the following main points:
• 1: The understanding of how toxicity is driven strongly by the dose-response relationship.
Knowledge Check
1. Utilizing very high dose levels in animal models is not useful because
It causes forced toxicity due to saturation of the absorption and elimination processes
It is too toxic and no meaningful information is available from such data
It is necessary for planning acute studies
It causes a lot of wastage of test article
Answer
It causes forced toxicity due to saturation of the absorption and elimination processes
2. Traditional LD50 studies evaluated...
Adverse effects in 50 animals per dose group
Death in 50 % of the population
Effects in 50% of the animal population
Doses to be used in subsequent chronic studies
Answer
Death in 50% of the population
Learning Objectives
• 1: Understand tools and technologies used in systems toxicology, the concept of adverse effects and Adverse Outcome Pathways.
Tools and Technologies
1. Tools at the molecular level: Omics technologies such as genomics, proteomics, and metabolomics.
2. Tools at the cellular level include highly refined high-throughput in vitro model systems.
3. Modeling software that can utilize in vitro data and mathematically translate it into relevant in vivo information using pharmacokinetic (PK) or physiologically based pharmacokinetic (PBPK) models. This is known as in vitro to in vivo extrapolation (IVIVE); a simplified numerical sketch of this idea follows the list below.
4. Modeling software that can perform sophisticated species scaling (prediction of human parameters from nonclinical species such as rat, dog, monkey, etc.).
5. Risk assessment and exposure modeling tools that enable calculation of hazard and risk in populations as a whole and in specific subpopulations.
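To make the IVIVE idea in point 3 above more concrete, here is a deliberately oversimplified numerical sketch: an in vitro intrinsic clearance measured in hepatocytes is scaled up to a whole-body clearance, which is then used in a one-compartment steady-state calculation. Every value and scaling factor is a hypothetical placeholder, and real IVIVE/PBPK software additionally accounts for protein binding, hepatic blood flow, absorption, and multiple tissue compartments.

```python
# Hypothetical IVIVE-style sketch: scale an in vitro intrinsic clearance to a
# whole-body clearance, then estimate steady-state plasma concentration with a
# one-compartment model (Css = dosing rate / clearance). Values are illustrative.

clint_ul_per_min_per_million_cells = 5.0   # in vitro clearance (hypothetical)
hepatocytes_per_g_liver = 120e6            # assumed hepatocellularity
liver_weight_g = 1800.0                    # approximate adult human liver

# Scale to whole-body intrinsic clearance in L/h
clint_l_per_h = (clint_ul_per_min_per_million_cells
                 * (hepatocytes_per_g_liver / 1e6)  # uL/min per g liver
                 * liver_weight_g                   # uL/min for the whole liver
                 * 60 / 1e6)                        # convert uL/min -> L/h

daily_dose_mg = 100.0                      # hypothetical daily oral dose
dose_rate_mg_per_h = daily_dose_mg / 24.0
css_mg_per_l = dose_rate_mg_per_h / clint_l_per_h

print(f"Scaled clearance: {clint_l_per_h:.1f} L/h")                    # ~65 L/h
print(f"Predicted steady-state concentration: {css_mg_per_l:.3f} mg/L")
```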
Toxicity Testing in Chemicals vs. Pharmaceuticals
Regulatory toxicity testing differs for pharmaceutical compounds versus chemical and agrochemical compounds. Nonclinical safety assessment for pharmaceutical products is governed by the different stages of drug development, while the toxicological data requirement for chemicals/agrochemicals is based on the amount (tonnage) produced. While most pharmaceutical compounds undergo extensive animal toxicity testing, there are thousands of chemicals in commerce today that have undergone very limited or no toxicity testing. To address this, several governmental mandates are being put into action.
Government Initiatives & Mandates for 21st Century Toxicity Testing
• In Europe, the Registration Evaluation Authorization and Restriction of Chemicals (REACH) was initially implemented in 2007.
• This substantially altered the safety testing performed on new as well as existing chemicals.
• In the United States, too, several initiatives are underway to increase safety testing on more and more chemicals, which would increase the cost of safety assessments astronomically.
• The regulations under REACH have been estimated to directly cost the industry more than 4.2 billion dollars (Brown, 2003).
21st Century Toxicity Testing
Traditional toxicity testing involves the use of a large number of animals and is an extremely expensive and time-consuming process. In order to address the large number of untested chemicals, the US Environmental Protection Agency (EPA) initiated the ToxCast program. ToxCast is a high-throughput screening program that enables the prioritization of chemicals so that resources can be channeled toward those chemicals that pose the greatest risk to human safety.
THE TOXCAST PROGRAM
The ToxCast program developed and utilized automated in vitro assays (to test the effects of chemicals on various biological processes using living cells, isolated proteins, etc.). The assay designs included endpoints such as cytotoxicity, enzyme activity, endocrine endpoints, and gene expression. A total of approximately 600 endpoints were evaluated.
The biggest questions to answer were how relevant these data are to human safety and how to utilize this in vitro information to predict human safety.
Figure \(2\): National Institute of Environmental Health Sciences – NIH, 50th Anniversary
The Overall Perspective
In vitro assays may be relevant in light of the fact that overall disease/toxicology processes are actually mediated by molecular and cellular perturbations. However, the overall picture at the whole-organism level includes several other complex factors such as the pharmacokinetics of the compound, metabolism, clearance, etc. This gap could be filled by computational modeling tools that utilize the in vitro data and integrate them into human physiology with the help of pharmacokinetic (PK) or physiologically based pharmacokinetic (PBPK) models.
Understanding the Concept of Effects vs. Adverse Effects
In order to conduct safety assessments it is important to understand the concept of “adverse effects” versus simply “effects.” It is also important to understand the biological relevance of isolated in vitro molecular assays as it pertains to the whole organism.
Adverse Outcome Pathways (AOPs) have been developed to link a causal molecular initiating event, through a host of intermediate processes at the cellular level, to an adverse outcome (AO) in the whole organism that can be used for safety assessment purposes.
Adverse Outcome Pathways
The AOP programme was launched by the Organization for Economic Co-operation and Development (OECD) in 2012. The objective was to link the main molecular initiating event with the phenotypic/functional toxicity/adverse effect at the organism level.
Using Systems Toxicology Approach For Toxicity Predictions
Molecular initiating events can be used to select in vitro assays with potential for predicting toxicity at the whole-organism level.
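The sketch below is a purely schematic illustration of how an AOP can be thought of as an ordered chain from a molecular initiating event (MIE) through key events to an adverse outcome, and how in vitro assay hits mapped to that MIE might be used to flag compounds for follow-up testing. The pathway, assay names, and compounds are all invented for illustration and are not drawn from the OECD AOP knowledge base.

```python
# Schematic, invented example of an Adverse Outcome Pathway (AOP) and a toy
# screen that flags compounds whose positive in vitro assays map to the MIE.

aop_example = {
    "MIE": "Receptor X activation",          # molecular initiating event (hypothetical)
    "key_events": ["Altered gene expression", "Sustained cell proliferation"],
    "AO": "Tumor formation at the organism level",
}

# Hypothetical mapping of high-throughput assays to the MIE they probe
assays_for_mie = {"Receptor X activation": {"assay_receptorX_agonist"}}

# Toy screening results: compound -> set of assays with a positive response
screen_hits = {
    "compound_A": {"assay_receptorX_agonist", "assay_cytotoxicity"},
    "compound_B": {"assay_cytotoxicity"},
}

def flag_for_aop(hits, aop, assay_map):
    """Return compounds positive in any assay mapped to this AOP's MIE."""
    relevant = assay_map.get(aop["MIE"], set())
    return [name for name, positives in hits.items() if positives & relevant]

print(flag_for_aop(screen_hits, aop_example, assays_for_mie))
# -> ['compound_A']: a candidate for follow-up testing along this pathway
```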
Topic 3: Key Points
In this section, we explored the following main points:
• 1: Tools and Technologies, used in the field of systems toxicology.
• 2: Concept of adverse effects and AOPs.
Knowledge Check
1. The ToxCast program utilizes which kind of assays?
In vivo
In silico
In vitro
None of the above
Answer
In vitro
2. Adverse Outcome Pathway (AOP) Programme was launched by...
National Toxicology Program (NTP)
National Institute of Health (NIH)
Environmental protection Agency (EPA)
Organization for Economic Co-Operation and Development
Answer
Organization for Economic Co-Operation and Development
4.04: New Page
Learning Objectives
• 1: Understand what Quantitative Structure Activity Relationships are and how they are used in the field of systems toxicology.
Quantitative Structure Activity Relationships (QSAR)
QSAR models are classification models used to link particular structural features of molecules to causative/adverse effects at the cellular or organism level. This is based on the premise that structurally similar compounds tend to have similar mechanisms of mediating toxicity. For example, compounds belonging to the structural class of triazoles have been reported to cause developmental malformations such as cleft palate in rats.
Mechanisms at the cellular level include events such as a positive response in the TGFb1 signaling pathway. This information can be used to build a molecular fingerprint in which compounds having chemical structures similar to triazoles and showing a positive response in the TGFb1 signaling pathway could be flagged as potential developmental toxicants (teratogens).
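A minimal sketch of this flagging logic is shown below. It stands in for real QSAR software: the triazole "substructure" and TGFb1 assay results are simple boolean placeholders, whereas actual tools compute structural descriptors from the molecule itself and apply statistically validated models rather than a single rule.

```python
# Hypothetical sketch of the combined structure + pathway-response flag
# described above. All compound names and flags are invented placeholders.

compounds = [
    # (name, has_triazole_like_substructure, positive_in_TGFb1_assay)
    ("triazole_analog_1",  True,  True),
    ("triazole_analog_2",  True,  False),
    ("unrelated_compound", False, True),
]

def flag_potential_teratogens(records):
    """Flag compounds sharing the triazole-like substructure AND a positive
    TGFb1 pathway response (the molecular 'fingerprint' described above)."""
    return [name for name, has_triazole, tgfb1_positive in records
            if has_triazole and tgfb1_positive]

print(flag_potential_teratogens(compounds))
# -> ['triazole_analog_1'] flagged for further developmental toxicity review
```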
QSAR Approaches for Predictive Toxicity Testing
This method, however, requires a great deal of data in order to build robust databases. It also requires validation of QSAR relationships against large sets of in vitro and in vivo data, which in turn requires extensive collaboration between scientists across different disciplines of toxicology.
Currently, several software tools used for predictive toxicology assessments are available commercially. These tools have been validated against compounds across different classes and various toxicological endpoints. Utilizing such tools requires detailed research and information on the extent of the evaluation and validation conducted in order to develop them. For example, software that has been validated with large data sets of pharmaceutical compounds may not be appropriate for use in a chemical space, and vice versa.
Examples of Commercially Available Computational Tools Used in Industry
• QSAR tools: Derek, GastroPlus, ADMET Predictor, OECD QSAR Toolbox, ACD ToxSuite
• PK/PBPK Modeling Tools: GastroPlus, Simcyp, WinNonlin, Berkeley Madonna
Topic 4: Key Points
In this section, we explored the following main points:
• 1: The application of QSARs in systems toxicology.
Knowledge Check
1. QSAR databases link:
Relationship between chemical structures of compounds and their activity/toxicity
Relationship between different responses at different organizational levels
Physiologically relevant modeling tools
In vitro to in vivo extrapolation
Answer
Relationship between chemical structures of compounds and their activity/toxicity
2. Development of efficient QSAR tools depends on...
The user’s familiarity with the software/tool
Knowledge of multiple QSAR models
A large database of validation training sets
All of the above
Answer
A large database of validation training sets
Learning Objectives
• 1: Understand the different omics technologies and how they are used.
"Systems biology/toxicology uses very powerful high-throughput platforms/tools such as the “omics” technologies. The human genome was first sequenced in 2003. It took 13 years and was extremely expensive. Advancement in technology has now made it possible to have genome sequencing completed in less than a day. It has also made handling and interpretation of large volumes of data at different levels (gene, transcriptome, protein, metabolic) possible. Scientists across different disciplines are now using these technologies to provide a more holistic approach to disease and toxicity."
Systems Biology in Toxicology and Environmental Health; First Edition, Chapter 1
Technologies Used In Systems Biology/Toxicology
Genomics: This refers to the technology that allows us to study the complete genetic material of an organism. This involves DNA sequencing and analysis. Some of the most common methods used for high-throughput genotyping for genome-wide association studies (GWAS) include Illumina Omni Arrays, which can simultaneously analyze up to 5 million markers per sample. Other examples are the HumanOmni 5 Quad (Omni 5) and the Affymetrix platforms.
GWAS: A GWAS involves analysis of genetic variants in different individuals to study particular traits associated with genetic variation. This enables scientists to study the underlying mechanisms of different diseases at the genomic level. Single base-pair changes are the most common form of genomic variant and are known as single nucleotide polymorphisms (SNPs). While most of them are functionally harmless, rare ones can lead to changes at the protein level, leading to functional impairment and disease.
Genomics: Example Application
An SNP that results in functional changes at the protein level, thereby causing a difference in phenotype such as a disease state, is known as a mutation. These are considered rare genetic variants. An example of a disease state is cystic fibrosis, which is caused by mutations in the CFTR gene. Identifying this gene was made possible by genotyping families affected by the disease and identifying markers (genetic variants) that could be linked to it. This is known as linkage analysis. This type of analysis was also successfully used to identify unique mutations that lead to rare diseases like Huntington’s disease. While this approach has been successful for rare diseases, it has been a challenge to apply it to more common disease states such as cancer, heart disease, and liver disease.
Transcriptomics
"Transcriptomics is the study of all transcriptomes (RNA) in a genome. In other words, it is the study that enables us to study large volumes (> 200,000) of data related to gene expression at the RNA level.
Traditionally RNA expression was studied through a technique known as Northern blots. But this technique could not handle large volumes of data. Current techniques used for high-throughput data analysis are different kinds of microarrays and biochips. The most commonly used are the DNA based microarrays.
Quantitative gene expression analysis helps us understand the difference in expression of genetic product between different cells, tissues, species etc. Such microarrays are commercially made available from companies like Affymetrix, Agilent, Applied Microarrays, Illumina etc."
Systems Biology in Toxicology and Environmental Health; First Edition, Chapter 4
Proteomics
"Proteomics is the field that studies total proteins in a system. This involves high-throughput profiling of proteins. This kind of expression pattern is especially useful since it allows us to study post translational effects, since all transcription products are not always converted to proteins; however, all functional aspects at the phenotypic level are almost always driven by proteins.
Proteomics is a powerful tool that can help us in identification of protein biomarkers specific to toxicity due to particular exposures or specific diseases states. Mass spectrometers are commercially sold by several companies such as Waters, Thermofisher, Agilent, Shimadzu, Perkin Elmer etc.
A mass spectrometer is used for analysis of proteomics data. Interpretation requires sophisticated bioinformatics tools."
Rebecca C. Fry. Systems Biology in Toxicology and Environmental Health, Chapter 4 (Kindle Location 1954). Elsevier Inc.
Metabolomics
"Metabolomics refers to the study of metabolites (low molecular weight products of cellular /biological processes that are found in cells, tissues, biological fluids etc.). Metabolomics provides an understanding of the differences in the biochemistry between different variants (such as diseased and healthy patient populations, control group versus treatment group, high dose group versus low dose group etc.)
Metabolic profiling can be easily performed in biological matrices such as urine, blood, plasma, serum and also a wide variety of tissues. Hence this can be used very efficiently in human health and safety assessment for evaluating biomarkers for exposure to toxicity to various agents. It can be used in drug discovery in the pharmaceutical industry for comparing metabolic profile between different dose level and control groups. In the clinical setting metabolomics can provide an understanding of differences in metabolite products in diseased versus healthy patient populations.
Platforms/tools used in metabolomics are similar to those used in proteomics. Liquid chromatography mass spectrometers (LCMS) are broadly used for metabolite profiling of different biological matrices. Additionally, nuclear magnetic resonance (NMR) spectroscopy is also widely used for metabolomics analysis. A major difference between the two tools is that NMR is a non-destructive process, and samples used for NMR can be reused for other purposes or returned to the biorepository, whereas this is not possible with the LCMS method.
Metabolomic studies can be conducted using a “targeted approach,” where a few selected analytes (metabolites) are analyzed based on a hypothesis. Such studies are mainly performed in early discovery phases. Alternatively, a more “broad spectrum” approach may be used in order to develop biomarkers for certain treatments or specific disease states.
As with other “omics” approaches, state-of-the-art bioinformatics and statistical tools are used for quantitative interpretation of the data generated from metabolomics platforms (LCMS, NMR)."
Rebecca C. Fry. Systems Biology in Toxicology and Environmental Health, Chapter 4 (Kindle Location 1954). Elsevier Inc.
Metabolomics Workflow
Topic 5: Key Points
In this section, we explored the following main points:
• 1: The application of the different “omics” technologies in systems toxicology.
Knowledge Check
1. Which of the following is a non-destructive process in which samples can be reused or returned to the biorepository?
Microarray
DNA sequencing
NMR
Protein Isolation
Answer
NMR
2. Which of the following technologies is used to study complete RNA profiles?
Nuclear Magnetic Resonance
Microarrays
DNA sequencing
Physiologically based pharmacokinetic models
Answer
Microarrays
3. The central dogma of molecular biology is...
DNA to RNA and RNA to protein
RNA to protein to DNA
Protein to DNA to RNA
None of the above
Answer
DNA to RNA and RNA to protein
4. Liquid chromatography mass spectrometry is used for...
Genomics
Proteomics
Metabolomics
Proteomics and metabolomics
Answer
Proteomics and metabolomics
Tremendous advancement at the technological level has made it possible to generate data at “high-throughput” levels. It has also enabled scientists to study toxicological processes from a more holistic perspective. Instead of answering single questions one at a time, a more comprehensive approach is now being applied to understand toxicological responses. The “systems” approach is being used effectively in industry, government, and clinical settings.
In the biopharmaceutical/chemical industries, thousands of molecules are screened in the early discovery phase to select target compounds with efficacy. Similarly, compounds can also be screened based on their toxicity fingerprints. Molecular fingerprints generated for each class of compounds or specific chemistries can be used to screen compounds for different indications and future uses. This has significantly reduced the time and increased the efficiency of discovery programs in industry settings.
The government has also been using the “systems” approach successfully for safety assessment programs. Efforts such as ToxCast and Tox21 are examples where “systems” toxicology is being used efficiently to prioritize animal testing toward chemicals that pose the greatest risk to human health and safety.
In the clinical setting, more and more information is becoming available via the “systems” approach for specific disease states, enabling physicians and researchers to develop personalized medicines based on the specific needs of patients.
While all these efforts mark the beginning of a very promising future, a lot of research is still necessary to utilize these tools and technologies to their full potential.
REFERENCES
• Systems Biology in Toxicology and Environmental Health. First Edition. Editor Rebecca Fry. Chapters 1 and 4
• Incorporating Human Dosimetry and Exposure into High-Throughput In Vitro Toxicity Screening. Rotroff et al., Toxicological Sciences; 117(2), 348–358 (2010)
Section 4 Final Evaluation
1. In vitro to in vivo extrapolation (IVIVE) uses modeling software that can take in vitro data and mathematically translate it into relevant in vivo information utilizing pharmacokinetic (PK) models.
True
False
Answer
True
2. Which of the following is a common tool used for both Quantitative Structure Activity Relationships (QSAR) and Physiologically based pharmacokinetic (PBPK)?
GastroPlus
ADMET Predictor
ACD Toxsuite
Derek
Answer
GastroPlus
3. HumanOmni 5 Quad (Omni 5), Illumina Omni Arrays, and the Affymetrix platforms are common methods used in:
Transcriptomics
Genomics
Proteomics
Metabolomics
Answer
Genomics
4. Quantitative Structure Activity Relationships (QSAR) approaches for predictive toxicity testing requires all the following EXCEPT:
Lots of data in order to build robust databases.
Validation of QSAR relationships with large sets of in vitro and in vivo datasets.
Extensive collaboration between scientists across different disciplines of toxicology.
A mass spectrometer.
Answer
A mass spectrometer.
5. Computational modeling tools utilize in vitro data and integrate it into human physiology with the help of...
Physiologically based pharmacokinetic model.
High-throughput in vitro model.
Hypothesis and diagnostic model.
Mathematical and computational model.
Answer
Physiologically based pharmacokinetic model.
6. Systems toxicology aims to fill a gap by utilizing data from different systems and integrating them into a meaningful safety assessment. This gap is:
Lack of modern equipment and technology for data collection.
Lack of interpretation and utilization of collected data.
Lack of human resources and management.
Lack of organ and system models.
Answer
Lack of interpretation and utilization of collected data.
7. Adverse Outcome Pathways (AOPs) can be assessed at the following stages:
Molecular initiating event
Intermediate response at cellular level
Toxic response at the organism level
All of the above
Answer
All of the above
8. In metabolomics, the major difference between liquid chromatography mass spectrometers (LCMS) and nuclear magnetic resonance (NMR) is...
NMR samples cannot be reused for other purposes while LCMS samples can be reused for other purposes.
NMR is a destructive process whereas LCMS is a non-destructive process.
NMR is a non-destructive process whereas LCMS is a destructive process.
NMR samples cannot be returned to the biorepository whereas LCMS samples can be returned to the biorepository.
Answer
NMR is a non-destructive process whereas LCMS is a destructive process.
9. In the study of the underlying mechanisms of different diseases at the genomic level, the most common form of variant at this level is:
Polyploidy
Aneuploidy
Single Nucleotide Polymorphism (SNP)
Micronuclei variants
Answer
Single Nucleotide Polymorphism (SNP)
10. ____ is used for analysis of proteomics data:
A mass spectrometer
DNA micro array
Illumni Omni Array
DNA sequencing
Answer
A mass spectrometer
11. The ToxCast program utilizes which kind of assays?
In vivo
In vitro
In silico
All of the above
Answer
In vitro
12. The following agencies or regulations were established to address the challenge of the large number of chemicals that go to market with very limited or no toxicity testing:
Environmental Protection Agency (EPA) in the US.
Registration Evaluation Authorization and Restriction of Chemicals (REACH) in Europe.
Both A & B are correct.
Occupational Safety and Health Administration (OSHA).
Answer
Both A & B are correct. | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/04%3A_Applied_Systems_Toxicology/4.06%3A_New_Page.txt |
Regulatory toxicology is where the science of toxicology meets the regulations, policies and guidelines that protect human health and the environment from chemicals. In this e-module you will learn about global, regional, national, state, and non-governmental regulatory toxicology.
05: Regulatory Toxicology
Learning Objectives
• 1: Identify regulatory toxicology inside and outside of government.
• 2: Give examples of regulatory toxicology at various scales and locales.
• 3: Explain the difference between regulation and guidance.
What is Regulatory Toxicology?
Regulatory Toxicology is where the science of toxicology meets the regulations, policies and guidelines that protect human health and the environment from chemicals. Regulatory toxicology commonly is associated with government agencies. These agencies may vary dramatically in their size and scope. For example, the United Nations covers the entire globe, while agencies within a city are limited to the area covered by the municipality.
Regulatory agencies generally have specific focus areas that they address. For example, the U.S. Occupational Safety and Health Administration (OSHA) covers hazardous chemicals in the workplace, while the U.S. Consumer Product Safety Commission (CPSC) addresses chemical hazards in consumer products. Lastly, regulatory toxicology also occurs in non-governmental organizations such as professional societies, private industry, and various advocacy groups. Further discussion of these is provided later in the module. More information is available from OSHA and the CPSC.
What is the difference between a regulation, policy, and guideline?
A regulation is a rule or order issued by a governmental authority that has the force of law. Often regulations are developed by experts in a governmental authority to enforce legislation. An example of a regulation is the Food Quality Protection Act (FQPA) passed by the U.S. Congress and signed into law by the President in 1996.
Policies and guidelines are principles and approaches that clarify and interpret regulations. As such, policies and guidelines do not carry the force of law but provide important direction.
The Federal Insecticide, Fungicide and Rodenticide Act (FIFRA) is a U.S. regulation governing the broad class of chemicals used as pesticides (i.e., substances used to combat “pests”). The U.S. Environmental Protection Agency (EPA) has authority over FIFRA and has in turn established many policies and guidelines concerning pesticides.
One area in which EPA has established multiple policies and guidelines is the registration of pesticides (i.e., EPA review and approval).
Topic 1: Key Points
In this section, we explored the following main points:
• 1: What is Regulatory Toxicology?
• 2: What is the difference between a regulation, policy and guidance
Knowledge Check
1. What is regulatory toxicology?
The forces that govern the regulation of homeostasis in the body
The branch of toxicology that studies regulations
The intersection of the science of toxicology with the regulatory world protecting human health and the environment
Regulations that dictate the science underlying toxicology
Answer
The intersection of the science of toxicology with the regulatory world protecting human health and the environment
2. Which of the following have the force of law?
A regulation
A policy
A guideline
All of the above
Answer
A regulation
5.02: New Page
Learning Objectives
• 1: Define what is meant by “global regulatory toxicology”
• 2: Give an example of a global regulatory guideline
What is Global Regulatory Toxicology?
Global regulatory toxicology is exactly what it sounds like: it deals with regulatory toxicology on a global scale (i.e., the entire planet). Relative to other jurisdictions (e.g., nations, states, cities), there are few regulatory toxicology initiatives that are global in nature. Some initiatives (e.g., clean drinking water, clean air) with similar intent may span many parts of the globe and appear global, but they lack a global consensus.
Global regulatory toxicology initiatives often originate from activities by the United Nations (UN), a global organization bringing together member countries to confront common challenges.
Example of Global Regulatory Toxicology: Globally Harmonized System (GHS)
Its full name is the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). It was created by the UN. Work on GHS began in 1992, and the first edition was released in 2003. GHS is updated every two years. The goal is to harmonize the criteria by which chemicals are classified in terms of their hazards. Hazards include physical (e.g., flammability), environmental (e.g., toxicity to fish), and human health (e.g., acute toxicity to people).
The Globally Harmonized System (GHS)
Prior to the Globally Harmonized System (GHS), each country had its own criteria for hazard classification, and some countries had multiple criteria. This created challenges and confusion for the general public and other stakeholders. GHS is a guideline, not a regulation. However, once adopted by a country, GHS generally becomes a regulation.
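As a concrete illustration of what harmonized classification criteria look like in practice, the sketch below assigns a GHS acute oral toxicity category from an LD50 value using the commonly cited category cut-offs of 5, 50, 300, 2000, and 5000 mg/kg. These cut-offs are quoted here for teaching purposes only; real classification follows the full criteria in the current edition of the GHS.

```python
# Illustrative sketch: map an oral LD50 (mg/kg body weight) to a GHS acute
# oral toxicity category using commonly cited cut-offs. For real-world
# classification, consult the full criteria in the current GHS text.

def ghs_acute_oral_category(ld50_mg_per_kg):
    cutoffs = [(5, "Category 1"), (50, "Category 2"), (300, "Category 3"),
               (2000, "Category 4"), (5000, "Category 5")]
    for upper_bound, category in cutoffs:
        if ld50_mg_per_kg <= upper_bound:
            return category
    return "Not classified for acute oral toxicity"

# Hypothetical example values
for ld50 in (2, 150, 3500, 12000):
    print(f"LD50 = {ld50} mg/kg -> {ghs_acute_oral_category(ld50)}")
```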
Topic 2: Key Points
In this section, we explored the following main points:
• 1: What is Global Regulatory Toxicology?
• 2: The Globally Harmonized System (GHS)
Knowledge Check
1. Which of the following generally is involved with establishing a global regulatory toxicology initiative?
Environmental Protection Agency (EPA)
United Nations (UN)
Food and Drug Administration (FDA)
All of the above
Answer
United Nations (UN)
2. Which of the following is an example of a global regulatory toxicology initiative?
Clean Air Act
Clean Water Act
Globally Harmonized System
Safe Drinking Water Act
Answer
Globally Harmonized System
Learning Objectives
• 1: Define what is meant by a “regional regulatory toxicology”
• 2: Give an example of a regional regulation
What is Regional Regulatory Toxicology?
Regional regulatory toxicology exists between global and national regulatory toxicology. It deals with regulatory toxicology that includes multiple countries. Often these countries are adjacent or in close proximity (e.g., United States and Canada) but that is not a requirement.
A group of nations that frequently enacts regulations on a regional scale is the European Union (EU). The EU started in 1951 with six European countries agreeing to cooperate and has expanded over the ensuing decades to the current list of 28 member countries. One current member of the EU, the United Kingdom, has decided through a popular vote to eventually exit the EU (i.e., Brexit).
The Registration, Evaluation, Authorization and restriction of CHemicals (REACH) regulation started as a European Commission white paper in 2001 and entered into force on June 1, 2007. It is a new approach to the regulation of chemicals in the EU. REACH requires key hazard information to be obtained for existing chemicals to remain in commerce, or for new chemicals to enter commerce.
Information falls into three broad hazard categories: physical-chemical, human health, and environmental health. The types and amounts of information required vary depending on the volume of the chemical used in commerce: the more the chemical is used, the more data are required.
In addition to hazard information, REACH requires assessments to determine whether chemicals in commerce pose unreasonable risk(s). REACH has resulted in the assessment and/or re-assessment of thousands of chemicals, and a large database of toxicity information has been created. Some countries are using REACH as a model to develop new, or update existing, chemical regulations; for example, South Korea with K-REACH.
Topic 3: Key Points
In this section, we explored the following main points:
• 1: What is Regional Regulatory Toxicology?
• 2: The Registration, Evaluation, Authorization and restriction of CHemicals (REACH) regulation
Knowledge Check
1. Which of the following generally is a trait commonly found in a regional regulatory toxicology initiative?
Involvement of multiple regulatory agencies within a country
Involvement of all/most countries around the globe
Involvement of multiple states within the United States
Involvement of two or more countries in a regulatory initiative
Answer
Involvement of two or more countries in a regulatory initiative
2. Which of the following is an example of a regional regulatory toxicology initiative?
GHS
REACH
K-REACH
FQPA
Answer
REACH
5.04: New Page
Learning Objectives
After completing this lesson, you will be able to:
• Define what is meant by “National Regulatory Toxicology”.
• Give an example of a national regulatory regulation.
What is National Regulatory Toxicology?
As the name suggests, it focuses on regulatory toxicology at the national level (e.g., United States, Canada, China). Regulations are set by authorities and apply to the entire country. For the United States (US), this means regulations established at the federal level; all states within the US must comply with federal regulations.
Example agencies that set national toxicology regulations in the U.S.:
All of these agencies aim to set regulations to protect against adverse health effects from exposure to substances. Different agencies regulate different types of substances that are used for different purposes and affect different populations.
The goal of the EPA is to protect human and environmental health; it deals with exposures to substances in the environment (e.g., air, water, soil). Its work typically pertains to exposures that the general population does not have control over (e.g., air pollution), as well as modernizing chemical regulation in the US; for example, the Toxic Substances Control Act (TSCA) was recently revised.
TSCA was first passed in 1976. TSCA was revised with passage of the Frank R. Lautenberg Chemical Safety for the 21st Century Act in 2016. It regulates new or already existing chemicals manufactured or in use in the US.
Topic 4: Key Points
In this section, we explored the following main points:
• 1: What is National Regulatory Toxicology?
• 2: Examples of National Agencies in the U.S.
• 3: What do these Agencies regulate?
• 4: Highlight: EPA and TSCA Reform.
Knowledge Check
1. EPA regulates chemicals that are in:
Food
Environmental media (e.g., water, air)
Drugs and Medical Devices
Shampoo
Answer
Environmental media (e.g., water, air)
2. US states can choose not to comply with regulations set by the FDA
True
False
Answer
False | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/05%3A_Regulatory_Toxicology/5.03%3A_New_Page.txt |
Learning Objectives
• 1: Define what is meant by “State Regulatory Toxicology”.
• 2: Give an example of a state regulatory regulation.
What is State Regulatory Toxicology?
As the name suggests, it deals with regulatory toxicology in the US at the state level (e.g., Texas, Delaware, Arizona). It is established by authorities within a given state. State regulatory toxicology only applies within the state’s boundaries; however, it may influence adjacent states or even national regulatory toxicology.
State regulatory toxicology sits beneath the layer of the US national government, within a complex web of state and local laws, policies, and regulatory authorities. The make-up of state and local governments varies widely across the US; while they share certain features, their organizations differ. Whatever their design, state and local governments can sometimes have a much greater impact on people's lives than the federal government.
The Federal-State Toxicology and Risk Analysis Committee (FSTRAC) is made up of representatives from U.S. state health and environmental agencies and U.S. EPA personnel.
FSTRAC is an integral part of EPA’s communication strategy with states and tribes for human health risks associated with water contamination. It fosters cooperation, consistency, and an understanding of EPA’s and different states’ goals and problems in human health risk assessment. Additionally, it allows states and the federal government to work together on issues related to the development and implementation of regulations and criteria under the Safe Drinking Water Act and Clean Water Act.
FSTRAC members have supported development of Human Health Benchmarks for Pesticides (HHBP).
• 1: Represent levels of pesticides in drinking water that are not anticipated to cause health effects.
• 2: Used to help assess drinking water quality for pesticides that do not have other regulatory toxicology standards.
Examples of agencies that set state toxicology regulations in the U.S.:
The goal of state agencies is the same as that of federal agencies: to protect people and the environment from health effects associated with chemical exposures.
States have differing regulatory toxicology requirements and focuses. The differences may reflect state history, geography, culture, population size and diversity, major industries, etc.
Examples of state regulatory toxicology as it concerns chemicals in air:
TEXAS
Effects Screening Levels (ESLs) – Chemical-specific air concentrations set by the Texas Commission on Environmental Quality (TCEQ). They play an important role in the regulation of air emissions from companies located in the state. The TCEQ publishes a list of current ESLs.
CALIFORNIA
Reference Exposure Levels (RELs) – Chemical-specific air concentrations set by the California Environmental Protection Agency (CalEPA). They represent an air concentration that does not pose a health risk to people. CalEPA publishes a list of current RELs.
Example agencies that set state toxicology regulations
An example of a state regulatory agency is the CalEPA Office of Environmental Health Hazard Assessment (OEHHA), which provides requirements and guidance for the California Safe Drinking Water and Toxic Enforcement Act of 1986 (Proposition 65).
Proposition 65
Purpose
Enable consumers to make informed decisions regarding chemical exposures.
Reason
Established to protect California citizens from chemicals known to the state to cause cancer, birth defects, or other reproductive harms.
Scope
Addresses chemical exposures to the citizens of California that may occur through consumer products, workplace exposures, and exposures occurring via the environment.
Basics
OEHHA publishes a list of chemicals known to cause cancer, birth defects or other reproductive harm. The list is updated regularly and currently contains approximately 900 chemicals. Once a chemical is listed, companies have 12 months to comply with warning requirements under the regulation.
Proposition 65 is referred to as a “risk-based” regulation in that the warning requirements only apply if the risk from chemical exposure is too high, as defined by the regulation.
Exposure examples:
• Oral
• Inhalation
• Skin contact
Proposition 65 upcoming changes…
The regulation has undergone revisions: new Proposition 65 warnings will now require companies to add a symbol and change the phrasing of the warning. For example: “WARNING: This product can expose you to chemicals including arsenic, which is known to the State of California to cause cancer.” Additional information is available on the OEHHA Proposition 65 warnings website.
Topic 5: Key Points
In this section, we explored the following main points:
• 1: What is State Regulatory Toxicology?
• 2: An example of how federal and state agencies work together
• What is the Federal-State Toxicology and Risk Analysis Committee (FSTRAC)
• Two example outcomes of the FSTRAC’s workgroup
• 3: Example of State Agencies
• 4: Highlight two state specific guidance and rules as it concerns chemicals in air
• Guidance
• Effects Screening Levels (ESLs)
• Reference Exposure Levels (RELs)
• 5: Rules
• California Proposition 65
• Texas Risk Reduction Program rule
Knowledge Check
1. Proposition 65 is a regulation established in:
Florida
Texas
California
New York
Answer
California
2. Which of the following could explain some of the variability observed between states in terms of regulatory toxicology?
Degree of urbanization
Type of natural resources
Degree of industrialization
All of the above
Answer
All of the above | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/05%3A_Regulatory_Toxicology/5.05%3A_New_Page.txt |
Learning Objectives
• 1: Define what is meant by a “Non-Governmental Regulatory Toxicology”.
• 2: Give several examples of non-governmental entities that conduct and influence regulatory toxicology.
What is Non-Governmental Regulatory Toxicology?
These are groups that are not officially part of a government agency but that conduct and/or influence regulatory toxicology. Examples of such groups may include not-for-profit organizations, advocacy groups, professional societies, industry trade associations, and individual companies. The term “NGO” (short for non-governmental organization(s)) is sometimes used; however, the term “NGO” often is limited to advocacy groups.
These groups often can conduct or influence regulatory toxicology on a much faster timeframe than government agencies. Government agencies often take many months or years to enact regulations, policies, and guidance, while non-governmental entities often proceed in a fraction of this time. Lastly, these groups often lack the transparency found in government agencies in terms of decision making, as well as opportunities for stakeholders to provide input.
Examples of a non-governmental entities that conduct and influence regulatory toxicology
• Advocacy groups – Often have a particular focus such as the proper treatment of animals, environmental sustainability, women’s health…
• Individual companies – For-profit businesses that decide to address a particular topic related to regulatory toxicology. Examples include eliminating the use of certain chemicals, reducing water use, and increasing recycling.
• Trade Associations – Represent businesses that have a common or similar commercial interest (e.g., chemical manufacturing, consumer products). They often create industry best practices or guidance for issues related to regulatory toxicology.
Topic 6: Key Points
In this section, we explored the following main points:
• 1: What is Non-Governmental Regulatory Toxicology?
• 2: Examples of different types of non-governmental entities that conduct and influence regulatory toxicology.
Knowledge Check
1. Which of the following is a trait found in a non-governmental entities?
Must engage with the general public prior to taking actions.
Have lengthy processes that often take decades to complete.
Is not a government agency or formally associated with a government agency.
A narrowly defined entity that is not common in society.
Answer
Is not a government agency or formally associated with a government agency.
2. Which of the following is an example of non-governmental regulatory toxicology?
A not-for profit company developing standards and conducting certifications regarding the sustainability of products.
An advocacy group that provides input to government and industry related to reducing lead in consumer products.
An organization representing chemists develops guidance on workplace safety concerning corrosive chemicals.
All of the above
Answer
All of the above
Section 5 Final Evaluation
1. _______ was created by UN to harmonize the criteria by which chemicals are classified in terms of their hazards.
Federal Insecticide, Fungicide and Rodenticide Act (FIFRA).
Consumer Product Safety Commission (CPSC).
Globally Harmonized System of Classification and Labelling of Chemicals (GHS).
The US Food and Drug Administration (FDA).
Answer
Globally Harmonized System of Classification and Labelling of Chemicals (GHS).
2. Regulatory agencies generally have specific focus areas that they address, examples include the following EXCEPT:
Food Quality Protection Act (FQPA) addresses the quality of food and chemicals used in an environment.
Consumer Product Safety Commission (CPSC) addresses chemical hazards in consumer products.
Occupational Safety and Health Administration (OSHA) addresses hazardous chemicals in the work place.
Food and Drug Administration (FDA) regulates pharmaceutical drugs used in humans and animals.
Answer
Food Quality Protection Act (FQPA) addresses the quality of food and chemicals used in an environment.
3. The Food and Drug Administration (FDA) is a national regulatory agency that serves the general population and medical patients, while the Consumer Product Safety Commission (CPSC) serves the general population and the environment.
True
False
Answer
False
4. The purpose of Proposition 65 is to enable consumers make informed decisions regarding chemical exposures while its scope is to address chemical exposures to citizens of California that may occur through consumer products, workplace and environmental exposures.
True
False
Answer
True
5. Registration, Evaluation, Authorization and restriction of CHemicals (REACH) is an example of:
Global regulatory toxicology.
Regional regulatory toxicology.
National regulatory toxicology.
State regulatory toxicology.
Answer
Regional regulatory toxicology.
6. One of the advantages of a Non-Governmental Organization (NGO) over government agencies is:
NGOs often conduct regulatory toxicology on a much faster timeframe than government agencies.
NGOs are transparent in terms of decision making while government agencies are not transparent.
NGOs are not only limited to advocacy group but it influences global regulations.
NGOs are officially part of government agency that influence regulatory toxicology.
Answer
NGOs often conduct regulatory toxicology on a much faster timeframe than government agencies.
7. The following are examples of national toxicology regulatory agencies in the U.S. EXCEPT:
Consumer Product Safety Commission (CPSC).
Environmental Protection Agency (EPA).
Texas Commission on Environmental Quality (TCEQ).
Food and Drug Administration (FDA).
Answer
Texas Commission on Environmental Quality (TCEQ).
8. State of Washington Department of Ecology is an agency that sets state toxicology regulations for:
Proposition 65.
Wellhead Protection Program.
Children’s Safe Products Act (CSPA).
Risk reduction Program.
Answer
Children’s Safe Products Act (CSPA).
9. Global regulatory toxicology brings together member countries to confront common challenges on a global scale; an example is:
United Nations (UN).
Environmental Protection Agency (EPA).
Consumer Product Safety Commission (CPSC).
Occupational Safety and Health Administration (OSHA).
Answer
United Nations (UN).
10. Which of the following statements is true?
CPSC mainly regulates substance in consumer products e.g. shampoos, clothing etc.
EPA mainly regulates substance in occupational environment e.g. air in a factory.
CPSC mainly regulates pharmaceutical drugs and medical devices.
OSHA mainly regulates toxic substances in consumer products.
Answer
CPSC mainly regulates substance in consumer products e.g. shampoos, clothing etc.
11. Which of the following statements is true?
Policies are principles that clarify regulation and carry the force of law.
Guidelines are approaches that interpret regulations and carry the force of law.
Regulations are rules issued by governmental authority and carry the force of law.
Regulations are orders issued by government authority and do not carry the force of law.
Answer
Regulations are rules issued by governmental authority and carry the force of law.
12. Key hazard information to be obtained for existing chemicals to remain in commerce or for new chemicals to enter into commerce falls into three broad hazard categories:
Physical-chemical, animal health and environmental health.
Animal health, human health and environmental health.
Environmental health, physical-chemical and biosafety.
Physical-chemical, human health and environmental health
Answer
Physical-chemical, human health and environmental health
Learning Objectives
After completing this lesson, you will be able to:
• Define toxicology and identify adverse effects.
• Recognize the history of toxicology.
• Explain how dose determines whether a substance is a remedy or a poison.
• Differentiate between toxic agents and toxic substances.
Topics include:
What We've Covered
In this section, we covered several important concepts:
• Toxicology is the study of adverse effects of chemicals and physical agents on living organisms.
• A xenobiotic is a foreign substance taken into the body.
• A toxic agent is any chemical, physical, or biological agent that can produce an adverse biological effect.
• Toxic substances can be systemic toxicants, which affect the entire body or multiple organs, or organ toxicants, which affect a specific organ or tissues.
• The dose of a substance is the most important determinant of toxicity.
Coming Up...
In the next section, we will explore the concept of dose and its importance to toxicology in greater detail.
Section 1: Introduction to Toxicology
What is Toxicology?
Toxicology is traditionally defined as "the science of poisons." Over time, our understanding of how various agents can cause harm to humans and other organisms has increased, resulting in a more descriptive definition of toxicology as "the study of the adverse effects of chemical, physical, or biological agents on living organisms and the ecosystem, including the prevention and amelioration of such adverse effects."
These adverse effects can take many forms, ranging from immediate death to subtle changes not appreciated until months or years later. They may occur at various levels within the body, such as an organ, a type of cell, or a specific biochemical. Our understanding of how toxic agents damage the body has progressed along with medical knowledge. We now know that various observable changes in anatomic or bodily functions actually result from previously unrecognized changes in specific biochemicals in the body.
Did you know?
The study of toxicology may appear to focus only on poisonings or disasters, but some toxic chemicals can have positive effects. Animal venoms, whether from bees, wasps, snakes, or Gila monsters, are composed of hundreds of chemicals that are being studied as treatments for human diseases.
For example, exenatide, a drug derived from Gila monster saliva, has been approved for use in Type 2 diabetes. Captopril, which is used to treat hypertension and heart failure, was developed from studies on the chemical bradykinin-potentiating factor (BPF) in the venom of the South American snake Bothrops jararaca. Melittin, which comes from honeybee venom, is being investigated for its anticancer and antifungal properties.
Figure \(1\): Figure 1. Gila monster (top); Bothrops jararaca (bottom)
(Image Source: Wikipedia, adapted under GNU Free Documentation License - Original Images)
History of Toxicology
Prehistory
Poisonous plants and animals were recognized and their extracts used for hunting or in warfare.
1500 BC
Written records indicate that hemlock, opium, arrow poisons, and certain metals were used to poison enemies or for state executions.
c. 1198
With time, people began to make the connection between exposure to a specific substance and illness or death.
In 1198, Moses Maimonides wrote what may be the first collection of writings on toxicology, The Treatise on Poisons and Their Antidotes.
Renaissance and Age of Enlightenment
Certain fundamental toxicology concepts began to take shape. Noteworthy studies include those by Paracelsus in the 16th century and Orfila in the 19th century.
Paracelsus (16th Century)
Determined that specific chemicals were actually responsible for the toxicity of a plant or animal poison.
Documented that the body's response to those chemicals depended on the dose received.
Studies revealed that small doses of a substance might be harmless or beneficial, whereas larger doses could be toxic. This is now known as the dose-response relationship, a major concept in toxicology.
"All substances are poisons; there is none which is not a poison. The right dose differentiates a poison and a remedy."
- Paracelsus
Orfila, the founder of toxicology (19th Century)
A Spanish physician, Orfila is often referred to as the founder of toxicology.
Orfila was the first to describe a systematic correlation between the chemical and biological properties of poisons of the time.
Orfila demonstrated the effects of poisons on specific organs by analyzing autopsy materials for poisons and tissue damage associated with them.
20th and 21st Centuries
Marked by great advancements in the level of understanding of toxicology.
DNA and various biochemicals that maintain body functions have been discovered.
Our level of knowledge of toxic effects on organs and cells has expanded to the molecular level.
Virtually all toxic effects are recognized as being caused by changes in specific cellular molecules and biochemicals.
Remedy or Poison?
Xenobiotic is the general term that is used for a foreign substance taken into the body. It is derived from the Greek term xeno which means "foreigner." Xenobiotics may produce beneficial effects (such as pharmaceuticals) or they may be toxic (such as lead).
As Paracelsus proposed centuries ago, dose differentiates whether a substance will be a remedy or a poison. A xenobiotic in small amounts may be nontoxic and even beneficial, but when the dose is increased, toxic and lethal effects may result.
The following image provides some examples that illustrate this concept.
Figure \(2\): Examples of varying doses of the same substance as non-toxic or beneficial, toxic, and lethal
(Image Source: Adapted from T. Gossel and J. Bricker, eds)
Toxicology Defined
Toxicology is an evolving medical science and toxicology terminology is evolving with it. Most terms are very specific and will be defined as they appear in the tutorial. However, some terms are more general and used throughout the various sections. The most commonly used terms are introduced in this section.
• Toxicology is the study of the adverse effects of chemicals or physical agents on living organisms.
• A toxicologist is a scientist who determines the harmful effects of agents and the cellular, biochemical, and molecular mechanisms responsible for the effects.
• Toxinology, a specialized area of study, looks at microbial, plant and animal venoms, poisons, and toxins.
Terminology and definitions for materials that cause toxic effects are not always consistently used in the literature. The most common terms are toxicant, toxin, poison, toxic agent, toxic substance, and toxic chemical.
Toxicant, toxin, and poison are often used interchangeably in the literature but there are subtle differences as shown below:
Toxicants:
• Substances producing adverse biological effects of any kind.
• May be chemical or physical in nature.
• Effects may be acute or chronic.
Figure \(1\): Pesticide chemicals are toxicants
(Image Source: iStock Photos, ©)
Toxins:
• Peptides or proteins produced by living organisms.
• Venoms are toxins injected by a bite or sting.
Figure \(2\): Amanita muscaria mushroom contains a neurotoxin
(Image Source: iStock Photos, ©)
Poisons:
• Toxicants that cause immediate death or illness when experienced in very small amounts.
Figure \(3\): Black Widow spiders produce a poison that is a toxin
(Image Source: Texas Parks & Wildlife, ©)
A toxic agent is anything that can produce an adverse biological effect. It may be chemical, physical, or biological in form. For example, toxic agents may be:
Chemical (such as cyanide)
Physical (such as radiation)
Biological (such as snake venom)
The toxicity of the agent is dependent on the dose.
A distinction is made for diseases people get from living organisms. Organisms that invade and multiply within another organism and produce their effects by biological activity are not classified as toxic agents but as biological agents. An example of this is a virus that damages cell membranes resulting in cell death.
If the invading organisms excrete chemicals which are the basis for their toxicity, the excreted substances are known as biological toxins. In that case, the organisms are called toxic organisms. A specific example is tetanus. Tetanus is caused by a bacterium, Clostridium tetani. The bacterium C. tetani does not itself cause disease by invading and destroying cells. Rather, a toxin (neurotoxin) that the bacteria excrete travels to the nervous system and produces the disease.
Toxic Substances
A toxic substance is simply a material that has toxic properties. It may be a discrete toxic chemical or a mixture of toxic chemicals. For example, lead chromate, asbestos, and gasoline are all toxic substances. More specifically:
• Lead chromate is a discrete toxic chemical.
• Asbestos is a toxic material that does not have an exact chemical composition but comprises a variety of fibers and minerals.
• Gasoline is a toxic substance rather than a toxic chemical in that it contains a mixture of many chemicals. Toxic substances may not always have a constant composition; for example, the composition of gasoline varies with octane level, manufacturer, season of the year, and other factors.
Figure \(4\): Examples of toxic substances: lead chromate (left), asbestos (center), and gasoline (right)
(Image Source: iStock Photos, ©)
Systemic Toxicants and Organ Toxicants
Toxic substances may be systemic toxicants or organ toxicants.
A systemic toxicant affects the entire body or many organs rather than a specific site. For example, potassium cyanide is a systemic toxicant in that it affects virtually every cell and organ in the body by interfering with the cells’ ability to use oxygen.
Toxicants may also affect only specific tissues or organs while not producing damage to the body as a whole. These specific sites are known as the target organs or target tissues.
• Benzene is a specific organ toxicant in that it is primarily toxic to the blood-forming tissues.
• Lead is also a specific organ toxicant; however, it has three target organs: the central nervous system, the kidneys, and the hematopoietic system.
A toxicant may affect a specific type of tissue (such as connective tissue) that is present in several organs. The toxic site is then considered the target tissue.
Figure \(5\): Systemic toxicant and organ toxicant
(Image Source: iStock Photos, ©)
Types of Cells
The body is composed of many types of cells, which can be classified in several ways. Table 1 shows examples of one way of classifying cell types.
Cell Types | Examples
Basic structure | cuboidal cells
Tissue type | hepatocytes of the liver
Germ cells | ova and sperm
Somatic cells | non-reproductive cells of the body
Germ cells are involved in reproduction and can give rise to a new organism. They have only a single set of chromosomes peculiar to a specific sex. Male germ cells give rise to sperm and female germ cells develop into ova. Toxicity to germ cells can cause effects in a developing fetus that lead to outcomes such as birth defects or miscarriage.
Somatic cells are all body cells except the reproductive germ cells. (Somatic cells include the "basic structure" and "tissue type" cells listed in Table 1). They have two sets (or pairs) of chromosomes. In an exposed individual, toxicity to somatic cells causes a variety of toxic effects, such as dermatitis, death, and cancer.
Natural and Man-Made Chemicals
Often, people mistakenly assume that all man-made chemicals are harmful and natural chemicals are beneficial. In reality, natural chemicals can be just as harmful to human health as man-made chemicals, and in many cases, more harmful.
Learning Objectives
After completing this lesson, you will be able to:
• Explain absorption and its role in toxicokinetics.
• Describe the primary routes of exposure.
• Explain the role of cell membranes in absorption.
• Identify ways in which xenobiotics pass across cell membranes.
Section 10: Key Points
What We've Covered
This section made the following main points:
• Absorption is the process by which toxicants gain entrance into the body.
• Ingested and inhaled materials are considered outside the body until they cross the cellular barriers of the gastrointestinal tract or respiratory system.
• The likelihood of absorption depends on the:
• Route of exposure.
• Concentration of the substance at the site of contact.
• Chemical and physical properties of the substance.
• Exposure routes include:
• Primary routes:
• Gastrointestinal (GI) tract
• Mouth and esophagus — poorly absorbed under normal conditions due to short exposure time (nicotine and nitroglycerin are notable exceptions).
• Stomach — significant site for absorption of weak organic acids, but weak bases are poorly absorbed.
• Intestine — greatest absorption of both weak bases and weak acids, particularly in the small intestine.
• Colon and rectum — very little absorption, unless administered via suppository.
• Respiratory tract
• Mucociliary escalator — movements of the cilia push mucus and anything contained within up and out into the throat to be swallowed or removed through the mouth.
• Pulmonary region — most important site for absorption with about 50 times the surface area of the skin and very thin membranes.
• Skin
• Epidermis and stratum corneum — the only layer important in regulating the penetration of a skin contaminant.
• Toxicants move across the stratum corneum by passive diffusion.
• If a toxicant penetrates through the stratum corneum, it enters lower layers of the epidermis, dermis, and subcutaneous tissue, which are far less resistant to further diffusion.
• Other exposure routes:
• Injections
• Implants
• Conjunctival instillations (eye drops)
• Suppositories
• Cell membranes surround all body cells and are made up of a phospholipid bilayer in which each molecule contains a:
• Polar (hydrophilic, or attracted to water) phosphate head
• Lipophilic (attracted to lipid-soluble substances) lipid tail
• Xenobiotics must pass across cell membranes to enter, move within, and leave the body. This movement can occur by one of the following mechanisms:
• Passive transfer (most common) — simple diffusion or osmotic filtration with no cellular energy or assistance required.
• Facilitated transport — similar to passive transport, but a carrier-mediated transport mechanism and thus faster and capable of moving larger molecules.
• Active transport — movement against the concentration gradient (from lower to higher concentrations), requiring cellular energy from ATP.
• Endocytosis — the cell surrounds the substance with a section of its cell membrane; this engulfed portion then separates from the membrane and moves into the interior of the cell.
Section 10: Absorption
Introduction to Absorption
Toxicants gain entrance into the body by absorption. The body considers ingested and inhaled materials as being outside of it until those materials cross the cellular barriers of the gastrointestinal tract or respiratory system. A substance must be absorbed to exert an effect on internal organs, although local toxicity, such as irritation, may occur.
Figure \(1\). Processes of toxicokinetics
(Image Source: Adapted from iStock Photos, ©)
Absorption Variability
Absorption varies greatly by specific chemicals and the route of exposure.
• For skin, oral or respiratory exposure, the absorbed dose is only a fraction of the exposure dose (external dose).
• For substances injected or implanted directly into the body, exposure dose is the same as the absorbed or internal dose.
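This relationship can be written as a simple sketch (the symbols here are illustrative, not notation used elsewhere in this tutorial): if \(F\) is the fraction of the exposure dose that is actually absorbed (often called bioavailability), then

\[ D_{\text{internal}} = F \times D_{\text{exposure}}, \qquad 0 \le F \le 1 \]

For injected or implanted substances \(F = 1\), whereas for oral, dermal, or inhalation exposure \(F\) is usually less than 1 and depends on the factors discussed next.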
Several factors affect the likelihood that a xenobiotic will be absorbed. The most important factors are the:
• Route of exposure.
• Concentration of the substance at the site of contact.
• Chemical and physical properties of the substance.
The route of exposure influences the concentration of the substance at the site of contact and how its chemical and physical properties come into play. In some cases, very little of a substance may be absorbed by one route whereas a high percentage may be absorbed via another route.
• For example, very little DDT powder will penetrate the skin whereas a high percentage will be absorbed when it is swallowed.
Due to such route-specific differences in absorption, xenobiotics are often ranked for hazard in accordance with the route of exposure. A substance may be categorized as relatively non-toxic by one route and highly toxic via another route.
Routes of Exposure
The primary routes of exposure by which xenobiotics can gain entry into the body are:
• Gastrointestinal (GI) tract — important for environmental exposure to contaminants from food and water; the main route for many pharmaceuticals.
• Respiratory tract — important for environmental and occupational exposure to air contaminants; some pharmaceuticals (such as nasal or oral aerosol inhalers) use this route.
• Skin — important environmental and occupational exposure route; many consumer and pharmaceutical products are applied directly to the skin.
Other routes of exposure – used primarily for specific medical purposes:
• Injections — primarily used for pharmaceuticals.
• Implants — pharmaceuticals may be implanted to permit slow, time-release delivery (for example, hormones). Many medical devices are implanted for which minimal absorption is desired (such as artificial lenses or tendons). Some materials enter the body via skin penetration as the result of accidents or weapons.
• Conjunctival instillations (eye drops) — primarily for treating ocular conditions; however, in some cases, considerable absorption can occur and cause systemic toxicity.
• Suppositories — used for medicines that may not be adequately absorbed after oral administration or that are intended for local therapy; usual locations for suppositories are the rectum and vagina.
Cell Membranes
Cell membranes (often referred to as plasma membranes) surround all body cells and are similar in structure. They consist of two layers of phospholipid molecules arranged like a sandwich, referred to as a "phospholipid bilayer." Each phospholipid molecule consists of a phosphate head and a lipid tail. The phosphate head is polar, meaning it is hydrophilic (attracted to water). In contrast, the lipid tail is lipophilic (attracted to lipid-soluble substances).
The two phospholipid layers are oriented on opposing sides of the membrane so that they are approximate mirror images of each other. The polar heads face outward and the lipid tails face inward in the membrane sandwich (Figure \(2\)).
Figure \(2\). Each phospholipid molecule consists of a phosphate head and lipid tail
(Image Source: Adapted from Wikimedia Commons, obtained under Public Domain, original image)
The cell membrane is tightly packed with these phospholipid molecules interspersed with various proteins and cholesterol molecules. Some proteins span across the entire membrane which can create openings for aqueous channels or pores.
A typical cell membrane structure is illustrated in Figure \(3\).
Role of Cell Membranes in Absorption
For a xenobiotic to enter the body (as well as move within and leave the body) it must pass across cell membranes. Cell membranes are formidable barriers and a major body defense that prevents foreign invaders or substances from gaining entry into body tissues. Normally, cells in solid tissues (such as skin or the mucous membranes of the lung or intestine) are so tightly compacted that substances cannot pass between them; a xenobiotic must therefore have the ability to penetrate cell membranes. It must cross several membranes to go from one area of the body to another.
For a substance to move through one cell requires that it first move across the cell membrane into the cell, pass across the cell, and then cross the cell membrane again to leave the cell. This is true whether the cells are in the skin, the lining of a blood vessel, or an internal organ such as the liver. In many cases, in order for a substance to reach the site where it exerts toxic effects, it must pass through several membrane barriers.
Animation 1 depicts how a chemical from a theoretical consumer product called a "Shower Gel" might get to the surface of the skin during showering and then pass through several membranes before coming in contact with the inside of a liver cell.
Animation \(1\). From a Gel to a Cell: Following the journey of a chemical from a theoretical shower gel product through several membranes and ultimately into a cell
(Image Source: iStock Photos, ©)
Movement of Toxicants Across Cell Membranes
Some toxicants move across a membrane barrier with relative ease while others find it difficult or impossible. Those that can cross the membrane use one of two general methods: 1) passive transfer or 2) specialized transport.
Passive transfer consists of simple diffusion (or osmotic filtration) and is "passive" because no cellular energy or assistance is required.
Some toxicants cannot simply diffuse across the membrane but require assistance by specialized transport mechanisms. The primary types of specialized transport mechanisms are:
• Facilitated diffusion
• Active transport
• Endocytosis (phagocytosis and pinocytosis)
Passive Transfer
Passive transfer is the most common way that xenobiotics cross cell membranes. Two factors determine the rate of passive transfer:
1. The difference in concentrations of the substance on opposite sides of the membrane (a substance moves from a region of high concentration to one of lower concentration; diffusion continues until the concentration is equal on both sides of the membrane).
2. The ability of the substance to move either through the small pores in the membrane or the lipophilic interior of the membrane.
Properties affecting a chemical substance's ability for passive transfer are:
• Lipid solubility
• Molecular size
• The degree of ionization
Substances with high lipid solubility readily diffuse through the phospholipid membrane. Small water-soluble molecules can pass across a membrane through the aqueous pores, along with normal intracellular water flow.
Large water-soluble molecules usually cannot make it through the small pores, although some may diffuse through the lipid portion of the membrane, but at a slow rate.
• Most aqueous pores are about 4 Angstrom (Å) in size and allow chemicals of molecular weight 100-200 to pass through. Exceptions are membranes of capillaries and kidney glomeruli which have relatively large pores (about 40 Angstrom [Å]) that allow molecules up to a molecular weight of about 50,000 (molecules slightly smaller than albumin which has a molecular weight of 60,000) to pass through.
In general, highly ionized chemicals have low lipid solubility and pass with difficulty through the lipid membrane.
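The two rate-determining factors listed above can be combined in a single expression. As a sketch based on Fick's first law of diffusion (the law itself is standard; the symbols here are illustrative and not introduced elsewhere in this tutorial), the rate of passive transfer across a membrane is approximately

\[ \text{Rate} \;\approx\; \frac{D \, K_{p} \, A}{h}\,\bigl(C_{\text{outside}} - C_{\text{inside}}\bigr) \]

where \(D\) is the diffusion coefficient of the chemical within the membrane, \(K_{p}\) is its lipid:water partition coefficient (a measure of lipid solubility), \(A\) is the membrane surface area, \(h\) is the membrane thickness, and \(C_{\text{outside}} - C_{\text{inside}}\) is the concentration difference across the membrane. Small, lipid-soluble, nonionized molecules have large effective values of \(D\) and \(K_{p}\) and therefore diffuse quickly, whereas highly ionized chemicals do not.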
Figure 4 demonstrates the passive diffusion and filtration of xenobiotics through a typical cell membrane.
Figure \(4\). Cellular diffusion
(Image Source: Blausen.com staff (2014). "Medical gallery of Blausen Medical 2014". WikiJournal of Medicine 1 (2). DOI:10. 15347/wjm/2014.010. ISSN 2002-4436. Obtained under Creative Commons license.)
Facilitated Diffusion
Facilitated diffusion is similar to simple diffusion in that it does not require energy and follows a concentration gradient. The difference is that it is a carrier-mediated transport mechanism (Figure \(5\))—that is, special transport proteins, which are embedded within the cell membrane, facilitate movement of molecules across the membrane. The results are similar to passive transport but faster and capable of moving larger molecules that have difficulty diffusing through the membrane without a carrier.
• Examples are the transport of sugar and amino acids into red blood cells (RBCs) and the central nervous system (CNS).
Figure \(5\). Facilitated Diffusion
(Image Source: Blausen.com staff (2014). "Medical gallery of Blausen Medical 2014". WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436. Obtained under Creative Commons license.)
Active Transport
Some substances are unable to move with diffusion, unable to dissolve in the lipid layer, and are too large to pass through the aqueous channels. For some of these substances, active transport processes exist in which movement through the membrane may be against the concentration gradient, that is, from low to higher concentrations. Cellular energy from adenosine triphosphate (ATP) is required in order to accomplish this. The transported substance can move from one side of the membrane to the other side by this energy process. Active transport is important in the transport of xenobiotics into the liver, kidney, and central nervous system and for maintenance of electrolyte and nutrient balance.
Figure \(6\) shows sodium and potassium moving against concentration gradient with the help of the ATP sodium-potassium exchange pump.
Figure \(6\). Sodium-Potassium Exchange Pump
(Image Source: Blausen.com staff (2014). "Medical gallery of Blausen Medical 2014". WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436. Obtained under Creative Commons license.)
Endocytosis (Phagocytosis and Pinocytosis)
Many large molecules and particles cannot enter cells via passive or active mechanisms. However, some may still enter by a process known as endocytosis.
In endocytosis, the cell surrounds the substance with a section of its cell membrane. This engulfed substance and section of membrane then separate from the membrane and move into the interior of the cell. The two main forms of endocytosis are 1) phagocytosis and 2) pinocytosis.
In phagocytosis (cell eating), large particles suspended in the extracellular fluid are engulfed and either transported into cells or are destroyed within the cell. This is a very important process for lung phagocytes and certain liver and spleen cells.
Pinocytosis (cell drinking) is a similar process but involves the engulfing of liquids or very small particles that are in suspension in the extracellular fluid.
Figure \(7\) demonstrates the types of membrane transport by endocytosis.
Figure \(7\). Types of Endocytosis
(Image Source: Wikimedia Commons, obtained by Public Domain License, original image)
Knowledge Check
1) The process whereby a substance moves from outside the body into the body is known as:
a) Distribution
b) Biotransformation
c) Absorption
Answer
Absorption - This is the correct answer.
Absorption is the first and crucial step in the toxicokinetics of a xenobiotic. Without absorption, a toxic substance does not represent a human health hazard.
2) For a xenobiotic to move from outside the body to a site of toxic action requires that it:
a) Possess hydrophilic (water-soluble) properties
b) Possess hydrophobic (lipophilic) properties
c) Pass through several cell membranes
Answer
Pass through several cell membranes - This is the correct answer.
In order for a xenobiotic to move from outside the body to an internal site of toxic action (target cells), a xenobiotic must pass through several membrane barriers. The first membranes are those at the portal of entry, for example, lung or intestinal tract.
3) The basic structure of the cell membrane consists of:
a) A thick protein layer containing phospholipid channels
b) A bilayer of phospholipids with scattered proteins within the layers
c) Cholesterol outer layer with a phospholipid inner layer
Answer
A bilayer of phospholipids with scattered proteins within the layers - This is the correct answer.
The typical cell membrane consists of two layers of phospholipid molecules. The polar phosphate heads face outward and the lipid tails face inward, so the two layers are approximate mirror images of each other. Various proteins and cholesterol molecules are scattered throughout the lipid bilayer of the membrane.
4) The membrane transport process by which large hydrophobic molecules cross membranes via the lipid portion of the membrane, follow the concentration gradient, and do not require energy or carrier molecules is known as:
a) Simple diffusion
b) Active transport
c) Facilitated diffusion
Answer
Simple diffusion - This is the correct answer.
Large hydrophobic molecules must diffuse through the lipid portion of the membrane, with the rate of transport correlating with their lipid solubility. In general, highly ionized chemicals have low lipid solubility and do not readily pass through the lipid membrane.
5) Endocytosis is a form of specialized membrane transport in which the cell surrounds the substance with a section of its cell membrane. The specific endocytosis process by which liquids or very small particles are engulfed and transported across the membrane is known as:
a) Phagocytosis
b) Pinocytosis
c) Exocytosis
Answer
Pinocytosis - This is the correct answer.
Pinocytosis (cell drinking) involves the engulfing of liquids or very small particles that are in suspension within the extracellular fluid.
Gastrointestinal Tract
The gastrointestinal (GI) tract can be viewed as a tube going through the body (Figure \(1\)). Its contents are considered exterior to the body until absorbed. Salivary glands, liver, and the pancreas are considered accessory glands of the GI tract as they have ducts entering the GI tract and secrete enzymes and other substances. For foreign substances to enter the body, they must pass through the gastrointestinal mucosa, crossing several membranes before entering the bloodstream.
Substances must be absorbed from the gastrointestinal tract in order to exert a toxic effect throughout the whole body, although local gastrointestinal damage may occur from direct exposures to toxicants. Absorption can occur at any place along the entire gastrointestinal tract. However, the degree of absorption depends on the site.
Three main factors affect absorption within the various sites of the gastrointestinal tract:
1. Type of cells at the specific site.
2. Period of time that the substance remains at the site.
3. pH of stomach or intestinal contents at the site.
Figure \(1\). Anatomy of gastrointestinal tract
(Image Source: adapted from iStock Photos, ©)
Mouth and Esophagus
Under normal conditions, xenobiotics are poorly absorbed within the mouth and esophagus, due mainly to the very short time that a substance resides within these portions of the gastrointestinal tract. There are some notable exceptions. For example:
• Nicotine readily penetrates the mouth mucosa.
• Nitroglycerin is placed under the tongue (sublingual) for immediate absorption and treatment of heart conditions.
The sublingual mucosa under the tongue and in some areas of the mouth is thin and highly vascularized and allows some substances to be rapidly absorbed.
Stomach
The stomach, with its high acidity (pH 1-3), is a significant site for the absorption of weak organic acids, which exist in a diffusible, nonionized and lipid-soluble form. In contrast, weak bases will be highly ionized and therefore poorly absorbed. The acidic stomach may chemically break down some substances. For this reason, those substances must be administered in gelatin capsules or coated tablets, which can pass through the stomach into the intestine before they dissolve and release their contents.
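The effect of pH on absorption can be estimated with the Henderson-Hasselbalch relationship (a standard equation, not derived in this tutorial; the worked numbers below are illustrative). For a weak acid, the fraction present in the nonionized, lipid-soluble form is

\[ \mathrm{pH} = \mathrm{p}K_{a} + \log_{10}\frac{[\text{ionized}]}{[\text{nonionized}]} \quad\Longrightarrow\quad f_{\text{nonionized}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_{a}}} \]

For example, a weak organic acid with \(\mathrm{p}K_{a} = 3.5\) is about 97% nonionized at stomach pH 2 \(\bigl(1/(1 + 10^{-1.5}) \approx 0.97\bigr)\) and is therefore well absorbed there, but only about 0.3% nonionized at intestinal pH 6. A weak base shows the opposite pattern, which is why weak bases are poorly absorbed from the acidic stomach.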
Another determinant that affects the amount of a substance that will be absorbed in the stomach is the presence of food in the stomach. Food ingested at the same time as the xenobiotic may result in a considerable difference in absorption of the xenobiotic.
Intestine
The greatest absorption of chemicals, as with nutrients, takes place in the intestine, particularly in the small intestine. The intestine has a large surface area consisting of outward projections of the thin (one-cell thick) mucosa into the lumen of the intestine (the villi) (Figure \(2\)). This large surface area facilitates diffusion of substances across the cell membranes of the intestinal mucosa.
Since the pH is near neutral (pH 5-8), both weak bases and weak acids are nonionized and are usually readily absorbed by passive diffusion. Lipid soluble, small molecules effectively enter the body from the intestine by passive diffusion.
Figure \(2\). Anatomy of structures in small intestine used for absorption
(Image Source: adapted from iStock Photos, ©)
In addition to passive diffusion, facilitated and active transport mechanisms move certain substances across the intestinal cells into the body, including essential nutrients such as glucose, amino acids, and calcium. These mechanisms also transport strong acids, strong bases, large molecules, and metals, including some important toxicants.
• For example, lead, thallium, and paraquat (an herbicide) are toxicants that active transport systems move across the intestinal wall.
The slow movement of ingested substances through the intestinal tract can influence their absorption. This slow passage increases the length of time that a compound is available for absorption at the intestinal membrane barrier.
Intestinal microflora and gastrointestinal enzymes can affect the toxicity of ingested substances. Some ingested substances may be only poorly absorbed but they may be biotransformed within the gastrointestinal tract. In some cases, their biotransformed products may be absorbed and be more toxic than the ingested substance.
• An important example is the formation of carcinogenic nitrosamines from non-carcinogenic amines by intestinal flora.
Colon and Rectum
Very little absorption takes place in the colon and rectum. As a rule, if a xenobiotic has not been absorbed after passing through the stomach or small intestine, very little further absorption will occur. However, there are some exceptions, as some medicines may be administered as rectal suppositories with significant absorption.
• An example is Anusol (a hydrocortisone preparation) used for the treatment of local inflammation; it is partially absorbed (about 25%).
Knowledge Check
1) The most important factor that determines whether a substance will be absorbed within the stomach is the:
a) Physical form as a solid or liquid
b) Molecular size
c) pH
Answer
pH - This is the correct answer.
The most important factor that determines absorption within the stomach is pH. Weak organic acids, which exist in a diffusible, nonionized and lipid-soluble form are readily absorbed in the high acidity of the stomach (pH 1-3). In contrast, weak bases will be highly ionized and therefore poorly absorbed.
2) The primary routes for absorption of environmental agents are:
a) Gastrointestinal tract, respiratory tract, and skin
b) Conjunctival exposures and skin wounds
Answer
Gastrointestinal tract, respiratory tract, and skin - This is the correct answer.
Environmental agents may be found in contaminated food, water, or air. As such, they may be ingested, inhaled, or present on the skin.
3) The site of the gastrointestinal tract where most absorption takes place is:
a) Stomach
b) Small intestine
c) Colon and rectum
Answer
Small intestine - This is the correct answer.
By far, the greatest absorption takes place in the intestine. This is due to the near-neutral pH and the large, thin surface area, which is easily penetrated by passive diffusion. Weak bases, weak acids, lipid-soluble substances, and small molecules effectively enter the body from the intestine. In addition, special carrier-mediated and active transport systems exist.
Respiratory Tract
Many environmental and occupational agents as well as some pharmaceuticals enter the respiratory tract through inhalation. Absorption can occur at any place within the upper respiratory tract. However, the amount of a particular xenobiotic that can be absorbed at a specific location depends highly on its physical form and solubility.
There are three basic regions to the respiratory tract:
1. Nasopharyngeal region
2. Tracheobronchial region
3. Pulmonary region
Figure \(1\). Anatomy of the respiratory tract
(Image Source: adapted from iStock Photos, ©)
Mucociliary Escalator
The mucociliary escalator covers most of the bronchi, bronchioles, and nose. It contains mucus-producing goblet cells and ciliated epithelium. The movements of the cilia push the mucus, and anything trapped in it such as inhaled particles or microorganisms, up and out into the throat, where it is either swallowed or removed through the mouth.
Animation \(1\). The mucociliary escalator provides a barrier against infection
Pulmonary Region
By far, the most important site for absorption is the pulmonary region consisting of the very small airways called bronchioles and the alveolar sacs of the lung.
The alveolar region has a very large surface area, about 50 times that of the skin. In addition, the alveoli consist of only a single layer of cells with very thin membranes that separate the inhaled air from the blood stream. Oxygen, carbon dioxide, and other gases readily pass through this membrane. Gases and particles, which are water-soluble (and thus blood-soluble), are absorbed more efficiently from the lung alveoli compared to their absorption via the gastrointestinal tract or through the skin. Water-soluble gases and liquid aerosols can pass through the alveolar cell membrane by simple passive diffusion.
Figure \(2\). Detailed view of alveoli and bronchioles
(Image Source: adapted from iStock Photos, ©)
Impact of Physical Form on Absorption
In addition to solubility, the ability to be absorbed depends highly on the physical form of the agent (that is, whether the agent is a gas/vapor or a particle). The physical form determines the extent of its penetration into the deep lung.
Gases and Vapors
A gas or vapor can be inhaled deep into the lung and if it has high solubility in the blood, it is almost completely absorbed in one respiration (a single breath). Absorption through the alveolar membrane is by passive diffusion, following the concentration gradient. As the agent dissolves in the circulating blood, it leaves the lung and a large amount of gas or vapor can be absorbed and enter the body.
For blood-soluble gases, equilibrium between the concentration of the agent in the inhaled air and that in the blood is difficult to achieve, so absorption continues with each breath. The amount of such a gas absorbed can be increased by increasing the rate and depth of breathing, a situation known as ventilation-limited absorption.
In contrast, gases or vapors with poor solubility in the blood have a limited capacity for absorption because the blood quickly becomes saturated; equilibrium between the inhaled air and the blood is reached much faster than for soluble gases (nitrogen dioxide and carbon monoxide are examples). Once the blood is saturated, it cannot accept more gas, and the remainder stays in the inhaled air and is exhaled. For these relatively insoluble gases, the amount absorbed can be increased only by increasing the rate of blood supply to the lung, a situation known as flow-limited absorption.
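The "blood solubility" of a gas is commonly quantified (in standard toxicokinetic notation, not introduced elsewhere in this tutorial) by the blood:air partition coefficient, the equilibrium ratio of concentrations:

\[ \lambda_{\text{blood:air}} = \frac{C_{\text{blood}}}{C_{\text{alveolar air}}} \]

A gas with a high \(\lambda_{\text{blood:air}}\) is carried away by the blood almost as fast as it is delivered to the alveoli, so its uptake is ventilation-limited; a gas with a low \(\lambda_{\text{blood:air}}\) quickly saturates the blood, so its uptake is flow-limited.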
Airborne Particles
The absorption of airborne particles is usually quite different from that of gases or vapors. The absorption of solid particles, regardless of solubility, depends upon particle size:
• Large particles (>5 μm) are generally deposited in the nasopharyngeal region (head airways region) with little absorption.
• Particles 2-5 μm can penetrate into the tracheobronchial region.
• Very small particles (<1 μm) are able to penetrate deep into the alveolar sacs where they can deposit and be absorbed.
Differences in Absorption Among Regions of the Respiratory Tract
Nasopharyngeal Region
Minimal absorption takes place in the nasopharyngeal region due to the cell thickness of the mucosa and the rapid movement of gases and particles through the region.
Tracheobronchial Region
Relatively soluble gases can quickly enter the blood stream. Most deposited particles are moved back up to the mouth where they are swallowed.
Pulmonary Region
Absorption in the alveoli of the pulmonary region is quite efficient compared to other areas of the respiratory tract. Relatively soluble materials (gases or particles) are quickly absorbed into systemic circulation. Pulmonary macrophages exist on the surface of the alveoli. They are not fixed and not a part of the alveolar wall. They can engulf particles just as they engulf and kill microorganisms. These alveolar macrophages can scavenge and clear some insoluble particles into the lymphatic system.
Figure \(3\). Alveoli
(Image Source: iStock Photos, ©)
Some other particles may remain in the alveoli indefinitely. For example:
• Coal dust and asbestos fibers may lead to black lung or asbestosis, respectively.
• Carbon nanotubes (CNT), tiny tube-shaped structures smaller than a human hair, have been found in the lungs long after exposure. CNT are used in materials like polymers and anti-static packaging for their electric, magnetic, and mechanical properties. Studies of what happens to different forms of single-walled and multi-walled carbon nanotubes (CNT) found that pristine CNT could remain in the lung for months or even years after pulmonary deposition. However, some CNT can move to the gastrointestinal (GI) tract via the mucociliary escalator where, if swallowed, there appears to be no uptake of CNT from the GI tract (with a possible exception of the smallest functionalized single-walled CNT). In addition, under some experimental conditions in animals, some carbon nanotubes moved from the alveolar space to the nearby pulmonary region including lymph nodes, subpleura and pleura, and smaller amounts went to distal organs including the liver, spleen, and bone marrow.
Factors Affecting the Toxicity of Inhaled Materials
The nature of toxicity of inhaled materials depends on whether the material is absorbed or remains within the alveoli and small bronchioles. If the agent is absorbed and is lipid soluble, it can rapidly distribute throughout the body, passing through the cell membranes of various organs or into fat depots. Lipid-soluble substances take a longer time to reach equilibrium than water soluble substances. Chloroform and ether are examples of lipid-soluble substances with high blood solubility.
Non-absorbed foreign material can also cause severe toxic reactions within the respiratory system. These reactions may take the form of chronic bronchitis, alveolar breakdown (emphysema), fibrotic lung disease, and even lung cancer. In some cases, the toxic particles can kill the alveolar macrophages, which results in lowering the body's respiratory defense mechanism.
Pharmaceuticals Targeted to the Respiratory Tract
Inhaled drug delivery devices can be very effective and safe for getting active agents directly to their site of action. Inhalation is used to deliver locally acting drugs to treat respiratory conditions, including asthma, chronic obstructive pulmonary disease (COPD), and airway infections. Advantages of targeted delivery to the lungs include a more rapid onset of action and an increased therapeutic effect. Depending on the drug inhaled, there can be reduced systemic side effects since a lower dose can deliver the required local concentration.
Toxicogenomics and Toxicity Testing
Toxicogenomics applies genomics concepts and technologies to study gene and protein activities within a type of tissue or type of cell in response to a chemical exposure. Toxicogenomics helps in the understanding of what genes and proteins interact with a chemical, and what diseases are associated with various genes, proteins, and chemicals. This example used toxicogenomics to evaluate the response of the respiratory tract to one type of inhaled material:
For example, titanium dioxide nanoparticles (TiO2NPs) induce lung inflammation in experimental animals and one study included a comprehensive toxicogenomic analysis of lung responses in mice exposed to six individual TiO2NPs exhibiting different sizes, crystalline structure, and surface modifications. The goal was to investigate whether the mechanisms leading to TiO2NP-induced lung inflammation are property specific. The results suggest that the severity of lung inflammation is property specific; however, the underlying mechanisms (genes and pathways perturbed) leading to inflammation were the same for all particle types. While the particle size clearly influenced the overall acute lung responses, a combination of small size, crystalline structure, and hydrophilic surface contributed to the long-term pathological effects observed at the highest dose.
Knowledge Check
1) An inhaled material will most likely be absorbed into the body if it has the following characteristics:
a) High lipid solubility and poorly ionized
b) Large particle size and low water solubility
c) High water solubility and small particle size
Answer
High water solubility and small particle size - This is the correct answer.
In contrast to absorption via the gastrointestinal tract or through the skin, gases and particles, which are water-soluble (and thus blood soluble), will be absorbed more efficiently from the lung alveoli. Very small particles (<1 μm) are able to penetrate deep into the alveolar sacs where they can deposit and be absorbed.
2) Particles of size 2-5 μm are most likely to settle out in which location of the respiratory tract?
a) Nasopharyngeal region
b) Tracheobronchial region
c) Pulmonary region
Answer
Tracheobronchial region - This is the correct answer.
Particles 2-5 μm can penetrate into the tracheobronchial region. Very small particles (<1 μm) are able to penetrate deep into the alveolar sacs where they can deposit and be absorbed.
Dermal Route
In contrast to the thin membranes of the respiratory alveoli and the gastrointestinal villi, the skin is a complex, multilayer tissue. It is relatively impermeable to most ions and aqueous solutions, and serves as a barrier to most xenobiotics.
Did you know?
Dimethyl sulfoxide (DMSO) has been used in research, human and veterinary medicine, and as a solvent. After applying to the skin, some people can quickly detect a garlic taste as the DMSO is absorbed and enters the body. DMSO also increases the rate of absorption of some other compounds through the skin.
For transdermal drug delivery (TDD), the big challenge is the barrier property of skin, especially the stratum corneum (SC). Different methods have been developed to enhance the penetration of drugs through the skin, with the most popular approach being the use of penetration enhancers (PEs), including natural terpenes. Terpenes, a large and diverse class of organic compounds produced by a variety of plants, are a very safe and effective class of PEs. Limonene is one example of a terpene used as a penetration enhancer. The main mechanism for the penetration enhancing action of terpenes is the interaction with SC intercellular lipids. The key factor affecting the enhancement is the lipophilicity of the terpenes and the drug molecules.
Entry of Toxicants via Skin
Some notable toxicants can gain entry into the body following skin contamination. For example:
• Certain commonly used organophosphate pesticides have poisoned agricultural workers following dermal exposure.
• The neurological warfare agent, sarin, readily passes through the skin and can produce quick death to exposed persons.
• Several industrial solvents can cause systemic toxicity by penetrating the skin. For example:
• Carbon tetrachloride enters the skin and causes liver injury.
• Hexane can pass through the skin and cause nerve damage.
The skin consists of three main layers of cells as illustrated in Figure \(1\):
1. Epidermis
2. Dermis
3. Subcutaneous tissue
Figure \(1\). Layers of the skin
(Image Source: Adapted from iStock Photos, ©)
Epidermis and Stratum Corneum
The epidermis (and particularly the stratum corneum) is the only layer that is important in regulating the penetration of a skin contaminant. It consists of an outer layer of cells, packed with keratin, known as the stratum corneum layer. The stratum corneum is devoid of blood vessels. The cell walls of the keratinized cells are apparently double in thickness due to the presence of the keratin, which is chemically resistant and an impenetrable material. The blood vessels are usually about 100 μm from the skin surface. To enter a blood vessel, an agent must pass through several layers of cells that are generally resistant to penetration by chemicals.
Factors Influencing Penetration of the Stratum Corneum
Thickness
The thickness of the stratum corneum varies greatly with regions of the body. The stratum corneum of the palms and soles is very thick (400-600 μm) whereas that of the arms, back, legs, and abdomen is much thinner (8-15 μm). The stratum corneum of the axillary (underarm) and inguinal (groin) regions is the thinnest, with the scrotum especially thin. As expected, the ability of toxicants to penetrate the stratum corneum relates inversely to the thickness of the epidermis.
Damage
Any process that removes or damages the stratum corneum can enhance penetration of a xenobiotic. Abrasion, scratching, or cuts to the skin will make it more penetrable. Some acids, alkalis, and corrosives can injure the stratum corneum and make it easier for agents to penetrate this layer. The most prevalent skin conditions that enhance dermal absorption are skin burns and dermatitis.
Passive Diffusion
Toxicants move across the stratum corneum by passive diffusion. There are no known active transport mechanisms functioning within the epidermis. Polar and nonpolar toxicants diffuse through the stratum corneum by different mechanisms:
• Polar compounds, which are water soluble, appear to diffuse through the outer surface of the hydrated keratinized layer.
• Nonpolar compounds, which are lipid soluble, dissolve in and diffuse through the lipid material between the keratin filaments.
Water
Water plays an important role in dermal absorption. Normally, the stratum corneum is partially hydrated (approximately 7% by weight). Penetration of polar substances is about 10 times more effective than when the skin is completely dry. Additional hydration on the skin's surface increases penetration by 3–5 times, which further increases the ability of a polar compound to penetrate the epidermis.
Species
Skin penetration can vary by species which can influence the selection of species used for safety testing. Penetration of chemicals through the skin of the monkey, pig, and guinea pig is often similar to that of humans. The skin of the rat and rabbit is generally more permeable whereas the skin of the cat is generally less permeable. For practical reasons and to assure adequate safety, the rat and rabbit have been used for dermal toxicity safety tests.
Other Sites of Dermal Absorption
In addition to the stratum corneum, small amounts of chemicals may be absorbed through the sweat glands, sebaceous glands, and hair follicles. However, since these structures represent only a very small percentage of the skin's total surface area, they are not ordinarily viewed as important contributors to dermal absorption.
Dermis and Subcutaneous Tissue
Once a substance penetrates through the stratum corneum, it enters lower layers of the epidermis, the dermis, and subcutaneous tissue. These layers are far less resistant to further diffusion. They contain a porous, nonselective aqueous diffusion medium which can be penetrated by simple diffusion. Most toxicants that have passed through the stratum corneum can now readily move through the remainder of the skin and enter the circulatory system via the large numbers of venous and lymphatic capillaries in the dermis.
Semivolatile Organic Compounds (SVOCs)
Exposure to semivolatile organic compounds (SVOCs) via the dermal route can occur. The amount of SVOCs absorbed via air-to-skin uptake has been estimated to be comparable to or larger than the amount taken in via inhalation for many SVOCs encountered indoors, including:
• Butylated hydroxytoluene (BHT)
• Chlordane
• Chlorpyrifos
• Diethyl phthalate
• Nicotine (in free-base form)
• Other chemicals
The influence of particles and dust on dermal exposure, the role of clothing and bedding as transport vectors, and the potential significance of hair follicles as transport shunts through the epidermis are all areas of research interest.
Human exposure to indoor SVOCs through the dermal pathway has often been underestimated and not considered in exposure assessments. However, exposure scientists, risk assessors, and public health officials are increasingly aware of and interested in the health impacts of dermal exposure. Further, experts seek to understand how health consequences can vary by the exposure pathway. For example, an SVOC that enters the blood through the skin does not encounter the same detoxification pathways that it would encounter when ingested and processed by the stomach, intestines, and liver before entering the blood; its direct entry into the blood can make it potentially more toxic.
Figure \(2\). Examples of SVOCs from consumer goods
(Image Source: Adapted from iStock Photos, ©)
Knowledge Check
1) The main barrier to dermal absorption is the:
a) Stratum corneum
b) Dermis
c) Subcutaneous tissue
Answer
Stratum corneum - This is the correct answer.
The epidermis (and particularly the stratum corneum) is the only layer that is important in regulating penetration of a skin contaminant.
2) The two primary factors that can increase dermal penetration are:
a) Neutralizing pH and aerosolizing
b) Increasing hydration and disruption of the stratum corneum
c) Dehydrating a substance and increasing particle size
Answer
Increasing hydration and disruption of the stratum corneum - This is the correct answer.
Water plays an important role in dermal absorption. Normally, the stratum corneum is partially hydrated (~7% by weight). Penetration of polar substances is about 10 times as effective as when the skin is completely dry. Additional hydration can increase penetration by 3-5 times which further increases the ability of a polar compound to penetrate the epidermis. Any process that removes or damages the stratum corneum can enhance penetration of a xenobiotic.
Other Routes of Exposure
In addition to the common routes of environmental, occupational, and medical exposure (oral, respiratory, and dermal), other routes of exposure may be used for medical purposes. Many pharmaceuticals are given by parenteral routes via injection into the body using a syringe and hollow needle.
Other Exposure Routes
Intradermal injections are made directly into the skin, just under the stratum corneum. Tissue reactions are minimal and absorption is usually slow. A subcutaneous injection is made beneath the skin. Since the subcutaneous tissue is quite vascular (well supplied with blood vessels), absorption into the systemic circulation is generally rapid. Tissue sensitivity is also high and thus irritating substances may induce pain and an inflammatory reaction.
The intramuscular route is used to inject many pharmaceuticals, especially antibiotics and vaccines, directly into muscle tissue. It is an easy procedure and the muscle tissue is less likely to become inflamed compared to subcutaneous tissue. Absorption from muscle is about the same as from subcutaneous tissue.
The intravenous (vein) or intra-arterial (artery) routes are used to inject substances directly into large blood vessels when they are irritating or when an immediate action is desired, such as anesthesia.
Parenteral injections may also be made directly into body cavities, rarely in humans but frequently in laboratory animal studies. An intraperitoneal injection goes directly into the abdominal cavity and an intrapleural injection directly into the chest cavity. Since the pleura and peritoneum have minimal blood vessels, irritation is usually minimal and absorption is relatively slow.
Figure \(1\). Injection using a syringe
(Image Source: iStock Photos, ©)
Implantation is another route of exposure of increasing concern. A large number of pharmaceuticals and medical devices are now implanted in various areas of the body. Implants may be used to allow slow, time-release of a substance such as hormones. Implanted medical devices and materials like artificial lenses, tendons, and joints, and cosmetic reconstruction do not involve absorption.
Some materials enter the body via skin penetration as the result of accidents or violence (weapons, etc.). The absorption in these cases depends highly on the nature of the substance. Metallic objects (such as bullets) may be poorly absorbed whereas more soluble materials that thrust through the skin and into the body from accidents may be absorbed rapidly into the circulation.
Novel methods of introducing substances into specific areas of the body are often used in medicine. For example, conjunctival instillations (eye drops) treat ocular conditions where high concentrations are needed on the outer surface of the eye, not possible by other routes.
Therapies for certain conditions require that a substance is deposited in body openings where high concentrations and slow release may be needed while keeping systemic absorption to a minimum. For these substances, the pharmaceutical agent is suspended in a poorly absorbed material such as beeswax with the material known as a suppository. The usual locations for use of suppositories are the rectum and vagina.
Did you know?
Cinnamic aldehyde, also called cinnamaldehyde, gives cinnamon its flavor and odor. It occurs naturally in the bark of cinnamon trees and other species. Cinnamic aldehyde and cinnamic alcohol are well known in the scientific literature as being associated with skin allergy in humans. Skin allergy is also called skin sensitization or allergic contact dermatitis. Cinnamic aldehyde is a more potent sensitizer than cinnamic alcohol. The skin absorption and metabolism of cinnamic aldehyde and cinnamic alcohol play an important role in the development of skin sensitization following skin exposures. Cinnamic alcohol applied to human skin is converted to cinnamic aldehyde and cinnamic acid.
Cinnamic aldehyde is a good example of how an assessment of the risk of skin sensitization can be conducted prior to the introduction of new ingredients and products into the marketplace. A published quantitative risk assessment for cinnamic aldehyde used the understanding of its chemical, cellular, and molecular properties. By estimating the exposure to cinnamic aldehyde and knowing its allergenic potency, it was possible to assess the sensitization risk of cinnamic aldehyde in different types of consumer products. This publication applied exposure-based risk assessment tools to two hypothetical products containing cinnamic aldehyde. The risk assessment predicted that an eau de toilette leave-on product containing 1000 ppm or more of cinnamic aldehyde would pose an unacceptable risk of inducing skin sensitization. However, a shampoo containing the same level of cinnamic aldehyde would pose an acceptable risk of inducing skin sensitization, based on the limited exposure to the ingredient from a rinse-off product application.
Figure \(2\). Cinnamon sticks and powder
(Image Source: iStock Photos, ©)
Knowledge Check
1) If an immediate therapeutic effect is needed, the route of exposure that would most likely be used is the:
a) Intradermal route
b) Intramuscular injection
c) Intravenous injection
Answer
Intravenous injection - This is the correct answer.
Substances injected into the circulatory system go directly to the target tissue where immediate reactions can occur.
2) A pharmaceutical may be implanted in the body to:
a) Allow slow-release over a long period of time
b) Assure that the substance is distributed equally throughout the body
c) Reduce irritation from the substance
Answer
Allow slow-release over a long period of time - This is the correct answer.
Treatment with pharmaceuticals in time-release implants is a relatively new therapeutic technique that has gained popularity for long-term chronic chemotherapy.
Learning Objectives
After completing this lesson, you will be able to:
• Explain distribution and its role in toxicokinetics.
• Describe the impact of exposure route on distribution.
• Describe three models of disposition.
• Identify structural barriers to distribution.
Section 11: Key Points
What We've Covered
This section made the following main points:
• Distribution is the process in which an absorbed chemical moves away from the site of absorption to other areas of the body.
• An absorbed chemical passes through cell linings of the absorbing organ (skin, lung, or gastrointestinal tract) into the interstitial fluid of that organ.
• The toxicant can leave the interstitial fluid by entering local tissue cells, blood capillaries and the blood circulatory system, or the lymphatic system.
• If the toxicant gains entrance into the blood plasma, it:
• Travels bound or unbound along with the blood.
• May be excreted, stored, or biotransformed, or may interact or bind with cellular components.
• The volume of distribution (VD) is the total volume (in liters) of body fluids in which a toxicant is distributed (a worked formula appears after this list).
• The route of exposure is an important factor affecting the concentration of the toxicant or its metabolites at any specific location within the blood or lymph.
• Toxicants entering from the GI tract or peritoneum are immediately subject to biotransformation or excretion by the liver and elimination by the lung (this is often called the "first-pass effect").
• Toxicants absorbed through the lung or skin enter the blood and go directly to the heart and systemic circulation, thus being distributed to various organs before going to the liver (not subject to the first-pass effect).
• Toxicants that enter the lymph will not go to the liver first, but will slowly enter systemic circulation.
• The blood level of a toxicant depends on the site of absorption and the rate of biotransformation and excretion.
• Disposition is the combined processes of distribution, biotransformation, and elimination. Disposition models can be:
• One-Compartment Open Model — disposition of a substance introduced and distributed instantaneously and evenly in the body and eliminated proportionally to the amount left in the body ("first-order" rate).
• Two-Compartment Open Model — the chemical enters and distributes in the first compartment (usually blood), then is distributed to a second compartment, where it can be eliminated or may return to the first compartment.
• The biological half-life, the most commonly used measure of the kinetic behavior of a xenobiotic, is the half-life for a chemical in a two-compartment model.
• Multiple Compartment Model — the chemical distributes to several peripheral body compartments, including long-term storage sites, and may undergo biotransformation and elimination at varying rates as blood levels change.
• Organs or tissues differ in the amount of a chemical they may receive, depending on:
• Volume of blood — organs that receive larger blood volumes potentially accumulate more of a given toxicant.
• Tissue affinity — some tissues have a higher affinity for specific chemicals, accumulating a toxicant in great concentrations despite a rather low flow of blood.
• Structural barriers to distribution include the blood-brain barrier and the placental barrier.
• Toxicants can also be stored:
• When bound to plasma proteins in the blood
• In adipose tissues
• In bone
• In the liver
• In the kidneys
Section 11: Distribution
Introduction to Distribution
So far, we have described the absorption of substances into the body. Now we will focus on what happens next to substances in the body after they are absorbed.
Figure \(1\). The processes of toxicokinetics
(Image Source: Adapted from iStock Photos, ©)
Distribution Defined
Distribution is the process in which an absorbed chemical moves away from the site of absorption to other areas of the body. In this section, we will answer the following questions:
• How do chemicals move through the body?
• Does distribution vary with the route of exposure?
• Is a chemical distributed evenly to all organs or tissues?
• How fast is a chemical distributed?
• Why do some chemicals stay in the body for a long time while others are eliminated quickly?
Body Fluids
When a chemical is absorbed, it passes through cell linings of the absorbing organ (skin, lung, or gastrointestinal tract) into the interstitial fluid (fluid surrounding cells) of that organ.
The other body fluids are intracellular fluid (fluid inside cells) and blood plasma. However, the body fluids are not isolated but represent one large pool. The interstitial and intracellular fluids, in contrast to fast-moving blood, remain in place, with certain components (for example, water and electrolytes) moving slowly into and out of cells. A chemical, while immersed in the interstitial fluid, is not mechanically transported the way it is in blood. Table 1 lists the approximate percentage of body weight that each of these body fluids comprises.
A toxicant can leave the interstitial fluid by entering:
• Local tissue cells.
• Blood capillaries and the blood circulatory system.
• The lymphatic system.
Blood Plasma
If the toxicant gains entrance into the blood plasma, it travels along with the blood, either in a bound or unbound form. Blood moves rapidly through the body via the cardiovascular circulatory system. In contrast, lymph (fluid) moves slowly through the lymphatic system. The major distribution of an absorbed chemical is by blood with only minor distribution by lymph. Since virtually all tissues have a blood supply, all organs and tissues of the body are potentially exposed to the absorbed chemical.
Distribution of a chemical to body cells and tissues requires that the toxicant penetrate a series of cell membranes. It must first penetrate the cells of the capillaries (small blood vessels) and later the cells of the target organs. The factors pertaining to passage across membranes apply to these other cell membranes as well. For example, important factors include the concentration gradient; molecular weight; lipid solubility; and polarity, with the smaller, nonpolar toxicants in high concentrations being most likely to gain entrance.
The distribution of a xenobiotic can be affected by whether it binds to plasma protein. Some toxicants may bind to these plasma proteins (especially albumin), which removes the toxicant from potential cell interaction. Within the circulating blood, the non-bound (free) portion is in equilibrium with the bound portion. However, only the free substance is available to pass through the capillary membranes. Those substances that are extensively bound are limited in terms of equilibrium and distribution throughout the body. Protein binding in the plasma greatly affects distribution, prolongs the half-life within the body, and affects the dose threshold for toxicity.
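One simple way to express this idea (the notation here is a common convention, not taken from the source) is as the unbound fraction in plasma:

\(f_{u} = \dfrac{C_{\text{unbound}}}{C_{\text{unbound}} + C_{\text{bound}}}\)

Only the unbound concentration, \(C_{\text{unbound}} = f_{u} \times C_{\text{total}}\), is free to diffuse across capillary membranes and distribute into tissues.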
The plasma level of a xenobiotic is important since it generally reflects the concentration of the toxicant at the site of action. The passive diffusion of the toxicant into or out of these body fluids will be determined mainly by the toxicant's concentration gradient.
Volume of Distribution (VD)
The apparent volume of distribution (VD) is the total volume of body fluids in which a toxicant is distributed. The VD is expressed in liters.
If a toxicant is distributed only in the plasma fluid, the plasma concentration will remain high and a low VD results; however, if a toxicant is distributed in all sites (blood plasma, interstitial, and intracellular fluids), there is greater dilution in plasma concentration and a higher VD will result. Binding to plasma proteins in effect reduces the concentration of free toxicant available for distribution and thus alters the apparent VD. Toxicants that undergo rapid storage, biotransformation, or elimination further affect the VD. Toxicologists determine the VD of a toxicant in order to know how extensively a toxicant is distributed in the body fluids. The volume of distribution can be calculated by the formula:
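In its standard form (assuming the full absorbed dose reaches the circulation), the relationship is:

\(V_{D}\;(\text{liters}) = \dfrac{\text{total amount of toxicant in the body, e.g., the dose (mg)}}{\text{concentration of toxicant in plasma (mg/L)}}\)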
The volume of distribution may provide useful estimates of how extensively the toxicant is distributed in the body. For example, a very high apparent VD may indicate that the toxicant has distributed to a particular tissue or storage area such as adipose tissue. In addition, the body burden for a toxicant can be estimated from knowledge of the VD by using the formula:
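Rearranging the same relationship gives an estimate of the total amount of toxicant in the body:

\(\text{body burden (mg)} = V_{D}\;(\text{L}) \times \text{plasma concentration (mg/L)}\)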
Once a chemical is in the blood stream:
• It may be excreted.
• It may be stored.
• It may be biotransformed into different chemicals (metabolites).
• Its metabolites may be excreted or stored.
• The chemical or its metabolites may interact or bind with cellular components.
Most chemicals undergo some biotransformation. The degree with which various chemicals are biotransformed and the degree with which the parent chemical and its metabolites are stored or excreted vary with the nature of the exposure (dose level, frequency, and route of exposure).
Knowledge Check
1) When an ingested toxicant is absorbed, it passes through the cells lining the GI tract into the:
a) Intracellular fluid
b) Gastric fluid
c) Interstitial fluid
Answer
Interstitial fluid - This is the correct answer.
When a chemical is absorbed it passes through cell linings of the absorbing organ (in this case, the gastrointestinal tract) into the interstitial fluid (fluid surrounding cells) within that organ.
2) The apparent volume of distribution (VD) represents the:
a) Total volume of body fluids in which a toxicant is distributed
b) Amount of blood plasma in which a toxicant is dissolved
c) Amount of interstitial fluid that contains a toxicant
Answer
Total volume of body fluids in which a toxicant is distributed - This is the correct answer.
The apparent volume of distribution (VD) represents the total volume of body fluids in which a toxicant is distributed. It consists of the interstitial fluid, intracellular fluid, and the blood plasma. Soon after absorption, a toxicant may be distributed to all three types of fluids, although the concentrations may be quite different. Rarely will a toxicant be distributed to only one type of fluid.
Influence of Exposure Route
The route of exposure is an important factor that can affect the concentration of the toxicant (or its metabolites) at any specific location within the blood or lymph. This can be important since the time and path taken by the chemical as it moves through the body influences the degree of biotransformation, storage, and elimination (and thus toxicity).
For example, if a chemical goes to the liver before going to other parts of the body, much of it may be biotransformed quickly. In this case, the blood levels of the toxicant "downstream" may be diminished or eliminated. This way of processing the chemical right away can dramatically affect its potential toxicity.
Gastrointestinal Tract and Peritoneum
When toxicants are absorbed through the gastrointestinal (GI) tract, a similar biotransformation process occurs. Blood carries absorbed toxicants entering the vascular system of the GI tract directly to the liver via the portal system. This is also true for those drugs administered by intraperitoneal injection. Blood from most of the peritoneum also enters the portal system and goes immediately to the liver. Blood from the liver then flows to the heart and then on to the lung, before going to other organs.
Thus, toxicants entering from the GI tract or peritoneum are immediately subject to biotransformation or excretion by the liver and elimination by the lung. This is often referred to as the "first-pass effect."
For example, first-pass biotransformation of the drug propranolol (cardiac depressant) is about 70% when given orally. This means the blood level of this medication is only about 30% of a comparable dose administered intravenously.
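Expressed as a simple calculation (treating the 70% figure as the fraction removed on the first pass through the liver), the fraction of an oral dose reaching the systemic circulation is approximately:

\(F_{\text{oral}} \approx 1 - E_{\text{first pass}} = 1 - 0.70 = 0.30\)

That is, about 30% of the oral dose, consistent with the comparison to an intravenous dose above.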
Figure \(1\). Movement of a toxicant through the portal system
(Image Source: Adapted from Kimball's Biology Pages. Original author: John W. Kimball, obtained under Creative Commons Attribution 3.0 Unported license, ©. View original image.)
Lung and Skin
Drugs and other substances that are absorbed through the lungs or skin enter the bloodstream to be carried throughout the body. Thus, they avoid the liver (hepatic) first-pass effect that would have occurred if they had been absorbed from the gastrointestinal tract. These substances can have local effects in the lungs or skin in addition to having systemic effects, and some cells in the lungs and skin may metabolize the drug or other substance. Examples of a "local first-pass effect" in the skin due to metabolism include nitroglycerin and cortisol applied to the skin. Drugs administered intravenously or intramuscularly also enter the bloodstream to be carried throughout the body and avoid the liver (hepatic) first-pass effect.
Did you know?
Some advantages of transdermal drug delivery (skin patches):
• They are a better way to deliver substances that are broken down by the stomach acids, not well absorbed from the gut, or extensively broken down by the liver.
• They are a substitute for the oral route.
• They permit constant dosing rather than the peaks and valleys in medication level associated with orally administered medication.
• They can minimize undesirable side effects.
• They can be used to prescribe drugs that have short biological half-lives or a narrow therapeutic window.
• They can be removed, thereby terminating therapy easily.
• They are noninvasive, avoiding the inconvenience of IV therapy or injections.
• They can be used with patients who are nauseated or unconscious.
• They are cost-effective.
Figure \(2\). Nicotine patch
(Image Source: iStock Photos, ©)
Lymph
The delivery of drugs and bioactive compounds via the lymphatic system avoids first-pass metabolism by the liver and increases oral bioavailability. It is also a way to deliver drugs for diseases that spread through the lymphatic system such as certain types of cancer and the human immunodeficiency virus (HIV). For example, liposomes composed of phosphatidylethanol can enhance the oral bioavailability of poorly absorbed hydrophilic drugs such as cefotaxime.
Blood
The blood levels of a drug or other substance depend on the site of absorption; for example, absorption is slower after subcutaneous injection than after intramuscular injection. These blood levels also depend on the individual's rate of local and systemic biotransformation and the rate of excretion. Uptake and release can occur in areas of the body away from the first site of absorption. Some anesthetics can be taken up by the lungs and later released, impacting blood levels; lidocaine, given intravenously, is one example of this later release. Further, as noted elsewhere in ToxTutor, the metabolism of a substance can vary widely from person to person due to factors such as genetic differences, age, diet, and diseases that affect metabolism.
Some advantages of intramuscular injections:
• They are absorbed faster than subcutaneous injection, partly because muscle tissue has a larger blood supply than tissue just under the skin.
• They can hold a greater injected volume of drug (or vaccine) than a subcutaneous tissue injection can.
• They can be used instead of intravenous injection if a drug is irritating to veins or if a suitable vein cannot be located.
• They may be used instead of oral delivery if a drug is known to be degraded by stomach acids.
Knowledge Check
1) The main difference in distribution of a toxicant absorbed from the gastrointestinal tract from toxicants absorbed through the skin or from inhalation is:
a) The toxicant is distributed to more organs
b) A greater amount of the toxicant that is absorbed will be distributed to distant parts of the body
c) The toxicant enters the systemic circulatory system after first passing through the liver
Answer
The toxicant enters the systemic circulatory system after first passing through the liver - This is the correct answer.
Toxicants that enter the vascular system of the gastrointestinal tract are carried directly to the liver by the portal system. Thus, toxicants are immediately subject to biotransformation or excretion by the liver. This is often referred to as the "first pass effect."
Disposition Models
Disposition is the term often used to describe the combined processes of distribution, biotransformation, and elimination. The most commonly used disposition models are compartmental models, which are categorized as one-compartment, two-compartment, and multicompartment models. Compartmental models can be used to predict the time course of drug concentrations in the body. These compartments can represent groups of similar tissues or fluids.
For example:
• Blood is a compartment.
• Fat (adipose) tissue, bone, liver, kidneys, and brain are other major compartments.
Kinetic models may be a 1) one-compartment open model; 2) two-compartment open model; or 3) multiple compartment model.
One-Compartment Model
A one-compartment open model may be used for drugs, such as aminoglycosides, that rapidly distribute (equilibrate) to tissues and fluids within the body. In other words, the entire body acts like one uniform compartment. The model is described as "open" because the drug can enter and leave the body (for example, by excretion). The figure below shows the disposition of a drug or other substance that distributes instantaneously and evenly in the body and is eliminated at a rate proportional to the amount remaining in the body. This is known as a "first-order" rate and is represented as the logarithm of the blood concentration declining as a linear function of time (Figure \(1\)).
Figure \(1\). One-compartment open model
(Image Source: NLM)
The half-life of the chemical that follows a one-compartment model is simply the time required for half of the chemical to be lost from the plasma. Only a few chemicals actually follow the simple, first-order, one compartment model.
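As a minimal numerical sketch of first-order elimination (the parameter values below are hypothetical, chosen only for illustration), the plasma concentration declines exponentially and the half-life follows directly from the elimination rate constant:

```python
import numpy as np

# Hypothetical one-compartment parameters (illustrative only)
C0 = 10.0      # plasma concentration just after a bolus dose, mg/L
k_el = 0.173   # first-order elimination rate constant, 1/hour

t = np.arange(0, 25, 1.0)       # hours after the dose
C = C0 * np.exp(-k_el * t)      # first-order decline: dC/dt = -k_el * C

half_life = np.log(2) / k_el    # time for the plasma level to fall by half
print(f"Half-life is about {half_life:.1f} hours")
print("Concentrations at 0, 4, and 8 hours:", np.round(C[[0, 4, 8]], 2))
# Plotting log(C) against t gives a straight line, the hallmark of the
# one-compartment, first-order model described above.
```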
Two-Compartment Model
For most chemicals, it is necessary to describe the kinetics in terms of at least a two-compartment model, which is used for drugs that distribute more slowly within the body. As with the one-compartment model, "open" indicates that the drug can enter and leave the body.
For example, a one-time (bolus) intravenous administration over a short time period could lead to a drug distributing rapidly in the blood and also to highly perfused (by blood) organs like the liver and kidneys. This would be one compartment of the two-compartment model. There would be a slower distribution to other parts of the body as the second compartment.
• Two examples are vancomycin and digoxin. As shown in Figure \(2\), the drug or other substance enters and distributes in the first compartment. It is then distributed to another compartment. The concentration in the first compartment declines with time while the concentration in the second compartment rises, peaks, and declines as the chemical is eliminated from the body.
Figure \(2\). Two-compartment open model
(Image Source: NLM)
A half-life for a chemical whose kinetic behavior fits a two-compartment model is often referred to as the "biological half-life." This is the most commonly used measure of the kinetic behavior of a xenobiotic.
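In equation form (a common parameterization; A, B, \(\alpha\), and \(\beta\) are constants fitted to blood-level data, not values given here), the blood concentration in a two-compartment model declines as the sum of two exponentials:

\(C(t) = A\,e^{-\alpha t} + B\,e^{-\beta t}, \quad \alpha > \beta\)

The slower (\(\beta\)) phase determines the biological half-life, \(t_{1/2} = \ln 2 / \beta\).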
Multiple Compartment Model
Frequently the one- and two-compartment models cannot adequately describe the kinetics of a chemical within the body since there may be several peripheral body compartments to which the chemical may go, including long-term storage. In addition, biotransformation and elimination of a chemical may not be simple processes but subject to different rates as blood levels change.
Knowledge Check
1) Disposition models describe:
a) How a toxicant moves within the body over time
b) How the body eliminates the toxicant
c) The pathway for biotransformation of the toxicant within the liver
Answer
How a toxicant moves within the body over time - This is the correct answer.
Disposition models, also known as kinetic models, describe how a toxicant moves within the body compartments with time.
Structural Barriers to Distribution
Organs or tissues differ in the amount of a chemical that they receive or to which they are exposed. This is primarily due to two factors: 1) the volume of blood flowing through a specific tissue, and 2) the presence of special barriers that slow a toxicant's entrance.
Volume of Blood and Tissue Affinity
Organs that receive larger blood volumes can potentially accumulate more of a given toxicant. Body regions that receive a large percentage of the total cardiac output include the liver (28%), kidneys (23%), heart muscle, and brain. Bone and adipose tissues have relatively low blood flow, even though they serve as primary storage sites for many toxicants. This is especially true for toxicants that are fat-soluble and those that readily associate (or form complexes) with minerals commonly found in bone.
Tissue affinity determines the degree of concentration of a toxicant. In fact, some tissues have a higher affinity for specific chemicals and accumulate a toxicant in great concentrations despite a rather low flow of blood.
For example, adipose tissue, which has a meager blood supply, concentrates lipid-soluble toxicants. Once deposited in these storage tissues, toxicants may remain for long periods, due to their solubility in the tissue and the relatively low blood flow.
Structural Barriers
During distribution, the passage of toxicants from capillaries into various tissues or organs is not uniform. Structural barriers exist that restrict the entrance of toxicants into certain organs or tissues. The primary barriers are those of the brain, placenta, and testes.
Blood-Brain Barrier
The blood-brain barrier protects the brain from most toxicants. Specialized cells called astrocytes possess many small branches, which form a barrier between the capillary endothelium and the neurons of the brain. Lipids in the astrocyte cell walls and very tight junctions between adjacent endothelial cells limit the passage of water-soluble molecules. The blood-brain barrier is not completely impenetrable and its penetrability can vary with health status/disease state, but it does slow down the rate at which toxicants cross into brain tissue while allowing essential nutrients, including oxygen, to pass through.
Placental Barrier
The placental barrier protects the sensitive, developing fetus from most toxicants distributed in the maternal circulation. This barrier consists of several cell layers between the maternal and fetal circulatory vessels in the placenta. Lipids in the cell membranes limit the diffusion of water-soluble toxicants. However, nutrients, gases, and wastes of the developing fetus can pass through the placental barrier. As in the case of the blood-brain barrier, the placental barrier is not completely impenetrable but effectively slows down the diffusion of most toxicants from the mother into the fetus.
Knowledge Check
1) Organs may differ greatly in the concentration of a toxicant in them, due primarily to the:
a) Rate of elimination of the toxicant by the kidneys
b) Distance of the organ from the heart since the toxicant disintegrates quickly in the blood plasma
c) Volume of blood flow and the presence of special barriers
Answer
Volume of blood flow and the presence of special barriers - This is the correct answer.
Organs or tissues differ in the amount of a chemical that they receive or to which they are exposed. This is primarily due to two factors, the volume of blood flowing through a specific tissue and the presence of special "barriers" to slow down toxicant entrance. Organs that receive larger blood volumes can potentially accumulate more of a given toxicant.
2) The placental barrier protects the fetus from toxicants in the maternal blood because:
a) Substances in the maternal blood must move through several layers of cells in order to gain entrance to placental blood
b) The placenta does not contain circulating fetal blood that can absorb toxicants from the maternal blood
c) Toxicants in maternal blood are usually lipid soluble and must be water-soluble in order to penetrate through the placental cell layers
Answer
Substances in the maternal blood must move through several layers of cells in order to gain entrance to placental blood - This is the correct answer.
The placental barrier protects the developing and sensitive fetus from most toxicants distributed in the maternal circulation. This barrier consists of several cell layers between the maternal and fetal circulatory vessels in the placenta. Lipids in the cell membranes limit the diffusion of water-soluble toxicants.
11.5: Storage Sites
Storage Sites
Storage of toxicants in body tissues sometimes occurs. Initially, when a toxicant enters the blood plasma, it may be bound to plasma proteins. Toxicants attached to proteins are considered a form of storage because they do not contribute to the chemical's toxic potential. Albumin is the most abundant plasma protein that binds toxicants. Normally, the toxicant is only bound to the albumin for a relatively short time.
The primary sites for toxicant storage are adipose tissue, bone, liver, and kidneys.
Adipose Tissue
Lipid-soluble toxicants are often stored in adipose tissues. Adipose tissue is located in several areas of the body but mainly in subcutaneous tissue. Lipid-soluble toxicants can be deposited along with triglycerides in adipose tissues. The lipids are in a continual exchange with blood and thus the toxicant may be mobilized into the blood for further distribution and elimination, or redeposited in other adipose tissue cells.
Bone
Bone is another major site for storage. Bone is composed of proteins and the mineral salt hydroxyapatite. Bone contains a sparse blood supply but is a live organ. During the normal processes that form bone, calcium and hydroxyl ions are incorporated into the hydroxyapatite-calcium matrix. Several chemicals, primarily elements, follow the same kinetics as calcium and hydroxyl ions and therefore can be substituted for them in the bone matrix.
For example, strontium (Sr) or lead (Pb) may be substituted for calcium (Ca), and fluoride (F-) may be substituted for hydroxyl (OH-) ions. Bone is continually being remodeled under normal conditions. Calcium and other minerals are continually being resorbed and replaced, on the average about every 10 years. Thus, any toxicants stored in the matrix will eventually be released to re-enter the circulatory system.
Liver and Kidneys
The liver is a storage site for some toxicants. It has a large blood flow and its hepatocytes (that is, liver cells) contain proteins that bind to some chemicals, including toxicants.
As with the liver, the kidneys have a high blood flow, which preferentially exposes these organs to toxicants in high concentrations. Storage in the kidneys is associated primarily with the cells of the nephron (the functional unit for urine formation).
Knowledge Check
1) The areas of the body which most frequently store toxicants are:
a) Adrenal gland, thyroid gland, and pancreas
b) Adipose tissue, bone, liver, and kidney
c) Skeletal muscle, tendons, and leg joints
Answer
Adipose tissue, bone, liver, and kidney - This is the correct answer.
The primary sites for toxicant storage are adipose tissue, bone, liver and kidneys. Lipid-soluble toxicants store in adipose tissues; chemicals that follow calcium or hydroxyl ion kinetics store in bone; and the liver and kidney cells are subjected to high concentrations of toxicants.
Learning Objectives
After completing this lesson, you will be able to:
• Explain biotransformation, including its importance to survival and the body sites it involves.
• Define enzymes and the three types of enzyme specificity.
• Explain the two phases of biotransformation.
• Identify factors that influence the effectiveness of biotransformation.
Section 12: Key Points
What We've Covered
This section made the following main points:
• Biotransformation is the process by which a substance changes from one chemical to another (transformed) by a chemical reaction within the body.
• Biotransformation is vital to survival because it transforms absorbed nutrients into substances required for normal body functions.
• Potential complications of biotransformation include:
• Detoxification — biotransformation results in metabolites of lower toxicity than the parent substance.
• Bioactivation — biotransformation results in metabolites of greater toxicity than the parent substance.
• Chemical reactions continually occur in the body to build up new tissue, tear down old tissue, convert food to energy, dispose of waste materials, and eliminate toxic xenobiotics.
• Enzymes are catalysts for nearly all biochemical reactions in the body; essential biotransformation reactions would be slowed or prevented without these enzymes, causing major health problems.
• There are generally three types of enzyme specificity:
1. Enzymes with absolute specificity catalyze only one reaction.
2. Enzymes with group specificity act only on molecules that have specific functional groups.
3. Enzymes with linkage specificity act on a particular type of chemical bond regardless of the rest of the molecular structure.
• There are two biotransformation reaction phases:
1. Phase I reactions modify the chemical by adding a functional structure, allowing the substance to "fit" into a second (Phase II) enzyme:
• Oxidation — the substrate loses electrons.
• Reduction — the substrate gains electrons.
• Hydrolysis — the addition of water splits the toxicant into two fragments or smaller molecules.
2. Phase II reactions conjugate (join together) the modified xenobiotic with another substance. The most important Phase II reactions are:
• Glucuronide conjugation, a high-capacity pathway — glucuronic acid is added directly to the toxicant or its Phase I metabolite, generally resulting in hydrophilic conjugates excreted by the kidney or bile.
• Sulfate conjugation, a low-capacity pathway — decreases the toxicity of xenobiotics, resulting in highly polar sulfate conjugates readily secreted in the urine.
• Biotransformation sites are the:
• Liver (primary site, which also makes it the most susceptible to damage by ingested toxicants).
• Kidneys and lungs (about 10-30% of the liver's capacity).
• Skin, intestines, testes, and placenta (low capacity).
• Biotransformation effectiveness depends on factors that can inhibit or induce enzymes (species, age, gender, genetic variability, nutrition, disease, and exposure to other chemicals) as well as on the dose level.
Section 12: Biotransformation
Biotransformation is the process by which a substance changes from one chemical to another (transformed) by a chemical reaction within the body. Metabolism or metabolic transformations are terms frequently used for the biotransformation process. However, metabolism is sometimes not specific for the transformation process but may include other phases of toxicokinetics.
Figure \(1\). Processes of toxicokinetics (Image Source: Adapted from iStock Photos, ©)
Importance of Biotransformation
Biotransformation is vital to survival because it transforms absorbed nutrients (food, oxygen, etc.) into substances required for normal body functions. For some pharmaceuticals, it is a metabolite that is therapeutic and not the absorbed drug. For example, phenoxybenzamine, a drug given to relieve hypertension caused by pheochromocytoma, a kind of tumor, is biotransformed into a metabolite, which is the active agent. Biotransformation also serves as an important defense mechanism since toxic xenobiotics and body wastes are converted into less harmful substances and substances that can be excreted from the body.
Toxicants that are lipophilic, non-polar, and of low molecular weight are readily absorbed through the cell membranes of the skin, GI tract, and lung. These same chemical and physical properties control the distribution of a chemical throughout the body and its penetration into tissue cells. Lipophilic toxicants are hard for the body to eliminate and can accumulate to hazardous levels. However, most lipophilic toxicants can be transformed into hydrophilic metabolites that are less likely to pass through membranes of critical cells. Hydrophilic chemicals are easier for the body to eliminate than lipophilic substances. Biotransformation is thus a key body defense mechanism.
Fortunately, the human body has a well-developed capacity to biotransform most xenobiotics as well as body wastes.
Did you know?
Hemoglobin, the oxygen-carrying iron-protein complex in red blood cells, is an example of a body waste that must be eliminated. The normal destruction of aged red blood cells releases hemoglobin. Bilirubin is one of several hemoglobin metabolites. If the body cannot eliminate bilirubin via the liver because of disease, medicine, or infection, bilirubin builds up in the body and the whites of the eyes and the skin may look yellow. Bilirubin is toxic to the brain of newborns and, if present in high concentrations, may cause irreversible brain injury. Biotransformation of the lipophilic bilirubin molecule in the liver results in the production of water-soluble (hydrophilic) metabolites excreted into bile and eliminated via the feces.
Figure \(2\). Human hemoglobin (Image Source: Adapted from iStock Photos, ©)
Potential Complications
The biotransformation process is not perfect. Detoxification occurs when biotransformation results in metabolites of lower toxicity. In many cases, however, the metabolites are more toxic than the parent substance, a process called bioactivation. Occasionally, biotransformation can produce an unusually reactive metabolite that may interact with cellular macromolecules like DNA. This can lead to very serious health effects such as cancer or birth defects.
An example is the biotransformation of vinyl chloride into vinyl chloride epoxide, which covalently binds to DNA and RNA, a step leading to cancer of the liver.
Knowledge Check
The term "biotransformation" refers to:
1. An increase in electrical charge in tissues produced by a biological transformer
2. Chemical reactions in the body that create a new chemical from another chemical
3. The transformation of one type of cell in a tissue to another type of cell
Answer
Chemical reactions in the body that create a new chemical from another chemical - This is the correct answer.
Biotransformation is the process whereby a substance is changed from one chemical to another (transformed) by a chemical reaction within the body.
2) Detoxification is a biotransformation process in which:
a) Metabolites of lower toxicity are produced
b) Metabolites of higher toxicity are produced
Answer
Metabolites of lower toxicity are produced - This is the correct answer.
When biotransformation results in metabolites of lower toxicity, the process is known as detoxification.
3) Bioactivation is a biotransformation process in which:
a) Metabolites of lower toxicity are produced
b) Metabolites of higher toxicity are produced
Answer
Metabolites of higher toxicity are produced - This is the correct answer.
When biotransformation results in metabolites of higher toxicity, this is known as bioactivation.
Chemical Reactions
Chemical reactions are continually taking place in the body. They are a normal aspect of life, participating in the:
• Building up of new tissue.
• Tearing down of old tissue.
• Conversion of food to energy.
• Disposal of waste materials.
• Elimination of toxic xenobiotics.
Within the body is a magnificent assembly of chemical reactions, which is well orchestrated and called upon as needed. Most of these chemical reactions occur at significant rates only because specific proteins, known as enzymes, are present to catalyze them, that is, accelerate the reaction. A catalyst is a substance that can accelerate a chemical reaction of another substance without itself undergoing a permanent chemical change.
Enzymes
Enzymes are the catalysts for nearly all biochemical reactions in the body. Without these enzymes, essential biotransformation reactions would take place slowly or not at all, causing major health problems.
Did you know?
Phenylketonuria (PKU) is a genetic condition in which the enzyme that biotransforms phenylalanine to tyrosine (another amino acid) is defective. As a result, phenylalanine can build up in the body and cause severe mental retardation. Babies are routinely checked at birth for PKU. If they have PKU, they need to follow a special diet to restrict the intake of phenylalanine in infancy and childhood.
Figure \(1\). Phenylketonuria (PKU) testing in an infant
(Image Source: Wikimedia Commons, obtained under Public Domain license. Author: U.S. Air Force Photographic Archives)
These enzymatic reactions are not always simple biochemical reactions. Some enzymes require the presence of cofactors or coenzymes in addition to the substrate (the substance to be catalyzed) before their catalytic activity can be exerted. These co-factors exist as a normal component in most cells and are frequently involved in common reactions to convert nutrients into energy (vitamins are an example of co-factors). It is the drug or chemical transforming enzymes that hold the key to xenobiotic transformation. The relationship of substrate, enzyme, coenzyme, and transformed product can be shown as:
\(\text{substrate} \xrightarrow{\ \text{enzyme / coenzyme}\ } \text{transformed product}\)
Most biotransforming enzymes are high molecular weight proteins, composed of chains of amino acids linked together by peptide bonds. A wide variety of biotransforming enzymes exist. Most enzymes will catalyze the reaction of only a few substrates, meaning that they have high specificity. Specificity is a function of the enzyme's structure and its catalytic sites. While an enzyme may encounter many different chemicals, only those chemicals (substrates) that fit within the enzyme's convoluted structure and spatial arrangement will be locked on and affected. This is sometimes referred to as the "lock and key" relationship.
As shown in Figure \(2\), when a substrate fits into the enzyme's structure, an enzyme-substrate complex can be formed. This allows the enzyme to react with the substrate with the result that two different products are formed. If the substrate does not fit into the enzyme ("incompatible"), no complex will be formed and thus no reaction can occur.
Figure \(2\). If the substrate does not fit into the enzyme, no complex will be formed and no reaction will occur.
(Image Source: NLM)
Enzyme Specificity
Enzymes range from having absolute specificity to broad and overlapping specificity. In general, there are three main types of specificity:
1. Absolute — the enzyme will catalyze only one reaction. Examples:
• Formaldehyde dehydrogenase catalyzes only the reaction for formaldehyde.
• Acetylcholinesterase biotransforms the neurotransmitting chemical, acetylcholine.
2. Group — the enzyme will act only on molecules that have specific functional groups, such as amino, phosphate, or methyl groups.
• For example, alcohol dehydrogenase can biotransform several different alcohols, including methanol and ethanol.
3. Linkage — the enzyme will act on a particular type of chemical bond regardless of the rest of the molecular structure.
• For example, N-oxidation can catalyze a reaction of a nitrogen bond, replacing the nitrogen with oxygen.
Enzyme Naming Convention
The names assigned to enzymes may seem confusing at first. However, except for some of the originally studied enzymes (such as pepsin and trypsin), a convention has been adopted to name enzymes. Enzyme names end in "ase" and usually combine the substrate acted on and the type of reaction catalyzed.
For example, alcohol dehydrogenase is an enzyme that biotransforms alcohols by the removal of a hydrogen. The result is a completely different chemical, an aldehyde or ketone.
The biotransformation of ethyl alcohol to acetaldehyde is depicted in Figure \(3\).
ADH = alcohol dehydrogenase, a specific catalyzing enzyme
Figure \(3\). Biotransformation of ethyl alcohol
(Image Source: NLM)
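Written as a chemical equation (including the NAD\(^{+}\) cofactor, which a simplified diagram may omit), this reaction is:

\(\text{CH}_3\text{CH}_2\text{OH} + \text{NAD}^{+} \xrightarrow{\text{ADH}} \text{CH}_3\text{CHO} + \text{NADH} + \text{H}^{+}\)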
Beneficial or Harmful?
At this point in ToxTutor you likely see that the transformation of a specific xenobiotic can be either beneficial or harmful, and perhaps both depending on the dose and circumstances.
A good example is the biotransformation of acetaminophen (Tylenol®). When the prescribed doses are taken, the desired therapeutic response is observed with little or no toxicity. However, when excessive doses of acetaminophen are taken, hepatotoxicity can occur. This is because acetaminophen normally undergoes rapid biotransformation with the metabolites quickly eliminated in the urine and feces.
At high doses, the normal level of enzymes may be depleted and the acetaminophen is available to undergo reaction by an additional biotransformation pathway, which produces a reactive metabolite that is toxic to the liver. For this reason, a user of Tylenol® is warned not to take the prescribed dose more frequently than every 4–6 hours and not to consume more than four doses within a 24-hour period.
Biotransforming enzymes, like most other biochemicals, are available in a normal amount and in some situations can be "used up" at a rate that exceeds the body's ability to replenish them. This illustrates the frequently used phrase, the "dose makes the poison."
Figure \(4\). Generic acetaminophen tablets
(Image Source: iStock Photos, ©)
Biotransformation Reaction Phases
Biotransformation reactions are categorized not only by the nature of their reactions, for example, oxidation, but also by the normal sequence with which they tend to react with a xenobiotic. They are usually classified as Phase I and Phase II reactions.
Phase I reactions are generally reactions which modify the chemical by adding a functional structure. This allows the substance to "fit" into a second, or Phase II enzyme, so that it can become conjugated (joined together) with another substance.
Phase II reactions consist of those enzymatic reactions that conjugate the modified xenobiotic with another substance. The conjugated products are larger molecules than the substrate and generally polar in nature (water soluble). Thus, they can be readily excreted from the body. Conjugated compounds also have poor ability to cross cell membranes.
In some cases, the xenobiotic already has a functional group that can be conjugated and the xenobiotic can be biotransformed by a Phase II reaction without going through a Phase I reaction.
For example, phenol can be directly conjugated into a metabolite that can then be excreted. The biotransformation of benzene requires both Phase I and Phase II reactions. As illustrated in Figure \(5\), benzene is biotransformed initially to phenol by a Phase I reaction (oxidation). Phenol has a structure including a functional hydroxyl group that is then conjugated by a Phase II reaction (sulfation) to phenyl sulfate.
Figure \(5\). Biotransformation of benzene into phenol in Phase 1 (oxidation), which is then conjugated by a Phase 2 reaction (sulfation) to phenyl sulfate
(Image Source: NLM)
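Condensed to molecular formulas (the sulfate conjugate is written here in its neutral, protonated form for readability), the two-step sequence is:

\(\text{C}_6\text{H}_6 \xrightarrow{\text{Phase I (oxidation)}} \text{C}_6\text{H}_5\text{OH} \xrightarrow{\text{Phase II (sulfation)}} \text{C}_6\text{H}_5\text{OSO}_3\text{H}\)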
Table 1 lists the major transformation reactions for xenobiotics, broken into Phase I and Phase II reactions. These reactions are discussed in more detail below.
Phase I Reactions
Phase I biotransformation reactions are simple reactions compared to Phase II reactions. In Phase I reactions, a small polar group (containing both positive and negative charges) is either exposed on the toxicant or added to the toxicant. The three main Phase I reactions are 1) oxidation; 2) reduction; and 3) hydrolysis.
Oxidation
Oxidation is a chemical reaction in which a substrate loses electrons. There are a number of reactions that can achieve the removal of electrons from the substrate.
• The addition of oxygen, or oxygenation, was the first of these reactions discovered and thus the reaction was named oxidation. However, many of the oxidizing reactions do not involve oxygen.
• The simplest type of oxidation reaction is dehydrogenation, which is the removal of hydrogen from the molecule.
• Another example of oxidation is electron transfer that consists simply of the transfer of an electron from the substrate.
Figure \(6\) shows these types of oxidizing reactions.
Figure \(6\). Three types of oxidation reactions
(Image Source: NLM)
The specific oxidizing reactions and oxidizing enzymes are numerous and several textbooks are devoted to this subject. Most of the reactions are described by the name of the reaction or enzyme involved. Some of these oxidizing reactions include:
• Alcohol dehydrogenation
• Aldehyde dehydrogenation
• Alkyl/acyclic hydroxylation
• Aromatic hydroxylation
• Deamination
• Desulfuration
• N-dealkylation
• N-hydroxylation
• N-oxidation
• O-dealkylation
• Sulphoxidation
Reduction
Reduction is a chemical reaction in which the substrate gains electrons. Reductions are most likely to occur with xenobiotics in which oxygen content is low. Reductions can occur across nitrogen-nitrogen double bonds (azo reduction) or on nitro groups (NO2). Frequently, the resulting amino compounds are oxidized which forms toxic metabolites. Some chemicals such as carbon tetrachloride can be reduced to free radicals, which are quite reactive with biological tissues. Thus, reduction reactions frequently result in activation of a xenobiotic rather than detoxification. An example of a reduction reaction in which the nitro group is reduced is illustrated in Figure \(7\).
There are fewer specific reduction reactions than oxidizing reactions. The nature of these reactions is also described by their name. Some reducing reactions include:
• Azo reduction
• Dehalogenation
• Disulfide reduction
• Nitro reduction
• N-oxide reduction
• Sulfoxide reduction
Hydrolysis
Hydrolysis is a chemical reaction in which the addition of water splits the toxicant into two fragments or smaller molecules. The hydroxyl group (OH-) is incorporated into one fragment and the hydrogen atom is incorporated into the other. Larger chemicals such as esters, amines, hydrazines, and carbamates are generally biotransformed by hydrolysis.
An example of hydrolysis is illustrated in the biotransformation of procaine (local anesthetic) which is hydrolyzed to two smaller chemicals (Figure \(8\)).
Figure \(8\). Hydrolysis of procaine
(Image Source: Adapted from Humboldt State University, Department of Chemistry. Author: Richard A. Paselk, Professor Emeritus. View original image.)
Toxicants that have undergone Phase I biotransformation are converted to metabolites that are sufficiently ionized, or hydrophilic, to be either eliminated from the body without further biotransformation or converted to an intermediate metabolite that is ready for Phase II biotransformation. The intermediates from Phase I transformations may be pharmacologically more effective and in many cases more toxic than the parent xenobiotic.
Phase II Reactions
A xenobiotic that has undergone a Phase I reaction is now a new intermediate metabolite that contains a reactive chemical group such as hydroxyl (-OH), amino (-NH2), and carboxyl (-COOH). Many of these intermediate metabolites do not possess sufficient hydrophilicity to permit elimination from the body. These metabolites must undergo additional biotransformation as a Phase II reaction.
Phase II reactions are conjugation reactions where a molecule normally present in the body is added to the reactive site of the Phase I metabolite. The result is a conjugated metabolite that is more water soluble than the original xenobiotic or Phase I metabolite. Usually, the Phase II metabolite is quite hydrophilic and can be readily eliminated from the body. The primary Phase II reactions are:
• Glucuronide conjugation – most important reaction (detailed below)
• Sulfate conjugation – important reaction (detailed below)
• Acetylation
• Amino acid conjugation
• Glutathione conjugation
• Methylation
Glucuronide Conjugation
Glucuronide conjugation is one of the most important and common Phase II reactions. The glucuronic acid molecule is used in this reaction. It is derived from glucose, a common carbohydrate (sugar) that is the primary source of energy for cells. In this reaction, glucuronic acid is added directly to the toxicant or its phase I metabolite. The sites of glucuronidation reactions are substrates having an oxygen, nitrogen, or sulfur bond, which apply to a wide array of xenobiotics as well as endogenous substances, such as bilirubin, steroid hormones, and thyroid hormones.
Glucuronidation is a pathway that conjugates xenobiotics at a high capacity ("high-capacity pathway"). Glucuronide conjugation usually decreases toxicity although there are some notable exceptions, for example, where it can result in producing carcinogenic substances. The glucuronide conjugates are generally quite hydrophilic and are excreted by the kidney or bile, depending on the size of the conjugate. The glucuronide conjugation of aniline is illustrated in Figure \(9\).
Figure \(9\). Glucuronide conjugation of aniline (which is used to make polyurethane, pharmaceuticals, and industrial chemicals)
(Image Source: NLM)
Sulfate Conjugation
Sulfate conjugation is another important Phase II reaction that occurs with many xenobiotics. In general, sulfation decreases the toxicity of xenobiotics. Unlike glucuronic acid conjugates that are often eliminated in the bile, the highly polar sulfate conjugates are readily secreted in the urine. In general, sulfation is a low-capacity pathway for xenobiotic conjugation. Often glucuronidation or sulfation can conjugate the same xenobiotics.
Knowledge Check
1) The substances in the body that accelerate chemical reactions are known as:
a) Amino acids
b) Enzymes
c) Substrates
Answer
Enzymes - This is the correct answer.
Enzymes are proteins that catalyze nearly all biochemical reactions in the body.
2) The convention used to name specific enzymes consists of combining:
a) The substrate name with the type of chemical reaction
b) The target organ and the type of chemical reaction
c) The substrate name with the form of toxicity
Answer
The substrate name with the type of chemical reaction - This is the correct answer.
Enzyme names end in "ase" and usually combine the substrate acted on and the type of reaction catalyzed.
3) Biotransformation reactions are classified as Phase I and Phase II. The basic difference is:
a) Phase I reactions conjugate a substrate whereas Phase II reactions oxidize the substance
b) Phase I reactions generally add a functional structure whereas Phase II reactions conjugate the substance
c) A Phase I reaction generally makes a substance more hydrophilic than a Phase II reaction
Answer
Phase I reactions generally add a functional structure whereas Phase II reactions conjugate the substance - This is the correct answer.
Phase I reactions are generally reactions which modify the chemical by adding a functional structure. This allows the substance to "fit" into the Phase II enzyme so that it can become conjugated (joined together) with another substance. Phase II reactions consist of those enzymatic reactions that conjugate the modified xenobiotic with another substance.
4) The difference between oxidation and reduction reactions is:
a) A substrate gains electrons from an oxidation reaction whereas it loses electrons by a reduction reaction
b) Oxygen is removed from a substrate in oxidation and added in the reduction reaction
c) A substrate losses electrons from an oxidation reaction whereas it gains electrons by a reduction reaction
Answer
A substrate losses electrons from an oxidation reaction whereas it gains electrons by a reduction reaction - This is the correct answer.
Oxidation is a chemical reaction in which a substrate loses electrons. Reduction is a chemical reaction in which the substrate gains electrons.
5) Which conjugation reaction is the most common in the biotransformation of xenobiotics?
a) Amino acid conjugation
b) Glucuronide conjugation
c) Methylation
Answer
Glucuronide conjugation - This is the correct answer.
Glucuronide conjugation is one of the most important and common Phase II reactions. Glucuronidation is a high-capacity pathway for xenobiotic conjugation.
Biotransformation Sites
Biotransforming enzymes are widely distributed throughout the body.
• The liver is the primary biotransforming organ due to its large size and high concentration of biotransforming enzymes.
• The kidneys and lungs are next with 10-30% of the liver's capacity.
• A low capacity exists in the skin, intestines, testes, and placenta.
Primary Biotransformation Site: The Liver
Since the liver is the primary site for biotransformation, it is also potentially vulnerable to the toxic action of a xenobiotic that is activated to a more toxic compound.
Within the liver cell, the primary subcellular components containing the transforming enzymes are the microsomes (small vesicles) of the endoplasmic reticulum and the soluble fraction of the cytoplasm (cytosol). The mitochondria, nuclei, and lysosomes contain a small level of transforming activity.
• Microsomal enzymes are associated with most Phase I reactions. Glucuronidation enzymes are also contained in microsomes.
• Cytosolic enzymes are non-membrane-bound and occur free within the cytoplasm. They are generally associated with Phase II reactions, although some oxidation and reduction enzymes are contained in the cytosol.
• The most important enzyme system involved in Phase I reactions are the cytochromes P450, also called the cytochrome P-450 system or the mixed function oxidase (MFO) system, but now mostly called CYP450 or CYPs by scientists and in research publications. It is found in microsomes and is responsible for oxidation reactions of a wide array of chemicals.
Susceptibility of the Liver
The liver is particularly susceptible to damage by ingested toxicants because it biotransforms most xenobiotics and receives blood directly from the gastrointestinal tract. Blood leaving the gastrointestinal tract does not flow directly into the general circulatory system. Instead, it flows into the liver first via the portal vein. This process is known as the "first pass." Blood leaving the liver is eventually distributed to all other areas of the body; however, much of the absorbed xenobiotic has undergone detoxification or bioactivation. The liver may have removed most of the potentially toxic chemical. On the other hand, some toxic metabolites are highly concentrated in the liver.
Knowledge Check
1) The organ that has the greatest ability to biotransform xenobiotics is the:
a) Liver
b) Pancreas
c) Skin
Answer
Liver - This is the correct answer.
Biotransforming enzymes are widely distributed throughout the body. However, the liver has the largest concentration of all organs and thus has a very high capacity for biotransformation.
2) The "first pass" phenomenon pertains to:
a) The situation where xenobiotics that are absorbed from the GI tract first enter the circulating blood before going to the liver
b) A condition where the liver first biotransforms a xenobiotic by Phase II reaction before it is biotransformed by a Phase I reaction
c) An anatomical arrangement in which xenobiotics absorbed from the intestine go to the liver first rather than into the systemic circulation
Answer
An anatomical arrangement in which xenobiotics absorbed from the intestine go to the liver first rather than into the systemic circulation - This is the correct answer.
Blood leaving the gastrointestinal tract does not flow directly into the general circulatory system. Instead, it flows into the liver first via the portal vein. This is known as the "first pass" phenomenon.
Modifiers of Biotransformation
The relative effectiveness of biotransformation depends on several factors that can inhibit or induce enzymes, as well as on the dose level. Factors include:
• Species
• Age
• Gender
• Genetic variability
• Nutrition
• Disease
• Exposure to other chemicals
Species
It is well known that the capability to biotransform specific chemicals varies by species. These differences are termed selective toxicity, which refers to differences in toxicity between species similarly exposed. Researchers use what is known about selective toxicity to develop chemicals, such as pesticides, that are effective against target species but relatively safe for humans.
• For example, in mammals the pesticide malathion is biotransformed by hydrolysis to relatively safe metabolites, but in insects it is oxidized to malaoxon, which is lethal.
Age and Gender
Age may affect the efficiency of biotransformation. In general, human fetuses and newborns have limited abilities to carry out xenobiotic biotransformations. This limitation is due to inherent deficiencies in many of the enzymes responsible for catalyzing Phase I and Phase II biotransformations. While the capacity for biotransformation fluctuates with age in adolescents, by early adulthood the enzyme activities have essentially stabilized. The aged also have decreased biotransformation capability.
Gender may influence the efficiency of biotransformation for specific xenobiotics. This is usually limited to hormone-related differences in the oxidizing cytochrome P-450 enzymes.
Genetic Variability
Genetic variability in biotransforming capability accounts for most of the large variation among humans. In particular, human genetic differences influence the Phase II acetylation reaction. Some persons acetylate rapidly ("rapid acetylators") while others have a slow ability to carry out this reaction ("slow acetylators"). The most serious drug-related toxicity occurs in slow acetylators, often referred to as "slow metabolizers." With slow acetylators, acetylation is so slow that blood or tissue levels of certain drugs (or their Phase I metabolites) exceed their toxic threshold.
Table \(1\) includes examples of drugs that build up to toxic levels in slow metabolizers who have specific genetic-related defects in biotransforming enzymes.
Nutrition
Poor nutrition can have a detrimental effect on biotransforming ability. Poor nutrition relates to inadequate levels of protein, vitamins, and essential minerals. These deficiencies can decrease a person's ability to synthesize biotransforming enzymes.
Disease
Many diseases can impair an individual's capacity to biotransform xenobiotics.
For example, hepatitis (a liver disease) is well known to reduce hepatic biotransformation to less than half of its normal capacity.
Prior or Simultaneous Exposure
Prior or simultaneous exposure to xenobiotics can cause enzyme inhibition and enzyme induction. In some situations, exposure to a substance will inhibit the biotransformation capacity for another chemical due to inhibition of specific enzymes. A major mechanism for the inhibition is competition between the two substances for the available oxidizing or conjugating enzymes. The presence of one substance uses up the enzyme needed to metabolize the second substance.
Exposure to Other Environmental Chemicals and Drugs
Enzyme induction is a situation where prior exposure to certain environmental chemicals and drugs results in an enhanced capability for biotransforming a xenobiotic. The prior exposures stimulate the body to increase the production of some enzymes. This increased level of enzyme activity results in increased biotransformation of a chemical subsequently absorbed.
Examples of enzyme inducers include:
• Alcohol
• Isoniazid
• Polycyclic halogenated aromatic hydrocarbons (for example, dioxin)
• Phenobarbital
• Cigarette smoke
The most commonly induced enzyme reactions involve the cytochrome P450 enzymes.
Dose Level
Dose level can affect the nature of the biotransformation. In certain situations, the biotransformation may be quite different at high doses compared to low doses. This difference in biotransformation contributes to a dose threshold for toxicity and can usually be explained by the existence of different biotransformation pathways. At low doses, a xenobiotic may follow a biotransformation pathway that detoxifies the substance. However, if the amount of xenobiotic exceeds the capacity of the specific enzymes, that pathway becomes saturated. In that case, the level of the parent compound may build up, or the xenobiotic may enter a different biotransformation pathway that produces a toxic metabolite.
An example of a dose-related difference in biotransformation occurs with acetaminophen (Tylenol®), summarized below and followed by a simple saturation sketch:
• At normal doses:
• About 96% of acetaminophen is biotransformed to non-toxic metabolites by sulfate and glucuronide conjugation.
• About 4% of the acetaminophen oxidizes to a toxic metabolite.
• That toxic metabolite is conjugated with glutathione and excreted.
• At 7-10 times the recommended therapeutic level:
• The sulfate and glucuronide conjugation pathways become saturated and more of the toxic metabolite is formed.
• The glutathione in the liver may also be depleted so that the toxic metabolite is not detoxified and eliminated.
• It can react with liver proteins and cause fatal liver damage.
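The pathway shift described above can be illustrated with a toy calculation. The sketch below is not a validated pharmacokinetic model of acetaminophen; it simply treats conjugation as a saturable (Michaelis-Menten-type) process and oxidation as a first-order process, with hypothetical parameter values chosen only to reproduce the rough 96%/4% split quoted above at a normal dose.

# Toy illustration only: the parameter values (vmax, km, k_ox) are hypothetical,
# chosen so that a ~1,000 mg dose splits roughly 96% conjugated / 4% oxidized.
def pathway_split(dose_mg, vmax=1200.0, km=250.0, k_ox=0.04):
    """Return (conjugated fraction, oxidized fraction) of a single dose."""
    conjugation = vmax * dose_mg / (km + dose_mg)   # saturable conjugation pathway
    oxidation = k_ox * dose_mg                      # non-saturable oxidation pathway
    total = conjugation + oxidation
    return conjugation / total, oxidation / total

for dose in (1000, 7000, 10000):   # roughly a normal dose, 7x, and 10x
    conj, ox = pathway_split(dose)
    print(f"dose {dose:>6} mg: conjugated {conj:.0%}, oxidized to reactive metabolite {ox:.0%}")

As the dose rises, the saturable conjugation pathway contributes proportionally less, so a larger share of the dose is pushed down the oxidative route. That is the qualitative behavior behind the glutathione depletion and liver injury described above.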
Knowledge Check
1) Selective toxicity refers to a difference in the toxicity of a xenobiotic to different species. This selective toxicity can usually be attributed to differences in:
a) The ability to absorb the xenobiotic
b) Organ systems between species
c) Capability to biotransform the xenobiotic
Answer
Capability to biotransform the xenobiotic - This is the correct answer.
A difference between species in their capability to biotransform a specific chemical is normally the basis for a chemical's selective toxicity.
Learning Objectives
After completing this lesson, you will be able to:
• Define excretion.
• Identify the primary organ systems involved in excretion.
• Describe the basic mechanisms of excretion within each primary organ system involved.
Section 13: Key Points
What We've Covered
This section made the following main points:
• Excretion, as used in ToxTutor, pertains to the elimination of a xenobiotic and its metabolites by specific excretory organs.
• The primary organ systems involved in excretion are the:
• Urinary system, which involves:
1. Filtration in the glomerulus.
2. Secretion in the proximal tubule section of the nephron to transport certain molecules out of the blood and into the urine.
3. Reabsorption in the proximal convoluted tubule of the nephron to return nearly all of the water, glucose, potassium, and amino acids lost during filtration to the blood.
• Gastrointestinal system, which occurs from two processes:
1. Biliary excretion — generally active secretion by the liver into the bile and then into the intestinal tract, where the substance can be eliminated in the feces or reabsorbed.
2. Intestinal excretion — an important elimination route only for xenobiotics that have slow biotransformation or slow urinary or biliary excretion.
• Respiratory system, which is important for xenobiotics and metabolites that exist in a gaseous phase in the blood:
• Excreted by passive diffusion from the blood into the alveolus.
• Minor routes of excretion also exist, including breast milk, sweat, saliva, tears, and semen.
Section 13: Excretion
Introduction to Excretion
Elimination from the body is very important in determining the potential toxicity of a xenobiotic. When the body rapidly eliminates a toxic xenobiotic (or its metabolites), it is less likely that they will be able to concentrate in and damage critical cells. The terms excretion and elimination are frequently used to describe the same process in which a substance leaves the body. Elimination is sometimes used in a broader sense and includes the removal of the absorbed xenobiotic through metabolic pathways as well as through excretion. Excretion, as used here, pertains to the elimination of the xenobiotic and its metabolites by specific excretory organs.
Except for elimination via the lungs, polar (hydrophilic) substances are more likely than lipid-soluble toxicants to be eliminated from the body. Chemicals must again pass through membranes in order to leave the body, and the same chemical and physical properties that govern passage across other membranes apply to excretory organs as well.
Figure \(1\). Processes of toxicokinetics
(Image Source: Adapted from iStock Photos, ©)
Primary Routes of Excretion
The body uses several routes to eliminate toxicants or their metabolites. The main routes of excretion are via urine, feces, and exhaled air. Thus, the primary organ systems involved in excretion are the:
• Urinary system
• Gastrointestinal system
• Respiratory system
A few other avenues for elimination exist, but they are relatively unimportant except in unusual circumstances.
Knowledge Check
1) The three major routes of excretion are:
a) Gastrointestinal tract, sweat, and saliva
b) Mother's milk, tears, and semen
c) Urinary excretion, fecal excretion, and exhaled air
Answer
Urinary excretion, fecal excretion, and exhaled air - This is the correct answer.
The main routes of excretion are via urine, feces, and exhaled air.
Urinary Excretion
The primary route by which the body eliminates substances is through the kidneys. The main function of the kidney is the excretion of body wastes and harmful chemicals into the urine. The functional unit of the kidney responsible for excretion is the nephron. Each kidney contains about one million nephrons. The nephron has three primary regions that function in the renal excretion process: the glomerulus, the proximal tubule, and the distal tubule (Figure \(2\)).
Figure \(1\). Components of the urinary system
(Image Source: Adapted from iStock Photos, ©)
Three processes are involved in urinary excretion:
1. Filtration
2. Secretion
3. Reabsorption
Legend:
1. Glomerulus
2. Efferent arteriole
3. Bowman's capsule
4. Proximal convoluted tubule
5. Cortical collecting duct
6. Distal convoluted tubule
7. Loop of Henle
8. Papillary duct
9. Peritubular capillaries
10. Arcuate vein
11. Arcuate artery
12. Afferent arteriole
13. Juxtaglomerular apparatus
Figure \(2\). Nephron of the kidney
(Image Source: Adapted from Wikimedia Commons, public domain, Creative Commons CC0 1.0 Universal Public Domain Dedication.)
Filtration
Filtration takes place in the glomerulus, which is the vascular beginning of the nephron. Approximately one-fourth of the blood flow from cardiac output circulates through the kidney, the greatest rate of blood flow for any organ. A considerable amount of the blood plasma filters through the glomerulus into the nephron tubule. This results from the large amount of blood flow through the glomerulus, the large pores (40 Angstrom [Å]) in the glomerular capillaries, and the hydrostatic pressure of the blood. Small molecules, including water, readily pass through the sieve-like filter into the nephron tubule. Both lipid-soluble and polar substances will pass through the glomerulus into the tubule filtrate. The amount of filtrate is very large, about 45 gallons per day in an adult human. About 99% of the water-like filtrate, small molecules, and lipid-soluble substances are reabsorbed downstream in the nephron tubule. This means that the amount of urine eliminated is only about one percent of the amount of fluid filtered through the glomeruli into the renal tubules.
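A quick back-of-the-envelope check of these volumes (a minimal sketch; the only value added here is the assumed conversion of 1 US gallon ≈ 3.785 L):

# Rough check of the filtrate and urine volumes quoted above.
filtrate_gal_per_day = 45       # glomerular filtrate, from the text
reabsorbed_fraction = 0.99      # about 99% reabsorbed in the tubules, from the text
liters_per_gallon = 3.785       # assumed unit conversion

filtrate_l = filtrate_gal_per_day * liters_per_gallon
urine_l = filtrate_l * (1 - reabsorbed_fraction)
print(f"filtrate ≈ {filtrate_l:.0f} L/day, urine ≈ {urine_l:.1f} L/day")
# About 170 L/day filtered and about 1.7 L/day of urine, consistent with typical daily urine output.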
Molecules with molecular weights greater than 60,000 (which include large protein molecules), as well as blood cells, cannot pass through the capillary pores and remain in the blood. If urine contains albumin or blood cells, it indicates that the glomeruli have been damaged. Binding to plasma proteins also influences urinary excretion. Polar substances usually do not bind to plasma proteins and thus can be filtered out of the blood into the tubule filtrate. In contrast, substances extensively bound to plasma proteins remain in the blood.
Secretion
Secretion, which occurs in the proximal tubule section of the nephron, is responsible for the transport of certain molecules out of the blood and into the urine. Secreted substances include potassium ions, hydrogen ions, and some xenobiotics. Secretion occurs by active transport mechanisms that are capable of differentiating among compounds based on polarity. Two systems exist, one that transports weak acids (such as many conjugated drugs and penicillins) and the other that transports basic substances (such as histamine and choline).
Reabsorption
Reabsorption takes place mainly in the proximal convoluted tubule of the nephron. Nearly all of the water, glucose, potassium, and amino acids lost during glomerular filtration reenter the blood from the renal tubules. Reabsorption occurs primarily by passive transfer based on a concentration gradient, moving from a high concentration in the proximal tubule to the lower concentration in the capillaries surrounding the tubule.
A factor that greatly affects reabsorption and urinary excretion is the pH of the urine, especially for weak electrolytes. If the urine is alkaline, weak acids are more ionized and their excretion is increased. If the urine is acidic, weak acids (such as glucuronide and sulfate conjugates) are less ionized, undergo reabsorption, and their renal excretion is reduced. Since urinary pH varies in humans, the urinary excretion rates of weak electrolytes also vary; a small worked example follows the list below.
• Examples are phenobarbital (an acidic drug) which is ionized in alkaline urine and amphetamine (a basic drug) which is ionized in acidic urine. Treatment of barbiturate poisoning (such as an overdose of phenobarbital) may include changing the pH of the urine to facilitate excretion.
• Diet may have an influence on urinary pH and thus the elimination of some toxicants. For example, a high-protein diet results in acidic urine.
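The worked example below sketches why urine pH matters, using the Henderson-Hasselbalch relationship for a weak acid. The pKa value is an assumption added for illustration (phenobarbital is roughly pKa 7.3–7.4); the point is the qualitative trend, not the exact numbers.

# Fraction of a weak acid present in the ionized (poorly reabsorbed) form at a given urine pH.
def ionized_fraction_weak_acid(pKa, pH):
    """Henderson-Hasselbalch: ionized fraction = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

pKa = 7.4   # assumed value, approximately that of phenobarbital
for urine_pH in (5.0, 6.5, 8.0):
    frac = ionized_fraction_weak_acid(pKa, urine_pH)
    print(f"urine pH {urine_pH}: {frac:.1%} ionized")
# The output runs from well under 1% ionized in acidic urine to roughly 80% ionized in
# alkaline urine, which is why alkalinizing the urine speeds excretion of an acidic drug
# such as phenobarbital.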
The physical properties (primarily molecular size) and polarity of a substance in the urinary filtrate greatly affect its ultimate elimination by the kidney. Small toxicants (both polar and lipid-soluble) are filtered with ease by the glomerulus. In some cases, large molecules (including some that are protein-bound) may be secreted (by passive transfer) from the blood across capillary endothelial cells and nephron tubule membranes to enter the urine. The major difference in ultimate fate is governed by a substance's polarity. Those substances that are ionized remain in the urine and leave the body. Lipid-soluble toxicants can be reabsorbed and re-enter the blood circulation, which lengthens their half-life in the body and potential for toxicity.
Kidneys that have been damaged by toxins, infectious diseases, or aging have a diminished ability to eliminate toxicants, making those individuals more susceptible to toxins that enter the body. The presence of albumin in the urine indicates that the glomerular filtering system is damaged, letting large molecules pass through. The presence of glucose in the urine is an indication that tubular reabsorption has been impaired.
Knowledge Check
1) The reason that much of the blood plasma filters into the renal tubule is due to:
a) The large amount of blood, under relatively high pressure, that flows through kidney glomeruli whose capillaries have large pores
b) Its high lipid content
c) The high binding content of plasma
Answer
The large amount of blood, under relatively high pressure, that flows through kidney glomeruli whose capillaries have large pores - This is the correct answer.
A considerable amount of the blood plasma filters through the glomerulus into the nephron tubule. This results from the large amount of blood flow through the glomerulus, the large pores (40 Angstrom [Å]) in the glomerular capillaries, and the hydrostatic pressure of the blood.
2) In which area of the nephron does active secretion take place?
a) The collecting duct of the nephron
b) The proximal tubule of the nephron
c) The glomerulus of the nephron
Answer
The proximal tubule of the nephron - This is the correct answer.
Secretion occurs in the proximal tubule section of the nephron and is responsible for the transport of certain molecules out of the blood and into the urine.
3) Most of the material filtered through the glomerulus is reabsorbed in the proximal convoluted tubule of the nephron. The primary property of a xenobiotic that determines whether it will be reabsorbed is:
a) Protein binding
b) Molecular size
c) Polarity
Answer
Polarity - This is the correct answer.
The ultimate fate of a substance filtered into the renal tubule is governed by its polarity. Those substances that are ionized remain in the urine and leave the body.
Fecal Excretion
Elimination of toxicants in the feces occurs from two processes:
1. Excretion in bile, which then enters the intestine ("biliary excretion").
2. Direct excretion into the lumen of the gastrointestinal tract ("intestinal excretion").
Biliary Excretion
The biliary route is an important mechanism for fecal excretion of xenobiotics and is even more important for the excretion of their metabolites. This route generally involves active secretion rather than passive diffusion. Specific transport systems appear to exist for certain types of substances, for example, organic bases, organic acids, and neutral substances. Some heavy metals are excreted in the bile, for example, arsenic, lead, and mercury. However, the substances most likely to be excreted via the bile are comparatively large, ionized molecules, such as conjugates with molecular weights greater than about 300.
Once a substance has been excreted by the liver into the bile, and then into the intestinal tract, it can be eliminated from the body in the feces, or it may be reabsorbed. Since most of the substances excreted in the bile are water soluble, they are not likely to be reabsorbed as such. However, enzymes in the intestinal flora are capable of hydrolyzing some glucuronide and sulfate conjugates, which can release the less polar compounds that may then be reabsorbed. This process of excretion into the intestinal tract via the bile and reabsorption and return to the liver by the portal circulation is known as the enterohepatic circulation (Figure 1).
Enterohepatic circulation prolongs the life of the xenobiotic in the body. In some cases, the metabolite is more toxic than the excreted conjugate. Continuous enterohepatic recycling can occur and lead to very long half-lives of some substances. For this reason, drugs may be given orally to bind substances excreted in the bile.
• For example, a resin can be taken orally to bind with dimethylmercury that has been secreted in the bile. The binding of the resin to dimethylmercury prevents its reabsorption and further toxicity.
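As a rough illustration of how recycling stretches half-life (a toy assumption, not a model taken from this text): if a fraction f of each biliary pass is reabsorbed from the intestine, only (1 - f) of that pass actually leaves in the feces, so for a compound cleared only by the biliary route the effective elimination rate is scaled by (1 - f) and the half-life by 1 / (1 - f).

# Toy calculation: half-life stretching from enterohepatic recycling, assuming biliary
# excretion is the only elimination route and a fraction of each pass is reabsorbed.
# The 6-hour base half-life is a hypothetical example value.
def half_life_with_recycling(base_half_life_h, reabsorbed_fraction):
    return base_half_life_h / (1.0 - reabsorbed_fraction)

for f in (0.0, 0.5, 0.9):
    print(f"reabsorbed fraction {f:.0%}: effective half-life ≈ {half_life_with_recycling(6.0, f):.0f} h")
# 0% -> 6 h, 50% -> 12 h, 90% -> 60 h: heavy recycling can prolong residence in the body dramatically.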
Changes in the production and flow of bile into the liver affect the efficiency of biliary excretion.
• Liver disease usually causes a decrease in bile flow.
• Some drugs such as phenobarbital can produce an increase in bile flow rate. Administration of phenobarbital has been shown to enhance the excretion of methylmercury by this mechanism.
Figure \(1\). Biliary excretion and enterohepatic circulation
(Image Source: Adapted from iStock Photos, ©)
Intestinal Excretion
Another way that xenobiotics can be eliminated via the feces is by direct intestinal excretion. While this is not a major route of elimination, a large number of substances can be excreted into the intestinal tract and eliminated via feces. Some substances, especially those that are poorly ionized in plasma (such as weak bases), may passively diffuse through the walls of the capillaries, through the intestinal submucosa, and into the intestinal lumen to be eliminated in feces.
Figure \(2\). Layers of the Alimentary Canal
(Image Source: Wikimedia Commons, obtained under the Creative Commons Attribution-Share Alike 3.0 Unported license. Author: Goran tek-en.)
Intestinal excretion is a relatively slow process and therefore, it is an important elimination route only for those xenobiotics that have slow biotransformation, or slow urinary or biliary excretion. Increasing the lipid content of the intestinal tract can enhance intestinal excretion of some lipophilic substances. For this reason, mineral oil (liquid paraffin, derived from petroleum) is sometimes added to the diet to help eliminate toxic substances, which are known to be excreted directly into the intestinal tract.
Knowledge Check
1) Substances excreted in the bile are primarily:
a) Small, lipid soluble molecules
b) Comparatively large, ionized molecules
c) Large, lipid soluble molecules
Answer
Comparatively large, ionized molecules - This is the correct answer.
The most likely substances to be excreted via the bile are comparatively large, ionized molecules, such as large molecular weight (greater than 300) conjugates.
2) Many substances excreted in bile undergo enterohepatic circulation, which involves:
a) Excretion of substances into the circulating system rather than into the intestine
b) Excretion into the intestinal tract and reabsorption and return to the liver by the portal circulation
c) The recycling of xenobiotics between the liver and gall bladder
Answer
Excretion into the intestinal tract and reabsorption and return to the liver by the portal circulation - This is the correct answer.
The process of excretion into the intestinal tract via the bile and reabsorption and return to the liver by the portal circulation is known as the enterohepatic circulation. The effect of this enterohepatic circulation is to prolong the life of the xenobiotic in the body.
Exhaled Air
The lungs are an important route of excretion for xenobiotics (and metabolites) that exist in a gaseous phase in the blood.
Passive Diffusion
Blood gases are excreted by passive diffusion from the blood into the alveolus, following a concentration gradient. This type of excretion occurs when the concentration of the xenobiotic dissolved in capillary blood is greater than the concentration of the substance in the alveolar air. Gases with a low solubility in blood are more rapidly eliminated than those gases with a high solubility. Volatile liquids dissolved in the blood are also readily excreted via the expired air.
For example, breathalyzer devices can estimate blood alcohol concentration because, as blood passes through the alveolar capillaries, some of the alcohol it carries evaporates into the alveolar air and is exhaled. The concentration of alcohol in the exhaled air therefore relates to the level of alcohol in the blood.
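A hedged sketch of the conversion such devices perform is shown below. The 2100:1 blood:breath partition ratio is the convention built into many evidential breath testers; it is an assumption added here, not a value from this text.

# Convert a measured breath alcohol concentration to an estimated blood alcohol
# concentration, assuming the conventional 2100:1 blood:breath partition ratio.
PARTITION_RATIO = 2100.0   # assumed: 1 mL of blood carries as much alcohol as 2100 mL of alveolar air

def blood_alcohol_g_per_dL(breath_alcohol_mg_per_L):
    """Estimate BAC (g of alcohol per 100 mL of blood) from breath alcohol (mg per L of breath)."""
    g_per_mL_breath = breath_alcohol_mg_per_L / 1_000_000.0   # mg/L -> g/mL
    g_per_mL_blood = g_per_mL_breath * PARTITION_RATIO
    return g_per_mL_blood * 100.0                              # g/mL -> g per 100 mL (g/dL)

print(f"{blood_alcohol_g_per_dL(0.38):.3f} g/dL")   # a breath reading of 0.38 mg/L works out to about 0.08 g/dL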
Impact of Vapor Pressure
The amount of a liquid excreted by the lungs is proportional to its vapor pressure. Exhalation is an exception to most other routes of excretion in that it can be a very efficient route of excretion for lipid soluble substances. This is due to the very close proximity of capillary and alveolar membranes, which are thin and allow for the normal gaseous exchange that occurs in breathing.
Knowledge Check
1) Xenobiotics are eliminated in exhaled air by:
a) Passive diffusion
b) Active transport
c) Facilitated transport
Answer
Passive diffusion - This is the correct answer.
Blood gases are excreted by passive diffusion from the blood into the alveolus, following a concentration gradient. This occurs when the concentration of the xenobiotic dissolved in capillary blood is greater than the concentration of the substance in the alveolar air.
13.5: Other Routes
Other Routes of Excretion
Several minor routes of excretion occur including mother's milk, sweat, saliva, tears, and semen.
Excretion into Breast Milk
Excretion into milk can be important since toxicants can be passed with milk to the nursing offspring. In addition, toxic substances can pass from cow's milk to people. Toxic substances are excreted into milk by simple diffusion. Both basic substances and lipid-soluble compounds can be excreted into milk. (The National Library of Medicine's LactMed is a resource for information on drugs, dietary supplements, and herbs that pass into breast milk.)
Basic substances can be concentrated in milk since milk is more acidic (pH approximately 6.5) than blood plasma. Since milk contains 3–4% lipids, lipid-soluble xenobiotics can diffuse along with fats from plasma into the mammary gland and thus can be present in mother's milk. Substances that are chemically similar to calcium, such as lead, can also be excreted into milk along with calcium, and other contaminants such as mercury, Bisphenol A (BPA), and phthalates have also been detected in milk.
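The concentration of basic substances in milk can be sketched with a simple pH-partition ("ion trapping") calculation. This assumes that only the un-ionized form of a weak base equilibrates across the mammary epithelium; the milk pH of 6.5 comes from the text, the plasma pH of 7.4 is the typical physiological value, and the pKa of 8.0 is a hypothetical example.

# Ion-trapping sketch for a weak base partitioning between plasma and milk.
def milk_to_plasma_ratio_weak_base(pKa, pH_milk=6.5, pH_plasma=7.4):
    """Predicted total-concentration ratio (milk/plasma) under pH-partition assumptions."""
    def total_over_unionized(pH):
        # For a weak base, ionized/un-ionized = 10**(pKa - pH).
        return 1.0 + 10 ** (pKa - pH)
    return total_over_unionized(pH_milk) / total_over_unionized(pH_plasma)

print(f"milk/plasma ratio for a pKa 8.0 base: {milk_to_plasma_ratio_weak_base(8.0):.1f}")
# About 6.5: the slightly acidic milk "traps" the ionized form and concentrates the base.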
Did you know?
Volatile organic compounds (VOCs) found in indoor air can also be found in breast milk.
Examples include MTBE (methyl tert-butyl ether), chloroform, benzene, and toluene. For benzene, toluene, and MTBE, the levels in breast milk followed the indoor air concentrations. However, the infant average daily dose by inhalation exceeded ingestion rates by 25-to-135 fold. Thus, the amount of VOC exposure from indoor air in nonsmoking households is much greater than the VOC exposure from breast milk. Strategies to lessen infant VOC exposure should focus on improving indoor air quality.
Excretion into All Other Body Secretions or Tissues
Excretion of xenobiotics in all other body secretions or tissues (including the saliva, sweat, tears, hair, and skin) is of only minor importance. Under conditions of great sweat production, excretion in sweat may reach a significant degree. Some metals, including cadmium, copper, iron, lead, nickel, and zinc, may be eliminated in sweat to some extent. Xenobiotics that passively diffuse into saliva may be swallowed and absorbed by the gastrointestinal system. The excretion of some substances into saliva is responsible for the unpleasant taste that sometimes occurs after exposure to a substance.
Knowledge Check
1) The following are minor routes of excretion:
a) Sweat and saliva
b) Urinary excretion, fecal excretion, and exhaled air
Answer
Sweat and saliva - This is the correct answer.
Several minor routes of excretion exist, primarily via mother's milk, sweat, saliva, tears, and semen. The main routes of excretion are via urine, feces, and exhaled air.
Learning Objectives
After completing this lesson, you will be able to:
• Explain cellular adaptation.
• Identify four possible endpoints to toxic damage to cells and tissues.
• Define commonly used cancer terms.
• Describe the phases of and genetic activity associated with carcinogenesis.
• Identify mechanisms and potential outcomes of neurotoxicity.
Section 14: Key Points
What We've Covered
This section made the following main points:
• To maintain homeostasis, cells and tissues undergo:
• Physiological adaptation, which is beneficial in nature — for example, increased skeletal muscle cells in athletes.
• Pathological adaptation, which is detrimental — for example, cellular changes in people who smoke cigarettes.
• Specific types of adaptation include:
• Atrophy — a decrease in the size of cells.
• Hypertrophy — an increase in the size of individual cells.
• Hyperplasia — an increase in the number of cells in a tissue.
• Metaplasia — the conversion from one type of mature cell to another type.
• Dysplasia — abnormal cell changes or deranged cell growth.
• Anaplasia — cells that are undifferentiated.
• Neoplasia — new growth of tissue.
• Most toxic effects, especially those due to xenobiotics, result from specific biochemical interactions that do not cause recognizable damage to a cell or its organelles. Cellular or biochemical toxicity can lead to one of four outcomes:
• The tissue being completely repaired and returned to normal.
• The tissue being incompletely repaired but capable of functioning with reduced capacity.
• Death of the organism or complete loss of a tissue or organ.
• Neoplasm or cancers.
• Tumors are either:
• Benign — similar to the cell of origin, slow-growing, and usually without systemic effects.
• Malignant — dissimilar from the cell of origin, rapid-growing, commonly with systemic effects, and life-threatening. Most malignant tumors are either:
• Carcinomas — arising in epithelium, the most common form of cancer, usually spread in the lymphatic system.
• Sarcomas — arising in connective or muscle tissue, usually spread by the blood stream.
• Carcinogenesis is a multi-step, multi-factorial genetic disease consisting of at least three main phases:
1. Initiation — irreversible alteration of the DNA (mutation) of a normal cell.
2. Promotion/Conversion — promoters enhance the development of the initiated cells, often influencing expression of the mutated DNA so that the initiated cell proliferates and progresses further.
3. Progression — development of the initiated cell into a biologically malignant cell population, often with metastasis to other areas of the body.
• Regulatory genes control the activity of structural genes and direct the proliferation process of the cell. Regulatory genes that play roles in carcinogenesis include:
• Proto-oncogenes — normal cellular genes that encode and instruct the production of regulatory proteins and growth factors within a cell or its membrane.
• Oncogenes — altered or misdirected proto-oncogenes with the ability to direct the production of proteins within the cell that change or transform the normal cell into a neoplastic cell.
• Tumor suppressor genes (anti-oncogenes) — present in normal cells and counteract and change the proto-oncogenes and altered proteins, preventing a cell with damaged DNA from proliferating and evolving into an uncontrolled growth.
• The p53 gene normally halts cell division, stimulates repair enzymes, and, if necessary, commands the mutated cell to self-destruct.
• p53 is the gene most frequently altered in human tumors and, when altered, is incapable of carrying out these defense mechanisms.
• Toxic damage to the nervous system is divided into three categories:
1. Damage to sensory receptors and sensory neurons impacting the sensory functions.
2. Damage to motor neurons causing muscular weakness and paralysis.
3. Interneuronal damage causing learning deficiencies, memory loss, incoordination, and emotional conditions.
Section 14: Cellular Toxicology
This section discusses cellular effects, yet cellular and chemical effects cannot be conveniently separated because cells are constructed of a variety of chemicals of diverse types. Specific intracellular chemical changes may appear as changes in the cell and may affect either its appearance or its function. The actual mechanisms leading to cell damage are usually biochemical in nature.
Adaptation Explained
To maintain homeostasis, cells and tissues:
• "Cope" with new demands placed on them by constantly adapting to changes in the tissue environment.
• Are usually capable of an amazing degree of cellular adaptability.
• Adapt in a way that may be beneficial in nature (physiological) or detrimental (pathological).
Examples of physiological adaptation are:
• An increase in skeletal muscle cells in athletes due to exercise and increased metabolic demand.
• The increase in number and size of epithelial cells in breasts of women resulting from endocrine stimulation during pregnancy.
When these cells or tissues are damaged, the body attempts to adapt and repair or limit the harmful effects. Often the adaptive changes result in cells or organs that cannot function normally. This imperfect adaptation is a pathological change.
Examples of pathological adaptations are:
• Cellular changes in people who smoke cigarettes: The ciliated columnar epithelium changes to non-ciliated squamous epithelium in the trachea and bronchi of cigarette smokers. The replacement squamous epithelium can better withstand the irritation of the cigarette smoke. However, the loss of the cilia and mucous secretions of the columnar epithelium diminishes the tracheobronchial defense mechanisms.
• Replacement of normal liver cells by fibrotic cells in chronic alcoholics (known as cirrhosis of the liver): A severely cirrhotic liver is incapable of normal metabolism, maintenance of nutrition, and detoxification of xenobiotics.
If the change is minor, cellular adaptation may result and the cells return to normal. When damage is very severe, the result may be cell death or permanent functional incapacitation.
Cellular adaptation to toxic agents includes three basic types:
1. Increase in cell activity.
2. Decrease in cell activity.
3. Alteration in cell morphology (structure and appearance) or cell function.
Specific Types of Cellular Adaptations
Atrophy
Atrophy is a decrease in the size of cells. If a sufficient number of cells are involved, the tissue or organ may also decrease in size. When cells atrophy, they have:
• Reduced oxygen needs.
• Reduced protein synthesis.
• Decreased number and size of the organelles.
The most common causes of atrophy are reduced use of the cells, lack of hormonal or nerve stimulation, decrease in nutrition, reduced blood flow to the tissue, and natural aging.
• An example of atrophy is the decrease in the size of muscles and muscle cells in persons whose legs are paralyzed, in a cast, or infrequently used, as when a patient is on bedrest.
Hypertrophy
Hypertrophy is an increase in the size of individual cells. This frequently results in an increase in the size of a tissue or organ. When cells hypertrophy, components of the cell increase in number, with increased functional capacity to meet increased cell needs. Hypertrophy generally occurs in situations where the organ or tissue cannot adapt to an increased demand by formation of more cells. This is commonly seen in cardiac and skeletal muscle cells, which do not divide to form more cells. Common causes for hypertrophy are increased work or stress placed on an organ or hormonal stimulation.
• An example of hypertrophy is the compensatory increase in the size of cells in one kidney after the other kidney has been removed or is in a diseased state.
Hyperplasia
Hyperplasia is an increase in the number of cells in a tissue. This generally results in an enlargement of tissue mass and organ size. It occurs only in tissues capable of mitosis such as the epithelium of skin, intestine, and glands. Some cells do not divide and thus cannot undergo hyperplasia, for example, nerve and muscle cells. Hyperplasia is often a compensatory measure to meet an increase in body demands. Hyperplasia is a frequent response to toxic agents and damage to tissues such as wounds or trauma. In wound healing, hyperplasia of connective tissue (for example, fibroblasts and blood vessels) contributes to the wound repair. In many cases, when the toxic stress is removed, the tissue returns to normal. Hyperplasia may result from hormonal stimulation, for example, breast and uterine enlargement due to increased estrogen production during pregnancy.
Metaplasia
Metaplasia is the conversion from one type of mature cell to another type of mature cell. It is a cellular replacement process. A metaplastic response often occurs with chronic irritation and inflammation. This results in a tissue more resistant to the external stress since the replacement cells are capable of survival under circumstances in which the original cell type could not survive. However, the cellular changes usually result in a loss of function, which was performed by the original cells that were lost and replaced.
Examples of metaplasia are:
• The common condition in which a person suffers from chronic reflux of acid from the stomach into the esophagus (Gastroesophageal Reflux Disease). The normal esophageal cells (squamous epithelium) are sensitive to the refluxed acid and die. They are replaced with the columnar cells of the stomach that are resistant to the stomach's acidity. This pathological condition is known as "Barrett's Esophagus."
• The change in the cells of the trachea and bronchi of chronic cigarette smokers from ciliated columnar epithelium to non-ciliated stratified squamous epithelium. The sites of metaplasia frequently are also sites for neoplastic transformations. The replacement cells lack the defense mechanism performed by the cilia in moving particles up and out of the trachea.
• With cirrhosis of the liver, which is a common condition of chronic alcoholics, the normal functional hepatic cells are replaced by nonfunctional fibrous tissue.
Dysplasia
Dysplasia is a condition of abnormal cell changes or deranged cell growth in which the cells are structurally changed in size, shape, and appearance from the original cell type. Cellular organelles also become abnormal. A common feature of dysplastic cells is that the nuclei are larger than normal and the dysplastic cells have a mitotic rate higher than the predecessor normal cells. Causes of dysplasia include chronic irritation and infection. In many cases, the dysplasia can be reversed if the stress is removed and normal cells return. In other cases, dysplasia may be permanent or represent a precancerous change.
• An example of dysplasia is the atypical cervical cells that precede cervical cancer. Examination of cervical cells is a routine screening test for dysplasia and possible early-stage cervical cancer (Papanicolaou test).
• Cancer can also arise at sites of dysplasia, such as Barrett's esophagus and the bronchi of chronic smokers (bronchogenic squamous cell carcinoma).
Anaplasia
Anaplasia refers to cells that are undifferentiated. They have irregular nuclei and cell structure with numerous mitotic figures. Anaplasia is frequently associated with malignancies and serves as one criterion for grading the aggressiveness of a cancer. For example, an anaplastic carcinoma is one in which the cell appearance has changed from the highly differentiated cell of origin to a cell type lacking the normal characteristics of the original cell. In general, anaplastic cells have lost the normal cellular controls, which regulate division and differentiation.
Neoplasia
Neoplasia is a new growth of tissue and is commonly referred to as a tumor. There are two types of neoplasia: benign and malignant. Malignant neoplasia are cancers. Since cancer is such an important and complex medical problem, a separate section is devoted to cancer.
Interactions
Interactions between two or more toxic agents can produce damage by chemical-chemical interactions, chemical-receptor interactions, or by modification, by a first agent, of the cell and tissue response to a second agent. Interactions may occur with simultaneous exposure and also when exposure to the two agents is separated in time.
Chemical-chemical interactions have been mostly studied in the toxicology of air pollutants, where it was shown that the untoward effect of certain oxidants may be enhanced in the presence of other aerosols.
Interactions at the receptor site have been found in isolated perfused lung experiments. Oxygen tolerance may be an example, when pre-exposure to one concentration of oxygen mitigates later exposure to 100% oxygen by modifying cellular and enzymatic composition of the lung.
Damage to the alveolar zone by the antioxidant butylated hydroxytoluene (BHT) in mice can be greatly enhanced by subsequent exposure to an oxygen concentration that otherwise would have little if any demonstrable effect.
The synergistic interaction between BHT and oxygen in mice results in interstitial pulmonary fibrosis. Acute or chronic lung disease may then be caused not only by one agent, but also in many instances by the interaction of several agents.
Knowledge Check
1) An increase in skeletal muscle cells in athletes due to exercise and increased metabolic demand is an example of:
a) Pathological adaptation
b) Physiological adaptation
Answer
Physiological adaptation - This is the correct answer.
The increase in skeletal muscle cells in athletes due to exercise and increased metabolic demand is an example of physiological adaptation since the increased muscle is beneficial rather than harmful.
2) A cellular response in which there is an increase in the number of cells in a tissue is known as:
a) Atrophy
b) Hypertrophy
c) Hyperplasia
d) Metaplasia
Answer
Hyperplasia - This is the correct answer.
Hyperplasia is an increase in the number of cells in a tissue.
3) A condition of abnormal cell changes or deranged cell growth in which the cells are structurally changed in size, shape, and appearance from the original cell type is known as:
a) Dysplasia
b) Anaplasia
c) Neoplasia
Answer
Dysplasia - This is the correct answer.
Dysplasia is a condition of abnormal cell changes or deranged cell growth in which the cells are structurally changed in size, shape, and appearance from the original cell type.
Cell Damage and Tissue Repair
Toxic damage to cells can cause individual cell death and, if sufficient cells are lost, the result can be tissue or organ failure, ultimately leading to death of the organism. It is nearly impossible to separate a discussion of cellular toxicity from biochemical toxicity. Most observable cellular changes and cell death are due to specific biochemical changes within the cell or in the surrounding tissue. However, there are a few situations where a toxic chemical or physical agent can cause cell damage without actually affecting a specific chemical in the cell or its membrane. Physical agents such as heat and radiation may damage a cell by coagulating its contents (similar to cooking). In this case, there are no specific chemical interactions. Impaired nutrient supply (such as glucose and oxygen) may deprive the cell of essential materials needed for survival.
Toxic Effects
The majority of toxic effects, especially due to xenobiotics, are due to specific biochemical interactions without causing recognizable damage to a cell or its organelles.
Examples of these toxic effects include:
• Interference with a chemical that transmits a message across a neural synapse such as the inhibition of the enzyme acetylcholinesterase by organophosphate pesticides.
• When one toxic chemical inhibits or replaces another essential chemical such as the replacement of oxygen on the hemoglobin molecule with carbon monoxide.
The human body is extremely complex. In addition to over 200 different cell types and about as many types of tissues, there are literally thousands of different biochemicals, which may act alone or in concert to keep the body functions operating correctly. To illustrate the cell's structures and functions and the chemical toxicity of all tissues and organs would be impossible in this brief tutorial. This section presents only a general overview of toxic effects along with some specific types of toxicity that include cancer and neurotoxicity.
Capacity for Repair
Some tissues have a great capacity for repair, such as most epithelial tissues. Others have limited or no capacity to regenerate and repair, such as nervous tissue. Most organs have a functional reserve capacity so that they can continue to perform their body function, although perhaps with somewhat diminished ability. For example:
• Half of a person's liver can be damaged, and the body can regenerate sufficient new liver or repair the damaged section by fibrous replacement to maintain most of the capacity of the original liver.
• The hypertrophy of one kidney to assume the capacity lost when the other kidney is diseased or has been surgically removed.
Toxic Damage to Cells and Tissues
Toxic damage to cells and tissues can be transient and non-lethal or, in severe situations, the damage may cause death of the cells or tissues. The following diagram illustrates the various effects that can occur with damage to cells. There are four main final endpoints to the cellular or biochemical toxicity:
1. The tissue may be completely repaired and return to normal.
2. The tissue may be incompletely repaired but is capable of sustaining its function with reduced capacity.
3. Death of the organism or the complete loss of a tissue or organ. In some instances, the organism can continue to live with the aid of medical treatment, for example, replacement of insulin or by organ transplantations.
4. Neoplasm or cancers may result, many of which will result in death of the organism and some of which may be cured by medical treatment.
Figure \(1\). Toxic damage to cells
(Image Source: NLM)
Reversible Cell Damage
The response of cells to toxic injury may be transient and reversible once the stress has been removed or the compensatory cellular changes are made. In some cases, the full capability of the damaged cells returns. In other cases, a degree of permanent injury remains with a diminished cellular or tissue capacity. In addition to the adaptive cell changes discussed previously, two commonly encountered specific cell changes are associated with toxic exposures: cellular swelling and fatty change.
Cellular swelling, which is associated with hypertrophy, is due to cellular hypoxia, which damages the sodium-potassium membrane pump. This in turn changes the intracellular electrolyte balance with an influx of fluids into the cell, causing it to swell. Cell swelling is reversible when the cause is eliminated.
Fatty change is more serious and occurs with severe cellular injury. In this situation, the cell has become damaged and is unable to adequately metabolize fat. The result is that small vacuoles of fat accumulate and become dispersed within the cytoplasm. While fatty change can occur in several organs, it is usually observed in the liver. This is because most fat is synthesized and metabolized in liver cells. Fatty change can be reversed but it is a much slower process than the reversal of cellular swelling.
Lethal Injury (Cell Death)
In many situations, the damage to a cell may be so severe that the cell cannot survive. Cell death occurs mainly by two methods: necrosis and apoptosis.
Necrosis is a progressive failure of essential metabolic and structural cell components usually in the cytoplasm. Necrosis generally involves a group of contiguous cells or occurs at the tissue level. Such progressive deterioration in structure and function rapidly leads to cell death or "necrotic cells." Necrosis begins as a reduced production of cellular proteins, changes in electrolyte gradient, or loss of membrane integrity (especially increased membrane permeability). Cytoplasmic organelles (such as mitochondria and endoplasmic reticulum) swell while others (especially ribosomes) disappear. This early phase progresses to fluid accumulation in the cells making them pale-staining or showing vacuoles, which pathologists call "cloudy swelling" or "hydropic degeneration." In some cells, they no longer can metabolize fatty acids so that lipids accumulate in the cytoplasmic vacuoles, referred to as "fatty accumulation" or "fatty degeneration." In the final stages of "cell dying," the nucleus becomes shrunken (pyknosis) or fragmented (karyorrhexis).
Apoptosis or "programmed cell death" is a process of self-destruction of the cell nucleus. Apoptosis is an individual or single cell death in that dying cells are not contiguous but are scattered throughout a tissue. Apoptosis is a normal process in cell turnover in that cells have a finite lifespan and spontaneously die. During embryonic development, certain cells are programmed to die and are not replaced, such as the cells between each developing finger. If the programmed cells do not die, the fetus ends up with incompletely separated fingers, joined together in a web-like fashion.
In apoptosis, the cells shrink from a decrease of cytosol and the nucleus. The organelles (other than the nucleus) appear normal in apoptosis. The cell disintegrates into fragments referred to as "apoptotic bodies." These apoptotic bodies and the organelles are phagocytized by adjacent cells and local macrophages without initiation of an inflammatory response as is seen in necrosis. The cells undergo apoptosis and just appear to "fade away." Some toxicants induce apoptosis or, in other cases, they inhibit normal physiological apoptosis.
Following necrosis, the tissue attempts to regenerate with the same type of cells that have died. When the injury is minimal, the tissue may effectively replace the damaged or lost cells. In severely damaged tissues or long-term chronic situations, the ability of the tissue to regenerate the same cell types and tissue structure may be exceeded, so that a different and imperfect repair occurs.
• An example of this is chronic alcoholic damage to liver tissue, in which the body can no longer replace lost hepatocytes with new hepatocytes; instead, connective tissue replacement occurs. Fibrocytes and collagen replace the hepatocytes and normal liver structure with scar tissue. The fibrotic scar tissue shores up the damage but it cannot replace the function of the lost hepatic tissue. With continuing fibrotic change, liver function is progressively diminished so that eventually the liver can no longer maintain homeostasis. This fibrotic replacement of the liver is known as cirrhosis (Figure \(2\)). The normal dark-red, glistening smooth appearance of the liver is replaced with light, irregular fibrous scar tissue that permeates the entire liver.
Figure \(2\). A healthy liver (left) and a liver with cirrhosis (right)
(Image Source: iStock Photos, ©)
We have so far discussed primarily changes to individual cells. However, a tissue and an organ consist of different types of cells that work together to achieve a particular function. As with a football team, when one member falters, the others rally to compensate. It is the same with a tissue. Damage to one cell type prompts reactions within the tissue to compensate for the injury. Within organs, there are two basic types of tissues: the parenchymal and stromal tissues. The parenchymal tissues contain the functional cells (for example, squamous dermal cells, liver hepatocytes, and pulmonary alveolar cells). The stromal cells are the supporting connective tissues (for example, blood vessels and elastic fibers).
Cell Repair
Repair of injured cells can be accomplished by either:
1. Regeneration of the parenchymal cells.
2. Repair and replacement by the stromal connective tissue.
The goal of the repair process is to fill the gap that results from the tissue damage and restore the structural continuity of the injured tissue. Normally a tissue attempts to regenerate the same cells that are damaged; however, in many cases, this cannot be achieved so that replacement with a stromal connective tissue is the best means for achieving the structural continuity.
The ability to regenerate varies greatly with the type of parenchymal cell. The regenerating cells come from the proliferation of nearby parenchymal cells, which serve to replace the lost cells. Based on regenerating ability, there are three types of cells:
1. Labile cells — cells that routinely divide and replace cells that have a limited lifespan (for example, skin epithelial cells, and hematopoietic stem cells).
2. Stable cells — cells that usually have a long lifespan with normally a low rate of division; they can rapidly divide upon demand.
3. Permanent cells — cells that never divide and do not have the ability for replication even when stressed or when some cells die.
Table \(1\) shows examples of cell types.
The labile cells have a great potential for regeneration by replication and repopulation with the same cell type so long as the supporting structure remains intact. Stable cells can also respond and regenerate but to a lesser degree and are quite dependent on the supporting stromal framework. When the stromal framework is damaged, the regenerated parenchymal cells may be irregularly dispersed in the organ resulting in diminished organ function. The tissue response for the labile and stable cells is initially hyperplasia until the organ function becomes normal again. When permanent cells die they are not replaced in kind but instead connective tissue (usually fibrous tissue) moves in to occupy the damaged area. This is a form of metaplasia.
Examples of replacement by metaplasia are:
• Cirrhosis of the liver — liver cells (hepatocytes) are replaced by bands of fibrous tissue, which cannot carry out the metabolic functions of the liver.
• Cardiac infarcts — cardiac muscle cells do not regenerate and thus are replaced by fibrous connective tissue (scar). The scar cannot transmit electrical impulses or participate in contraction of the heart.
• Pulmonary fibrosis — damaged or dead epithelial cells lining the pulmonary alveoli are replaced by fibrous tissue. Gases cannot diffuse across the fibrous cells and thus gas exchange is drastically reduced in the lungs.
Figure \(3\). Activation of Toxicity Pathways
(Image Source: Adapted from Dr. Andrew Maier, adapted from National Research Council (NRC) 2007a.)
Knowledge Check
1) The process of self-destruction of the cell nucleus (often referred to as "programmed cell death") is known as:
a) Necrosis
b) Apoptosis
c) Cellular swelling
d) Fatty change
Answer
Apoptosis - This is the correct answer.
Apoptosis (referred to as "programmed cell death") is a process of self-destruction of the cell nucleus.
2) The category of cells that routinely divide and replace cells that have a limited lifespan is known as:
a) Labile cells
b) Stable cells
c) Permanent cells
Answer
Labile cells - This is the correct answer.
Labile cells are cells that routinely divide and replace cells that have a limited lifespan (e.g., skin epithelial cells and hematopoietic stem cells).
Cancer
Cancer has long been considered a cellular disease since cancers are composed of cells that grow without restraint in various areas of the body. Such growths of cancerous cells can replace normal cells or tissues, causing severe malformations (such as with skin and bone cancers) and failure of internal organs, which frequently leads to death. How do cells become cancerous? The development of cancer is an enormously complex process, for once a cell has started down the cancer path, it progresses through a series of steps that continue long after the initial cause has disappeared.
Overview
There are about as many types of cancers as there are different types of cells in the body (over 100 types). Some cell types constantly divide and are replaced (such as skin and blood cells). Other types of cells rarely or never divide (such as bone cells and neurons). Sophisticated mechanisms exist in cells to control when, if, and how cells replicate. Cancer occurs when these mechanisms are lost and replication takes place in an uncontrolled and disorderly manner. It can arise when one cell or a small group of cells multiplies too many times because of damage to its DNA.
Recent research has begun to unravel the extremely complex pathogenesis of cancer. An intricate array of biochemical changes within and between cells underlies the progression of cancer and transforms normal cells into cancerous cells. These biochemical changes lead a cell through a series of steps, changing it gradually from a normal cell to a cancer cell. The altered cell is no longer bound by the regulatory controls that govern the life and behavior of normal cells.
Cancer is not a single disease but a large group of diseases. The common aspect is that all cancers have the same basic property: they are composed of cells that no longer conform to the usual constraints on cell proliferation. In other words, they are uncontrolled growths of cells.
Terminology
The terminology associated with cancer can be confusing and may be used differently among the public and medical communities.
Here are definitions of the most frequently used cancer terms:
• Cancer — a malignant tumor that has the ability to metastasize or invade into surrounding tissues.
• Tumor — a general term for an uncontrolled growth of cells that becomes progressively worse with time. Tumors may be benign or malignant.
• Neoplasm — same as a tumor.
• Neoplasia — the growth of new tissue with abnormal and unregulated cellular proliferation.
• Benign Tumor — a tumor that does not metastasize or invade surrounding tissue.
• Malignant Tumor — a tumor that has the ability to metastasize or invade into surrounding tissues (same as cancer).
• Metastasis — ability to establish secondary tumor growth at a new location away from the original site.
• Carcinogenesis — the production of a carcinoma (epithelial cancer). Sometimes carcinogenesis is used as a general term for production of any type of tumor.
How are Cancers Named?
While most tumors are named in accordance with an internationally agreed-upon classification scheme, there are exceptions. Tumors are generally named and classified based on:
• The cell or tissue of origin
• Whether benign or malignant
Most tumor names end with the suffix "oma" which indicates a swelling or tissue enlargement. [Note: some terms ending with -oma are not cancers; for example, a hematoma is merely a swelling consisting of blood].
In naming tumors, qualifiers may be added in addition to the tissue of origin and structural features. For example, a "poorly-differentiated bronchogenic squamous cell carcinoma" is a malignant tumor (carcinoma) of squamous cell type (original cell type), which arose in the bronchi of the lung (site where the cancer started), and in which the cancer cells are poorly differentiated, meaning they have lost much of the normal appearance of squamous cells.
There are several historical exceptions to the standard nomenclature system, often based on their early and accepted use in the literature.
Examples include:
• Some tumors are named after the person who first described the tumor, for example, Wilms tumor (kidney tumor) and Hodgkin lymphoma (a specific form of lymphoid cancer).
• A few cancers are named for their physical characteristics such as pheochromocytomas (dark-colored tumors of the adrenal gland).
• A few cancers are composed of mixtures of cells, for example, fibrosarcoma and carcinosarcoma.
Most malignant tumors fall into one of two categories: carcinomas or sarcomas. The major differences between carcinomas and sarcomas are listed in Table \(1\):
Differences between Benign and Malignant Tumors
The biological and medical consequences of a tumor depend on whether it is benign or malignant.
Table \(2\) provides a comparison of the primary differences between benign and malignant tumors:
Common Sites for Cancer
Cancer can occur in almost any tissue or organ. Some cells and tissues are more likely to become cancerous than others, particularly those cells that normally undergo proliferation to replace cells lost due to injury or cell death. Cells that do not proliferate (for example, neurons and heart muscle cells) rarely give rise to cancers. Figure \(1\) illustrates the most frequent occurrence of cancers in various body sites.
Figure \(1\). Top 10 Cancer Sites for Males and Females from All Races in the United States in 2013
(Image Source: CDC, https://nccd.cdc.gov/uscs/toptencancers.aspx)
‡ Rates are age-adjusted to the 2000 U.S. standard population (19 age groups – Census P25–1130)
While the prostate is the most common site of cancer in men, most men with prostate cancer survive with treatment. Other types of cancer are more often fatal; for example, the cancer that causes the most deaths in men, even with treatment, is lung cancer. A similar situation exists in women: the breast is the most common site for cancer, but more women die as a result of lung cancer.
What Do Cancers Look Like?
Cancer is a general term for more than 100 different cellular diseases, all with the same characteristic – the uncontrolled abnormal growth of cells in different parts of the body. Cancers appear in many forms. A few types are visible to the unaided eye but others grow inside the body and slowly destroy or replace internal tissues.
Skin Cancer
An example of a cancer that can be easily seen by the unaided eye is skin cancer. Skin cancers appear as raised, usually dark-colored, irregularly-shaped growths on the skin. As the cancer grows, it spreads to nearby areas of the skin. In advanced cases, the cancer metastasizes to lymph nodes and organs far away from the original site. The skin cancer illustrated in Figure \(2\) is known as a basal cell carcinoma. Melanomas and squamous cell carcinomas are other common skin cancers. Melanomas are usually the most malignant of the skin cancers.
Figure \(2\). Photograph of basal cell carcinoma of the skin
(Image Source: NLM)
Other Cancers
Most cancers involve internal organs and require elaborate tests to diagnose. Some large internal tumors can be felt or will push the skin outward, and can be detected by noting abnormal bulges or an abnormal feel (for example, a hard area) to the body. Thyroid tumors, bone tumors, breast tumors, and testicular tumors are cancers that might be felt or observed by the patient. Other internal tumors may only be suspected based on diminished organ function (such as difficulty breathing with lung cancer), pain, bleeding (for example, blood in the feces with colon cancer), weakness, or other unusual symptoms. Confirming the existence of a cancer may require diagnostic tests, especially when the cancer is not growing as a single large lump but rather as a series of small tumors (metastatic foci) or is widely dispersed throughout the body (as in leukemia).
A few examples of internal cancers are presented in the following figures.
Liver Cancer
Numerous cancer nodules can be seen showing that much of the liver has been destroyed (Figure \(3\)).
Figure \(3\). A liver with numerous cancer nodules
(Image Source: NLM)
Lung Cancer
An early developing squamous cell carcinoma can be seen growing in the middle of the lung (Figure \(4\)). As the cancer develops, it will consume more of the lung and metastasize to other areas of the body.
Figure \(4\). A cancerous lung
(Image Source: NLM)
Kidney Cancer
The photograph in Figure \(5\) shows the cancer has consumed much of the upper portion of the kidney.
Figure \(5\). A kidney with cancer
(Image Source: NLM)
Historical Changes in Incidence of Cancer
Cancer has been recognized in humans for centuries. However, the incidence of various types of cancer has changed since the mid-1900s. This is especially true for lung and stomach cancer. Deaths from lung cancer hit a peak in the early 1990s and have been slowly declining since 2001. During that same period, deaths from stomach cancer decreased substantially. Breast cancer caused more deaths than any other type of cancer in women for many decades. However, as more women began smoking cigarettes, deaths from lung cancer outpaced deaths from breast cancer. These changes in the types and incidences of cancer reflect the increased longevity of people as well as personal habits and environmental changes.
Latency Period for Cancer Development
Cancer is a chronic condition, which develops gradually over a period of time and may become a clinical concern many years following the initial exposure to a carcinogen. This period of time is referred to as the latency period. The latency period varies with the type of cancer and may range from a few years to over 30 years. For example, the latency period for leukemia after benzene or radiation exposure may be only five years. In contrast, the latency period may be 20–30 years for skin cancer after arsenic exposure and mesothelioma (cancer of the pleura around the lungs) after asbestos exposure.
Survival Time
Success in treating cancer varies greatly with the type of cancer: some cancers respond well to treatment, whereas others do not. For example, medical treatment of cancers of the pancreas, liver, esophagus, and lung is largely unsuccessful. In contrast, cancers of the thyroid, testes, and skin respond quite well to treatment. Table \(3\) shows the 5-year survival rate by cancer location.
Table \(3\). Five year survival rate by primary cancer site (2006-2012)
(Source: Table 1. Surveillance, Epidemiology, and End Results Program (SEER), National Cancer Institute,
https://seer.cancer.gov/csr/1975_2013/results_merged/topic_survival.pdf)
What Causes Cancer?
A large number of industrial, pharmaceutical, and environmental chemicals have been identified as potential carcinogens by animal tests. Human epidemiology studies have confirmed that many are human carcinogens as well. However, while it is apparent that chemicals and radiation play a substantial role, lifestyle factors (such as diet, obesity, and smoking) and infections (such as hepatitis B, hepatitis C, and human papillomaviruses) are also major factors influencing the likelihood that a person will develop cancer. Additional factors involved in the development of cancer include aging and heredity.
Pathogenesis of Cancer
Carcinogenesis is a multi-step, multi-factorial genetic process. All known tumors are composed of cells with genetic alterations that make them perform differently from their progenitor (parent) cells. The carcinogenesis process is very complex and unpredictable, consisting of several phases and involving multiple genetic events (mutations) that take place over a very long period of time, at least 10 years for most types of cancer.
Cancer cells do not necessarily proliferate faster than their normal progenitors. In contrast to normal proliferating tissues, where there is a strict and controlled balance between cell death and replacement, cancers grow and expand because more cancer cells are produced than die in a given time period. For a tumor to be detected, it must attain a size of at least one cubic centimeter (about the size of a pea). This small tumor contains 100 million to a billion cells at that time. The development from a single cell to that size also means that the mass has doubled at least 30 times. During the long and active period of cell proliferation, the cancerous cells may have become aggressive in growth and have reverted to a less differentiated cell type that no longer resembles the original cell type.
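As a rough arithmetic check on these figures (an idealized calculation that treats every division as a clean doubling with no cell loss, which real tumors only approximate), the number of cells after \(n\) doublings of a single cell is \(2^n\):

\[ 2^{30} \approx 1.07 \times 10^{9} \text{ cells} \]

so roughly 30 doublings are needed to go from one cell to the hundred million to a billion cells of a detectable, pea-sized tumor.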
While knowledge of carcinogenesis continues to evolve, it is clear that there are at least three main phases in cancer development:
1. Initiation
2. Promotion/Conversion
3. Progression
Figure \(6\). Phases of carcinogenesis
(Image Source: NLM)
1. Initiation
The initiation phase consists of the alteration of the DNA (mutation) of a normal cell, which is an irreversible change. The initiated cell has developed a capacity for individual growth. At this time, the initiated cell is indistinguishable from other similar cells in the tissue. The initiating event can consist of a single exposure to a carcinogenic agent or, in some cases, it may be an inherited genetic defect.
• An example is retinoblastoma in which some children who develop the disease may have inherited an altered copy of the gene involved and are at risk of passing the altered gene to successive generations.
The initiated cell, whether inherited or newly mutated, may remain dormant for months to years and, unless a promoting event occurs, may never develop into a clinical cancer case.
2. Promotion/Conversion
The promotion/conversion phase is the second major step in the carcinogenesis process, in which specific agents (referred to as promoters) enhance the further development of the initiated cells. Promoters often, but not always, interact with the cell's DNA and influence the further expression of the mutated DNA so that the initiated cell proliferates and progresses further through the carcinogenesis process. The clone of proliferating cells at this stage takes a form consistent with a benign tumor. The mass of cells remains a cohesive group, with the cells staying in physical contact with one another.
3. Progression
Progression is the third recognized step and is associated with the development of the initiated cell into a biologically malignant cell population. In this stage, a portion of the benign tumor cells may be converted into malignant forms so that a true cancer has evolved. Individual cells in this final stage can break away and start new clones of growth distant from the original site of development of the tumor. This is known as metastasis.
Genetic Activity
While the three-stage pathogenesis scheme describes the basic sequence of events in the carcinogenesis process, the actual events that take place in these various steps are due to activities of specific genes within the DNA of the cells. Cellular DNA contains two types of genes:
1. Structural genes direct the production of specific proteins within the cell.
2. Regulatory genes control the activity of the structural genes and direct the proliferation process of the cell.
The three classes of regulatory genes considered to have major roles in the carcinogenesis process are known as:
1. Proto-oncogenes
2. Oncogenes
3. Suppressor genes
Proto-oncogenes are normal cellular genes that encode and direct the production of the regulatory proteins and growth factors within the cell or its membrane. The proteins encoded by proto-oncogenes are necessary for normal cell growth and differentiation. Activation of a proto-oncogene can alter the normal growth and differentiation of cells, which leads to neoplasia. Several agents can activate proto-oncogenes, through point mutations or DNA rearrangements of the proto-oncogenes; the product of this activation is an oncogene. Many proto-oncogenes have been identified, and they are usually named after the source of their discovery. For example, the KRAS proto-oncogene was named for its discovery using the Kirsten rat sarcoma virus; HRAS, MYC, MYB, and SRC are other examples of proto-oncogenes. Proto-oncogenes are not specific to the species in which they were first identified but have been found in many other species, including humans. They are present in many cells but remain dormant until activated. Either a point mutation or chromosomal damage of various types can induce activation, and once activated they become oncogenes.
Oncogenes are altered or misdirected proto-oncogenes which now have the ability to direct the production of proteins within the cell that can change or transform the normal cell into a neoplastic cell. Most oncogenes differ from their proto-oncogenes by a single point mutation located at a specific codon (a group of three DNA bases that encodes for a specific amino acid) of a chromosome. The altered DNA in the oncogene results in the production of an abnormal protein that can alter cell growth and differentiation. It appears that a single activated oncogene is not sufficient for the growth and progression of a cell and its offspring to form a cancerous growth. However, it is a major step in the carcinogenesis process.
Tumor suppressor genes, sometimes referred to as anti-oncogenes, are present in normal cells and serve to counteract the activity of oncogenes and the altered proteins they produce. Tumor suppressor genes prevent a cell with damaged DNA from proliferating and evolving into an uncontrolled growth; they actively and effectively oppose the action of an oncogene. If a tumor suppressor gene is inactivated (usually by a point mutation), its control over the oncogene and the transformed cell may be lost. The potentially tumorous cell can then grow without restraint, free of normal cellular regulatory control. The suppressor gene most frequently altered in human tumors is the p53 gene; damaged p53 genes have been identified in over 50% of human cancers.
The p53 gene normally halts cell division and stimulates repair enzymes to rebuild and restore the damaged regions of the DNA. If the damage is too extensive, the p53 commands the cell to self-destruct. An altered p53 is incapable of these defensive actions and cannot prevent the cell with damaged DNA from dividing and proliferating in an erratic and uncontrolled manner. This is the essence of cancer.
This section represents only a brief overview of an enormously complex process for which knowledge is continuously evolving with the tools of molecular biology. New factors are continuously being identified; however, many pieces of this cancer puzzle remain elusive at this time.
Knowledge Check
1) A body growth with the ability to metastasize or invade into surrounding tissues is known as a:
a) Benign tumor
b) Malignant tumor
c) Hyperplasia
Answer
Malignant tumor - This is the correct answer.
A malignant tumor has the ability to metastasize or invade into surrounding tissues. It is the same as cancer.
2) Most cancers are thought to be due to the following:
a) Infections
b) Food additives
c) Lifestyle factors
d) Pollution
Answer
Lifestyle factors - This is the correct answer.
Lifestyle (including diet, tobacco use, reproductive and sexual behavior, and alcohol consumption) is considered to cause about 75% of all cancers.
3) The initial stage in carcinogenesis in which there is an alteration of the DNA (mutation) is referred to as the:
a) Progression stage
b) Promotion stage
c) Initiation stage
Answer
Initiation stage - This is the correct answer.
The initiation phase consists of the alteration of the DNA (mutation) of a normal cell, which is an irreversible change.
4) The cellular gene which is present in most normal cells and serves as a balance to the genes for tumor expression is known as a:
a) Tumor suppressor gene
b) Oncogene
c) Proto-oncogene
Answer
Tumor suppressor gene - This is the correct answer.
Tumor suppressor genes, sometimes referred to as anti-oncogenes, are present in normal cells and serve as a balance to the genes for tumor expression, the proto-oncogenes and oncogenes.
Neurotoxicity
The nervous system is very complex, and toxins can act at many different points within it. The focus of this section is to provide a basic overview of how the nervous system works and how neurotoxins affect it. Because these topics are so complex, this section does not include extensive details on the anatomy and physiology of the nervous system, the many neurotoxins in our environment, or the subtle ways they can damage the nervous system or interfere with its functions.
Since the nervous system innervates all areas of the body, some toxic effects may be quite specific and others generalized depending upon where in the nervous system the toxin exerts its effect. Before discussing how neurotoxins cause damage, we will look at the basic anatomy and physiology of the nervous system.
Anatomy and Physiology of the Nervous System
The nervous system has three basic functions:
1. Specialized cells detect sensory information from the environment and relay that information to other parts of the nervous system.
2. It directs motor functions of the body usually in response to sensory input.
3. It integrates the thought processes, learning, and memory.
All of these functions are potentially vulnerable to the actions of toxicants.
The nervous system consists of two fundamental anatomical divisions:
1. Central nervous system (CNS)
2. Peripheral nervous system (PNS)
Central Nervous System
The CNS includes the brain and spinal cord. The CNS serves as the control center and processes and analyzes information received from sensory receptors and in response issues motor commands to control body functions. The brain, which is the most complex organ of the body, structurally consists of six primary areas (Figure \(1\)):
1. Cerebrum — controls thought processes, intelligence, memory, sensations, and complex motor functions.
2. Diencephalon (thalamus, hypothalamus, pituitary gland) — relays and processes sensory information; controls emotions, autonomic functions, and hormone production.
3. Midbrain — processes auditory and visual data; generates involuntary motor responses.
4. Pons — a tract and relay center which also assists in somatic and visceral motor control.
5. Cerebellum — coordinates voluntary and involuntary motor activities based on memory and sensory input.
6. Medulla oblongata — relays sensory information to the rest of the brain; regulates autonomic function, including heart rate and respiration.
Figure \(1\). Internal anatomy of the brain
(Image Source: Adapted from iStock Photos, ©)
Peripheral Nervous System
The PNS consists of all nervous tissue outside the CNS (Figure \(2\)). The PNS contains two forms of nerves:
1. Afferent nerves, which relay sensory information to the CNS.
2. Efferent nerves, which relay motor commands from the CNS to various muscles and glands.
Efferent nerves are organized into two systems. One is the somatic nervous system that is also known as the voluntary system and which carries motor information to skeletal muscles. The second efferent system is the autonomic nervous system, which carries motor information to smooth muscles, cardiac muscle, and various glands. The major difference between these two systems pertains to conscious control.
• The somatic system is under our voluntary control such as moving our arms by consciously telling our muscles to contract.
• In contrast, we cannot consciously control the smooth muscles of the intestine, heart muscle, or secretion of hormones. Those functions are automatic and involuntary as controlled by the autonomic nervous system.
Figure \(2\). Structures of the central nervous system and peripheral nervous system
(Image Source: NLM)
Cells of the Nervous System
There are two categories of cells found in the nervous system: neurons and glial cells. Neurons are the functional nerve cells directly responsible for transmission of information to and from the CNS to other areas of the body. Glial cells (also known as neuroglia) provide support to the neural tissue, regulate the environment around the neurons, and protect against foreign invaders.
Neurons communicate with all areas of the body and are present within both the CNS and PNS. They transmit rapid impulses to and from the brain and spinal cord to virtually all tissues and organs of the body. As such, they are essential cells, and their damage or death can have critical effects on body function and survival. When neurons die, they are not replaced. As neurons are lost, so are certain neural functions such as memory, the ability to think, quick reactions, coordination, muscular strength, and the various senses such as sight, hearing, and taste. If the neuron loss or impairment is substantial, severe and permanent disorders can occur, such as blindness, paralysis, and death.
A neuron consists of a cell body and two types of extensions: numerous dendrites and a single axon (Figure \(3\)). Dendrites are specialized to receive incoming information and send it to the neuron cell body, from which the signal (an electrical charge) is transmitted down the axon to one or more junctions with other neurons or muscle cells (known as synapses). The axon may extend long distances, over a meter in some cases, to transmit information from one part of the body to another. The myelin sheath is a multi-layer coating that wraps some axons; it helps insulate the axon from surrounding tissues and fluids and prevents the electrical charge from escaping from the axon.
Figure \(3\). Neuron structure
(Image Source: Adapted from iStock Photos, ©)
Figure \(4\). Complete neuron cell diagram
(Image Source: Adapted from Wikimedia Commons, obtained under Public Domain. Author: LadyofHats.)
Information passes along the network of neurons between the CNS and the sensory receptors and effectors by a combination of electrical impulses and chemical neurotransmitters. The information (electrical charge) moves from the dendrites through the cell body and down the axon. The mechanism by which an electrical impulse moves down the neuron is quite complex. When the neuron is at rest, it has a negative internal electrical potential. This changes when a neurotransmitter binds to a dendrite receptor. Protein channels in the dendrite membrane open, allowing charged ions to move across the membrane and change the electrical potential. The propagation of an electrical impulse (known as an action potential) proceeds down the axon by a continuous series of openings and closings of sodium-potassium channels and pumps. The action potential moves like a wave from one end (the dendritic end) to the terminal end of the axon.
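The rest-depolarize-fire-reset behavior described above can be illustrated with a highly simplified numerical sketch. The Python example below uses a leaky integrate-and-fire abstraction; this is a toy model, not the channel-level sodium-potassium mechanism of a real neuron, and the resting potential, threshold, time constant, and stimulus values are assumed purely for illustration.

```python
# Illustrative toy model only (leaky integrate-and-fire), with assumed parameter values.
import numpy as np

dt = 0.1                                   # time step, ms
t = np.arange(0, 100, dt)                  # 100 ms of simulated time
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0   # mV (assumed values)
tau = 10.0                                 # membrane time constant, ms (assumed)
stimulus = np.where((t > 20) & (t < 80), 20.0, 0.0)  # assumed depolarizing input, mV

v = np.full_like(t, v_rest)                # membrane potential starts at rest
spike_times = []
for i in range(1, len(t)):
    # potential decays back toward rest but is pushed upward by the stimulus
    dv = (-(v[i - 1] - v_rest) + stimulus[i - 1]) / tau
    v[i] = v[i - 1] + dv * dt
    if v[i] >= v_thresh:                   # threshold reached: the cell "fires"
        spike_times.append(round(t[i], 1))
        v[i] = v_reset                     # potential resets after the spike

print("Spike times (ms):", spike_times)
```

Running the sketch produces a handful of spikes during the stimulated interval, capturing the threshold-and-reset character of impulse generation; in a real neuron the action potential additionally propagates along the axon as a wave, as described above.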
However, the electrical charge cannot cross the gap (synapse) between the axon of one neuron and the dendrite of another neuron or an axon and a connection with a muscle cell (neuromuscular junction). Chemicals called neurotransmitters move the information across the synapse.
Neurons do not make actual contact with one another but have a gap, known as a synapse. As the electrical pulse proceeds up or down an axon, it encounters at least one junction or synapse. An electrical pulse cannot pass across the synapse. At the terminal end of an axon is a synaptic knob, which contains the neurotransmitters.
Neurotransmitters
Vesicles release neurotransmitters upon stimulus by an impulse moving down the presynaptic neuron. The neurotransmitters diffuse across the synaptic junction and bind to receptors on the postsynaptic membrane. The neurotransmitter-receptor complex then initiates the generation of an impulse on the next neuron or the effector cell, for example, a muscle cell or secretory cell.
After the impulse has been initiated, the neurotransmitter-receptor complex must be inactivated or continuous impulses (beyond the original impulse) will be generated. Enzymes perform this inactivation, breaking down the complex at precisely the right time, after the intended impulse has been generated. There are several types of neurotransmitters and corresponding inactivating enzymes. One of the major neurotransmitters is acetylcholine, with acetylcholinesterase as its specific inactivator.
Figure \(5\). Impulse transmission across synapse
(Image Source: Adapted from iStock Photos, ©)
There are over 100 known neurotransmitters. Among the most well-known are:
• Acetylcholine
• Dopamine
• Serotonin
• Norepinephrine
• GABA (gamma-aminobutyric acid)
Types of Neurons
Neurons are categorized by their function and consist of three types:
1. Sensory neurons (afferent neurons) carry information from sensory receptors (usually processes of the neuron) to the CNS. Some sensory receptors detect external changes such as temperature, pressure, touch, and vision. Others monitor internal changes such as balance, muscle position, taste, deep pressure, and pain.
2. Motor neurons (effector neurons) relay information from the CNS to other organs terminating at the effectors. Motor neurons make up the efferent neurons of both the somatic and autonomic nervous systems.
3. Interneurons (association neurons) are located only in the CNS and provide connections between sensory and motor neurons. They can carry either sensory or motor impulses. They are involved in spinal reflexes, analysis of sensory input, and coordination of motor impulses. They also play a major role in memory and the ability to think and learn.
Glial Cells
Glial cells are important because they provide structural support for the neurons, protect them from outside invading organisms, and maintain a favorable environment (nutrients, oxygen supply, etc.). Neurons are highly specialized and do not have all the usual cellular organelles needed to provide their own life support. They are therefore highly dependent on glial cells for their survival and function. For example, neurons have such a limited storage capacity for oxygen that they are extremely sensitive to decreases in oxygen (anoxia) and will die within a few minutes. The list below describes the types of glial cells:
• Astrocytes are large cells found only in the CNS; they maintain the blood-brain barrier that controls the entry of fluid and substances from the circulatory system into the CNS. They also provide rigidity to the brain structure.
• Schwann cells and oligodendrocytes wrap themselves around some axons to form myelin, which serves like insulation. Myelinated neurons usually transmit impulses at high speed, such as needed in motor neurons. Loss of myelination causes a dysfunction of these cells.
• Microglia are small, mobile, phagocytic cells.
• Ependymal cells produce the cerebrospinal fluid (CSF) which surrounds and cushions the central nervous system.
Figure \(6\). Neurons and neuroglial cells
(Image Source: Adapted from iStock Photos, ©)
Figure \(7\). Comparison of somatic and visceral reflexes
(Image Source: Wikimedia Commons, obtained under Creative Commons Attribution 3.0 Unported License. Author: OpenStax College. View original image. Source: Anatomy & Physiology, Connexions Web site. http://cnx.org/content/col11496/1.6/, Jun 19, 2013.)
Toxic Damage to Nervous System
The nervous system is quite vulnerable to toxins since chemicals interacting with neurons can change the critical voltages, which must be carefully maintained. However, the nervous system has defense mechanisms that can protect it from toxins.
Most of the CNS is protected by an anatomical barrier between the neurons and blood vessels, known as the blood-brain barrier. Tight junctions between the endothelial cells of the blood vessels in the CNS, together with the astrocytes that surround those vessels, protect the CNS from many toxin exposures. This arrangement prevents the diffusion of chemicals out of the blood vessels and into the intercellular fluid, except for small, lipid-soluble, non-polar molecules. Specific transport mechanisms exist to carry essential nutrients (such as glucose, amino acids, and ions) into the brain. Another defense mechanism within the brain against chemicals that pass through the vascular barrier is the presence of metabolizing enzymes. Certain detoxifying enzymes, such as monoamine oxidase, can biotransform many chemicals into less toxic forms as soon as they enter the intercellular fluid.
The basic types of changes due to toxins can be divided into three categories – 1) sensory; 2) motor; and 3) interneuronal – depending on the type of damage sustained.
1. Damage can occur to sensory receptors and sensory neurons, which can affect the basic senses of pressure, temperature, vision, hearing, taste, smell, touch, and pain.
• For example, heavy metal poisoning (especially lead and mercury) can cause deafness and loss of vision.
• Several chemicals including inorganic salts and organophosphorus compounds can cause a loss of sensory functions.
2. Damage to motor neurons can cause muscular weakness and paralysis.
• Isonicotinic hydrazide (used to treat tuberculosis) can cause such damage.
3. Interneuronal damage can cause learning deficiencies, loss of memory, incoordination, and emotional conditions.
• Low levels of inorganic mercury and carbon monoxide can cause depression and loss of memory.
Mechanisms for Toxic Damage to the Nervous System
Toxic damage to the nervous system occurs by the following basic mechanisms:
1. Direct damage and death of neurons and glial cells.
2. Interference with electrical transmission.
3. Interference with chemical neurotransmission.
A. Death of Neurons and Glial Cells
The most common cause of death of neurons and glial cells is anoxia, an inadequate oxygen supply to the cells or their inability to utilize oxygen. Anoxia may result from the blood's decreased ability to provide oxygen to the tissues (impaired hemoglobin or decreased circulation) or from the cells being unable to utilize oxygen.
• For example, carbon monoxide and sodium nitrite can bind to hemoglobin preventing the blood from being able to transport oxygen to the tissues.
• Hydrogen cyanide and hydrogen sulfide can penetrate the blood-brain barrier and are rapidly taken up by neurons and glial cells.
• Another example is sodium fluoroacetate (commonly known as Compound 1080, a rodent pesticide) which inhibits a cellular enzyme.
Those chemicals interfere with cellular metabolism and prevent nerve cells from being able to utilize oxygen. This is called histotoxic anoxia.
Neurons are among the most sensitive cells in the body to inadequate oxygenation. Lowered oxygen for only a few minutes is sufficient to cause irreparable changes leading to the death of neurons.
Several other neurotoxins directly damage or kill neurons, including:
• Lead
• Mercury
• Some halogenated industrial solvents
• Methanol (wood alcohol)
• Toluene
• Trimethyltin
• Polybrominated diphenyl ethers (PBDEs)
While some neurotoxic agents affect neurons throughout the body, others are quite selective.
• For example, methanol specifically affects the optic nerve, retina, and related ganglion cells while trimethyltin kills neurons in the hippocampus, a region of the cerebrum.
Other agents can degrade neuronal cell function by diminishing its ability to synthesize protein, which is required for the normal function of the neuron.
• Organomercury compounds exert their toxic effect in this manner.
With some toxins, only a portion of the neuron is affected. If the cell body is killed, the entire neuron will die. Some toxins cause death or loss of only a portion of the dendrites or axon while the cell body itself survives, although with diminished or total loss of function. Commonly, axons begin to die at their most distal end, with necrosis slowly progressing toward the cell body. This is referred to as "dying-back neuropathy."
• Some organophosphate chemicals (including some pesticides) cause this distal axonopathy. The mechanism for the dying back is not clear but may be related to the inhibition of an enzyme (neurotoxic esterase) within the axon.
• Other well-known chemicals that can cause distal axonopathy include ethanol, carbon disulfide, arsenic, ethylene glycol (in antifreeze), and acrylamide.
B. Interference with Electrical Transmission
There are two basic ways that a foreign chemical can interrupt or interfere with the propagation of the electrical potential (impulse) down the axon to the synaptic junction:
1. To interfere with the movement of the action potential down the intact axon.
2. To cause structural damage to the axon or its myelin coating. Without an intact axon, transmission of the electrical potential is not possible.
Agents that can block or interfere with the sodium and potassium channels and sodium-potassium pump cause interruption of the propagation of the electrical potential. This will weaken, slow, or completely interrupt the movement of the electrical potential. Many potent neurotoxins exert their toxicity by this mechanism.
• Tetrodotoxin (a toxin found in pufferfish, some frogs, and certain invertebrates) and saxitoxin (a cause of shellfish poisoning) block sodium channels. Batrachotoxin (a toxin from South American frogs used as an arrow poison) and some pesticides (DDT and the pyrethroids) increase the permeability of the neuron membrane by preventing closure of sodium channels, which leads to repetitive firing of the electrical charge and an exaggerated impulse.
A number of chemicals can cause demyelination. Many axons (especially in the PNS) are wrapped with a protective myelin sheath that acts as insulation and restricts the electrical impulse within the axon. Agents that selectively damage these coverings disrupt or interrupt the conduction of high-speed neuronal impulses. Loss of a portion of the myelin can allow the electrical impulse to leak out into the tissue surrounding the neuron so that the pulse does not reach the synapse with the intended intensity.
• In some demyelinating diseases, such as multiple sclerosis (MS), the myelin is lost, causing paralysis and loss of sensory and motor function.
A number of chemicals can cause demyelination:
• Diphtheria toxin causes loss of myelin by interfering with the production of protein by the Schwann cells that produce and maintain myelin in the PNS.
• Triethyltin (used as a biocide, preservative, and polymer stabilizer) interrupts the myelin sheath around peripheral nerves.
• Lead causes loss of myelin primarily around peripheral motor axons.
C. Interference with Chemical Neurotransmission
Synaptic dysfunction is a common mechanism for the toxicity of a wide variety of chemicals. There are two types of synapses: those between two neurons (axon of one neuron and dendrites of another) and those between a neuron and a muscle cell or gland. The basic mechanism for the chemical transmission is the same. The major difference is that the neurotransmitting chemical between a neuron and muscle cell is acetylcholine whereas there are several other types of neurotransmitting chemicals involved between neurons, depending on where in the nervous system the synapse is located.
There are four basic steps involved in neurotransmission at the synapse:
1. Synthesis and storage of neurotransmitter (synaptic knob of axon).
2. Release of the neurotransmitter (synaptic knob with movement across synaptic cleft).
3. Receptor activation (effector membrane).
4. Inactivation of the transmitter (enzyme breaks down neurotransmitter stopping induction of action potential).
The arrival of the action potential at the synaptic knob initiates a series of events culminating in the release of the chemical neurotransmitter from its storage depots in vesicles. After the neurotransmitter diffuses across the synaptic cleft, it complexes with a receptor (a membrane-bound macromolecule) on the post-synaptic side. This binding causes an ion channel to open, changing the membrane potential of the post-synaptic neuron, muscle, or gland. This starts the process of impulse formation, or action potential, in the next neuron or effector cell. However, unless this receptor-transmitter complex is inactivated, the channel remains open and the cell continues to fire. Thus, the transmitter action must be terminated. This is done by specific enzymes that break the bond and return the receptor membrane to its resting state.
Drugs and environmental chemicals can interact at specific points in this process to change neurotransmission. Depending on where and how the xenobiotics act, the result may be either an increase or a decrease in neurotransmission. Many drugs (such as tranquilizers, sedatives, stimulants, and beta-blockers) are used to correct imbalances in neurotransmission (such as occur in depression, anxiety, and cardiac muscular weakness). The mode of action of some analgesics is to block receptors, which prevents transmission of pain sensations to the brain.
Exposure to environmental chemicals that can perturb neurotransmission is a very important area of toxicology. Generally, neurotoxins affecting neurotransmission act to:
1. Increase or decrease the release of a neurotransmitter at the presynaptic membrane.
2. Block receptors at the postsynaptic membrane.
3. Modify the inactivation of the neurotransmitter.
This is a list of only a few examples of neurotoxins to show the range of mechanisms:
• α-Bungarotoxin (a potent toxin in the venom of elapid snakes) blocks nicotinic acetylcholine receptors at the neuromuscular junction.
• Scorpion venom potentiates the release of a neurotransmitter (acetylcholine).
• Black widow spider venom causes an explosive release of neurotransmitters.
• Botulinum toxin blocks the release of acetylcholine at neuromuscular junctions.
• Atropine blocks acetylcholine receptors.
• Strychnine inhibits the neurotransmitter glycine at postsynaptic sites resulting in an increased level of neuronal excitability in the CNS.
• Nicotine binds to certain cholinergic receptors.
A particularly important type of neurotoxicity is the inhibition of acetylcholinesterase. The specific function of acetylcholinesterase is to stop the action of acetylcholine once it has bound to a receptor and initiated the action potential in the next nerve or at the neuromuscular or glandular junction. If the acetylcholine-receptor complex is not inactivated, continual stimulation results, leading to paralysis and death.
• Many commonly used chemicals, especially organophosphate and carbamate pesticides, poison mammals by this mechanism.
• The major military nerve gases are also cholinesterase inhibitors.
Acetylcholine is a common neurotransmitter. It is responsible for transmission at all neuromuscular and glandular junctions as well as many synapses within the CNS.
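Why inhibiting the inactivating enzyme leads to continual stimulation can be illustrated with a toy calculation. The sketch below treats synaptic acetylcholine as a single pool with a constant release rate and first-order breakdown by acetylcholinesterase; the model form and all rate values are assumptions for illustration, not measured synaptic kinetics.

```python
# Toy model, illustration only: steady-state synaptic acetylcholine (ACh) when
# constant release is balanced by first-order breakdown (dA/dt = release - k*A).
# All rate values are assumed for demonstration.
def steady_state_ach(release_rate, breakdown_rate_constant, inhibition_fraction):
    effective_k = breakdown_rate_constant * (1.0 - inhibition_fraction)
    return release_rate / effective_k      # steady state: A = release / k_effective

release, k = 1.0, 0.5                      # arbitrary units (assumed)
baseline = steady_state_ach(release, k, 0.0)
for inhibition in (0.0, 0.5, 0.9, 0.99):
    level = steady_state_ach(release, k, inhibition)
    print(f"{inhibition:4.0%} AChE inhibition -> ACh at {level / baseline:5.0f}x the uninhibited level")
```

The point of the toy model is simply that as inhibition approaches 100%, the transmitter level climbs without limit, which is consistent with the continual stimulation, paralysis, and death described above.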
Events Involved in a Typical Cholinergic Synapse
The sequence of events that takes place at a typical cholinergic synapse is complex, following the four basic steps of neurotransmission described above: synthesis and storage of acetylcholine, its release, receptor activation, and inactivation by acetylcholinesterase.
The nervous system is the most complex system of the body. There are still many gaps in understanding how many neurotoxins act, but research continues to reveal their effects on the body's structures and functions. It is important to understand that the most potent toxins known (on a weight basis) are neurotoxins, with extremely minute amounts sufficient to cause death.
Knowledge Check
1) The two fundamental anatomical divisions of the nervous system are the:
a) Cerebrum and cerebellum
b) Central nervous system and peripheral nervous system
c) Brain and spinal cord
Answer
Central nervous system and peripheral nervous system - This is the correct answer.
The two fundamental anatomical divisions of the nervous system are the Central Nervous System (brain and spinal cord) and the Peripheral Nervous System, which consists of all nerves outside the brain and spinal cord.
2) The two major categories of cells found in the nervous system are:
a) Neurons and glial cells
b) Astrocytes and microglia
c) Schwann cells and oligodendrocytes
Answer
Neurons and glial cells - This is the correct answer.
The two major categories of cells found in the nervous system are neurons and glial cells. Neurons are the functional nerve cells directly responsible for transmission of information to and from the CNS to other areas of the body. Glial cells (also known as neuroglia) provide support to the neural tissue, regulate the environment around the neurons, and protect against foreign invaders.
3) The propagation of an electrical impulse (action potential) down an axon consists of:
a) The transmission of the action potential by chemical neurotransmitters
b) The movement of sodium ions from the dendrite to the axon
c) A continuous series of opening and closing of sodium-potassium channels and pumps
Answer
A continuous series of opening and closing of sodium-potassium channels and pumps - This is the correct answer.
The propagation of an electrical impulse (action potential) down an axon consists of a continuous series of opening and closing of sodium-potassium channels and pumps. The action potential moves like a wave from one end (dendritic end) to the terminal end of the axon.
4) The type of neuron that relays information from the CNS to other organs is a:
a) Motor neuron
b) Sensory neuron
c) Interneuron
Answer
Motor neuron - This is the correct answer.
Motor Neurons (effector neurons) relay information from the CNS to other organs terminating at the effectors.
5) The primary cause of death to neurons and glial cells is:
a) Interference with chemical transmission
b) Interference with electrical transmission
c) Anoxia
Answer
Anoxia - This is the correct answer.
The most common cause of death of neurons and glial cells is anoxia, an inadequate oxygen supply to the cells or their inability to utilize oxygen.
6) A major mechanism that prevents the action potential (impulse) from moving down an axon is:
a) Blockage or interference with movement of sodium and potassium ions in and out of neuron membrane, changing the action potential
b) Excessive release of chemical neurotransmitters
c) Blocking receptors at the post-synaptic membrane
Answer
Blockage or interference with movement of sodium and potassium ions in and out of neuron membrane, changing the action potential - This is the correct answer.
Interruption of the propagation of the electrical potential is caused by agents that can block or interfere with the sodium and potassium channels and sodium-potassium pump. This will weaken, slow, or completely interrupt the movement of the electrical potential.
7) What are the two basic types of synapses?
a) Neuro-muscular and neuro-glandular
b) CNS and PNS synapses
c) Those between two neurons and a neuron and effector
Answer
Those between two neurons and a neuron and effector - This is the correct answer.
The two basic types of synapses are those between two neurons and those between a neuron and an effector, such as a muscle cell or gland. The major difference between the two basic types is that the neurotransmitting chemical between a neuron and a muscle cell is acetylcholine, whereas several other types of neurotransmitting chemicals are involved between neurons, depending on where in the nervous system the synapse is located.
Learning Objectives
After completing this lesson, you will be able to:
• Describe aspects of Intuitive Toxicology.
• Describe aspects of Risk Communication.
Topics include:
Section 15: Key Points What We've Covered
This section made the following main points:
• Intuitive toxicology:
• Combines the intuitive elements of expert and public risk judgments involved with exposure assessment, toxicology, and risk assessment.
• Asks if toxicologists agree or disagree that an animal's reactions to a chemical reliably predict human reactions.
• Asks if a chemical found in a scientific study to cause cancer in animals can be reasonably assumed to cause cancer in humans.
• When communicating risk, one needs to:
• Accept and involve the public as a legitimate partner.
• Listen to the public’s specific concerns.
• Be honest, frank, and open.
• Coordinate and collaborate with other credible sources.
• Meet the needs of the media.
• Speak clearly and with compassion.
• Plan carefully and evaluate your efforts.
• Acknowledge and describe uncertainty, such as any data gaps or issues relating to methodology.
Section 15: Intuitive Toxicology and Risk Communication
Intuitive Toxicology
Intuitive can be defined as "using or based on what one feels to be true even without conscious reasoning." Humans have always been intuitive toxicologists via the use of the senses of sight, taste, and smell to try to detect harmful or unsafe food, water, and air.
Intuition and Professional Judgment in Toxicology
Even well-established scientific approaches used in human risk assessment depend on extrapolations and judgments when assessing human, animal and other toxicology data. This led to the study of intuitive toxicology—the intuitive elements of expert and public risk judgments involved with exposure assessment, toxicology, and risk assessment.
Figure \(1\). Albert Einstein understood the importance of intuition along with knowledge and experience
(Image Source: Wikimedia Commons, Public Domain - original image)
Studies of Intuitive Toxicology
The studies of intuitive toxicology have surveyed toxicologists (for example, members of the Society of Toxicology) and others about a wide range of attitudes, beliefs, and perceptions regarding risks from chemicals. These have included basic concepts, assumptions, and interpretations related to the effects of chemical concentration, dose, and exposure on risk, and the value of animal studies for predicting the effects of chemicals on humans.
Two questions that have been studied repeatedly in intuitive toxicology are:
1. Would you agree or disagree that the way an animal reacts to a chemical is a reliable predictor of how a human would react to it?
2. If a scientific study produces evidence that a chemical causes cancer in animals, can we then be reasonably sure that the chemical will cause cancer in humans?
Figure \(2\). Understanding and application of toxicology sometimes involves elements of expert judgment and intuition
(Image Source: Adapted from iStock Photos, ©)
Examples of Findings from Intuitive Toxicology
Examples of the findings from studies of intuitive toxicology in the United States, Canada, and the United Kingdom include:
• The public is more likely than toxicologists to think chemicals pose greater risks.
• The public finds it difficult to understand the concept of dose–response relationships.
• There is much disagreement among toxicologists about how to interpret various results.
• Technical judgments of toxicologists were also found to be associated with factors such as their type of employment (for example, academia, government, or industry), gender, and age.
These types of studies have identified misconceptions that experts should try to clarify in interactions with the public. The results also suggest that disagreement among experts, especially as perceived by the news media and the public, can play a key role in controversies over toxicology-related risks.
Learn more about Intuitive Toxicology
Knowledge Check
1) Intuitive toxicology studies show that:
a) All toxicologists think the same
b) Members of the public usually think like toxicologists
c) There can be meaningful differences among toxicologists in how they look at the same set of toxicology study results
d) Intuitive toxicology is not important to consider in communication efforts
Answer
There can be meaningful differences among toxicologists in how they look at the same set of toxicology study results - This is the correct answer.
Intuitive toxicology studies show that there can be meaningful differences among toxicologists in how they look at the same set of toxicology study results.
2) Which of the following statements is correct?
a) The concept of dose-response relationships is easily understood by the public
b) The public and toxicologists tend to agree about the risks of chemicals
c) Technical judgments of toxicologists have been found to not be associated with factors such as their type of employment (for example, academia, government, or industry), gender, and age
d) Technical judgments of toxicologists have been found to be associated with factors such as their type of employment (for example, academia, government, or industry), gender, and age
Answer
Technical judgments of toxicologists have been found to be associated with factors such as their type of employment (for example, academia, government, or industry), gender, and age - This is the correct answer.
Technical judgments of toxicologists have been found to be associated with factors such as their type of employment (for example, academia, government, or industry), gender, and age.
Risk Communication
Risk communication is the exchange of information about risks.
Rules for Communicating Risk
Much information about how risks could be communicated is available. Some key points about risk communication are identified in the "Seven Cardinal Rules for Communicating Risk" from the work of Dr. Vincent Covello and used by U.S. EPA and others:
• Accept and involve the public as a legitimate partner.
• Listen to the public’s specific concerns.
• Be honest, frank, and open.
• Coordinate and collaborate with other credible sources.
• Meet the needs of the media.
• Speak clearly and with compassion.
• Plan carefully and evaluate your efforts.
Lessons Learned About Communicating Risk
Some of the lessons that organizations have learned about communicating exposure and health effects information to study subjects, the community, and the public include:
• Communication is not a "cheap add-on" to a study. It must be planned and budgeted at the start. The researcher must know the community and establish relationships early in the project. Communications should be tailored to the project and should contain what people really need to know. The study results that are most significant for the community should be emphasized. Moreover, results should be communicated in a format and a manner that subjects can readily understand. Researchers should evaluate and learn from each study.
• Ignoring communication may lead to legal problems.
• Communicating risk is part of societal accountability.
• Principles and guidelines, including proper terminology, are needed.
• Guidelines should be enforceable.
• Communication requires resources.
• It should be determined early in the project who has control of the release of results, and whether results will be presented in stages or all at once.
• A professional's credibility is at risk when decisions about communication of study results are being made.
• Mechanisms may be needed to proactively consider communication.
• The role of Institutional Review Boards (IRBs) must be considered in developing communication.
Learn more about communicating risk
Lessons Learned from a Crisis and Emergency
Six principles of effective crisis and risk communication are:
1. Be first
2. Be right
3. Be credible
4. Express empathy
5. Promote action
6. Show respect
"The CDC acknowledges that less-than-clear communication about what was known and not known about the possible health effects of the Elk River spill may have affected communities' trust in government." Learn more
Figure \(1\). Charleston, West Virginia viewed from across the Kanawha River, of which the Elk River is a tributary
(Image Source: iStock Photos, ©)
Uncertainty
Uncertainty is defined as "imperfect knowledge concerning the present or future state of an organism, system, or (sub)population under consideration." In other sources (EFSA, 2018), "uncertainty is defined as referring to all types of limitations in the knowledge available to assessors at the time an assessment is conducted and within the time and resources available for the assessment." There are different types of uncertainty, some quantifiable and others not, some reducible and others not.
When it is not well characterized due to lack of knowledge, variability adds to the overall uncertainty. Ignoring uncertainty may lead to incomplete risk assessments, poor decision-making, and poor risk communication (European Commission, 2015). The degree to which characterization of uncertainty (and variability) is needed will depend on the risk assessment and risk management contexts, as determined by the questions asked (problem formulation).
Figure \(2\). Uncertainty
(Image Source: Adapted from iStock Photos, ©)
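Where uncertainty and variability can be quantified, one common way to make them explicit is to propagate them through the assessment calculation, for example with a Monte Carlo simulation. The sketch below is purely illustrative: it uses a generic daily-intake formula (concentration × food intake ÷ body weight), and every distribution and parameter value is an assumption chosen for demonstration, not a value from any actual assessment.

```python
# Illustrative Monte Carlo propagation of variability/uncertainty through a generic
# daily intake estimate. All distributions and parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

conc = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)         # mg chemical / kg food (assumed)
intake = np.clip(rng.normal(0.3, 0.05, size=n), 0.0, None)        # kg food / day (assumed)
body_weight = np.clip(rng.normal(70.0, 10.0, size=n), 1.0, None)  # kg body weight (assumed)

dose = conc * intake / body_weight                                # mg / kg body weight / day

p50, p95 = np.percentile(dose, [50, 95])
print(f"Median estimated dose:          {p50:.4f} mg/kg-bw/day")
print(f"95th percentile estimated dose: {p95:.4f} mg/kg-bw/day")
```

Reporting a central estimate together with an upper percentile in this way communicates not just a single number but how wide the plausible range is, which supports the kind of transparent description of uncertainty discussed in this section.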
Uncertainty should be acknowledged and described, for example by outlining any data gaps or issues relating to methodology. It is also important to explain what is being done to address the areas of uncertainty. In its guideline When Food Is Cooking Up a Storm, the European Food Safety Authority provides a framework to assist decision-making about appropriate communications approaches in the wide variety of situations that can occur when assessing and communicating risks related to food safety in Europe. It is directed towards governmental agencies that regulate the food sector.
EFSA has developed a harmonized approach to assessing and taking account of uncertainties in food safety, and animal and plant health. In 2018, EFSA published its Guidance on Uncertainty Analysis in Scientific Assessment which offers a diverse toolbox of scientific methods and technical tools for uncertainty analysis. It is sufficiently flexible to be implemented in such diverse areas as plant pests, microbiological hazards and chemical substances. Further, in a separate document EFSA (2018) describes the principles and methods behind its guidance. It provides a flexible framework within which different methods may be selected, according to the needs of each risk assessment. It is recommended that assessors should systematically identify sources of uncertainty, checking each part of their assessment to minimize the risk of overlooking important uncertainties.
Communicating Uncertainty in Risk Assessments and in Risk Management
Figure \(3\). Uncertainty
(Image Source: Adapted from iStock Photos, ©)
By late 2018, EFSA is expected to have practical guidance for communication specialists on how to communicate the results of uncertainty analysis to different target audiences, including the public. The document aims to help EFSA communicate scientific uncertainties to its different audiences by using more accessible language tailored to their needs.
Learn more about uncertainty and communicating about it
Knowledge Check
1) Which of the following is not true about communicating risk to a community about exposures and health effects?
a) Results should be communicated in a format and a manner that subjects can readily understand
b) Results do not need to be communicated in a format and a manner that subjects can readily understand
c) Communications should be tailored to the project and should contain what people really need to know
d) Researchers and others can learn from studying good and bad risk communication efforts
Answer
Results do not need to be communicated in a format and a manner that subjects can readily understand - This is the correct answer.
2) According to the European Commission, ignoring uncertainty may lead to:
a) Great decision-making
b) Poor decision-making
c) Effective risk communication
d) Use of the most accurate knowledge available
Answer
Poor decision-making - This is the correct answer.
According to the European Commission, ignoring uncertainty may lead to poor decision-making.
Learning Objectives
After completing this lesson, you will be able to:
• Define Environmental Toxicology.
• Describe the differences between environmental toxicology and environmental health.
• Define One Health.
Topics include:
Section 16: Key Points What We've Covered
This section made the following main points:
• Environmental Toxicology:
• Is the multidisciplinary study of the effects of manmade and natural chemicals on health and the environment.
• Includes work in academia, companies, government agencies, and elsewhere.
• Environmental Health:
• Is a branch of public health focused on the relationships between people and their environment.
• Seeks to advance policies and programs to reduce chemical and other environmental exposures in air, water, soil, and food.
• Saves lives, saves money, and protects your future.
• One Health:
• Is a global concept and strategy recognizing the links between the health of people, animals, and the environment.
• Examines changes in the interactions between people, animals, and the environment and how these changes impact human and animal health.
Section 16: Environmental Toxicology, Environmental Health, and One Health
Environmental Toxicology
Environmental Toxicology is the multidisciplinary study of the effects of manmade and natural chemicals on health and the environment. This includes the study of the effects of chemicals on organisms in their natural environments and in the ecosystems to which they belong.
Branches of Environmental Toxicology
Environmental Toxicology covers a wide range of interdisciplinary studies, as illustrated in Figure 1:
Figure \(1\). Environmental Toxicology interdisciplinary core (not comprehensive)
(Image Source: Adapted from Wikipedia under the Creative Commons Attribution-ShareAlike 3.0 License)
Scope of Work and Study
Environmental toxicologists work in academia, companies, government agencies, and elsewhere. The work can include laboratory studies, computer modeling, and work "in the field." It is not unusual for an environmental toxicologist to also have training in other areas—for example, public health, environmental chemistry, and pharmacology.
Some examples of what Environmental Toxicologists study include:
• The effects of a chemical or other substance at various concentrations on various species.
• Whether a chemical or other substance can bioaccumulate (increase over time) in animals or other organisms. This is important for human exposures if the bioaccumulation occurs in animals that are part of the human food chain, such as fish.
• Emerging issues such as the study of the sources and effects of microplastics that could become part of the human food chain.
Learn more about microplastics
Figure \(2\). Microplastics are plastic debris less than five millimeters in length
(Image Source: National Ocean Service, National Oceanic and Atmospheric Administration. Original image)
Another emerging global issue is the health of bees. Terms in the news in recent years include Colony Collapse Disorder (CCD), neonicotinic pesticides (also called neonicotinoids), and parasites that reproduce only in bee colonies. About 75% of all flowering plants rely on animal pollinators, and about one-third of our food production depends on animal pollinators. The general declining health of honeybees and other bees is thought to be related to complex interactions among multiple stressors, including pesticides, parasites, poor nutrition due to declining foraging habitats, bee management practices, and a lack of genetic diversity.
Neonicotinoids have been a focus of international attention; their mode of action is on the central nervous system of insects. Neonicotinoids are highly toxic to honeybees and to native bees such as bumble bees and blue orchard bees, and sub-lethal levels can affect foraging and the ability to reproduce. Further, neonicotinoids can be persistent in the environment and can be absorbed by plants and found in pollen and nectar.
Figure \(3\). A honeybee gathers pollen from a flower
(Image Source: iStock Photos, ©)
Learn more about pollinators
Figure \(4\). Neonicotinoid Pesticides
(Image Source: Compound Interest, © 2015 - used under Creative Commons Attribution-NonCommercial-NoDerivatives license. Bee image captured by Karunakar Rayker. Original Image. Original Infographic)
Knowledge Check
1) The interdisciplinary core ("branches") of environmental toxicology includes:
a) Environmental sciences; physics; toxicology; chemistry; biology
b) Environmental sciences; engineering; toxicology; biology; computer and math
c) Environmental sciences; computer and math; toxicology; biology; law
d) Environmental sciences; biology; toxicology; chemistry; computer and math
Answer
Environmental sciences; biology; toxicology; chemistry; computer and math - This is the correct answer.
The interdisciplinary core ("branches") of environmental toxicology includes environmental sciences; biology; toxicology; chemistry; computer and math.
2) Which issues in environmental toxicology relate to the human food supply?
a) Bioaccumulation of substances in fish
b) Effects of neonicotinic pesticides on bees
c) Both the bioaccumulation of substances in fish and effects of neonicotinic pesticides on bees
Answer
Both the bioaccumulation of substances in fish and effects of neonicotinic pesticides on bees - This is the correct answer.
Both the bioaccumulation of substances in fish and effects of neonicotinic pesticides on bees relate to the human food supply.
Environmental Health
Environmental health is a branch of public health. It focuses on the relationships between people and their environment and promotes human health and well-being. Further, it fosters healthy and safe communities and is a key part of any comprehensive public health system. The field works to advance policies and programs that reduce chemical and other environmental exposures in air, water, soil, and food in order to protect people and provide communities with healthier environments.
Toxicology vs. Environmental Health
The two fields are closely connected, with a large intersection.
The terminology is loose. For example, is environmental health about the health of the environment, the effect of the environment on human health, or both?
Scope of Work and Study
Environmental health specialists identify and evaluate environmental hazards and their sources. They also limit exposures to physical, chemical, and biological agents in air, water, soil, food, and other environmental media or settings that may adversely affect human health (Source).
Figure \(1\). Environmental health saves lives, saves money, and protects your future
(Image Source: National Environmental Health Association (NEHA), © 2016. Original image)
National Center for Environmental Health (NCEH)
The CDC National Center for Environmental Health (NCEH) plans, directs, and coordinates a program to protect the American people from environmental hazards. The NCEH seeks to prevent premature death, avoidable illness, and disability caused by non-infectious, non-occupational environmental and related factors. One focus is on safeguarding the health of vulnerable populations, such as children, the elderly, and people with disabilities, from certain environmental hazards.
Learn more about environmental health
Knowledge Check
1) Which of the following are important environmental health issues?
a) Radon in air
b) Lead contamination in drinking water
c) Foodborne illness from contaminated food
d) All of these are important for human health
Answer
All of these are important for human health
16.3: One Health
One Health
One Health is a worldwide concept and strategy recognizing that the health of people, animals, and the environment are all connected. For example, the Centers for Disease Control and Prevention (CDC) works with physicians, veterinarians, ecologists, and many others to monitor and control public health threats and to learn about how diseases spread among people, animals, and the environment.
Link Between Human, Animal, and Environmental Health
One Health is important at the local, regional, national, and global levels, and there are many examples of its importance. One example of how human, animal, and environmental health are linked involves bacteria, cows, farms, food, lettuce, and humans:
Cows graze next to a lettuce field. Cows can carry E. coli but still look healthy.
E. coli from cow manure in the nearby farm can contaminate the lettuce field.
People eat contaminated lettuce and can become infected with E. coli. Serious illness or sometimes death can result.
Figure \(1\). Human, animal, and environmental health are linked
(Image Source: CDC - original image)
Another example of One Health involving animals and humans is the shared susceptibility to some diseases and environmental hazards. Animals can serve as early warning signs of potential human illness. An example is birds dying from West Nile virus before people in the same areas get sick from exposures to this virus.
Factors that Affect Human and Animal Health
Some interactions between people, animals, and the environment have changed in recent years and these changes have impacted animal and human health.
Table \(1\). Factors that affect human and animal health
(Source: CDC: One Health Basics)
Learn more about One Health
Knowledge Check
1) One Health is a concept and strategy recognizing that the health of ________, ________, and __________ are all connected.
a) People, plants, the environment
b) People, animals, the environment
c) People, animals, microbes
Answer
People, animals, the environment - This is the correct answer.
One Health is a concept and strategy recognizing that the health of people, animals, and the environment are all connected.
2) Which of the following factors that affect human and animal health is not correct?
a) Fewer people in recent years live in close contact with wild and domestic animals
b) Disruptions in environmental conditions and habitats provide new opportunities for diseases to pass to animals
c) International travel and trade have increased, and diseases can spread quickly across the globe
Answer
Fewer people in recent years live in close contact with wild and domestic animals - This is the correct answer.
The statement "Fewer people in recent years live in close contact with wild and domestic animals" is incorrect.
Section 17: Conclusion
Thank you for completing ToxTutor! We trust it has given you a strong foundation of understanding in this important area of science.
In conclusion, we have now covered the following topics throughout this course:
1. Introduction to Toxicology
2. Dose and Dose Response
3. Toxic Effects
4. Interactions
5. Toxicity Testing Methods
6. Risk Assessment
7. Exposure Standards and Guidelines
8. Basic Physiology
9. Introduction to Toxicokinetics
10. Absorption
11. Distribution
12. Biotransformation
13. Excretion
14. Cellular Toxicology
15. Intuitive Toxicology and Risk Communication
16. Environmental Toxicology, Environmental Health, and One Health
Contact Information
If you would like additional information about toxicology and environmental health, or if you have any questions or comments, please contact us at [email protected]
Credits
ToxTutor was adapted from the U.S. National Library of Medicine in 2021. More information can be found in the ToxTutor Bibliography.
Learning Objectives
After completing this lesson, you will be able to:
• Define dose and explain its importance in determining toxicity.
• Analyze and compare dose-response curves.
• Describe the safety or toxicity of substances based on specific dose levels.
Topics include:
What We've Covered
In this section, we explored the following main points:
• Dose is the amount of a substance administered; however, several parameters are required to characterize exposure to xenobiotics, including the:
• Number of doses
• Frequency of doses
• Total time period of exposure
• The dose-response relationship helps establish causality, or that the chemical induced the observed effects; the threshold effect, or the lowest dose that induced effects; and the slope, or the rate at which effects increase with dose increases.
• Estimating doses for toxic effects involves:
• Lethal Doses/Concentrations, such as LD0, LD10, and LC50, which denote doses or concentrations that are expected to lead to death in specific percentages of a population.
• Effective Doses, such as ED50 and ED90, which denote doses that are effective in achieving a desired endpoint in specific percentages of a population.
• Toxic Doses, such as TD0 and TD50, which denote doses that cause adverse toxic effects in specific percentages of a population.
• The Therapeutic Index (TI) compares the effective dose to the toxic dose of a drug.
• The Margin of Safety (MOS) compares the toxic dose to 1% of the population to the effective dose to 99% of the population.
• NOAEL is the highest dose at which there is no observed toxic effect.
• LOAEL is the lowest dose at which there is an observed toxic effect.
Coming Up...
In the next section, we will take a closer look at various types of toxic effects.
Section 2: Dose and Dose Response
Dose Defined
Dose by definition is the amount of a substance administered at one time. However, other parameters are needed to characterize the exposure to xenobiotics. The most important are the number of doses, frequency, and total time period of the treatment.
For example:
• 650 mg acetaminophen (Tylenol® products) as a single dose.
• 500 mg penicillin every 8 hours for 10 days.
• 10 mg DDT per day for 90 days.
Substances can enter the body from either:
1. Encountering them in the environment (exposure).
2. Intentionally consuming or administering a certain quantity of a substance.
Environments in which xenobiotics are present include outdoor air, indoor air, and water. Xenobiotics can travel into the body through the skin, eyes, lungs, and digestive tract. Exposure to a xenobiotic can occur in any environment where a substance can enter the:
• Skin through dermal absorption (air and water).
• Respiratory tract through inhalation.
• Digestive tract through ingestion.
Figure \(1\): Dermal
(Image Source: ORAU, ©)
Figure \(2\): Inhalation
(Image Source: ORAU, ©)
Figure \(3\): Ingestion
(Image Source: ORAU, ©)
A dose can be considered either:
1. A measurement of environmental exposures.
2. The amount of a substance administered over a period of time.
Types of doses include:
• Absorbed dose — the amount of a substance that entered the body through the skin, eyes, lungs, or digestive tract and was taken up by organs or particular tissues. Absorbed dose can also be called internal dose.
• Administered dose — the quantity administered usually orally or by injection (note that an administered dose taken orally may not necessarily be absorbed).
• Total dose — the sum of all individual doses.
Not all substances that enter the body are necessarily absorbed by it. This concept applies to water intake. When a person drinks a large quantity of water at one time, some of it is absorbed while the rest of the water is eliminated.
If an individual drinks 1 liter of water every hour for 3 hours, each administered dose would be 1 liter. The total dose would be the amount the person drank over the time period that the water was consumed. The absorbed dose, however, would likely be less than the total dose because it would depend on how much of the water the individual's body absorbed which can be affected by various factors. The water intake example is represented in Table 1.
Doses | Administered Dose | Total Dose | Absorbed Dose
Dose 1 (8:00 a.m.): 1 L water | 1 L | 1 L | Less than 1 L
Dose 2 (9:00 a.m.): 1 L water | 1 L | 2 L | Less than 2 L
Dose 3 (10:00 a.m.): 1 L water | 1 L | 3 L | Less than 3 L
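The arithmetic behind the table can be written out in a few lines. The sketch below assumes, purely for illustration, that 80% of each administered liter is absorbed; that fraction is not given in the text and simply stands in for "less than the total dose."

```python
# Minimal sketch of the water-intake example above.
# The 80% absorption fraction is an illustrative assumption, not a value from the text.
ABSORPTION_FRACTION = 0.80

administered_doses_l = [1.0, 1.0, 1.0]  # 1 L at 8:00, 9:00, and 10:00 a.m.

total_dose_l = 0.0
absorbed_dose_l = 0.0
for i, dose_l in enumerate(administered_doses_l, start=1):
    total_dose_l += dose_l                           # total dose = sum of all doses
    absorbed_dose_l += dose_l * ABSORPTION_FRACTION  # absorbed dose stays below the total
    print(f"Dose {i}: administered {dose_l:.1f} L, "
          f"total {total_dose_l:.1f} L, absorbed about {absorbed_dose_l:.1f} L")
```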
The terms for types of doses help account for the amount of a substance that entered the body by different means, but the amount absorbed is what is most important.
In a later section, we will review specifics about how the body handles substances after they enter the body.
Fractionating Doses
Fractionating a total dose usually decreases the probability that the total dose will cause toxicity. The reason is that the body often can repair the effect of each subtoxic dose if sufficient time elapses before the next dose is received. In that case, a total dose that would be harmful if received all at once is non-toxic when administered over a period of time. For example, 30 mg of strychnine swallowed at one time could be fatal to an adult whereas 3 mg of strychnine swallowed each day for 10 days is not considered a fatal dose.
The units used in toxicology are basically the same as those used in medicine. The gram (g) is the standard unit. Because most exposures are in smaller quantities, the milligram (mg) is commonly used. For example, the common adult dose of acetaminophen is 650 mg.
Importance of Age, Body Size, and Time
A person’s age and body size affect the clinical and toxic effects of a given dose. Age and body size usually are connected, particularly in children. This relationship is important because a person's body size can affect the burden that a substance has on it. For example, a 650-mg dose of acetaminophen is typical for adults but it would be toxic to young children. Therefore, a tablet of an acetaminophen product designed for children (Children's Tylenol®) contains only 80 mg of the drug.
Figure \(4\): Age, body size, and time are key factors when considering the clinical and toxic effects of a dose
(Image Source: iStock Photos, ©)
One way to compare the effectiveness of a dose and its toxicity is to assess the amount of a substance administered with respect to body weight. A common dose measurement is mg/kg, which stands for mg of substance per kg of body weight. Another method used to compare doses among different species is to use body surface area, rather than simply body weight.
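As a quick illustration of body-weight normalization, the sketch below divides the acetaminophen doses mentioned above by assumed body weights (a 70 kg adult and a 10 kg toddler); the body weights are assumptions added for this example and are not from the text.

```python
def dose_mg_per_kg(dose_mg: float, body_weight_kg: float) -> float:
    """Return the body-weight-normalized dose in mg/kg."""
    return dose_mg / body_weight_kg

# Body weights are illustrative assumptions.
adult_kg, child_kg = 70.0, 10.0

print(f"650 mg in a {adult_kg:.0f} kg adult: {dose_mg_per_kg(650, adult_kg):.1f} mg/kg")
print(f"650 mg in a {child_kg:.0f} kg child: {dose_mg_per_kg(650, child_kg):.1f} mg/kg")
print(f" 80 mg in a {child_kg:.0f} kg child: {dose_mg_per_kg(80, child_kg):.1f} mg/kg")
```

The same administered dose corresponds to a much larger mg/kg burden in the smaller body, which is why children's formulations contain less drug per tablet.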
Units
Because some xenobiotics are toxic in quantities much smaller than the milligram, smaller fractions of the gram are used, such as the microgram (µg). The table below shows other units.
Unit | Gram Equivalents | Exp. Form
Kilogram (kg) | 1000.0 g | 10^3 g
Gram (g) | 1.0 g | 1 g
Milligram (mg) | 0.001 g | 10^-3 g
Microgram (µg) | 0.000,001 g | 10^-6 g
Nanogram (ng) | 0.000,000,001 g | 10^-9 g
Picogram (pg) | 0.000,000,000,001 g | 10^-12 g
Femtogram (fg) | 0.000,000,000,000,001 g | 10^-15 g
Table \(1\): Various units, their gram equivalents, and their exponential form
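The conversion factors in the table can be applied directly. A short sketch, using the table's factors, expresses a dose given in grams in the smaller units:

```python
# Grams per unit, taken from the table above.
GRAMS_PER_UNIT = {"kg": 1e3, "g": 1.0, "mg": 1e-3, "µg": 1e-6,
                  "ng": 1e-9, "pg": 1e-12, "fg": 1e-15}

def convert_from_grams(mass_g: float, unit: str) -> float:
    """Express a mass given in grams in the requested unit."""
    return mass_g / GRAMS_PER_UNIT[unit]

# Example: 0.65 g (the 650 mg adult acetaminophen dose) in smaller units.
for unit in ("mg", "µg", "ng"):
    print(f"0.65 g = {convert_from_grams(0.65, unit):,.0f} {unit}")
```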
Concentration
Environmental exposure units are expressed as the amount of a xenobiotic in a unit of the media, which could be liquid, solid, or air. Concentration is the amount of a substance found in a certain amount of another substance, such as water, air, soil, food, blood, hair, urine, or breath. For example, the weight of a toxic substance found in a certain weight of food is indicated as a measure of concentration rather than the total amount. Knowing how concentrated the toxic substance is in a sample of food that weighs 100 g allows for easy comparison when testing for that toxic substance in other samples of food that weigh more or less than 100 g.
Figure 5 illustrates this concept. The two glasses contain samples of juice that are being tested for contamination with lead. The volume of juice in Glass A is 100 mL and the volume of juice in Glass B is 50 mL. The concentration of lead is the same in both samples of juice: 20 parts per billion (ppb). The total amount of lead would be higher in Glass A, but the concentration of lead per unit volume is the same in both glasses.
Figure \(5\): The concentration of lead is the same in samples with different volumes, but the total amount of lead in each is not
(Image Source: Adapted from iStock Photos, ©)
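The distinction between concentration and total amount in the juice example can be checked with simple arithmetic. The sketch below treats 20 ppb as 20 µg of lead per liter, which assumes the juice has a density close to that of water; the density assumption is added for illustration and is not stated in the text.

```python
# Same concentration, different total amounts (the two glasses of juice above).
# Assumes a density of ~1 g/mL, so 20 ppb by mass is about 20 µg of lead per liter.
concentration_ug_per_l = 20.0
volumes_ml = {"Glass A": 100.0, "Glass B": 50.0}

for glass, volume_ml in volumes_ml.items():
    total_lead_ug = concentration_ug_per_l * (volume_ml / 1000.0)
    print(f"{glass}: {concentration_ug_per_l:.0f} ppb in {volume_ml:.0f} mL "
          f"-> {total_lead_ug:.1f} µg of lead in total")
```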
Assessing Exposure
An individual’s exposure to a substance can be assessed based on the relationship between the person's body weight and these factors:
• Concentration of the substance in the environmental media (for example, in µg/ml).
• Amount of the substance taken into the body.
• Duration and frequency of individual events during which the body was in contact with the environmental media.
Environmental exposure units used in toxicology include:
• mg/liter (mg/L) for liquids.
• mg/gram (mg/g) for solids.
• mg/cubic meter (mg/m3) for air.
Smaller units are used as needed; for example, µg/mL. Other commonly used dose units for substances in media are parts per million (ppm), parts per billion (ppb), and parts per trillion (ppt). When smaller units are used to quantify exposure, the mg/kg/day unit can be adapted to the smaller unit. For example, parts per billion per kg per day (ppb/kg/day) could be used.
An important thing to remember is that the use of a small dose unit does not mean the substance places a small burden on the body. An exposure unit describes only the quantity of the substance present.
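These pieces (media concentration, intake, and body weight) combine into a simple screening-level calculation of a daily dose. The numerical values in the sketch below are illustrative assumptions only.

```python
def daily_dose_mg_per_kg(conc_mg_per_l: float, intake_l_per_day: float,
                         body_weight_kg: float) -> float:
    """Screening-level daily dose: concentration x daily intake / body weight."""
    return conc_mg_per_l * intake_l_per_day / body_weight_kg

# Assumed values: 0.005 mg/L of a contaminant in drinking water,
# 2 L of water consumed per day, 70 kg body weight.
dose = daily_dose_mg_per_kg(0.005, 2.0, 70.0)
print(f"Estimated exposure: {dose:.5f} mg/kg/day ({dose * 1000:.2f} µg/kg/day)")
```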
The Dose-Response Relationship
The dose-response relationship is an essential concept in toxicology. It correlates exposures with changes in body functions or health.
In general, the higher the dose, the more severe the response. The dose-response relationship is based on observed data from experimental animal, human clinical, or cell studies.
Knowledge of the dose-response relationship establishes:
• Causality — that the chemical has induced the observed effects.
• The threshold effect — the lowest dose where an induced effect occurs.
• The slope for the dose response — the rate at which injury builds up.
Within a population, the majority of responses to a toxicant are similar; however, there are differences in how responses may be encountered – some individuals are susceptible and others resistant. As demonstrated in Animation 1, a graph of the individual responses can be depicted as a bell-shaped standard distribution curve. There is a wide variance in responses as demonstrated by the mild reaction in resistant individuals, the typical response in the majority of individuals, and the severe reaction in sensitive individuals.
Animation 1: A graph of individual responses to a substance, which generally take the form of a bell-shaped curve
The dose-response curve is a visual representation of the response rates of a population to a range of doses of a substance, as demonstrated in Animation 2 (available at ToxTutor). The graph of a dose-response relationship typically has an "s" shape.
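One common way to generate an s-shaped curve like this is a log-logistic (Hill-type) function; this is a generic modeling choice shown for illustration, not a method specified in the text, and the ED50 of 10 mg/kg and slope of 2 are assumed values.

```python
def percent_responding(dose: float, ed50: float = 10.0, slope: float = 2.0) -> float:
    """Percent of a population responding at a given dose (log-logistic model)."""
    if dose <= 0:
        return 0.0
    return 100.0 / (1.0 + (ed50 / dose) ** slope)

# Response rises slowly at low doses, steeply near the ED50, then levels off.
for dose in (1, 2, 5, 10, 20, 50, 100):
    print(f"{dose:>4} mg/kg -> {percent_responding(dose):5.1f}% responding")
```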
Knowledge Check
1. The quantity of a substance administered to an individual over a period of time or in several individual doses is known as the:
Answer
total dose
It is the quantity of a substance administered to an individual over a period of time or in several individual doses. It is particularly important when evaluating cumulative poisons.
2. Fractionation of a total dose so that the total amount administered is given over a period of time usually results in:
Answer
Fractionation of a total dose so that the total amount administered is given over a period of time usually results in decreased toxicity. This applies to most forms of toxicity but not necessarily to carcinogenicity or mutagenicity.
3. The usual dosage unit that incorporates the amount of material administered or absorbed in accordance with the size of the individual over a period of time is:
Answer
mg/kg/day
The usual dosage unit that incorporates the amount of material administered or absorbed in accordance with the size of the individual over a period of time is mg/kg/day. In some cases, much smaller dosage units, such as µg/kg/day, are used.
4. The dose at which a toxic effect is first encountered is called the:
Answer
threshold dose
5. The dose-response relationship helps a toxicologist determine:
Answer
all of the above
The dose-response relationship demonstrates whether any effect has occurred, the threshold dose, and the rate at which the effect increases with increasing dose levels.
2.3: Dose Estimates of Toxic Effects
Dose Estimates
Dose-response curves are used to derive dose estimates of chemical substances.
Historically, LD50 (Lethal Dose 50%) has been a common dose estimate for acute toxicity. It is a statistically derived dose at which 50% of the group of organisms (rat, mouse, or other species) would be expected to die. LD50 testing is no longer the recommended method for assessing toxicity because of the ethics of using large numbers of animals, the variability of responses in animals and humans, and the use of mortality as the only endpoint. Regulatory agencies use LD50 only if it is justified by scientific necessity and ethical considerations.
The Three Rs
The current practice for estimating acute toxicity emphasizes the following approaches, known as the Three Rs:
1. Replacing animals in science by in vitro, in silico, and other approaches.
2. Reducing the number of animals used. For example, the oral LD50 approach has been replaced in some circumstances by an up-and-down method in which animals are dosed one at a time.
3. Refining care and procedures to minimize pain and distress.
Other dose estimates also may be used.
Lethal Doses/Concentrations
• Lethal Dose 0% (LD0) — represents the dose at which no individuals are expected to die. This is just below the threshold for lethality.
• Lethal Dose 10% (LD10) — refers to the dose at which 10% of the individuals will die.
• Lethal Concentration 50% (LC50) — for inhalation toxicity, air concentrations are used for exposure values. The LC50 refers to the calculated concentration of a gas lethal to 50% of a group. Occasionally LC0 and LC10 are also used.
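As a rough illustration of how a lethal-dose estimate relates to tabulated mortality data, the sketch below interpolates linearly between the two tested doses that bracket 50% mortality. Both the data and the interpolation are illustrative assumptions; they are not the statistical methods regulators actually require.

```python
# Hypothetical (dose in mg/kg, fraction of the test group that died), sorted by dose.
mortality_data = [(5, 0.0), (10, 0.1), (20, 0.3), (40, 0.6), (80, 0.9)]

def interpolated_lethal_dose(data, target_fraction=0.5):
    """Linearly interpolate the dose at which the target mortality fraction is reached."""
    for (d_lo, f_lo), (d_hi, f_hi) in zip(data, data[1:]):
        if f_lo <= target_fraction <= f_hi:
            span = (target_fraction - f_lo) / (f_hi - f_lo)
            return d_lo + span * (d_hi - d_lo)
    return None  # the target mortality level was not bracketed by the tested doses

print(f"Approximate LD50: {interpolated_lethal_dose(mortality_data):.1f} mg/kg")
```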
Effective Doses (EDs)
Effective Doses (EDs) are used to indicate the effectiveness of a substance. Normally, effective dose refers to a beneficial effect such as relief of pain. It may also stand for a harmful effect such as paralysis. Thus, the specific endpoint must be indicated. The usual terms are:
Term Effective for this percentage of the population
ED0 0%
ED10 10%
ED50 50%
ED90 90%
Table \(1\). Typical terms for describing effective doses
Toxic Doses (TDs)
Toxic Doses (TDs) are used to indicate doses that cause adverse toxic effects. The usual dose estimates include:
Term Toxic to this percentage of the population
TD0 0%
TD10 10%
TD50 50%
TD90 90%
Table \(2\). Typical terms for describing toxic doses
Determining the Relative Safety of Pharmaceuticals
Toxicologists, pharmacologists, and others use effective and toxic dose levels to determine the relative safety of pharmaceuticals. As shown in Figure 1, two dose-response curves are presented for the same drug, one for effectiveness and the other for toxicity. In this case, a dose that is 50% to 75% effective does not cause toxicity. However, a 90% effective dose may result in a small amount of toxicity.
Figure \(1\). Dose-response curves representing effective dose and toxic dose for the same drug
(Image Source: NLM)
It should be noted that a desired effect in a drug is often an undesired effect with an environmental chemical.
Do you know?
What are measures for describing the safety of a drug? This section describes the:
• Therapeutic index
• Margin of safety
Therapeutic Index
The Therapeutic Index (TI) is used to compare the therapeutically effective dose to the toxic dose of a pharmaceutical agent. The TI is a statement of relative safety of a drug. It is the ratio of the dose that produces toxicity to the dose needed to produce the desired therapeutic response. The common method used to derive the TI is to use the 50% dose-response points, including TD50 (toxic dose) and ED50 (effective dose).
For example, if the TD50 is 200 mg and the ED50 is 20 mg, the TI would be 10.
Figure \(2\). A higher value on the Therapeutic Index indicates a more favorable safety profile
(Image Source: ORAU, ©)
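The TI calculation in the example above is a single ratio; a minimal sketch:

```python
def therapeutic_index(td50: float, ed50: float) -> float:
    """Therapeutic Index = TD50 / ED50 (a higher value indicates a wider safety window)."""
    return td50 / ed50

# Values from the example above: TD50 = 200 mg, ED50 = 20 mg.
print(f"TI = {therapeutic_index(200, 20):.0f}")  # prints: TI = 10
```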
However, the use of the ED50 and TD50 doses to derive the TI may be misleading about a drug's safety, depending on the slope of the dose-response curves for therapeutic and toxic effects. To overcome this deficiency, toxicologists often use another term to denote the safety of a drug: the Margin of Safety.
Margin of Safety (MOS)
The Margin of Safety (MOS) is usually calculated as the ratio of the dose that is toxic to 1% of the population (TD01) to the dose that is effective in 99% of the population (ED99).
Figure \(3\). Relationship between effective dose response and toxic dose response
(Image Source: NLM)
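The MOS is the analogous ratio taken at the tails of the two curves. The TD01 and ED99 values below are made-up numbers used only for illustration, not figures from the text.

```python
def margin_of_safety(td01: float, ed99: float) -> float:
    """Margin of Safety = TD01 / ED99; values above 1 mean the 99%-effective dose
    lies below the dose that is toxic to 1% of the population."""
    return td01 / ed99

# Hypothetical values: TD01 = 90 mg, ED99 = 60 mg.
print(f"MOS = {margin_of_safety(90, 60):.1f}")  # prints: MOS = 1.5
```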
Because of differences in slopes and threshold doses, low doses may be effective without producing toxicity. Although more patients may benefit from higher doses, that is offset by the probability that toxicity will occur.
The toxicity of various substances can be compared using the slopes of their dose-response curves (Figure 4).
Figure \(4\). Comparison of the toxicity of two substances
(Image Source: NLM)
For some substances, a small increase in dose causes a large increase in response, which is seen in Toxicant A's steep slope. For other substances, a much larger increase in dose is required to cause the same increase in response, as indicated in Toxicant B's shallow slope.
2.5: NOAEL and LOAEL
NOAEL and LOAEL
Results from research studies establish the highest doses at which no toxic effects were identified and the lowest doses at which toxic or adverse effects were observed. The terms often used to describe these outcomes are:
• No Observed Adverse Effect Level (NOAEL)
• Lowest Observed Adverse Effect Level (LOAEL)
These terms refer to the actual doses used in human clinical or experimental animal studies. They are defined as follows:
• NOAEL — Highest dose at which there was not an observed toxic or adverse effect.
• LOAEL — Lowest dose at which there was an observed toxic or adverse effect.
Figure 1 shows a dose-response curve where the NOAEL occurs at 10 mg and the LOAEL occurs at 18 mg.
Figure \(1\). A dose-response curve showing doses where the NOAEL and LOAEL occur for a substance
(Image Source: NLM)
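Identifying the NOAEL and LOAEL from study results amounts to scanning the tested dose groups for adverse effects. The dose groups in the sketch below are hypothetical, chosen to be consistent with the 10 mg and 18 mg values in the figure above.

```python
# Hypothetical dose groups (mg) and whether an adverse effect was observed in each.
study_results = [(2, False), (5, False), (10, False), (18, True), (30, True)]

doses_without_effect = [dose for dose, adverse in study_results if not adverse]
doses_with_effect = [dose for dose, adverse in study_results if adverse]

noael = max(doses_without_effect) if doses_without_effect else None  # highest dose with no effect
loael = min(doses_with_effect) if doses_with_effect else None        # lowest dose with an effect

print(f"NOAEL = {noael} mg, LOAEL = {loael} mg")  # prints: NOAEL = 10 mg, LOAEL = 18 mg
```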
Sometimes the terms No Observed Effect Level (NOEL) and Lowest Observed Effect Level (LOEL) are also used. NOELs and LOELs do not necessarily imply toxic or harmful effects and can be used to describe beneficial effects of substances.
The NOAEL, LOAEL, NOEL, and LOEL are commonly used in risk assessments and research. For example, this U.S. Food and Drug Administration (FDA) publication for industry describes a process for estimating the maximum safe starting dose of drugs tested in clinical trials. It provides extensive information about these concepts and their utility when developing new drugs.
NOAELs and LOAELs are also included in the Noncarcinogenic Risk Assessment section, where they are applied using the benchmark dose (BMD) method.
Knowledge Check
1. Which of the following is not one of the Three Rs of estimating acute toxicity?
Answer
The Three Rs involve replacing animals in science by in vitro, in silico, and other approaches; reducing the number of animals used in testing; and refining care and procedures to minimize pain and distress.
2. The Therapeutic Index (TI) is used to:
Answer
3. The Margin of Safety (MOS) of a drug is the:
Answer
4. The No Observed Adverse Effect Level (NOAEL) is the:
Answer
5. The Lowest Observed Adverse Effect Level (LOAEL) is the:
Answer
Learning Objectives
After completing this lesson, you will be able to:
• Explain factors that influence the toxicity of a substance.
• Define types of systemic and organ-specific toxic effects.
In this section...
Topics include:
Did you know?
In December 1984, the world's worst industrial accident occurred in Bhopal, India. More than 40 tons of methyl isocyanate leaked from a pesticide plant, killing thousands of people and injuring hundreds of thousands. Follow-up studies have shown that the incident caused increased mortality and continued effects on health, including airway disease, eye diseases, and pregnancy losses.
The company involved in the leak tried to distance itself from the accident and to prevent those affected from learning its true nature. The legal case went on for years. Eventually, families of the dead received an average of about $2,200. While the company ceased operation at its Bhopal plant after the disaster, it did not clean up the site completely. The plant continues to leak several toxic chemicals and heavy metals into local aquifers.
Figure 1. Survivors of the Bhopal disaster of 1984 protest over the mishandling of the disaster
Bhopal disaster protestors. [Photo]. In Encyclopædia Britannica. Retrieved from http://www.britannica.com/event/Bhopal-disaster/images-videos/Survivors-of-the-1984-deadly-industrial-accident-in-Bhopal-India/192038
What We've Covered
This section made the following main points:
• Toxicity can result from adverse cellular, biochemical, or macromolecular changes.
• Some chemicals affect only specific target organs; others can damage any cell or tissue they contact.
• Chemicals can affect organisms by multiple mechanisms and at the molecular level, leading to modern approaches such as Adverse Outcome Pathways (AOPs) and Mechanism of Actions (MOAs).
• Several factors influence toxicity, including form and innate chemical activity, dosage, exposure route, species, life stage, gender, absorption ability, metabolism, distribution, excretion, health and nutritional status, the presence of other chemicals, and circadian rhythms.
• Systemic toxic effects, which can occur at multiple sites, include:
• Acute toxicity, which occurs almost immediately (seconds/minutes) after a single dose or series of doses within 24 hours.
• Subchronic toxicity, which results from repeated exposure for several weeks or months.
• Chronic toxicity, which damages specific organ systems over the course of many months or years.
• Carcinogenicity, or abnormal cell growth and differentiation that can lead to cancer.
• Developmental toxicity, which adversely affects the developing embryo or fetus.
• Genetic toxicity, caused by damage to DNA and altered genetic expressions.
• Organ specific toxic effects include:
• Blood/cardiovascular toxicity, affecting the blood, bone marrow, or heart.
• Dermal toxicity, impacting the skin.
• Epigenetic alterations, changing genetic programming.
• Optical toxicity, adversely affecting the eyes.
• Hepatotoxicity, impacting the liver, bile duct, or gall bladder.
• Immunotoxicity, affecting the immune system.
• Nephrotoxicity, affecting the kidneys.
• Neurotoxicity, impacting the central nervous system.
• Reproductive toxicity, damaging the reproductive system.
• Respiratory toxicity, affecting the respiratory system.
Coming Up...
In the next section, we will look at effects that can occur when two or more chemicals interact.
Section 3: Toxic Effects
Types of Toxic Effects
Many factors play a potential role in toxicity. The dosage (or amount of exposure) is the most important factor. A well-known saying, "the dose makes the poison" speaks to this principle.
Toxicity can result from adverse cellular, biochemical, or macromolecular changes. Some examples are noted below.
Many chemicals distribute in the body and often affect only specific target organs. However, other chemicals can damage any cell or tissue that they contact. The target organs that are affected may vary depending on dosage and route of exposure. For example, the central nervous system may be the target organ for toxicity from a chemical after acute exposure whereas the liver may be affected after chronic exposures.
Figure \(1\). Central nervous system (left); Liver (right)
(Image Source: iStock Photos, ©)
Chemicals can cause many types of toxicity by a variety of mechanisms. Some act locally, such as when direct exposure triggers skin or eye irritation, whereas other chemicals cause systemic effects at sites remote from where the actual exposure occurred. Toxicants can directly affect subcellular components, such as cell receptors, or they can cause problems at the cellular level, such as with exposures to caustic or corrosive substances.
For example, chemicals might:
• Themselves be toxic or require metabolism (chemical change within the body) before they cause toxicity.
• Cause damage leading to fibrosis as the body attempts to repair the toxicity.
• Damage or disrupt an enzyme system or protein synthesis.
• Produce reactive chemicals in cells.
• Cause changes in hormone signaling or other effects.
• Produce DNA damage or epigenetic changes.
Some chemicals may also act indirectly by:
• Modifying an essential biochemical function.
• Interfering with nutrition.
• Altering a physiological mechanism.
Figure \(2\). Chemicals can have a wide range of toxic effects
(Image Source: iStock Photos, ©)
Did you know?
Mercury is a naturally occurring heavy metal. Methylmercury, the most common organic mercury compound, can be formed in water and soil by bacteria. It builds up in the tissues of fish. Exposure to high levels of mercury and mercury compounds can cause death or permanently damage the brain and kidneys.
In the late 1950s, people living around Japan's Minamata Bay developed symptoms of severe methylmercury poisoning, and some of them died. Children exposed in utero were born with disabilities. Investigations showed that heavily contaminated sludge from a factory had been released into the bay, contaminating fish and shellfish. People who ate the fish and shellfish became ill. The events led to a better understanding of industrial pollution and how heavy metals can accumulate in ecosystems.
In January 2013, an intergovernmental committee agreed on the Minamata Convention on Mercury, a global treaty. It seeks to protect human health and the environment from the adverse effects of mercury.
Figure \(3\). Fish and shellfish became contaminated with mercury, causing severe methylmercury poisoning in people and animals who consumed them
(Image Source: iStock Photos, ©)
Because chemicals can affect organisms by different mechanisms and at the molecular level, there are new ways to conduct toxicity testing.
An emerging approach is to use Adverse Outcome Pathways (AOPs), which evaluate changes in normal cellular pathways. AOPs reflect the move away from high-dose studies in laboratory animals for toxicity testing to in vitro methods that evaluate changes in normal cellular pathways using human-relevant cells or tissues.
Other terms that describe changes resulting from the exposure of a living organism to a substance include mode of action (MoA) and mechanism of action (MOA).
• Mode of action (MoA) (older term) — describes a functional or anatomical change at the cellular level.
• Mechanism of action (MOA) — describes such changes at the molecular level.
More information about toxicity testing can be found in the Hazard Identification section.
Factors Influencing Toxicity
In some instances, individuals can have unpredictable reactions, or idiosyncratic responses, to a drug or other substance. An idiosyncratic response is uncommon, and it is sometimes impossible to determine whether it results from a genetic predisposition or from some other cause, such as the status of the immune system. The response could be abnormally small or brief, abnormally large or prolonged, or qualitatively different from what has been observed in most other individuals.
The toxicity of a substance usually depends on the following factors:
• Form and innate chemical activity
• Dosage, especially dose-time relationship
• Exposure route
• Species
• Life stage, such as infant, young adult, or elderly adult
• Gender
• Ability to be absorbed
• Metabolism
• Distribution within the body
• Excretion
• Health of the individual, including organ function and pregnancy, which involves physiological changes that could influence toxicity
• Nutritional status
• Presence of other chemicals
• Circadian rhythms (the time of day a drug or other substance is administered)
Factors Related to the Substance
Form and Innate Chemical Activity
The form of a substance may have a profound impact on its toxicity especially for metallic elements, also termed heavy metals. For example, the toxicity of mercury vapor differs greatly from methyl mercury. Another example is chromium. Cr3+ is relatively nontoxic whereas Cr6+ causes skin or nasal corrosion and lung cancer.
The innate chemical activity of substances also varies greatly. Some can quickly damage cells causing immediate cell death. Others slowly interfere only with a cell's function. For example:
• Hydrogen cyanide binds to the enzyme cytochrome oxidase resulting in cellular hypoxia and rapid death.
• Nicotine binds to cholinergic receptors in the central nervous system (CNS) altering nerve conduction and inducing gradual onset of paralysis.
Dosage
The dosage is the most important and critical factor in determining if a substance will be an acute or a chronic toxicant. Virtually all chemicals can be acute toxicants if sufficiently large doses are administered. Often the toxic mechanisms and target organs are different for acute and chronic toxicity. Examples are:
Toxicant Acute Toxicity Chronic Toxic Effects
Ethanol CNS depression Liver cirrhosis
Arsenic Gastrointestinal damage Skin/liver cancer
Table \(1\). Examples of acute and chronic toxicity
Exposure Route
The way an individual comes in contact with a toxic substance, or exposure route, is important in determining toxicity. Some chemicals may be highly toxic by one route but not by others. Two major reasons are differences in absorption and distribution within the body. For example:
• Ingested chemicals, when absorbed from the intestine, distribute first to the liver and may be immediately detoxified.
• Inhaled toxicants immediately enter the general blood circulation and can distribute throughout the body prior to being detoxified by the liver.
Different target organs often are affected by different routes of exposure.
Figure \(1\). Ingestion
(Image Source: ORAU, ©)
Figure \(2\). Inhalation
(Image Source: ORAU, ©)
Absorption
The ability to be absorbed is essential to systemic toxicity. Some chemicals are readily absorbed and others are poorly absorbed. For example, nearly all alcohols are readily absorbed when ingested, whereas there is virtually no absorption for most polymers. The rates and extent of absorption may vary greatly depending on the form of a chemical and the route of exposure to it. For example:
• Ethanol is readily absorbed from the gastrointestinal tract but poorly absorbed through the skin.
• Organic mercury is readily absorbed from the gastrointestinal tract; inorganic lead sulfate is not.
Factors Related to the Organism
Species
Toxic responses can vary substantially depending on the species. Most differences between species are attributable to differences in metabolism. Others may be due to anatomical or physiological differences. For example, rats cannot vomit and expel toxicants before they are absorbed or cause severe irritation, whereas humans and dogs are capable of vomiting.
Selective toxicity refers to species differences in toxicity between two species simultaneously exposed. This is the basis for the effectiveness of pesticides and drugs. For example:
• An insecticide is lethal to insects but relatively nontoxic to animals.
• Antibiotics are selectively toxic to microorganisms while virtually nontoxic to humans.
Life Stage
An individual's age or life stage may be important in determining his or her response to toxicants. Some chemicals are more toxic to infants or the elderly than to young adults. For example:
• Parathion is more toxic to young animals.
• Nitrosamines are more carcinogenic to newborn or young animals.
Figure \(3\). An individual's life stage can impact that person's response to toxicants
(Image Source: iStock Photos, ©)
Gender
Gender can play a big role in influencing toxicity. Physiologic differences between men and women, including differences in pharmacokinetics and pharmacodynamics, can affect drug activity.
In comparison with men, pharmacokinetics in women generally can be impacted by their lower body weight, slower gastrointestinal motility, reduced intestinal enzymatic activity, and slower kidney function (glomerular filtration rate). Delayed gastric emptying in women may result in a need for them to extend the interval between eating and taking medications that require absorption on an empty stomach. Other physiologic differences between men and women also exist. Slower renal clearance in women, for example, may result in a need for dosage adjustment for drugs such as digoxin that are excreted via the kidneys.
In general, pharmacodynamic differences between women and men include greater sensitivity to and enhanced effectiveness, in women, of some drugs, such as beta blockers, opioids, and some antipsychotics.
Studies in animals also have identified gender-related differences. For example:
• Male rats are 10 times more sensitive than females to liver damage from DDT.
• Female rats are twice as sensitive to parathion as are male rats.
Figure \(4\). Gender symbols for female (left) and male (right)
(Image Source: iStock Photos, ©)
Metabolism
Metabolism, also known as biotransformation, is the conversion of a chemical from one form to another by a biological organism. Metabolism is a major factor in determining toxicity. The products of metabolism are known as metabolites. There are two types of metabolism:
1. Detoxification
2. Bioactivation
In detoxification, a xenobiotic is converted to a less toxic form. This is a natural defense mechanism of the organism. Generally, detoxification converts lipid-soluble compounds to polar compounds.
In bioactivation, a xenobiotic may be converted to more reactive or toxic forms. Cytochrome P-450 (CYP450) is an example of an enzyme pathway used to metabolize drugs. In the elderly, CYP450 metabolism of drugs such as phenytoin and carbamazepine may be decreased. Therefore, the effect of those drugs may be less pronounced. CYP450 metabolism also can be inhibited by many drugs. Risk of toxicity may be increased if a CYP450 enzyme-inhibiting drug is given with one that depends on that pathway for metabolism.
There is awareness that the gut microbiota can impact the toxicity of drugs and other chemicals. For example, gut microbes can metabolize some environmental chemicals and bacteria-dependent metabolism of some chemicals can modulate their toxicity. Also, environmental chemicals can alter the composition and/or the metabolic activity of the gastrointestinal bacteria, thus contributing in a meaningful way to shape an individual's microbiome. The study of the consequences of these changes is an emerging area of toxicology.
Learn more about human exposure to pollutants and their interaction with the GI microbiota.
Learn more about the microbiome and toxicology.
Distribution Within the Body
The distribution of toxicants and toxic metabolites throughout the body ultimately determines the sites where toxicity occurs. A major determinant of whether a toxicant will damage cells is its lipid solubility. If a toxicant is lipid-soluble, it readily penetrates cell membranes. Many toxicants are stored in the body. Fat tissue, liver, kidney, and bone are the most common storage sites. Blood serves as the main avenue for distribution. Lymph also distributes some materials.
Excretion
The site and rate of excretion is another major factor affecting the toxicity of a xenobiotic. The kidney is the primary excretory organ, followed by the gastrointestinal tract, and the lungs (for gases). Xenobiotics may also be excreted in sweat, tears, and milk.
A large volume of blood serum is filtered through the kidney. Lipid-soluble toxicants are reabsorbed and concentrated in kidney cells. Impaired kidney function causes slower elimination of toxicants and increases their toxic potential.
Health Status
The health of an individual or organism can play a major role in determining the levels and types of potential toxicity. For example, an individual may have pre-existing kidney or liver disease. Certain conditions, such as pregnancy, also are associated with physiological changes in kidney function that could influence toxicity.
Nutritional Status
Diet (nutritional status) can be a major factor in determining who does or does not develop toxicity. For example:
• Consumption of fish that have absorbed mercury from contaminated water can result in mercury toxicity; an antagonist for mercury toxicity is the nutrient selenium.
• Some vegetables can accumulate cadmium from contaminated soil; an antagonist for cadmium toxicity is the nutrient zinc.
• Grapefruit contains a substance that inhibits the P450 drug detoxification pathway, making some drugs more toxic.
Find out more about nutrition and chemical toxicity here.
Circadian Rhythms
Circadian rhythms can play a role in toxicity. For example, rats administered an immunosuppressive drug 7 hours after light onset had more severe intestinal toxicity than control animals and than rats dosed at other times of the day. The rats had changes in their digestive enzyme activity and other physiological indicators at this dosing time.
Find out more about circadian rhythm and gut toxicity here.
Other Factors
Presence of Other Chemicals
The presence of other chemicals, at the same time, earlier, or later may:
• Decrease toxicity (antagonism)
• Add to toxicity (additivity)
• Increase toxicity (synergism or potentiation)
For example:
• Antidotes used to counteract the effects of poisons function through antagonism (atropine counteracts poisoning by organophosphate insecticides).
• Alcohol may enhance the effect of many antihistamines and sedatives.
• A synergistic interaction between the antioxidant butylated hydroxytoluene (BHT) and a certain concentration of oxygen results in lung damage in the form of interstitial pulmonary fibrosis.
Information on additional examples of lung damage from chemical interactions can be found here.
Knowledge Check
1. A target organ is an organ that:
Answer
A target organ is an organ in which a substance exerts a toxic effect.
2. What are the important factors that influence the degree of toxicity of a substance?
Answer
3. Metabolism, or biotransformation, of a xenobiotic:
Answer
May result in detoxification or bioactivation
Metabolism of a xenobiotic results in either detoxification, which converts the xenobiotic to a less toxic form, or bioactivation, which converts the xenobiotic to more reactive or toxic forms. For example, a xenobiotic itself might not be carcinogenic, but a metabolite of the xenobiotic might be.
4. An antibiotic administered to humans kills bacteria in the body but does not harm human tissues. This is an example of:
Answer
Selective Toxicity
Selective toxicity refers to differences in toxicity between two species simultaneously exposed, much like the antibiotic in this example.
Answer
A major determinant of whether or not a toxicant will damage cells is its lipid solubility. If a toxicant is lipid-soluble, it readily penetrates cell membranes.
Types of Systemic Toxic Effects
Toxic effects are generally categorized according to the site of the toxic effect. In some cases, the effect may occur at only one site. This site is termed the specific target organ.
In other cases, toxic effects may occur at multiple sites. This is known as systemic toxicity. Types of systemic toxicity include:
• Acute Toxicity
• Subchronic Toxicity
• Chronic Toxicity
• Carcinogenicity
• Developmental Toxicity
• Genetic Toxicity (somatic cells)
Acute Toxicity
Acute toxicity occurs almost immediately (seconds/minutes/hours/days) after an exposure. An acute exposure is usually a single dose or a series of doses received within a 24-hour period. Death can be a major concern in cases of acute exposures. For example:
• In 1984, 5,000 people died and 30,000 were permanently disabled due to exposure to methyl isocyanate from an industrial accident in Bhopal, India.
• Many people die each year from inhaling carbon monoxide from faulty heaters.
Figure \(1\). Faulty gas heaters can emit toxic carbon monoxide
(Image Source: iStock Photos, ©)
Subchronic Toxicity
Subchronic toxicity results from repeated exposure for several weeks or months. This is a common human exposure pattern for some pharmaceuticals and environmental agents. For example:
• Ingestion of warfarin (Coumadin®) tablets (blood thinners) for several weeks as a treatment for venous thrombosis can cause internal bleeding.
• Workplace exposure to lead over a period of several weeks can result in anemia.
Figure \(2\). Warfarin Tablets (left); old lead pipes (right)
(Image Source: iStock Photos, ©)
Chronic Toxicity
Chronic toxicity represents cumulative damage to specific organ systems and takes many months or years to become a recognizable clinical disease. Damage due to subclinical individual exposures may go unnoticed. With repeated exposures or long-term continual exposure, the damage from this type of exposure slowly builds up (cumulative damage) until the damage exceeds the threshold for chronic toxicity. Ultimately, the damage becomes so severe that the organ can no longer function normally and a variety of chronic toxic effects may result.
Chronic toxic effects include:
• Cirrhosis in alcoholics who have ingested ethanol for several years.
• Chronic kidney disease in workmen with several years of exposure to lead.
• Chronic bronchitis in long-term cigarette smokers.
• Pulmonary fibrosis in coal miners (black lung disease).
Figure \(3\). Smoking cigarettes and/or drinking alcohol over a long period of time can lead to chronic toxicity
(Image Source: iStock Photos, ©)
Carcinogenicity
Carcinogenicity is a complex multistage process of abnormal cell growth and differentiation that can lead to cancer. The two stages of carcinogenicity are:
1. Initiation — a normal cell undergoes irreversible changes.
2. Promotion — initiated cells are stimulated to progress to cancer.
Chemicals can act as initiators or promoters.
The initial transformation that causes normal cells to undergo irreversible changes results from the mutation of the cellular genes that control normal cell functions. The mutation may lead to abnormal cell growth. It may involve a loss of suppressor genes that usually restrict abnormal cell growth. Many other factors are involved, such as growth factors, immune suppression, and hormones.
A tumor (neoplasm) is simply an uncontrolled growth of cells:
• Benign tumors grow at the site of origin; do not invade adjacent tissues or metastasize; and generally are treatable.
• Malignant tumors (cancer) invade adjacent tissues or migrate to distant sites (metastasis). They are more difficult to treat and often cause death.
Developmental Toxicity
Developmental toxicity pertains to adverse toxic effects to the developing embryo or fetus. It can result from toxicant exposure to either parent before conception or to the mother and her developing embryo or fetus. The three basic types of developmental toxicity are:
1. Embryolethality — failure to conceive, spontaneous abortion, or stillbirth.
2. Embryotoxicity — growth retardation or delayed growth of specific organ systems.
3. Teratogenicity — irreversible conditions that leave permanent birth defects in live offspring, such as cleft palate or missing limbs.
Chemicals cause developmental toxicity in two ways:
1. They act directly on cells of the embryo, causing cell death or cell damage, leading to abnormal organ development.
2. They induce a mutation in a parent's germ cell, which is transmitted to the fertilized ovum. Some mutated fertilized ova develop into abnormal embryos.
Figure \(4\). Ultrasound images of a developing fetus
(Image Source: iStock Photos, ©)
Genetic Toxicity
Genetic toxicity results from damage to DNA and altered genetic expression. This process is known as mutagenesis. The genetic change is referred to as a mutation and the agent causing the change is called a mutagen. There are three types of genetic changes:
1. Gene mutation — change in DNA sequence within a gene.
2. Chromosome aberration — changes in the chromosome structure.
3. Aneuploidy or polyploidy — increase or decrease in number of chromosomes.
If the mutation occurs in a germ cell, the effect is heritable. This means there is no effect on the exposed person; rather, the effect is passed on to future generations.
If the mutation occurs in a somatic cell, it can cause altered cell growth (for example, cancer) or cell death (for example, teratogenesis) in the exposed person.
Figure \(5\). Genetic toxicity results from damage to DNA and altered genetic expression
(Image Source: iStock Photos, ©)
Organ Specific Toxic Effects
Toxic effects that pertain to specific organs and organ systems include:
Figure \(1\). Organ-specific toxic effects pertain to specific organs and organ systems
(Image Source: Adapted from iStock Photos, ©)
Blood and Cardiovascular/Cardiac Toxicity
Blood and Cardiovascular/Cardiac Toxicity results from xenobiotics acting directly on cells in circulating blood, bone marrow, and the heart. Examples of blood and cardiovascular/cardiac toxicity are:
• Hypoxia due to carbon monoxide binding of hemoglobin preventing transport of oxygen.
• Decrease in circulating leukocytes due to chloramphenicol damage to bone marrow cells.
• Leukemia due to benzene damage of bone marrow cells.
• Arteriosclerosis due to cholesterol accumulation in arteries.
• Death of normal cells in and around the heart as a result of exposure to drugs used to treat cancer.
Dermal Toxicity
Dermal Toxicity can occur when a toxicant comes into direct contact with the skin or is distributed to it internally. Effects range from mild irritation to severe changes, such as irreversible damage, hypersensitivity, and skin cancer. Examples of dermal toxicity include:
• Dermal irritation from skin exposure to gasoline.
• Dermal corrosion from skin exposure to sodium hydroxide (lye).
• Dermal itching, irritation, and sometimes painful rash from poison ivy, caused by urushiol.
• Skin cancer due to ingestion of arsenic or skin exposure to UV light.
Epigenetic Alterations
Epigenetics is an emerging area in toxicology. In the field of genetics, epigenetics involves studying how external or environmental factors can switch genes on and off and change the programming of cells.
More specifically, epigenetics refers to stable changes in the programming of gene expression which can alter the phenotype without changing the DNA sequence (genotype). Epigenetic modifications include DNA methylation, covalent modifications of histone tails, and regulation by non-coding RNAs, among others.
Toxicants are examples of factors that can alter genetic programming.
In the past, toxicology studies have assessed toxicity without measuring its impact at the level where gene expression occurs. Exogenous agents could cause long-term toxicity that continues after the initial exposure has disappeared, and such toxicities remain undetected by current screening methods. Thus, a current challenge in toxicology is to develop screening methods that would detect epigenetic alterations caused by toxicants.
Research is being done to assess epigenetic changes caused by toxicants. For example, the National Institutes of Health (NIH) National Institute of Environmental Health Sciences (NIEHS) Environmental Epigenetics program provides funding for a variety of research projects that use state-of-the-art technologies to analyze epigenetic changes caused by environmental exposures. NIEHS-supported researchers use animals, cell cultures, and human tissue samples to pinpoint how epigenetic changes can lead to harmful health effects and can potentially be passed down to the next generation.
Eye Toxicity
Eye Toxicity results from direct contact with or internal distribution to the eye. Because the cornea and conjunctiva are directly exposed to toxicants, conjunctivitis and corneal erosion may be observed following occupational exposure to chemicals. Many household items can cause conjunctivitis. Chemicals in the circulatory system can distribute to the eye and cause corneal opacity, cataracts, and retinal and optic nerve damage. For example:
• Acids and strong alkalis may cause severe corneal corrosion.
• Corticosteroids may cause cataracts.
• Methanol (wood alcohol) may damage the optic nerve.
Hepatotoxicity
Hepatotoxicity is toxicity to the liver, bile duct, and gall bladder. Because of its extensive blood supply and significant role in metabolism, the liver is exposed to high doses of many toxicants or their toxic metabolites, making it particularly susceptible to xenobiotic injury. The primary forms of hepatotoxicity are:
• Steatosis — lipid accumulation in the hepatocytes.
• Chemical hepatitis — inflammation of the liver.
• Hepatic necrosis — death of the hepatocytes.
• Intrahepatic cholestasis — backup of bile salts into the liver cells.
• Hepatic cancer — cancer of the liver.
• Cirrhosis — chronic fibrosis, often due to alcohol.
• Hypersensitivity — immune reaction resulting in hepatic necrosis.
Immunotoxicity
Immunotoxicity is toxicity of the immune system. It can take several forms:
• Hypersensitivity (allergy and autoimmunity)
• Immunodeficiency
• Uncontrolled proliferation (leukemia and lymphoma)
The normal function of the immune system is to recognize and defend against foreign invaders. This is accomplished by production of cells that engulf and destroy the invaders or by antibodies that inactivate foreign material. Examples include:
• Contact dermatitis due to exposure to poison ivy.
• Systemic lupus erythematosus ("lupus") in workers exposed to hydrazine.
• Immunosuppression by cocaine.
• Leukemia induced by benzene.
Figure \(7\). Bone (which contains bone marrow) and spleen, both components of the immune system, which recognizes and defends against foreign invaders
(Image Source: iStock Photos, ©)
Nephrotoxicity
The kidney is highly susceptible to toxicants because a high volume of blood flows through the organ and it filters large amounts of toxins which can concentrate in the kidney tubules.
Nephrotoxicity is toxicity to the kidneys. It can result in systemic toxicity causing:
• Decreased ability to excrete body wastes.
• Inability to maintain body fluid and electrolyte balance.
• Decreased synthesis of essential hormones (for example, erythropoietin, which increases the rate of blood cell production).
Neurotoxicity
Neurotoxicity represents toxicant damage to cells of the central nervous system (brain and spinal cord) and the peripheral nervous system (nerves outside the CNS). The primary types of neurotoxicity are:
• Neuronopathies (neuron injury)
• Axonopathies (axon injury)
• Demyelination (loss of axon insulation)
• Interference with neurotransmission
Reproductive Toxicity
Reproductive Toxicity involves toxicant damage to either the male or female reproductive system. Toxic effects may cause:
• Decreased libido and impotence.
• Infertility.
• Interrupted pregnancy (abortion, fetal death, or premature delivery).
• Infant death or childhood morbidity.
• Altered sex ratio and multiple births.
• Chromosome abnormalities and birth defects.
• Childhood cancer.
Respiratory Toxicity
Respiratory Toxicity relates to effects on the upper respiratory system (nose, pharynx, larynx, and trachea) and the lower respiratory system (bronchi, bronchioles, and lung alveoli). The primary types of respiratory toxicity are:
• Pulmonary irritation
• Asthma/bronchitis
• Reactive airway disease
• Emphysema
• Allergic alveolitis
• Fibrotic lung disease
• Pneumoconiosis
• Lung cancer
Knowledge Check
1. Toxic effects are primarily categorized into two general types:
Answer
Systemic or organ-specific effects - This is the correct answer.
Toxic effects are broadly categorized as either systemic or organ-specific effects.
2. What is the main difference between acute and chronic toxicity?
Answer
Acute toxicity appears within hours or days of an exposure, whereas chronic toxicity takes many months or years to become a recognizable clinical disease - This is the correct answer.
3. Police respond to a 911 call in which two people are found dead in an enclosed bedroom heated by an unvented kerosene stove. There was no sign of trauma or violence. A likely cause of death is:
Answer
Acute toxicity due to carbon monoxide poisoning - This is the correct answer.
The victims most likely died as a result of acute toxicity from exposure to carbon monoxide.
4. Genetic toxicity can result in:
Answer
All of the above - This is the correct answer.
Genetic toxicity can cause gene mutations, changes in chromosome structure (aberration), increases or decreases in the number of chromosomes (aneuploidy or polyploidy), and changes to genetic programming (epigenetic alterations).
Learning Objectives
After completing this lesson, you will be able to:
• Explain the impacts that can be experienced when two or more chemicals interact.
• List realistic examples of chemical interactions.
In this section...
Topics include:
Did you know?
Gasoline is a volatile, complex mixture of hydrocarbon compounds that is easily vaporized during normal handling. People are exposed to this mixture during refueling at service stations. More information is available on consumer exposure to gasoline.
In this section, we will look into the effects of interactions among such chemicals.
Figure 1. Refueling car
(Image Source: iStock Photos, ©)
What We've Covered
In this section, we explored the following key points:
• Interactions between multiple chemicals can:
• Decrease toxicity (antagonism).
• Add to toxicity (additivity).
• Increase toxicity (synergism or potentiation).
• Interactions can occur by simultaneous exposure or if exposure to the agents is separated by time.
• People are normally exposed to many chemicals and combinations of chemicals every day.
• Emerging approaches in assessing interactions include:
• Adverse outcome pathways (AOPs).
• In vitro methods.
• "Omics" techniques.
• In silico approaches.
Coming Up...
In the next section, we will explore various methods for testing toxicity.
Section 4: Interactions
Interactions
As noted in "Factors Influencing Toxicity," the presence of other chemicals, at the same time, earlier, or later may:
• Decrease toxicity (antagonism).
• Add to toxicity (additivity).
• Increase toxicity (synergism or potentiation) of some chemicals.
For example, two or more toxic agents can combine to produce lung damage through interactions:
• Between chemicals.
• Between chemicals and receptors.
• In which a first agent modifies the cell and tissue response to a second agent.
Interactions may occur by:
• Simultaneous exposure to two or more agents.
• Exposure to two or more agents at different times.
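To make these categories concrete, the short sketch below (in Python) compares an observed combined effect with the sum of the effects each agent produces alone, which is the additivity benchmark used in the definitions above. All effect values and the classification function are hypothetical illustrations, not a regulatory mixture-assessment method.

```python
# Toy classification of a two-chemical interaction, using the definitions above:
# antagonism  -> combined effect less than the sum of the individual effects
# additivity  -> combined effect roughly equal to the sum
# synergism   -> combined effect greater than the sum
# All effect values are hypothetical (e.g., fraction responding in a test system).

def classify_interaction(effect_a, effect_b, effect_combined, tolerance=0.05):
    """Compare the observed combined effect with the additive expectation."""
    expected_additive = effect_a + effect_b
    if effect_combined < expected_additive * (1 - tolerance):
        return "antagonism"
    if effect_combined > expected_additive * (1 + tolerance):
        return "synergism (or potentiation if one agent is inactive alone)"
    return "additivity"

# Hypothetical examples
print(classify_interaction(0.20, 0.30, 0.50))   # additivity
print(classify_interaction(0.20, 0.30, 0.90))   # synergism
print(classify_interaction(0.20, 0.30, 0.25))   # antagonism
print(classify_interaction(0.00, 0.30, 0.80))   # potentiation-like (first agent inactive alone)
```

In practice, distinguishing additivity from synergism or antagonism requires dose-response data for each agent and for the mixture, not a single effect comparison.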
Figure \(1\). Interactions between two or more substances often impact toxicity
(Image Source: iStock Photos, ©)
Sources of Interactions
Humans are normally exposed to many chemicals at one time. For example, the use of consumer products, medical treatments, and exposures from the diet and the environment (such as from soil, air, and water) can involve hundreds, if not thousands, of chemicals. Other examples include:
• Hospital patients receive an average of six drugs daily.
• Consumers may use five or more consumer products before breakfast (for example, soap, shampoo, conditioner, toothpaste, and deodorant).
• Home influenza treatment consists of aspirin, antihistamines, and cough syrup taken simultaneously.
• Drinking water may contain small amounts of pesticides, heavy metals, solvents, and other organic chemicals.
• Air often contains mixtures of hundreds of chemicals such as automobile exhaust and cigarette smoke.
• Gasoline vapor at service stations is a mixture of 40-50 chemicals.
Figure \(2\). The use of personal care products can result in exposures to hundreds of chemicals
(Image Source: iStock Photos, ©)
Figure \(3\). Cold and flu remedies are another source of chemical exposure
(Image Source: iStock Photos, ©)
Toxicology studies and human health risk assessments have traditionally focused primarily on a single chemical. However, as noted above, people are exposed to many chemicals every day. They are also exposed to non-chemical stressors every day and throughout a lifetime.
These non-chemical stressors include infectious agents, diet, and psychosocial stress, all of which can contribute to the health effects associated with chemical exposures.
Approaches for Assessing Interactions
Development of methods to assess the health effects associated with complex exposures is underway at various organizations.
Non-animal tools and approaches are demonstrating high potential for use in assessing combined effects of chemicals on humans and the environment. These tools and approaches may help uncover information about new mixture components or entire mixtures, which can promote understanding of the underlying mechanisms of their combined effects. The strategies for assessing interactions rely less on in vivo testing and more on non-animal studies and computational tools and incorporate emerging approaches such as:
• The adverse outcome pathway (AOP) concept.
• In vitro methods.
• “Omics” techniques.
• In silico approaches such as quantitative structure activity relationships (QSARs).
• Read-across.
• Toxicokinetic modeling.
• Integrated approaches to testing and assessment (IATA).
The goals include the ability to develop more effective and comprehensive regulatory assessments while reducing reliance on animal testing.
Figure \(4\). Modern testing methods rely heavily on computational toxicology
(Image Source: iStock Photos, ©)
Knowledge Check
The presence of one chemical decreasing toxicity of another chemical is called:
Answer
Antagonism - This is the correct answer.
When the presence of another chemical decreases toxicity of a chemical, this is called antagonism.
Additivity is when the combined toxic effect of two chemicals when given together is less than the sum of their individually measured toxic effects.
Answer
False - This is the correct answer.
An additive effect occurs when the combined effects of two or more chemicals is equal to the sum of the effects of each chemical given alone.
Piperonyl butoxide is not an insecticide; however, it can greatly increase the effects of a pyrethrum insecticide. Thus, piperonyl butoxide can be called a synergist and this interaction can be called synergism.
Answer
True - This is the correct answer.
The interaction of this combination is synergism. Synergists are used to enhance the toxicity of several commonly used insecticides. | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/6%3A_Principles_of_Toxicology/Section_4%3A_Interactions/4.1%3A_Interactions.txt |
Learning Objectives
After completing this lesson, you will be able to:
• Explain modern approaches to testing for and assessing toxicity.
• Identify sources of information related to alternatives to using animals to assess toxicity.
• Explain how clinical investigations and epidemiology studies are used to evaluate toxicity to humans.
Topics include:
What We've Covered
This section included the following key points:
• The 3Rs concept calls for using test methods that replace the use of animals with other types of studies and approaches, reduce the number of animals used in studies, and refine study procedures to cause less pain or stress to animals.
• ALTBIB is a comprehensive starting point provided by NLM to find information related to alternatives to animal testing.
• Animal tests for toxicity have been conducted prior to and in parallel with human clinical investigations.
• Standardized animal tests have been developed for testing:
• Acute toxicity
• Subchronic toxicity
• Chronic toxicity
• Carcinogenicity
• Reproductive toxicity
• Developmental toxicity
• Dermal toxicity
• Ocular toxicity
• Neurotoxicity
• Genetic toxicity
• Modern approaches to toxicity testing are preferred over animal testing and include:
• In vitro methods, which are performed outside living organisms.
• In silico methods, which are performed using computers and computer simulation.
• Chip models, which include human cell cultures placed on computer chips for study.
• Approaches used for testing pharmaceuticals include:
• Clinical investigations, in which human subjects are studied with clinical observations and laboratory measurements.
• Epidemiological studies, involving observation of humans exposed to xenobiotics in their regular life or occupation.
• Reports of adverse reactions to drugs.
• Consumer products and the chemicals they contain are tested through:
• In silico data from computer models.
• In vitro data from tests performed as alternatives to animal testing.
• Animal study data.
• Human data from premarketing and postmarketing studies.
Coming Up...
In the next section, we will explore the concept of risk assessment.
Section 5: Toxicity Testing Methods
Testing and Assessing Toxicity
Alternatives to animal testing have emerged in recent years.
Since about 1990, numerous attempts have been made around the world to reduce the use of and replace laboratory animals in toxicology and other studies. These efforts have involved finding alternatives to animal testing and incorporating the "3Rs" concept (Replace, Reduce and Refine), which means using test methods that:
• Replace the use of animals with other types of studies and approaches.
• Reduce the number of animals in studies.
• Refine the procedures to make studies less painful or stressful to the animals.
Regulatory authorities, companies, and others have endorsed the principle of the 3Rs, and alternative testing methods have been and are being developed. An international group that has played a key role is the International Cooperation on Alternative Test Methods (ICATM). Established in 2009, ICATM includes representatives of organizations from various countries.
Finding Information about Alternatives to Animal Testing
Many countries, including the United States, Canada, and the European Union member states, require that a comprehensive search for possible alternatives be completed before some or all research involving animals is begun. Because numerous Web resources now provide guidance and other information on in vitro and other alternatives to animal testing, completing such searches and keeping current with this information is much easier than it used to be.
ALTBIB, from NLM
The NLM ALTBIB ("Resources for Alternatives to the Use of Live Vertebrates in Biomedical Research and Testing") portal is a comprehensive starting point for finding information related to alternatives to animal testing. ALTBIB is available at http://toxnet.nlm.nih.gov/altbib.html.
It provides access to PubMed®/MEDLINE® citations relevant to alternatives to use of live vertebrates in biomedical research and testing.
ALTBIB's topics and subtopics are aligned with current approaches. For example, information is provided on in silico, in vitro, and improved (refined) animal testing methods and on testing strategies that incorporate these methods and other approaches.
ALTBIB also provides access to news and additional resources, including information on the status of the evaluation and acceptance of alternative methods. Main categories include:
• Animal Alternatives News
• Additional Resources
• Evaluation/Acceptance of Test Methods
• Links to Specific Resources (Sources Providing Animal Alternatives News, Key Organizations Providing Resources, and the Regulatory Acceptance of Specific Alternative Methods and Milestones in Non-animal Toxicity Testing)
Animal Tests
NOTE: This information is provided for historical and other reasons, especially since animal testing is still being done in some cases, and because toxicologists, risk assessors, and others are faced with interpreting the results of new and old studies that used animals.
Animal tests for toxicity have been conducted prior to and in parallel with human clinical investigations as part of the non-clinical laboratory tests of pharmaceuticals. For pesticides and industrial chemicals, human testing is rarely conducted. Years ago, results from animal tests were often the only way to effectively predict toxicity in humans.
Animal tests were developed and used because:
• Chemical exposure can be precisely controlled.
• Environmental conditions can be well-controlled.
• Virtually any type of toxic effect can be evaluated.
• The mechanism by which toxicity occurs can be studied.
Figure \(2\). Rats have traditionally been used in toxicity studies using animals
(Image Source: iStock Photos, ©)
Standardized Animal Toxicity Tests
Animal methods to evaluate toxicity have been developed for a wide variety of toxic effects. Some procedures for routine safety testing have been standardized. Standardized animal toxicity tests have been highly effective in detecting toxicity that may occur in humans. As noted above, concern for animal welfare has resulted in tests that use humane procedures and only as many animals as are needed for statistical reliability.
To be standardized, a test procedure must have scientific acceptance as the most meaningful assay for the toxic effect. Toxicity testing can be very specific for a particular effect, such as dermal irritation, or it may be general, such as testing for unknown chronic effects.
Standardized tests have been developed for the following effects:
• Acute Toxicity
• Subchronic Toxicity
• Chronic Toxicity
• Carcinogenicity
• Reproductive Toxicity
• Developmental Toxicity
• Dermal Toxicity
• Ocular Toxicity
• Neurotoxicity
• Genetic Toxicity
Species Selection
Species selection varies with the toxicity test to be performed. There is no single species of animal that can be used for all toxicity tests. Different species may be needed to assess different types of toxicity. The published literature (such as via PubMed) and online databases (such as TOXNET) should be searched for information from non-animal and animal studies, as well as for possible best approaches, most applicable species, and strains and gender of a species. Here are two examples:
• It would have been invaluable years ago for toxicologists and risk assessors to have known that carcinogenic effects in male rats are considered irrelevant for humans if the α(2u)-globulin protein is involved because humans lack that protein. See another example
• Many physiological, pharmacological, and toxicological findings related to organic anion and cation transport and transporters in rodents and rabbits do not apply to humans. Learn more
In some cases, it may not be possible to use the most desirable animal for testing because of animal welfare or cost considerations.
• For example, use of dogs and non-human primates is now restricted to special cases or banned by some organizations, even though they represent the species that may respond the closest to humans in terms of chemical and other exposures (however, note the examples above).
Rodents and rabbits are the most commonly used laboratory species because they are readily available, inexpensive to breed and house, and they have a history of producing reliable results in experiments.
The toxicologist attempts to design an experiment to duplicate the potential exposure of humans as closely as possible. For example:
• The route of exposure should simulate that of human exposure. Most standard tests use inhalation, oral, or dermal routes of exposure.
• The age of test animals should relate to that of humans. Testing is normally conducted with young adults, although in some cases, newborn or pregnant animals may be used.
• For most routine tests, both sexes are used. Sex differences in toxic response are usually minimal, except for toxic substances with hormonal properties.
• Dose levels are normally selected so as to determine the threshold as well as a dose-response relationship. Usually, a minimum of three dose levels are used.
Figure \(3\). Rodents have commonly been used in animal testing
(Image Source: iStock Photos, ©)
Acute Toxicity
Historically, acute toxicity tests were the first tests conducted. They provide data on the relative toxicity likely to arise from a single or brief exposure, or sometimes multiple doses over a brief period of time. Standardized tests are available for oral, dermal, and inhalation exposures, and many regulatory agencies still require the use of all or some of these tests. Table \(1\) lists basic parameters historically used in acute toxicity testing.
Category — Parameter
Species — Rats preferred for oral and inhalation tests; rabbits preferred for dermal tests
Age — Young adults
Number of animals — 5 of each sex per dose level
Dosage — Three dose levels recommended; exposures are single doses or fractionated doses up to 24 hours for oral and dermal studies and 4-hour exposure for inhalation studies
Observation period — 14 days
Table \(1\). Acute toxicity test parameter
Subchronic Toxicity
Subchronic toxicity tests are employed to determine toxicity likely to arise from repeated exposures of several weeks to several months. Standardized tests are available for oral, dermal, and inhalation exposures. Detailed information is obtained during and after the study, ranging from body weight, food and water consumption measurements, effects on eyes and behavior, composition of blood, and microscopic examination of selected tissues and organs.
Table \(2\) lists basic parameters previously used in subchronic toxicity testing.
Category — Parameter
Species — Rodents (usually rats) preferred for oral and inhalation studies; rabbits for dermal studies; non-rodents (usually dogs) recommended as a second species for oral tests
Age — Young adults
Number of animals — 10 of each sex for rodents; 4 of each sex for non-rodents per dose level
Dosage — Three dose levels plus a control group; includes a toxic dose level plus NOAEL; exposures are 90 days
Observation period — 90 days (same as treatment period)
Table \(2\). Subchronic toxicity test parameter
Chronic Toxicity
Chronic toxicity tests determine toxicity from exposure for a substantial portion of a subject's life. They are similar to the subchronic tests except that they extend over a longer period of time and involve larger groups of animals.
Table \(3\) includes basic parameters previously used in chronic toxicity testing.
Category — Parameter
Species — Two species recommended; rodent and non-rodent (rat and dog)
Age — Young adults
Number of animals — 20 of each sex for rodents, 4 of each sex for non-rodents per dose level
Dosage — Three dose levels recommended; includes a toxic dose level plus NOAEL. The recommended maximum chronic testing durations for pharmaceuticals are now 6 and 9 months in rodents and non-rodents, respectively. (Historically exposures were for 12 months, 24 months for food chemicals.)
Observation period — 12-24 months
Table \(3\). Chronic toxicity test parameter
Carcinogenicity
Carcinogenicity tests are similar to chronic toxicity tests. However, they extend over a longer period of time and require larger groups of animals in order to assess the potential for cancer.
Table \(4\) lists basic parameters used in the past in carcinogenicity testing.
Category — Parameter
Species — Testing in two rodent species—rats and mice—is preferable given their relatively short life spans.
Age — Young adults
Number of animals — 50 of each sex per dose level
Dosage — Three dose levels recommended; highest should produce minimal toxicity; exposure periods are at least 18 months for mice and 24 months for rats
Observation period — 18-24 months for mice and 24-30 months for rats
Table \(4\). Carcinogenicity test parameter
Reproductive Toxicity
Reproductive toxicity testing is intended to determine the effects of substances on gonadal function, conception, birth, and the growth and development of offspring. The oral route of administration is preferable.
Table \(5\) lists basic parameters historically used in reproductive toxicity testing.
Category — Parameter
Species — Rat is recommended
Age — Young adults
Number of animals — 20 of each sex per dose level
Dosage — Three dose levels recommended; highest dose should produce toxicity but not mortality in parents; lowest dose should not produce toxicity
Observation period — Test substance given to parental animals (P1) prior to mating, during pregnancy, and through weaning of first generation (F1) offspring; substance then given to selected F1 offspring during their growth into adulthood, mating, and production of second generation (F2) until the F2 generation is 21 days old.
Table \(5\). Reproductive toxicity test parameter
Developmental Toxicity
Developmental toxicity testing detects the potential for substances to produce embryotoxicity and birth defects.
Table \(6\) lists basic parameters previously used in developmental toxicity tests.
Category — Parameter
Species — Two species recommended; rat, mouse, hamster, and rabbit are used most commonly.
Age — Young adult females
Number of animals — 20 pregnant females per dose level
Dosage — At least three dose levels are used; includes a toxic dose level plus NOAEL; occurs throughout organ development in the fetus for teratogenic effects; starts with parents prior to breeding, continues during pregnancy for all developmental effects
Observation period — Offspring sacrificed and examined the day prior to expected birth for teratogenic effects; offspring observed for growth retardation and abnormal function through infancy and examined for teratogenic effects
Table \(6\). Developmental toxicity test parameter
Dermal Toxicity
Dermal toxicity tests determine the potential for an agent to cause irritation and inflammation of the skin. Those reactions may be a result of direct damage to the skin cells by a substance or an indirect response due to sensitization from prior exposure. In vitro approaches to dermal toxicity testing are being developed, in part because this type of testing has received so much publicity.
Table \(7\) lists basic parameters historically used in dermal toxicity testing.
Category — Parameter
Primary dermal irritation — Determines direct toxicity. The substance is applied to the skin of 6 albino rabbits for 4 hours and the rabbits are observed for 72 hours for irritation.
Dermal sensitization — Assays for immune hypersensitivity of the skin, consisting of two phases: 1) Application of the test substance to the skin of guinea pigs for 4 hours in the sensitization phase; and 2) a challenge phase at least 1 week later in which the substance is reapplied to the skin. An inflammatory reaction indicates that the skin has been sensitized to the substance.
Table \(7\). Dermal toxicity test parameter
Ocular Toxicity
Ocular toxicity was at one time determined by applying a test substance for 1 second to the eyes of 6 test animals, usually rabbits. The eyes were then carefully examined for 72 hours, using a magnifying instrument to detect minor effects. An ocular reaction can occur on the cornea, conjunctiva, or iris. It may be simple irritation that is reversible and quickly disappears.
This eye irritation test was commonly known as the "Draize Test." It has received much attention, prompting the development of a "low volume" variation and of in vitro alternatives.
Neurotoxicity
A battery of standardized neurotoxicity tests was developed to supplement the delayed neurotoxicity test in domestic chickens (hens). The hen assay determines delayed neurotoxicity resulting from exposure to anticholinesterase substances, such as certain pesticides. The hens are protected from immediate neurological effects of the test substance and observed for 21 days for delayed neurotoxicity.
Table \(8\) lists measurements included in other neurotoxicity tests.
Category — Parameter
Motor activity — Tests for decreased motor activity, such as cage movement. Rats or mice are used.
Peripheral nerve conduction — Tests for electrical conduction in motor and sensory nerves. Rodents are exposed to the test substance for 90 days.
Neuropathy — Tests for nerve damage by microscopic examination. This is one aspect of other standardized toxicity tests.
Table \(8\). Neurotoxicity test parameter
Genetic Toxicity
Genetic toxicity is determined using a wide range of test species including whole animals and plants (for example, rodents, insects, and corn), microorganisms, and mammalian cells. A large variety of tests have been developed to measure gene mutations, chromosome changes, and DNA activity.
Table \(9\) lists parameters used for common gene mutation tests.
Category — Parameter
Microorganisms — Salmonella typhimurium and Escherichia coli are commonly used bacterial tests. The S. typhimurium assay is known as the Ames Test. Yeasts are also used to detect gene mutations in culture systems.
Mammalian cells — The two main cell lines are mouse lymphoma and Chinese hamster ovary (CHO) cells.
Fruit Flies — Drosophila melanogaster is used to detect sex-linked recessive lethal mutations.
Mice — The Mouse Specific-Locus Test is the major gene mutation test that employs whole animals. Exposed mice are bred and observed for hereditary changes.
Table \(9\). Genetic toxicity test parameter
Chromosomal effects can be detected with a variety of tests, some of which utilize entire animals (in vivo) and some which use cell systems (in vitro). Several assays are available to test for chemically induced chromosome aberrations in whole animals. Table \(10\) lists common in vivo means of testing chromosomal effects.
Category — Parameter
Rodent chromosomal assay — Involves exposure of mice or rats to a single dose of a substance. Their bone marrow is analyzed for chromosome aberrations over a 48-hour period.
Dominant lethal assay — Exposed male mice or rats are mated with untreated females. The presence of dead implants or fetuses is the result of the fertilized ovum receiving damaged DNA from the sperm. This leads to the death of the embryo or fetus. The genetic defect in the sperm is thus a heritable dominant lethal mutation.
Micronucleus test — Mice are exposed once and their bone marrow or peripheral blood cells are examined for 72 hours for the presence of micronuclei, such as broken pieces of chromosomes surrounded by a nuclear membrane.
Heritable translocation assay — Exposed male Drosophila or mice are bred to non-exposed females. The offspring males (F1 generation) are then bred to detect chromosomal translocations.
Sister chromatid exchange assay (SCE) — Mice are exposed to a substance and their bone marrow cells or lymphocytes are examined microscopically for complete chromosomal damage. This is indicated by chromatid fragments joining sister chromatids rather than their own.
Table \(10\). Chromosomal effects (In vivo) test parameter
In Vitro Testing
In vitro tests for chromosomal effects involve exposure of cell cultures followed by microscopic examination for chromosome damage.
The most commonly used cell lines are Chinese Hamster Ovary (CHO) cells and human lymphocyte cells. The CHO cells are easy to culture, grow rapidly, and have a low chromosome number (22), which makes for easier identification of chromosome damage.
Human lymphocytes are more difficult to culture. They are obtained from healthy human donors with known medical histories. The results of these assays are potentially more relevant to determine effects of xenobiotics that induce mutations in humans.
Two widely used genotoxicity tests measure DNA damage and repair rather than mutagenicity itself. DNA damage is considered the first step in the process of mutagenesis. Common assays for detecting DNA damage include:
1. Unscheduled DNA synthesis (UDS) — involves exposure of mammalian cells in culture to a test substance. UDS is measured by the uptake of tritium-labeled thymidine into the DNA of the cells. Rat hepatocytes or human fibroblasts are the mammalian cell lines most commonly used.
2. Exposure of repair-deficient E. coli or B. subtilis — DNA damage cannot be repaired, so the cells die or their growth may be inhibited.
Emerging Approaches and Methods
In the future, there will likely be additional and refined in vitro methods, and the emergence of in silico and "chip" approaches.
Many current efforts are underway to refine, develop, and validate in vitro methods.
Did you know?
The Human Toxicology Project Consortium provides a video series called Pathways to a Better Future. These videos discuss the future of toxicology, if you would like to know more about where the field is headed.
In Silico Methods
Also emerging are in silico methods, meaning "performed on computer or via computer simulation." This term was developed as an analogy to the Latin phrases in vivo and in vitro.
Advanced computer models called "Virtual Tissue Models" are being developed by the U.S. EPA's National Center for Computational Toxicology (NCCT). The EPA's Virtual Tissue Models are described as using "new computational methods to construct advanced computer models capable of simulating how chemicals may affect human development. Virtual tissue models are some of the most advanced methods being developed today. The models will help reduce dependence on animal study data and provide much faster chemical risk assessments" (source).
One example is the Virtual Embryo (v-Embryo) research effort, aimed at developing prediction models to increase our understanding of how chemical exposure may affect unborn children. Researchers are integrating new types of in vitro, in vivo, and in silico models that simulate critical steps in fetal development. Virtual Embryo models reproduce biological interactions observed during development and predict chemical disruption of key biological events in pathways that are believed to lead to adverse effects.
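Virtual tissue models are far too complex to reproduce here, but the basic in silico idea of estimating a chemical's toxicity from data on structurally similar chemicals (the (Q)SAR and read-across approaches noted in the earlier section on interactions) can be illustrated with a toy sketch. The descriptor values, toxicity values, and analogue names below are entirely hypothetical; real (Q)SAR and read-across work relies on curated data sets, validated descriptors, and defined applicability domains.

```python
# Toy read-across: estimate an untested chemical's toxicity value from its
# nearest neighbors in a simple descriptor space. All data are hypothetical.
import math

# (descriptor_1, descriptor_2, measured_toxicity) for hypothetical analogues
analogues = {
    "analogue A": (1.2, 0.8, 50.0),
    "analogue B": (1.5, 0.9, 40.0),
    "analogue C": (3.0, 2.1, 5.0),
    "analogue D": (0.9, 0.7, 65.0),
}

def read_across_estimate(target_descriptors, analogues, k=2):
    """Average the toxicity of the k most similar analogues (Euclidean distance)."""
    distances = []
    for name, (d1, d2, tox) in analogues.items():
        dist = math.dist(target_descriptors, (d1, d2))
        distances.append((dist, name, tox))
    nearest = sorted(distances)[:k]
    estimate = sum(tox for _, _, tox in nearest) / k
    return estimate, [name for _, name, _ in nearest]

estimate, used = read_across_estimate((1.3, 0.85), analogues)
print(f"Estimated toxicity value: {estimate} (based on {', '.join(used)})")
```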
"Chip" Models
Also emerging are microphysiological systems (MPS) that are used in "tissue chip" and "organs-on-chips" models. Chip models include human cell cultures that are placed on a computer chip and studied there. The Wyss (pronounced "Veese") Institute for Biologically Inspired Engineering is a helpful resource for more information.
For example, the "Lung-on-a-chip" is described as "combining microfabrication techniques with modern tissue engineering, lung-on-a-chip offers a new in vitro approach to drug screening by mimicking the complicated mechanical and biochemical behaviors of a human lung." To learn more, watch a video from the Wyss Institute that shows a human lung-on-a-chip. Another Wyss Institute video illustrates how researches have used long-on-a-chip to mimic pulmonary edema.
Figure \(5\). Lung-on-a-chip used to mimic pulmonary edema
(Image Source: The Wyss Institute for Biologically Inspired Engineering)
Using a connected series of tissue chips as an integrated multi-organ system can allow for the creation of a "human-on-a-chip," to be used to model the metabolism and effects of drugs and other substances moving through a human. For example, a liver chip could provide fluids and metabolites to a kidney chip, allowing for the assessment of the nephrotoxic (kidney damage) potential of a substance metabolized in the liver.
Induced Pluripotent Stem Cells (iPSCs)
Induced pluripotent stem cells (iPSCs) are an emerging approach using in vitro cultures of cells. The cells of mammals and plants can be reprogrammed via "cellular reprogramming" to generate iPSCs. Like human embryonic stem cells, iPSCs are pluripotent (capable of giving rise to several different cell types) and these cells can renew themselves. As examples, iPSC-derived hepatocytes, cardiomyocytes, and neural cells can serve as tools for the screening of drugs and other substances for potential toxicity, and also can be used to study disease mechanisms and pathways. Further, iPSCs have been studied in immunotherapy and regenerative cellular therapies.
Figure \(6\). Promise of hiPSCs. Schematic representation of how somatic cells taken from a patient can be reprogrammed into induced pluripotent stem cells (iPSCs) using the ‘Yamanaka’ factors, OCT4, KLF4, c-MYC and SOX2. Subsequent differentiation of human iPSCs (hiPSCs) into neurons of defined lineage allows for investigations into disease pathophysiology and identification of potential drug targets. In addition, hiPSC derived neurons may function as a cellular platform in which drug screens can be carried out using disease relevant neurons.
(Image Source: Adapted under Creative Commons Attribution License (CC BY).
doi: 10.1016/j.yhbeh.2015.06.014
Original image: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4579404/figure/f0005/)
Did you know?
Professor Shinya Yamanaka (Kyoto University, Japan) received the 2012 Nobel Prize in Physiology or Medicine for discovering that mature cells can be reprogrammed to iPSCs that can differentiate into any type of cell. Key to this discovery was his use of four "reprogramming factors" referred to as c-Myc, Klf4, Oct3/4, and Sox2.
Learn more
Combining "Chips" and iPSCs
The emerging approaches of "chips" and iPSCs are being combined. One example is for the evaluation of drugs as potential countermeasures for biological and chemical threats that can be a substitute for human clinical trials. The "chips" and "humans on a chip" can be used as complex in vitro human models to simulate the biology and function of an organ. | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/6%3A_Principles_of_Toxicology/Section_5%3A_Toxicity_Testing_Methods/5.1%3A_Testing_and_Assessing_Toxicity.txt |
For Drugs
The focus of this section is on the U.S. Food and Drug Administration (FDA), but regulatory agencies worldwide have very similar approaches. The main methods of determining the toxicity of drugs to humans are:
• Clinical investigations — administration of chemicals to human subjects with careful clinical observations and laboratory measurements.
• Epidemiological studies — observation of humans who have been exposed to xenobiotics in the normal course of their life or occupation.
• Adverse drug reaction reports — reports voluntarily submitted by physicians to the FDA after a drug has been approved and is in widespread use.
Figure \(1\). Drugs can be toxic to humans
(Image Source: iStock Photos, ©)
Clinical Investigations
Clinical investigations are a component of Investigational New Drug Applications (INDs) submitted to the FDA. Clinical investigations are conducted only after a minimal battery of nonclinical laboratory studies has been completed.
Toxicity studies using human subjects require strict ethical considerations. They are primarily conducted for new pharmaceutical applications submitted to the FDA for approval.
Generally, toxicity found in animal studies occurs with similar incidence and severity in humans. Differences sometimes occur, thus clinical tests with humans are needed to confirm the results of nonclinical laboratory studies.
FDA clinical investigations are conducted in three phases, as outlined below.
Figure \(2\). Portion of the Investigational New Drug Application (IND)
(Image Source: FDA)
Phase 1 consists of testing the drug in a small group of 20 to 80 healthy volunteers. Information obtained in Phase 1 studies is used to design Phase 2 studies, in particular, to determine the drug's:
• Initial tolerability in human subjects.
• Pharmacokinetics and pharmacological effects.
Phase 2 studies are more extensive, involving several hundred patients and are used to:
• Determine the short-term side effects of the drug.
• Determine the risks associated with the drug.
• Evaluate the effectiveness of the drug for treatment of a particular disease or condition.
• Elucidate the drug's metabolism.
Phase 3 studies are controlled and uncontrolled trials conducted with several hundred to several thousand patients. They are designed to:
• Gather additional information about effectiveness and safety.
• Evaluate overall risk:benefit profile of the drug.
• Provide the basis for the precautionary information that accompanies the drug.
For Consumer Products
Health-related data for a chemical in a consumer product (and for the consumer product itself for the human studies) can come from the following types of studies:
• In silico data — from computer programs that estimate toxic properties based on data for similar chemicals, and/or from the physical chemical properties.
• In vitro data — from the results of alternatives to animal tests, such as from cell cultures used to assess the potential for eye or skin irritation.
• Animal (toxicological) study data — for example, from studies that assessed eye or skin irritation potential.
• Human data – from studies conducted before (premarketing) and after (postmarketing) a product had been sold to consumers. More specifically, from:
• Premarketing clinical studies, such as from patch tests to assess skin irritation potential.
• Premarketing "controlled use" studies that are designed to assess the skin effects from using a new type of personal care product.
• Postmarketing studies conducted by physicians or dermatologists, such as testing a diagnostic patch with their patients.
• Postmarketing epidemiological studies, including studies developed by Poison Control Centers, companies, and academia that look at the "real world" health reports of effects associated with consumer use of a product.
Figure \(3\). Toxicologists in a lab, using a computer for research
(Image Source: iStock Photos, ©)
Did you know?
Bisphenol A (BPA) and phthalates are chemicals that have been widely found in consumer products. BPA has been used in some food can linings, polycarbonate food and beverage containers, tooth sealants applied by dentists, and even in cash register receipts! Examples of potential exposures to BPA include eating or drinking foods or liquids from those containers, and skin exposures from handling cash register receipts. Workers involved in making products with BPA can be exposed during production.
Often called plasticizers, phthalates are used to make plastics more flexible. Some phthalates are used as solvents. They can be found in vinyl flooring and shower curtains, children's toys, personal care products, and as contaminants in the food supply. As with BPA, exposures can come from many sources.
Toxicologists and others are still assessing the full extent of the potential impacts on health. Studies suggest that BPA and phthalates affect the reproductive system, altering how hormones such as estrogen and testosterone work in the body. The impact of fetal or early childhood exposures is still being assessed. Because products containing these chemicals are so ubiquitous, thorough assessments of potential exposures, toxicities, and possible substitutes are essential.
Figure \(4\). Plastic food and beverage containers are common sources of BPA and phthalates in consumer products
(Image Source: iStock Photos, ©) | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/6%3A_Principles_of_Toxicology/Section_5%3A_Toxicity_Testing_Methods/5.2%3A_Clinical_Investigations_and_Other_Types_of_Human_Data.txt |
What Are Epidemiology Studies?
Epidemiology studies are conducted using human populations to evaluate whether there is a correlation or causal relationship between exposure to a substance and adverse health effects.
These studies differ from clinical investigations in that individuals have already been administered the drug during medical treatment or have been exposed to it in the workplace or environment.
Epidemiological studies measure the risk of illness or death in an exposed population compared to that risk in an identical, unexposed population (for example, a population the same age, sex, race and social status as the exposed population).
Figure \(1\). Epidemiology studies tend to produce graphs and charts for data analysis and presentation
(Image Source: Adapted from iStock Photos, ©)
Types of Studies
There are four primary types of epidemiology studies. They are:
1. Cohort studies — A cohort (group) of individuals with exposure to a chemical and a cohort without exposure are followed over time to compare disease occurrence.
2. Case control studies — Individuals with a disease (such as cancer) are compared with similar individuals without the disease to determine if there is an association of the disease with prior exposure to an agent.
3. Cross-sectional studies — The prevalence of a disease or clinical parameter among one or more exposed groups is studied, such as:
• The prevalence of respiratory conditions among furniture makers.
4. Ecological studies – The incidence of a disease in one geographical area is compared to that of another area, such as:
• Cancer mortality in areas with hazardous waste sites as compared to similar areas without waste sites.
Cohort Studies
Cohort studies are the most commonly conducted epidemiology studies and they frequently involve occupational exposures. Exposed persons are easy to identify and their exposure levels are usually higher than in the general public. There are two types of cohort studies:
1. Prospective, in which cohorts are identified based on current exposures and followed into the future.
2. Retrospective, in which cohorts are identified based on past exposure conditions and study "follow-up" proceeds forward in time; data come from past records.
Common Statistical Measures
Standard, quantitative measures are used to determine if epidemiological data are meaningful. The most commonly used measures are:
• Odds Ratio (O/R) — The ratio of the odds of disease in the exposed group to the odds in the unexposed group, typically derived from a case-control study. An odds ratio equal to 2 (O/R = 2) means that the exposed group has twice the odds of disease as the non-exposed group.
• Standardized Mortality Ratio (SMR) — The ratio of observed deaths in an exposed group to the deaths expected from the rates in a comparable, non-exposed population, conventionally multiplied by 100. A standardized mortality ratio equal to 150 (SMR = 150) indicates a 50% greater risk of death.
• Relative Risk (RR) — The ratio of the occurrence of disease in an exposed population to that in an unexposed population. A relative risk of 1.75 (RR = 1.75) indicates a 75% increase in risk.
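A minimal sketch of how these measures can be computed from study counts is shown below. All counts are hypothetical. Note the differing conventions: odds ratios and relative risks use 1.0 as the no-effect value, while SMRs are conventionally reported on a scale where 100 means no excess mortality.

```python
# Hypothetical counts from an exposed and an unexposed group.
# a = exposed with disease, b = exposed without disease
# c = unexposed with disease, d = unexposed without disease
a, b = 40, 160      # exposed group (n = 200)
c, d = 20, 180      # unexposed group (n = 200)

# Odds ratio (case-control framing): odds of disease in exposed vs. unexposed
odds_ratio = (a / b) / (c / d)                  # (40/160)/(20/180) = 2.25

# Relative risk (cohort framing): risk in exposed divided by risk in unexposed
risk_exposed = a / (a + b)                      # 0.20
risk_unexposed = c / (c + d)                    # 0.10
relative_risk = risk_exposed / risk_unexposed   # 2.0 -> a 100% increase in risk

# Standardized mortality ratio: observed deaths in the exposed group divided by
# the deaths expected from reference-population rates, multiplied by 100.
observed_deaths = 30
expected_deaths = 20
smr = 100 * observed_deaths / expected_deaths   # 150 -> 50% greater risk of death

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}, SMR = {smr:.0f}")
```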
Study Design
When designing an epidemiology study, the most critical aspects include:
• An appropriate control group.
• An adequate time span.
• The statistical ability to detect an effect.
More specifically, the control population used as a comparison group must be as similar as possible to that of the test group, for example, same age, sex, race, social status, geographical area, and environmental and lifestyle influences.
Many epidemiology studies evaluate the potential for an agent to cause cancer. Because most cancers require long latency periods, the study must cover that period of time.
The statistical ability to detect an effect is referred to as the power of the study. To gain precision, the study and control populations should be as large as possible.
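As a rough illustration of statistical power, the sketch below applies a standard normal-approximation formula for comparing disease proportions in two equal-sized groups. The disease rates and group sizes are hypothetical, and real study planning would typically use dedicated statistical software and account for confounding, loss to follow-up, and other design issues.

```python
from statistics import NormalDist

def two_proportion_power(p_unexposed, p_exposed, n_per_group, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions (equal group sizes)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    p_bar = (p_unexposed + p_exposed) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n_per_group) ** 0.5
    se_alt = (p_unexposed * (1 - p_unexposed) / n_per_group
              + p_exposed * (1 - p_exposed) / n_per_group) ** 0.5
    effect = abs(p_exposed - p_unexposed)
    return z.cdf((effect - z_crit * se_null) / se_alt)

# Hypothetical example: background disease rate 5%, exposed-group rate 10%
for n in (100, 300, 1000):
    print(n, round(two_proportion_power(0.05, 0.10, n), 2))
# Power rises with group size, which is why larger study and control populations
# give greater precision and a better ability to detect an effect.
```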
Bias Errors
Epidemiologists attempt to control errors that can occur in the collection of data, which are known as bias errors. The three main types of bias errors are:
1. Selection bias, which occurs when the study group is not representative of the population from which it came.
2. Information bias, which occurs when study subjects are misclassified as to disease or exposure status. Recall bias occurs when individuals are asked to remember exposures or conditions that existed years before.
3. Confounding factors, which occur when the study and control populations differ with respect to factors which might influence the occurrence of the disease. For example, smoking might be a confounding factor and should be considered when designing studies.
Postmarketing Studies
Finally, for consumer products, postmarketing epidemiological studies can be performed. Examples include studies developed by Poison Control Centers, companies, academia, and other sources to look at the "real world" health reports of effects associated with consumer use of a product or article under reasonably foreseeable conditions.
Knowledge Check
While animal testing was historically the primary method used in testing for toxicity, modern testing methods prefer:
Answer
All of the above - This is the correct answer.
Modern approaches to testing for toxicity include in silico, in vitro, and improved (refined) animal testing.
In testing a pharmaceutical to comply with FDA requirements, the initial testing consists of:
Answer
Non-clinical laboratory studies - This is the correct answer.
Investigational New Drug Applications (IND) require clinical investigations. Before clinical investigations begin, a minimal battery of non-clinical laboratory studies must be completed.
The primary goal of a Phase 1 clinical investigation is to:
Answer
Obtain information to design a more definitive Phase 2 clinical investigation - This is the correct answer.
The primary goal of a Phase 1 clinical investigation is to obtain information that is used to design more extensive, Phase 2 studies.
Determining the overall risk versus the benefit of a new pharmaceutical is part of:
Answer
Phase 3 clinical investigation - This is the correct answer.
Determining the overall risk versus the benefit of a new pharmaceutical is part of Phase 3 clinical study. The risk versus benefit is one of the last steps in the drug evaluation process.
The type of epidemiology study in which individuals are identified according to exposure and followed to determine subsequent disease risk is known as:
Answer
Cohort study - This is the correct answer.
The type of epidemiology study in which individuals are identified according to exposure and followed to determine subsequent disease risk is known as a cohort study. In a cohort study individuals are selected to be part of the group based on their exposure to a particular substance.
An epidemiological study in which the individuals that make up the test cohort are identified according to past exposures is known as:
Answer
Retrospective cohort study - This is the correct answer.
This is known as a retrospective cohort study. As the name implies, retrospective cohorts are identified according to past exposure conditions and the follow-up study proceeds forward in time. | textbooks/chem/Environmental_Chemistry/Toxicology_MSDT/6%3A_Principles_of_Toxicology/Section_5%3A_Toxicity_Testing_Methods/5.3%3A_Epidemiology_Studies.txt |
Learning Objectives
After completing this lesson, you will be able to:
• Identify the basic steps in the risk assessment process.
• Explain the framework for risk-based decision-making.
• Describe methods for identifying hazards.
• Explain methods for toxicity assessment, including dose-response and exposure.
In this section...
Topics include:
What We've Covered
This section made the following main points:
• A hazard is the capability of a substance to cause an adverse effect.
• A risk is the probability that the hazard will occur under specific conditions.
• Risk assessment is the process of determining hazard, exposure, and risk.
• Risk management is the process of weighing policy alternatives and deciding on the most appropriate regulatory action.
• There are four basic steps to risk assessment:
1. Hazard Identification
• Identify or develop information suggesting or confirming whether a chemical poses a potential hazard to humans.
• (Quantitative) Structure-Activity Relationship, or (Q)SAR, methods, including computer models, help assess closely related chemicals as a group or category.
• Read-across involves estimating what a chemical may be like, including the presence or absence of certain properties or activities, based on one or more other chemicals.
• Adverse Outcome Pathways (AOPs) describe how perturbations of normal cellular signaling pathways can lead to adverse outcomes, and are often informed by in vitro methods.
• Other emerging methods include (Quantitative) in vitro to in vivo extrapolation, or (Q)IVIVE, Integrated Testing Strategies, and Integrated Approaches to Testing and Assessment (IATA).
2. Dose-Response Assessment
• Carcinogenic (cancer) risk assessment involves two steps:
1. Perform qualitative evaluation of all epidemiology studies, animal bioassay data, and biological activity.
2. Quantitation of the risk for substances classified as definite or probable human carcinogens.
• Non-carcinogenic risk assessment includes:
• Acceptable Daily Intake (ADI), which divides the NOAEL by uncertainty/safety factors.
• Reference Dose (RfD), which divides the NOAEL or LOAEL by uncertainty/safety factors.
• Benchmark Dose Method (BMD), which extrapolates data to determine a point of departure (POD) that accounts for study quality.
• Assessments for noncancer toxicity effects, acute or short-term exposures, and occupational exposures.
3. Exposure Assessment
• People are exposed to mixtures of hundreds of chemicals in everyday life.
• An exposure pathway describes the:
• Route a substance takes from its source to its endpoint.
• How people can be exposed to the substance.
• The three steps of exposure assessment are to:
1. Characterize the exposure setting and exposure scenario.
2. Identify exposure pathways.
3. Quantify the exposure.
• Exposure models are commonly used because actual exposure measurements are often not available.
4. Risk Characterization
• This final phase predicts the frequency and severity of effects in exposed populations.
• Biological and statistical uncertainties are described.
• For carcinogenic risks, the probability of a person developing cancer over a lifetime is estimated by multiplying the cancer slope factor for the substance by the chronic, 70-year average daily intake.
• For noncarcinogenic effects, the exposure level is compared with an ADI, RfD, or MRL derived for similar exposure periods.
Section 6: Risk Assessment
Did you know?
For many years, the terminology and methods used in human risk or hazard assessment were inconsistent, which led to confusion among scientists, the public, and others.
(Image Source: iStock Photos, ©)
"Red Book" for Risk Assessment (1983)
In 1983, the (U.S.) National Academy of Sciences (NAS) published Risk Assessment in the Federal Government: Managing the Process. Often called the "Red Book" by toxicologists and others, it addressed the standard terminology and concepts for risk assessments.
Figure \(1\). Toxicology-based approaches to hazard identification, dose-response assessment, exposure analysis, and characterization of risks were described in the 1983 Red Book
(Image Source: National Academies Press)
Key Terms
The following terms are routinely used in risk assessments:
• Hazard — capability of a substance to cause an adverse effect.
• Risk — probability that the hazard will occur under specific exposure conditions.
• Risk assessment — the process by which hazard, exposure, and risk are determined.
• Risk management — the process of weighing policy alternatives and selecting the most appropriate regulatory action based on the results of risk assessment and social, economic, and political concerns.
Risk Assessment Steps
The four basic steps in the risk assessment process as defined by the NAS are:
1. Hazard identification — characterization of innate adverse toxic effects of agents.
2. Dose-response assessment — characterization of the relation between doses and incidences of adverse effects in exposed populations.
3. Exposure assessment — measurement or estimation of the intensity, frequency, and duration of human exposures to agents.
4. Risk characterization — estimation of the incidence of health effects under the various conditions of human exposure.
Once risks are characterized in step 4, the process of risk management begins (Figure 2).
Figure \(2\). Interaction between processes of risk assessment and risk management
(Image Source: ORAU, ©)
"Silver Book" for Advancing Risk Assessment (2009)
A newer book by the NAS, Science and Decisions: Advancing Risk Assessment (2009), often called the “Silver Book” by toxicologists and others, emphasizes uncertainty and variability and cumulative risk, and notes that risk assessment "is at a crossroads."
Figure \(3\). The 2009 Silver Book includes approaches for improving risk analysis and a framework for risk-based decision-making
(Image Source: National Academies Press)
Risk-Based Decision Making
The co-authors of this Silver Book proposed a framework for risk-based decision-making (Figure 4). The framework consists of three phases:
Enhanced problem formulation and scoping — available risk-management options are identified.
Planning and assessment — risk-assessment tools are used to determine risks under existing conditions and under potential risk-management options.
Risk management — risk and non-risk information is integrated to inform choices among options.
The core of the framework, as noted in the Silver Book, includes the risk assessment paradigm of the Red Book, but differs primarily in its initial and final steps:
• "The framework systematically identifies problems and options that risk assessors should evaluate at the earliest stages of decision-making."
• "It expands the array of impacts assessed beyond individual effects (for example, cancer, respiratory problems, and individual species) to include broader questions of health status and ecosystem protection."
• "It provides a formal process for stakeholder involvement throughout all stages but has time constraints to ensure that decisions are made."
• "It increases understanding of the strengths and limitations of risk assessment by decision-makers at all levels, for example, by making uncertainties and choices more transparent."
Figure \(4\). Framework for risk-based decision-making
(Source: "Silver Book," chapter 8)
Latest Approaches
Other parts of ToxTutor highlight the latest approaches used in risk assessment.
Knowledge Check
Risk is the:
Answer
Probability that a hazard will occur under specific exposure conditions - This is the correct answer.
Risk is the probability that a hazard will occur.
In the risk assessment process, what happens during the hazard identification step?
Answer
Characterization of innate adverse toxic effects of agents - This is the correct answer.
Hazard identification is the first step in the risk assessment process as defined by the National Academy of Sciences.
What are the phases of the risk-based decision-making framework proposed by the co-authors of the "Silver Book?"
Answer
Enhanced problem formulation and scoping; planning and assessment; and risk management - This is the correct answer.
The framework proposed by the co-authors of the "Silver Book" involves enhanced problem formulation and scoping; planning and assessment; and risk management.
The goal of hazard identification in toxicology is to identify or develop information suggesting or confirming that a chemical (or, for example, a consumer product) poses or does not pose a potential hazard to humans.
During earlier years of toxicology, this process relied primarily on human epidemiology data and on various types of animal testing data, supplemented in more recent years by in vitro methods such as those focused on assessing the potential for mutations and DNA damage. The future of hazard identification is promising: toxicologists now have various types of in vitro methods to explore, along with emerging "chip" approaches.
Figure \(1\). Hazard identification is the first component of risk assessment
(Image Source: ORAU, ©)
These emerging methods are based, in part, on (Quantitative) Structure-Activity Relationship, or (Q)SAR, methods. (Q)SAR methods, such as computer models, help toxicologists and others to consider closely related chemicals as a group, or chemical category, rather than as individual chemicals. Not every chemical needs to be tested for every toxicity endpoint, and the data for chemicals and endpoints that have been tested are used to estimate the corresponding properties for other chemicals and endpoints of interest. Data from a chemical category must be judged as adequate to support at least a "screening-level" hazard identification.
One approach involves using endpoint information for one chemical to predict the same endpoint for another chemical that is considered "similar" in some way (such as having structural similarity and similar properties and/or activities).
Read-Across
Another approach for hazard identification used since about 2000 is read-across. Read-across can be qualitative or quantitative:
• In qualitative read-across, the presence (or absence) of a property/activity such as a particular type of toxic effect for the chemical of interest is inferred from the presence (or absence) of the same property/activity for one or more other chemicals. This qualitative approach provides a "yes/no" answer.
• Quantitative read-across uses information for one or more chemicals to estimate what the chemical of interest will be like. Thus, quantitative read-across can be used to obtain a quantitative value for an endpoint, such as a dose-response relationship.
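To make the quantitative case concrete, the sketch below shows one simple way a read-across estimate could be computed from analog data. It is a minimal illustration only: the similarity scores and endpoint values are hypothetical, and real read-across assessments rely on documented similarity rationale and expert judgment rather than a single weighted average.

```python
# Minimal sketch of a similarity-weighted quantitative read-across estimate.
# All similarity scores and endpoint values below are hypothetical.

def read_across_estimate(analogs):
    """Estimate an endpoint for a target chemical from analog data.

    analogs: list of (similarity, endpoint_value) pairs, where similarity
    is a 0-1 score describing how alike the analog is to the target.
    Returns a similarity-weighted average of the analog endpoint values.
    """
    total_weight = sum(sim for sim, _ in analogs)
    if total_weight == 0:
        raise ValueError("No informative analogs supplied")
    return sum(sim * value for sim, value in analogs) / total_weight


if __name__ == "__main__":
    # Hypothetical analogs: (similarity to target, NOAEL in mg/kg/day)
    analogs = [(0.9, 120.0), (0.7, 95.0), (0.5, 150.0)]
    print(f"Estimated NOAEL: {read_across_estimate(analogs):.1f} mg/kg/day")
```

A qualitative read-across would instead report only whether the analogs do or do not show the property of interest.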
Adverse Outcome Pathways (AOPs)
An emerging approach to hazard identification is the use of Adverse Outcome Pathways (AOPs). AOPs reflect the move in toxicity testing from high-dose studies in laboratory animals to in vitro methods that evaluate changes in normal cellular signaling pathways using human-relevant cells or tissues. The AOP concept has emerged as a framework for connecting high throughput toxicity testing (HTT, or high throughput toxicity screening, HTS) and other results.
AOP Learning Channel
The Human Toxicology Project Consortium provides a collection of informational videos about AOPs for your further exploration. These videos are available on the AOP Learning Channel.
Other Computer Models
Another emerging term is (quantitative) in vitro to in vivo extrapolation, or (Q)IVIVE, used together with what are being called Integrated Testing Strategies and Integrated Approaches to Testing and Assessment (IATA).
Toxicology Testing in the 21st Century - A New Strategy
The High Throughput Screening (HTS) Initiative is part of the new toxicology testing strategy developed from the 2004 National Toxicology Program (NTP) Vision and Roadmap for the 21st Century.
Traditional toxicological testing is based largely on the use of laboratory animals. However, this approach suffers from low throughput, high cost, and difficulties inherent to inter-species extrapolation – making it of limited use in evaluating the very large number of chemicals with inadequate toxicological data.
NTP recognized that the dramatic technological advances in molecular biology and computer science offered an opportunity to use in vitro biochemical- and cell-based assays and non-rodent animal models for toxicological testing. These assays allow for much higher throughput at a much reduced cost. In some assays, many thousands of chemicals can be tested simultaneously in days.
The goal is to move toxicology from a predominantly observational science at the level of disease-specific models to a predominantly predictive science focused upon broad inclusion of target-specific, mechanism-based, biological observations.
The High Throughput Screening program represents a new paradigm in toxicological testing. The HTS program approach to toxicological testing screens for mechanistic targets active within cellular pathways considered critical to adverse health effects such as carcinogenicity, reproductive and developmental toxicity, genotoxicity, neurotoxicity, and immunotoxicity in humans.
Figure \(2\). National Toxicology Program vision and roadmap
(Image Source: National Toxicology Program)
Goals of the HTS Program
• To prioritize substances for further in-depth toxicological evaluation.
• To identify mechanisms of action for further investigation (for example, disease-associated pathways).
• To develop predictive models for in vivo biological response (predictive toxicology).
Reference:
National Toxicology Program. (2016, March 21). Tox 21. U.S. Department of Health and Human Services. Retrieved from http://ntp.niehs.nih.gov/results/tox21/index.html
As described in the Testing for, and Assessing Toxicity section, the EPA is developing "Virtual Tissue Models" such as the Virtual Embryo (v-Embryo™). These types of advanced computer models are being designed to be capable of simulating how chemicals may affect human development and will help reduce dependence on animal study data. They will also provide faster ways of developing chemical risk assessments.
Finally, also noted in the Testing for and Assessing Toxicity section, emerging in the toxicologist's tool box are "chip" models (for example, an "organ on a chip"). One example is the "Lung-on-a-chip" that "…offers a new in vitro approach to drug screening by mimicking the complicated mechanical and biochemical behaviors of a human lung."
Figure \(3\). Lung-on-a-chip used to mimic pulmonary edema
(Image Source: The Wyss Institute for Biologically Inspired Engineering)
Knowledge Check
Emerging approaches to hazard identification rely, in part, on:
Answer
Computer models like (Q)SAR - This is the correct answer.
Part of the basis for emerging approaches to hazard identification, such as assessing for potential mutations and DNA damage, relies on (Quantitative) Structure Activity (Q)SAR methods.
Adverse Outcome Pathways (AOPs) are methods of hazard identification that:
Answer
Evaluate changes in normal cellular signaling pathways using human-relevant cells or tissues - This is the correct answer.
Adverse Outcome Pathways (AOPs) are in vitro methods that evaluate changes in normal cellular signaling pathways using human-relevant cells or tissues.
Can quantitative read-across be used to determine the value of an endpoint, such as dose-response relationship?
Answer
Yes - This is the correct answer.
Quantitative read-across can lead to a measurable value for an endpoint, such as a dose-response relationship.
Dose-Response Assessment
The dose-response assessment step of the risk assessment process quantitates the hazards that were identified in the previous step. It determines the relationship between dose and incidence of effects in humans. There are normally two major extrapolations required:
1. From high experimental doses to low environmental doses.
2. From animal doses to human doses.
The procedures used to extrapolate from high to low doses are different for assessing carcinogenic effects and noncarcinogenic effects:
• Carcinogenic effects in general are not considered to have a threshold and mathematical models are generally used to provide estimates of carcinogenic risk at very low dose levels.
• Noncarcinogenic effects (for example neurotoxicity) are considered to have dose thresholds below which the effect does not occur. The lowest dose with an effect in animal or human studies is divided by safety factors to provide a margin of safety.
Figure \(1\). Dose-response assessment is a step in the risk assessment process
(Image Source: ORAU, ©)
Carcinogen (Cancer) Risk Assessment
Cancer risk assessment involves two steps:
1. Perform qualitative evaluation of all epidemiology studies, animal bioassay data, and biological activity (for example, mutagenicity). The substance is classified as to its carcinogenic risk to humans based on the weight of evidence. If the evidence is sufficient, the substance may be classified as a definite, probable, or possible human carcinogen.
2. Quantitate the risk for those substances classified as definite or probable human carcinogens. Mathematical models are used to extrapolate from the high experimental doses to the lower environmental doses.
The two primary cancer classification schemes are those of the Environmental Protection Agency (EPA) and the International Agency for Research on Cancer (IARC). The EPA and IARC classification systems are quite similar.
1. Qualitative Evaluation of Cancer Risk
The EPA's cancer assessment procedures have been used by several Federal and State agencies. The Agency for Toxic Substances and Disease Registry (ATSDR) relies on EPA's carcinogen assessments. A substance is assigned to one of five descriptors shown below in Table \(1\).
Descriptor — Definition
• Carcinogenic to Humans — Strong evidence of human carcinogenicity.
• Likely to Be Carcinogenic to Humans — Evidence is adequate to demonstrate carcinogenic potential to humans but does not reach the weight of evidence for the "Carcinogenic to Humans" descriptor.
• Suggestive Evidence of Carcinogenic Potential — The weight of evidence is suggestive of carcinogenicity; a concern for potential carcinogenic effects in humans is raised, but the data are judged insufficient for a stronger conclusion.
• Inadequate Information to Assess Carcinogenic Potential — Available data are judged inadequate for applying one of the other descriptors. Additional studies generally would be expected to provide further insights.
• Not Likely to Be Carcinogenic to Humans — Available data are considered robust for deciding that there is no basis for a substance to be considered a human carcinogen.
Table \(1\). Hazard Descriptors from the EPA’s Guidelines for Carcinogen Risk Assessment (March 2005)
Cancer Data for Humans
The basis for sufficient human evidence is an epidemiology study that clearly demonstrates a causal relationship between exposure to the substance and cancer in humans.
The data are determined to be limited evidence in humans if there are alternative explanations for the observed effect.
The data are considered to be inadequate evidence in humans if no satisfactory epidemiology studies exist.
Cancer Data for Animals
An increase in cancer in more than one species or strain of laboratory animals or in more than one experiment is considered sufficient evidence in animals. Data from a single experiment can also be considered sufficient animal evidence if there is a high incidence or unusual type of tumor induced. Normally, however, a carcinogenic response in only one species, strain, or study is considered as only limited evidence in animals.
2. Quantitative Evaluation of Cancer Risk
When an agent is classified as a Human or Probable Human Carcinogen, it is then subjected to a quantitative risk assessment. For those designated as a Possible Human Carcinogen, the risk assessor can determine on a case-by-case basis whether a quantitative risk assessment is warranted.
The key risk assessment parameter derived from the EPA carcinogen risk assessment is the cancer slope factor. This is a toxicity value that quantitatively defines the relationship between dose and response. The cancer slope factor is a plausible upper-bound estimate of the probability that an individual will develop cancer if exposed to a chemical for a lifetime of 70 years. The cancer slope factor is expressed as risk per mg/kg/day, that is, in units of (mg/kg/day)⁻¹.
Linearized Multistage Model (LMS)
Mathematical models are used to extrapolate from animal bioassay or epidemiology data to predict low-dose risk. Most assume linearity with a zero threshold dose.
Figure \(2\). The Linearized Multistage Model is used to extrapolate cancer risk from a dose-response curve using the cancer slope factor
(Image Source: NLM)
EPA uses the Linearized Multistage Model (LMS) illustrated in Figure 2 to conduct its cancer risk assessments. It yields a cancer slope factor, known as the q1* (pronounced "Q1-star"), which can be used to predict cancer risk at a specific dose. It assumes linear extrapolation with a zero dose threshold from the upper confidence level of the lowest dose that produced cancer in an animal test or in a human epidemiology study.
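As a worked illustration of how a slope factor is applied (the numbers here are hypothetical, not regulatory values), the multistage model and its low-dose linear approximation can be written as:

\[ P(d) = 1 - \exp\!\left[-(q_0 + q_1 d + q_2 d^2 + \cdots)\right], \qquad \text{extra risk at low dose} \approx q_1^{*}\, d \]

For example, with a hypothetical \(q_1^{*} = 0.5\ (\text{mg/kg/day})^{-1}\) and a chronic average daily dose of \(1 \times 10^{-4}\) mg/kg/day, the estimated lifetime extra cancer risk would be approximately \(0.5 \times 10^{-4} = 5 \times 10^{-5}\).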
Other Models
Other models that have been used for cancer assessments include:
• One-hit model, which assumes there is a single stage for cancer and that one molecular event induces a cell transformation. This is a very conservative model.
• Multi-hit model, which assumes several interactions are needed before a cell can be transformed. This is one of the least conservative models.
• Probit model, which assumes log normal distribution (Probit) for tolerances of exposed population. This model is sometimes used, but generally considered inappropriate for assessing cancer risk.
• Physiologically Based Pharmacokinetic (PBPK) Models, which incorporate pharmacokinetic and mechanistic data into the extrapolation process. This model requires extensive data and is becoming commonly used.
Application of Models to Estimate Chemical Concentrations in Drinking Water
The chemical chlordane has been found to cause a lifetime risk of one cancer death in a million persons. Different cancer risk assessment models vary in their estimates of drinking water concentrations for chlordane as illustrated in Table \(2\):
Model — Concentration (μg/L)
• Probit — 50
• Multi-hit — 2
• Linearized multistage — 0.07
• One-hit — 0.03
Table \(2\). Estimates of drinking water chlordane concentrations by various cancer assessment models
PBPK models are relatively new and are being employed when biological data are available. They quantitate the absorption of a foreign substance, its distribution, metabolism, tissue compartments, and elimination. Some compartments store the chemical (such as bone and adipose tissue) whereas others biotransform or eliminate it (such as liver or kidney). All these biological parameters are used to derive the target dose and comparable human doses.
Noncarcinogenic Risk Assessment
Historically, the Acceptable Daily Intake (ADI) procedure has been used to calculate permissible chronic exposure levels for humans based on noncarcinogenic effects. The ADI is the amount of a chemical to which a person can be exposed each day for a long time (usually lifetime) without suffering harmful effects. It is determined by applying safety factors (to account for the uncertainty in the data) to the highest dose in human or animal studies that has been demonstrated not to cause toxicity (NOAEL).
The EPA has slightly modified the ADI approach and calculates a Reference Dose (RfD) as the acceptable safety level for chronic noncarcinogenic and developmental effects. Similarly, the ATSDR calculates Minimal Risk Levels (MRLs) for noncancer endpoints.
The critical toxic effect used in the calculation of an ADI, RfD, or MRL is the serious adverse effect that occurs at the lowest exposure level. It may range from lethality to minor toxic effects. It is assumed that humans are as sensitive as the animal species unless evidence indicates otherwise.
Assessment of Chronic Exposures
In determining the ADIs, RfDs or MRLs, the NOAEL is divided by safety factors (uncertainty factors) in order to provide a margin of safety for allowable human exposure.
When a NOAEL is not available, a LOAEL can be used to calculate the RfD.
An additional safety factor is included if a LOAEL is used. A Modifying Factor of 0.1–10 allows risk assessors to use scientific judgment in upgrading or downgrading the total uncertainty factor based on the reliability and quality of the data. For example, if a particularly good study is the basis for the risk assessment, a modifying factor of <1 may be used. If a poor study is used, a factor of >1 can be incorporated to compensate for the uncertainty associated with the quality of the study.
Figure \(3\). Dose-response curve for noncarcinogenic effects
(Image Source: NLM)
Figure 3 above shows a dose-response curve for noncarcinogenic effects which also identifies the NOAEL and LOAEL. Any toxic effect might be used for the NOAEL/LOAEL so long as it is the most sensitive toxic effect and considered likely to occur in humans.
The Uncertainty Factors or Safety Factors used to derive an ADI or RfD are listed in Table \(3\).
Situation — Uncertainty/Safety Factor
• Human variability — 10x
• Extrapolation from animals to humans — 10x
• Use of less than chronic data — 10x
• Use of LOAEL instead of NOAEL — 10x
• Modifying factor — 0.1–10x
Table \(3\). Uncertainty/Safety factors used to derive an Acceptable Daily Intake (ADI) or Reference Dose (RfD)
The modifying factor is used only in deriving EPA Reference Doses. The number of factors included in calculating the ADI or RfD depends upon the study used to provide the appropriate NOAEL or LOAEL.
The general formula for deriving the RfD is:

\[ \text{RfD} = \frac{\text{NOAEL or LOAEL}}{UF_1 \times UF_2 \times \cdots \times UF_n \times MF} \]
The more uncertain or unreliable the data become, the higher the total uncertainty factor that is applied. An example of an RfD calculation is provided below. A subchronic animal study with a LOAEL of 50 mg/kg/day was used in the numerator. Uncertainty factors used in the denominator are 10 for human variability, 10 for an animal study, 10 for less than chronic exposure, and 10 for use of a LOAEL instead of a NOAEL, giving a total uncertainty factor of 10,000 and an RfD of 50 ÷ 10,000 = 0.005 mg/kg/day.
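The same arithmetic can be expressed as a small helper function. This is a simple sketch of the calculation shown above, not an implementation of any agency's software; the example inputs repeat the LOAEL and uncertainty factors from the preceding paragraph.

```python
from math import prod

def reference_dose(pod_mg_per_kg_day, uncertainty_factors, modifying_factor=1.0):
    """Divide a point of departure (NOAEL or LOAEL) by the product of
    uncertainty factors and an optional modifying factor."""
    return pod_mg_per_kg_day / (prod(uncertainty_factors) * modifying_factor)

# LOAEL of 50 mg/kg/day from a subchronic animal study, with factors of 10
# for human variability, animal-to-human extrapolation, less-than-chronic
# data, and use of a LOAEL instead of a NOAEL.
rfd = reference_dose(50, [10, 10, 10, 10])
print(f"RfD = {rfd} mg/kg/day")   # 0.005 mg/kg/day
```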
In addition to chronic effects, RfDs can also be derived for other long-term toxic effects, including developmental toxicity.
Traditionally, the NOAEL method has been used to determine the point of departure (POD) from animal toxicology data for use in risk assessments. However, this approach has limitations such as a strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, using the NOAEL does not take into consideration the shape of the dose-response curve and other related information.
Benchmark Dose Method
The benchmark dose (BMD) method, first proposed as an alternative in the 1980s, addresses many limitations of the NOAEL method. It is less dependent on dose selection and spacing and takes into account the shape of the dose-response curve (Figure 4). In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the availability of user-friendly BMD software programs, including the EPA’s Benchmark Dose Software (BMDS), the BMD has become the method of choice for many health organizations worldwide.
Figure \(4\). Extrapolated values using the benchmark dose method reflect the shape of a dose-response curve
(Image Source: EPA)
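As a conceptual illustration of the benchmark dose idea (not a substitute for EPA's BMDS, which fits multiple models and reports the lower confidence limit, the BMDL), the sketch below computes the BMD for a one-stage (exponential) dose-response model at a 10% benchmark response. The fitted slope parameter is hypothetical.

```python
import math

def bmd_one_stage(slope_b, bmr=0.10):
    """Benchmark dose for a one-stage model, P(d) = p0 + (1 - p0)*(1 - exp(-b*d)).

    Extra risk is (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-b*d), so the dose
    giving extra risk equal to the benchmark response (BMR) is
    d = -ln(1 - BMR) / b.  The background rate p0 cancels out.
    """
    return -math.log(1.0 - bmr) / slope_b

# Hypothetical fitted slope of 0.02 per (mg/kg/day):
print(f"BMD10 ≈ {bmd_one_stage(0.02):.1f} mg/kg/day")   # ≈ 5.3 mg/kg/day
```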
Assessment of Noncancer Toxicity Effects
While the Agency for Toxic Substances and Disease Registry (ATSDR) does not conduct cancer risk assessments, it does derive Minimal Risk Levels (MRLs) for noncancer toxicity effects (such as birth defects or liver damage). The MRL is defined as an estimate of daily human exposure to a substance that is likely to be without an appreciable risk of adverse effects over a specified duration of exposure. For inhalation or oral routes, MRLs are derived for acute (14 days or less), intermediate (15–364 days), and chronic (365 days or more) durations of exposures.
The method used to derive MRLs is a modification of the EPA's RfD methodology. The primary modification is that the uncertainty factors of 10 may be lower, either 1 or 3, based on scientific judgment. These uncertainty factors are applied for human variability, interspecies variability (extrapolation from animals to humans), and use of a LOAEL instead of NOAEL. As in the case of RfDs, the product of uncertainty factors multiplied together is divided into the NOAEL or LOAEL to derive the MRL.
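Written out, the MRL derivation described above takes the same general form as the RfD, with each of the three uncertainty factors allowed to be 1, 3, or 10 based on scientific judgment:

\[ \text{MRL} = \frac{\text{NOAEL or LOAEL}}{UF_{\text{human}} \times UF_{\text{animal}} \times UF_{\text{LOAEL}}}, \qquad UF \in \{1, 3, 10\} \]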
Assessment of Acute or Short-Term Exposures
Risk assessments are also conducted to derive permissible exposure levels for acute or short-term exposures to chemicals. Health Advisories (HAs) are determined for chemicals in drinking water. HAs are the allowable human exposures for 1-day, 10-day, longer-term, and lifetime durations. The method used to calculate HAs is similar to that for RfDs, using uncertainty factors. Data from toxicity studies with durations appropriate to each HA are being developed.
Assessment of Occupational Exposures
For occupational exposures, Permissible Exposure Levels (PELs), Threshold Limit Values (TLVs), and National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Levels (RELs) are developed. They represent dose levels that will not produce adverse health effects from repeated daily exposures in the workplace. The method used to derive these values is conceptually the same: safety factors are used to derive the PELs, TLVs, and RELs.
Conversion of Animal Doses to Human Dose Equivalents
Animal doses must be converted to human dose equivalents. The human dose equivalent is based on the assumption that different species are equally sensitive to the effects of a substance per unit of body weight or body surface area.
Historically, the FDA used a ratio of body weights of humans to animals to calculate the human dose equivalent. The EPA has used a ratio of surface areas of humans to animals to calculate the human dose equivalent. Some current approaches include multiplying the animal dose by the ratio of human to animal body weight raised to either the 2/3rd or 3/4th power (to convert from body weight to surface area). Toxicologists and risk assessors should check to make sure that the approach they are using is the one mandated or recommended by the regulatory agency of most relevance to their efforts.
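The sketch below implements only the body-weight-ratio scaling described in this paragraph. The 0.25 kg rat, 70 kg human, and dose value are hypothetical, the exponent is left as a parameter (2/3 or 3/4), and, as noted above, the appropriate conversion method should be confirmed with the relevant regulatory agency.

```python
def human_equivalent_dose(animal_dose_mg, bw_animal_kg, bw_human_kg, exponent=0.75):
    """Scale a total animal dose (mg) to a human dose equivalent by
    multiplying by the human-to-animal body weight ratio raised to a
    power (commonly 2/3 or 3/4) to approximate surface-area scaling."""
    return animal_dose_mg * (bw_human_kg / bw_animal_kg) ** exponent

# Hypothetical example: 5 mg total dose in a 0.25 kg rat, scaled to a 70 kg human.
hed = human_equivalent_dose(5.0, bw_animal_kg=0.25, bw_human_kg=70.0)
print(f"Human dose equivalent ≈ {hed:.0f} mg")   # ≈ 342 mg with the 3/4-power exponent
```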
Allowable Exposures to Contamination Sources
The last step in risk assessment is to express the risk in terms of allowable exposure to a contaminated source. Risk is expressed in terms of the concentration of the substance in the environment where human contact occurs. For example, the unit for assessing risk in air is risk per mg/m3, whereas the unit for assessing risk in drinking water is risk per mg/L.
For carcinogens, the media risk estimates are calculated by dividing cancer slope factors by 70 kg (average weight of a man) and multiplying by 20 m3/day (average inhalation rate of an adult) or 2 liters/day (average water consumption rate of an adult).
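The sketch below follows the calculation just described. The slope factor value is hypothetical, while the 70 kg body weight, 20 m³/day inhalation rate, and 2 L/day drinking-water intake are the defaults given in the text.

```python
def media_unit_risk(slope_factor_per_mg_kg_day, intake_rate_per_day, body_weight_kg=70.0):
    """Risk per unit concentration in a medium: slope factor divided by
    body weight and multiplied by the daily intake of that medium
    (20 m3/day of air or 2 L/day of drinking water)."""
    return slope_factor_per_mg_kg_day * intake_rate_per_day / body_weight_kg

sf = 0.35  # hypothetical cancer slope factor, (mg/kg/day)^-1
air_risk_per_mg_m3 = media_unit_risk(sf, intake_rate_per_day=20.0)   # risk per mg/m3
water_risk_per_mg_L = media_unit_risk(sf, intake_rate_per_day=2.0)   # risk per mg/L

# Water concentration corresponding to a one-in-a-million lifetime risk:
target_risk = 1e-6
print(f"Water concentration at 1e-6 risk ≈ {target_risk / water_risk_per_mg_L * 1000:.2f} ug/L")
```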
Knowledge Check
The procedures used to extrapolate from high to low doses primarily depend upon the:
Answer
Genotoxic carcinogenicity of the substance - This is the correct answer.
The procedures for extrapolation from high to low doses depend on whether or not the effects are carcinogenic. Carcinogenic effects are not considered to have a threshold dose, and mathematical models are used to estimate the risk of carcinogenicity at very low doses. Noncarcinogenic effects are considered to have threshold doses, and a margin of safety (MOS) is calculated.
According to EPA, a substance is classified as likely to be carcinogenic to humans when:
Answer
Evidence is adequate to demonstrate potential carcinogenicity to humans, but not strongly enough to definitively classify as carcinogenic - This is the correct answer.
A substance is classified as likely to be carcinogenic to humans when evidence is adequate to demonstrate carcinogenic potential to humans but does not reach the weight of evidence for the descriptor Carcinogenic to Humans.
The primary cancer risk assessment model used by the EPA is known as the:
Answer
Linearized Multistage Model (LMS) - This is the correct answer.
EPA uses the Linearized Multistage Model (LMS) to conduct its cancer risk assessments, producing the q1* that is used to predict cancer risk at a specific dose.
The Acceptable Daily Intake (ADI) is calculated by:
Answer
Dividing the NOAEL by safety factors - This is the correct answer.
The ADI is calculated by dividing the NOAEL by safety factors.
Animal doses must be converted to human dose equivalents for risk assessment. When doing this, toxicologists and risk assessors must:
Answer
Ensure they use the conversion method mandated or recommended by the regulatory agency most relevant to their efforts - This is the correct answer.
Toxicologists and risk assessors should check to ensure they use the approach mandated or recommended by the regulatory agency most relevant to their efforts.
Minimal Risk Levels (MRLs) are derived:
Answer
Similarly to deriving the RfD, but with a potentially lower uncertainty factor - This is the correct answer.
The MRL is calculated much like the RfD, except that the uncertainty factors of 10 may be lower (1 or 3), based on scientific judgment.
Exposure Assessment
No Exposure = No Risk
An expression used in toxicology is "no exposure = no risk." Exposure assessment is a key step in the risk assessment process because without an exposure, even the most toxic chemical does not present a threat. Our understanding of potential exposures to chemicals has grown significantly since approximately 1980. For example, research has identified previously "missing" sources and pathways of potential indoor air exposures such as chemicals from consumer products or elsewhere that end up in household dust.
Environmental contaminants are analyzed according to their releases, movement and fate in the environment, and the exposed populations. Consumer products and pharmaceuticals are analyzed in terms of reasonably foreseeable potential exposures.
Figure \(1\). Exposure assessment as a component of risk assessment
(Image Source: ORAU, ©)
Sources of Exposure
Everyday life can involve a person being exposed to mixtures of hundreds of chemicals. Examples of documented non-occupational sources include:
• Consumer products (perhaps 20 or more during a day).
• Clothing.
• Residential and other water.
• Indoor and outdoor air.
• Food and food packaging.
• Beverages.
• Toys.
• Furniture.
• Carpeting, paint, and other building materials.
• Tobacco smoke.
• Household dust containing chemicals from consumer products.
• Outdoor soil and soil tracked indoors.
Figure \(2\) highlights potential exposures from a chemical used in furniture fabric.
Figure \(2\). Exposure to a chemical used in furniture textiles
(Image Source: Adapted from U.S. National Academy Press publication and iStock Photos, ©)
Figure \(3\) illustrates some of the types of possible consumer exposures from a hand dishwashing product.
Figure \(3\). The many uses and types of exposures for a dishwashing product
(Image Source: Adapted from iStock Photos, ©)
Other residential sources of chemicals can come from household air and water. For example, chemicals in air can deposit on, absorb into, or adsorb onto household materials such as carpets, foods, and food packaging (adsorption occurs when molecules of a gas, liquid, or dissolved solid adhere to a surface and create a thin film around it), which can lead to dermal and oral exposures. When chemicals are confined to indoor spaces and not diluted in outdoor air, there can be large differences in indoor versus outdoor levels of a chemical.
Figure \(4\) illustrates some other residential exposures to chemicals via water.
Figure \(4\). Water is a household source of chemicals
(Data Source: American Water Works Association Research Foundation, "Residential End Users of Water," 1999.
Learn more at EPA WaterSense.
Image Source: Adapted from iStock Photos, ©)
Exposure Pathways
The route a substance takes from its source (where it began) to its endpoint (where it ends), and how people can come into contact with (or be exposed to) it is defined as an exposure pathway.
An exposure pathway has five parts:
1. A source of exposure, such as using a consumer product for a household task or a chemical spilled from a truck onto a highway.
2. An environmental media and transport mechanism, such as movement through the indoor or outdoor air or groundwater.
3. A point of exposure, such as a person's house or a private well.
4. A route of exposure — eating, drinking, breathing, or touching.
5. A receptor population — a person or group of people potentially or actually exposed.
When all five parts are present, the exposure pathway is termed a completed exposure pathway.
(Source: ATSDR Glossary)
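As a simple way of keeping track of the five parts when screening scenarios, the sketch below models an exposure pathway as a small data structure and flags whether the pathway is completed. The field names follow the list above; the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExposurePathway:
    source: Optional[str] = None                 # e.g., a spilled chemical
    media_and_transport: Optional[str] = None    # e.g., groundwater movement
    point_of_exposure: Optional[str] = None      # e.g., a private well
    route_of_exposure: Optional[str] = None      # eating, drinking, breathing, touching
    receptor_population: Optional[str] = None    # people potentially or actually exposed

    def is_completed(self) -> bool:
        """A completed exposure pathway has all five parts present."""
        return all([self.source, self.media_and_transport, self.point_of_exposure,
                    self.route_of_exposure, self.receptor_population])

# Hypothetical example: a well pathway missing a documented receptor population.
pathway = ExposurePathway(source="solvent spill", media_and_transport="groundwater",
                          point_of_exposure="private well", route_of_exposure="drinking")
print(pathway.is_completed())   # False until a receptor population is identified
```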
Process of Exposure Assessment
Exposure assessment is a three-step process:
1. Characterize the exposure setting and exposure scenario.
2. Identify exposure pathways.
3. Quantify the exposure.
The main variables in the exposure assessment are:
• Exposed populations.
• Types of substances.
• Single substance or mixture of substances.
• Frequency and duration of exposure.
• Pathways and types of exposure.
All possible types of reasonably foreseeable exposures are considered in order to assess the toxicity and risk that might occur.
Considerations for Environmental Exposure
For an environmental exposure, the risk assessor would look at the physical environment and the potentially exposed populations. The physical environment may include considerations about climate, vegetation, soil type, groundwater and surface water. Populations that may be exposed as the result of chemicals that migrate from the site of pollution are also considered.
Subpopulations may be at greater risk due to a higher level of exposure or because they have increased sensitivity. Examples include infants, the elderly, pregnant women, and those with chronic illness.
Pollutants may be transported away from the source and may be physically, chemically, or biologically transformed. They may also accumulate in various materials. Assessment of the chemical fate requires knowledge of many factors, including:
• Organic carbon and water partitioning at equilibrium (Koc).
• Chemical partitioning between soil and water (Kd).
• Partitioning between air and water (Henry's Law Constant).
• Solubility constants.
• Vapor pressures.
• Partitioning between water and octanol (Kow).
• Bioconcentration factors.
These factors are integrated with the data on sources, releases, and routes of the pollutants to determine the exposure pathways of importance, such as groundwater, surface water, air, soil, food, and/or breastmilk.
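As one illustration of how these parameters relate to each other (a standard relationship in environmental fate assessment, provided here only as an example and not stated elsewhere in this section), the soil-water partition coefficient can be estimated from the organic carbon partition coefficient and the fraction of organic carbon in the soil or sediment:

\[ K_d = K_{oc} \times f_{oc} \]

where \(f_{oc}\) is the mass fraction of organic carbon in the solid phase.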
Use of Exposure Models
Because actual measurements of exposures are often not available, exposure models may be used. For example:
• In air quality studies, chemical emission and air dispersion models are used to predict the air concentrations to downwind residents.
• Residential wells downstream from a site may not currently show signs of contamination. They may become contaminated in the future as chemicals in the groundwater migrate to the well site.
In these situations, groundwater transport models can be used to estimate when chemicals of potential concern will reach the wells.
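The sketch below illustrates one very simplified, screening-level calculation of the kind such transport models perform: estimating how long a dissolved, sorbing chemical takes to reach a downgradient well using Darcy flow and a linear-sorption retardation factor. All parameter values are hypothetical, and real groundwater models account for dispersion, degradation, and aquifer heterogeneity that this sketch ignores.

```python
def travel_time_years(distance_m, hydraulic_conductivity_m_d, gradient,
                      porosity, bulk_density_kg_L, koc_L_kg, foc):
    """Screening estimate of solute travel time to a well.

    Seepage velocity v = K * i / n (Darcy velocity divided by porosity).
    Retardation factor R = 1 + (rho_b * Kd) / n, with Kd = Koc * foc.
    The retarded solute velocity is v / R.
    """
    seepage_velocity = hydraulic_conductivity_m_d * gradient / porosity   # m/day
    kd = koc_L_kg * foc                                                   # L/kg
    retardation = 1.0 + (bulk_density_kg_L * kd) / porosity
    return distance_m * retardation / seepage_velocity / 365.0

# Hypothetical site: well 500 m downgradient, sandy aquifer, weakly sorbing chemical.
t = travel_time_years(distance_m=500, hydraulic_conductivity_m_d=5.0, gradient=0.005,
                      porosity=0.3, bulk_density_kg_L=1.6, koc_L_kg=100.0, foc=0.002)
print(f"Estimated arrival time ≈ {t:.0f} years")
```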
Information Sources on Chemical Exposures and Health
The future of exposure assessment promises to involve more information and more approaches.
One example of such a resource is the Comparative Toxicogenomics Database (CTD), shown in Figure 5.
Figure \(5\). Topics covered by CTD
(Image Source: CTD)
Considerations When Reading About a New Exposure Study
When reading about a new exposure study, there are several questions you can consider to critically evaluate studies about everyday chemical exposures:
1. Is the study published in a peer-reviewed journal?
2. Are there other publications that lend or do not lend support to the current research, or is this study possibly the first of its kind that suggests an emerging issue for regulators, chemical companies, consumer product manufacturers, and others to consider?
3. In addition to scientific journal publications, is there information available from government or other Web sites that provides perspectives about the exposures and potential risks?
4. How was the study conducted? For example, is it a preliminary or "pilot" study involving a small number of people, and did the participants represent a narrow or a broad range of the types of potentially affected consumers?
5. Did the study use household products/materials or food to which some consumers are likely to be exposed? If yes, are there geographical or other limitations that should be noted by the authors such as the products or food being likely to be sold only in one country or region of the world?
6. Did the study try to approach "reasonably foreseeable" consumer exposure conditions?
7. Is there a known or reasonably foreseeable association between these types of exposures and human adverse effects?